My upgrade from Lenovo X1 Carbon 4gen to 6gen


My Configuration (20KH006MMC)

  • Intel Core i7 8550U Kaby Lake Refresh (Intel UHD Graphics 620)
  • 14″ LED 2560×1440 IPS HDR antireflex, 500 nits
  • 1TB SSD (M.2 PCIe NVMe)
  • 4G LTE

First Impressions

My first impressions of 6th gen in comparison to 4th gen:

  • The X1 Carbon 6gen looks great, the quality is almost perfect.
  • It is a little bit smaller and super-light (1,2kg).
  • The display is glossy. I usually prefer matte, but this one seems to be exceptional. It looks much more “bright”, although the red color is weird (over-saturated?).
  • It has rubber/soft texture (for me already known from my former T440s – looks good but is difficult to maintain + the edges will probably get bare soon).
  • The keyboard is softer and does not make any weird noises (my X1-4gen produced rattling noises when typing). Typing seems to be more comfortable than on the X1-4gen. The keyboard rattling was in fact the major deficiency I had with the 4gen.
  • It produces more heat. Noticeably less comfortable to have it on a lap.
  • The 6gen battery gets exhausted pretty quickly. No improvement in comparison to 4gen.
  • Wake-up from sleep is pretty quick. It is ready right as you open the lid.
  • In Sleep mode it stays quite warm (= consumes battery).

Sleep Mode

Well, the last two items seem to be interconnected. The X1 6gen comes with the so-called “Standby (S0 Low Power Idle) Network Connected” power profile, which keeps the notebook half-alive (able to download updates etc.). After switching the Sleep mode to S3 (switch the sleep mode from “Windows 10” to “Linux” in BIOS), it stays cold during Sleep. The wake-up is a little bit slower in S3 than in S0, but still pretty fast.

Redux: reducing boilerplate

There’s no doubt Redux is pretty verbose. Don’t get me wrong, I know there is a good reason for that, but the price is still quite high. Fortunately, there is a way to reduce boilerplate without losing any fundamental parts of Redux.

I’m going to focus on one of these parts – reducers.

Let’s examine a very common implementation of a reducer, which you can find in every article on the internet.

function todoApp(state = initialState, action) {
	switch (action.type) {
		case SET_VISIBILITY_FILTER:
			return { ...state, visibilityFilter: action.visibilityFilter }
		case ADD_TODO:
			return {
				...state,
				todos: [...state.todos, { text: action.text, completed: false }]
			}
		default:
			return state
	}
}

Cool, but where is the single responsibility principle? A reducer is just a function, but this function knows too much. In this case our reducer implements both the logic for adding a new todo item and the logic for handling filtering.

The official Redux documentation points this out as well, so keep general good practices in mind in order to write clean code.

Writing functions with a single level of abstraction and a single responsibility is a much better practice. A function should do only one thing, and every statement in a function should be at the same level of abstraction. By obeying these two rules (which, by the way, are valid in both the object-oriented and functional paradigms), we can achieve cleaner and more readable functions.

A reducer is responsible for creating a new state based on the current state and the data coming in as the action payload.

As I mentioned before, you can find a way to solve this problem in the Redux documentation. Extracting the low-level logic from every case statement into separate handler functions, and creating a createReducer factory function that maps action types to handlers, is a much better approach.

But don’t forget that reducers are pure functions, so we can’t deal with side effects there. The right place for side effects is middleware, and therefore we have to split our logic between the middleware and the handler functions. Sometimes it is not easy to decide which logic should go into the middleware and which belongs in the handler functions.

I’m going to show you a slightly different approach.

First of all, let’s get rid of the low-level logic from our reducer as well!

In my opinion there is an even more straightforward way: we can simply put the whole logic into the middleware. Now, reducers only receive actions carrying the final data we are about to store in the state. We can simply merge the current state with the incoming data to create the new state.
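As a sketch of that idea (the ADD_TODO_REQUEST action type and this middleware are hypothetical names for the example), the middleware computes the final todos array and dispatches it onward, leaving the reducer a plain merge:

```javascript
const ADD_TODO_REQUEST = 'ADD_TODO_REQUEST' // hypothetical "raw" action
const ADD_TODO = 'ADD_TODO'                 // action carrying final data

// Middleware: all of the add-todo logic lives here
const addTodoMiddleware = store => next => action => {
  if (action.type === ADD_TODO_REQUEST) {
    const { todos } = store.getState()
    // Build the final todos array; the reducer will just store it
    return next({
      type: ADD_TODO,
      todos: [...todos, { text: action.text, completed: false }]
    })
  }
  return next(action)
}
```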

Let’s make our reducer look like this.

function todoApp(state = initialState, action) {
	switch (action.type) {
		case SET_VISIBILITY_FILTER:
			return { ...state, visibilityFilter: action.visibilityFilter }
		case ADD_TODO:
			return { ...state, todos: action.todos }
		default:
			return state
	}
}

The Todo reducer is now much cleaner. But sooner or later we come across another issue: in every case of the switch statement we are forced to do the same thing. We just take the payload of an incoming action and merge it with the current state, over and over again. Actually, I don’t like the switch statement at all. I believe we have (in OOP and also in the functional paradigm) more elegant ways to handle this kind of situation.

We can create a higher-order function which serves as a factory for creating reducers, similar to the createReducer from the Redux documentation.

We no longer have to take care of reducers. The only thing we have to do is call the factory function to get a particular reducer which will handle the given actions for us.

const createReducer = (actionTypes, initialState) => (state = initialState, action) => {
	if (actionTypes.some(actionType => actionType === action.type)) {
		const { type, ...actionData } = action;
		return { ...state, ...actionData };
	}
	return state;
};

Now, we can create a todoApp reducer by calling the createReducer function.

const todoApp = createReducer([SET_VISIBILITY_FILTER, ADD_TODO], initialState);

The factory function accepts two parameters, an array of action types and an initial state, and returns a new function: a reducer. Based on the given actionTypes array, we can easily find out whether the incoming action belongs to this reducer; if so, we take all fields of the action (excluding the type field) and merge them with the current state. Otherwise, we just return the current state (or the initial state).

The important thing is that the fields of the action have to be named the same as the fields of the state we are about to change.
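For illustration (repeating the factory here so the snippet is self-contained; the initial state is an assumption), an action whose data field is named exactly like the state field merges in directly:

```javascript
const SET_VISIBILITY_FILTER = 'SET_VISIBILITY_FILTER'
const initialState = { visibilityFilter: 'SHOW_ALL', todos: [] }

const createReducer = (actionTypes, initialState) => (state = initialState, action) => {
  if (actionTypes.some(actionType => actionType === action.type)) {
    const { type, ...actionData } = action
    return { ...state, ...actionData }
  }
  return state
}

const todoApp = createReducer([SET_VISIBILITY_FILTER], initialState)

// 'visibilityFilter' matches the state field name, so it merges directly;
// the 'type' field is stripped and never reaches the state
const nextState = todoApp(initialState, {
  type: SET_VISIBILITY_FILTER,
  visibilityFilter: 'SHOW_COMPLETED'
})
```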


We don’t need to take care of reducers anymore; a single line of code creates one. No big deal. Less code means less work and less room for mistakes. The logic is centralized in the middleware, and actions carry the final data for the reducers. Reducers do one thing and know nothing about the implementation in the middleware.

I would love to know what you think about it. Don’t hesitate to give me some feedback.

SQL: Index statistics update date

A simple query can help you get basic insight into when the index statistics were updated:

SELECT AS TableName, AS IndexName,
		STATS_DATE(i.object_id, i.index_id) AS StatisticsUpdate
	FROM sys.objects o
		INNER JOIN sys.indexes i ON (o.object_id = i.object_id)
	WHERE (i.type > 0)
	ORDER BY TableName, IndexName
	-- ORDER BY StatisticsUpdate


SQL LocalDB: Upgrade to 2017 (14.0.1000)

For me it was quite confusing to find the 2017 version of LocalDB, and it is not a streamlined process to upgrade your local default instance. The link for “SQL Server 2017 Express LocalDB” on the official website leads to “SQLServer2016-SSEI-Expr.exe”, which runs a SQL Server 2016 with SP2 installer. Now what?

The easiest way to upgrade your LocalDB instance to 2017 is:

  1. Download the LocalDB 2017 installer directly:
  2. Before running the installer, delete your current MSSQLLocalDB instance:
    sqllocaldb stop MSSQLLocalDB
    sqllocaldb delete MSSQLLocalDB
  3. Run the LocalDB 2017 installer. It will create a new MSSQLLocalDB instance.
  4. [OPTIONAL] If you did not delete the older instance before running the installer, you can delete it now and recreate the instance. It will be created as new version:
    sqllocaldb stop MSSQLLocalDB
    sqllocaldb delete MSSQLLocalDB
    sqllocaldb create MSSQLLocalDB
  5. Now you can re-attach your original databases using SQL Server Management Studio (right-click + Attach…)
  6. Done.


Word: Replace hyphens with non-breaking ones

Word has a non-breaking hyphen (Ctrl+Shift+-). If you use it in a word, the line does not break there (unlike with the regular hyphen).

If you want to mass-replace the regular hyphens in a whole document (in my case, I wanted to print a morse-code quiz :-D), you can use the Replace (Ctrl+H) dialog.

Unfortunately, the Replace dialog does not accept the Ctrl+Shift+- keyboard shortcut. You have to type ^~ to represent the non-breaking hyphen.


Azure App Service scheduled restart

If you want to restart your App Service on a schedule, you can do that with a simple PowerShell script:

Stop-AzureRmWebApp -Name '_App Service Name_' -ResourceGroupName '_Resource Group Name_'
Start-AzureRmWebApp -Name '_App Service Name_' -ResourceGroupName '_Resource Group Name_'

Basically you have to solve two issues:

  1. How to schedule such a PowerShell script to run automatically at given times? We can use a simple WebJob for that.
  2. How to authenticate the execution? We should use a Service Principal Id for that.

Let’s start from the second one.

Credits: This is an updated and fixed version of a procedure originally published by Karan Singh – a Microsoft Employee on his MSDN blog.

Getting a Service Principal Id for authentication

It is not a good idea to use a real user-account for authentication of such a job. If you have an Organizational Account with 2-Factor Authentication, forget it.

The right way of authenticating your jobs is to use a Service Principal Id which allows you to proceed with silent authentication.

To create one, save and run the following PowerShell script from your PC (a one-off task):

param(
    [Parameter(Mandatory=$true, HelpMessage="Enter Azure Subscription name. You need to be Subscription Admin to execute the script")]
    [string] $subscriptionName,

    [Parameter(Mandatory=$true, HelpMessage="Provide a password for SPN application that you would create")]
    [string] $password,

    [Parameter(Mandatory=$false, HelpMessage="Provide a SPN role assignment")]
    [string] $spnRole = "owner"
)

$ErrorActionPreference = "Stop"
$VerbosePreference = "SilentlyContinue"
$userName = $env:USERNAME
$newguid = [guid]::NewGuid()
$displayName = [String]::Format("VSO.{0}.{1}", $userName, $newguid)
$homePage = "http://" + $displayName
$identifierUri = $homePage

#Initialize subscription
$isAzureModulePresent = Get-Module -Name AzureRM* -ListAvailable
if ([String]::IsNullOrEmpty($isAzureModulePresent) -eq $true)
{
    Write-Output "Script requires AzureRM modules to be present." -Verbose
    return
}

Import-Module -Name AzureRM.Profile
Write-Output "Provide your credentials to access Azure subscription $subscriptionName" -Verbose
Login-AzureRmAccount -SubscriptionName $subscriptionName
$azureSubscription = Get-AzureRmSubscription -SubscriptionName $subscriptionName
$connectionName = $azureSubscription.SubscriptionName
$tenantId = $azureSubscription.TenantId
$id = $azureSubscription.SubscriptionId

#Create a new AD Application
Write-Output "Creating a new Application in AAD (App URI - $identifierUri)" -Verbose
$secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
$azureAdApplication = New-AzureRmADApplication -DisplayName $displayName -HomePage $homePage -IdentifierUris $identifierUri -Password $secpasswd -Verbose
$appId = $azureAdApplication.ApplicationId
Write-Output "Azure AAD Application creation completed successfully (Application Id: $appId)" -Verbose

#Create new SPN
Write-Output "Creating a new SPN" -Verbose
$spn = New-AzureRmADServicePrincipal -ApplicationId $appId
$spnName = $spn.ServicePrincipalName
Write-Output "SPN creation completed successfully (SPN Name: $spnName)" -Verbose

#Assign role to SPN
Write-Output "Waiting for SPN creation to reflect in Directory before Role assignment"
Start-Sleep 20
Write-Output "Assigning role ($spnRole) to SPN App ($appId)" -Verbose
New-AzureRmRoleAssignment -RoleDefinitionName $spnRole -ServicePrincipalName $appId
Write-Output "SPN role assignment completed successfully" -Verbose

#Print the values
Write-Output "`nCopy and Paste below values for Service Connection" -Verbose
Write-Output "***************************************************************************"
Write-Output "Connection Name: $connectionName(SPN)"
Write-Output "Subscription Id: $id"
Write-Output "Subscription Name: $connectionName"
Write-Output "Service Principal Id: $appId"
Write-Output "Service Principal key: <Password that you typed in>"
Write-Output "Tenant Id: $tenantId"
Write-Output "***************************************************************************"

You will be asked for a Subscription Name and Password for the Service Principal Id. You will also need to be an admin on your Azure Active Directory to be able to proceed.

Save the results securely. You can use the created Service Principal Id, which gets the Owner role (or any other role you specify), for many other administrative tasks (although it is a good idea to create a separate Service Principal for every single task).


Scheduling the restart using PowerShell WebJob

Use any WebJob deployment procedure of your taste to create a scheduled PowerShell WebJob executing the following script:

$ProgressPreference= "SilentlyContinue"
$password = '_Service Principal Key/Password_'
$secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ("_Service Principal Id_", $secpasswd)
Add-AzureRmAccount -ServicePrincipal -Tenant '_Tenant Id_' -Credential $mycreds
Select-AzureRmSubscription -SubscriptionId '_Subscription Id_'
Stop-AzureRmWebApp -Name '_App Service Name_' -ResourceGroupName '_Resource Group Name_'
Start-AzureRmWebApp -Name '_App Service Name_' -ResourceGroupName '_Resource Group Name_'

For manual deployment, you can use the Azure Portal directly:


  1. Save the script as a run.ps1 file and create a ZIP archive containing it (with any name).
  2. Go to App Service / Web Jobs and Add a new WebJob there:


And it’s done. Just be sure to enable Always On for your App Service to execute the WebJobs on schedule.

You can start the job manually from there (the Start button) if you want to test it, and you can verify the execution results using the KUDU Dashboard (the Logs button).