Category Archives: Database

SQL LocalDB: Upgrade to 2019 (15.0.2000)

For me it was quite confusing to find the 2019 version of LocalDB, and it is not a streamlined process to upgrade your local default instance.

The easiest way to upgrade your LocalDB instance to 2019 is:

  1. Download the LocalDB 2019 installer by using the SQL Server Express installer.
    https://go.microsoft.com/fwlink/?linkid=866658
    1. Run the installer and select “Download Media”.
    2. Select “LocalDB” + click Download.
  2. Before running the SqlLocalDB.msi installer, delete your current MSSQLLocalDB instance:
sqllocaldb stop MSSQLLocalDB
sqllocaldb delete MSSQLLocalDB
  3. Run the new SqlLocalDB.msi (2019) installer. It will create a new MSSQLLocalDB instance.
  4. RESTART YOUR PC! (Otherwise SQL Server Management Studio will still tell you that you have an old version running, etc.)
  5. Now you can re-attach your original databases one by one using SQL Server Management Studio (right-click + Attach…)
  6. Done. (You can verify the new version with the query below.)
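
To verify the upgrade, connect to the (localdb)\MSSQLLocalDB instance (via SSMS or sqlcmd) and check the reported engine version; for SQL Server 2019 the product version should start with 15.0. A minimal check:

SELECT @@VERSION;
SELECT SERVERPROPERTY('ProductVersion');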

SQL: Index statistics update date

A simple query can help you get basic insight into when the index statistics were last updated:

SELECT
		o.name AS TableName,
		i.name AS IndexName,
		STATS_DATE(i.object_id, i.index_id) AS StatisticsUpdate
	FROM sys.objects o
		INNER JOIN sys.indexes i ON (o.object_id = i.object_id)
	WHERE
		(i.type > 0)
		AND (o.type_desc NOT IN ('INTERNAL_TABLE', 'SYSTEM_TABLE'))
	ORDER BY TableName, IndexName
	-- ORDER BY StatisticsUpdate
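
If the statistics turn out to be stale, you can refresh them manually. A minimal sketch, assuming a placeholder table dbo.MyTable:

	-- refresh statistics for one table
	UPDATE STATISTICS dbo.MyTable;
	-- or with a full scan for maximum accuracy
	UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
	-- or refresh all statistics in the current database
	EXEC sp_updatestats;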


SQL LocalDB: Upgrade to 2017 (14.0.1000)

For me it was quite confusing to find the 2017 version of LocalDB, and it is not a streamlined process to upgrade your local default instance. The link for “SQL Server 2017 Express LocalDB” on the official website (https://www.microsoft.com/en-us/sql-server/sql-server-editions-express) leads to “SQLServer2016-SSEI-Expr.exe”, which runs a SQL Server 2016 SP2 installer. Now what?

The easiest way to upgrade your LocalDB instance to 2017 is:

  1. Download the LocalDB 2017 installer directly:
    https://download.microsoft.com/download/E/F/2/EF23C21D-7860-4F05-88CE-39AA114B014B/SqlLocalDB.msi
  2. Before running the installer, delete your current MSSQLLocalDB instance:
    sqllocaldb stop MSSQLLocalDB
    sqllocaldb delete MSSQLLocalDB
    
  3. Run the LocalDB 2017 installer. It will create a new MSSQLLocalDB instance.
  4. [OPTIONAL] If you did not delete the older instance before running the installer, you can delete it now and recreate it. The instance will be created with the new version:
    sqllocaldb stop MSSQLLocalDB
    sqllocaldb delete MSSQLLocalDB
    sqllocaldb create MSSQLLocalDB
    
  5. Now you can re-attach your original databases using SQL Server Management Studio (right-click + Attach…) or via T-SQL (see the sketch below).
  6. Done.
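
If you prefer a script over the SSMS dialog, attaching can also be done in T-SQL. A minimal sketch, assuming placeholder database name and file paths (adjust to your real .mdf/.ldf locations):

	CREATE DATABASE MyDatabase
		ON (FILENAME = 'C:\Data\MyDatabase.mdf'),
		   (FILENAME = 'C:\Data\MyDatabase_log.ldf')
		FOR ATTACH;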


Azure SQL: The database ‘tempdb’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. (Microsoft SQL Server, Error: 40544)

On one of the projects I’m involved in, whatever we wanted to do with our Azure SQL DB (V12), we got this error:

The database ‘tempdb’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. (Microsoft SQL Server, Error: 40544)

As there is not much to do in such a situation, we asked Microsoft Azure Support for assistance, and this is their reply (saved here for future reference):

If you experience the TempDB full issue again, then please leverage the DMVs below and look for active transactions which are taking maximum TempDB space. For a long term fix, I would recommend tuning those queries to use less temp space and if the issue is being caused by multiple queries running concurrently then serialize the workload to not consume all available TempDB space. Here are the queries that will give you insight into which queries are causing this:

Query to get Used and Free Space in TempDB
Note: This can be misleading as it doesn’t include free space up to max size

SELECT
SUM(allocated_extent_page_count) AS [used pages],
(SUM(allocated_extent_page_count)*1.0/128) AS [used space in MB],
SUM(unallocated_extent_page_count) AS [free pages],
(SUM(unallocated_extent_page_count)*1.0/128) AS [free space in MB]
FROM tempdb.sys.dm_db_file_space_usage;

Retrieving information about which clients (connected to current database) are using TempDB with open transactions

SELECT top 5 a.session_id, a.transaction_id, a.transaction_sequence_num, a.elapsed_time_seconds,
b.program_name, b.open_tran, b.status
FROM tempdb.sys.dm_tran_active_snapshot_database_transactions a
join sys.sysprocesses b
on a.session_id = b.spid
ORDER BY elapsed_time_seconds DESC

Retrieving information about which active tasks (in current database) are using TempDB

SELECT es.host_name , es.login_name , es.program_name,
st.dbid as QueryExecContextDBID, DB_NAME(st.dbid) as QueryExecContextDBNAME, st.objectid as ModuleObjectId,
SUBSTRING(st.text, er.statement_start_offset/2 + 1,(CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max),st.text)) * 2 ELSE er.statement_end_offset
END - er.statement_start_offset)/2) as Query_Text,
tsu.session_id ,tsu.request_id, tsu.exec_context_id,
(tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) as OutStanding_user_objects_page_counts,
(tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) as OutStanding_internal_objects_page_counts,
er.start_time, er.command, er.open_transaction_count, er.percent_complete, er.estimated_completion_time, er.cpu_time, er.total_elapsed_time, er.reads,er.writes,
er.logical_reads, er.granted_query_memory
FROM tempdb.sys.dm_db_task_space_usage tsu
inner join sys.dm_exec_requests er ON ( tsu.session_id = er.session_id and tsu.request_id = er.request_id)
inner join sys.dm_exec_sessions es ON ( tsu.session_id = es.session_id )
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) st
WHERE (tsu.internal_objects_alloc_page_count+tsu.user_objects_alloc_page_count) > 0
ORDER BY (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count)+(tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) DESC

Find tempDB data size

SELECT SUBSTRING(st.text, er.statement_start_offset/2 + 1,
(CASE WHEN er.statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(max),st.text)) * 2
ELSE er.statement_end_offset END - er.statement_start_offset)/2) as Query_Text,tsu.session_id ,tsu.request_id,
tsu.exec_context_id,
(tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) as OutStanding_user_objects_page_counts,
(tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) as OutStanding_internal_objects_page_counts,
er.start_time, er.command, er.open_transaction_count, er.percent_complete, er.estimated_completion_time,
er.cpu_time, er.total_elapsed_time, er.reads,er.writes,
er.logical_reads, er.granted_query_memory,es.host_name , es.login_name , es.program_name
FROM sys.dm_db_task_space_usage tsu INNER JOIN sys.dm_exec_requests er
ON ( tsu.session_id = er.session_id AND tsu.request_id = er.request_id) INNER JOIN sys.dm_exec_sessions es
ON ( tsu.session_id = es.session_id ) CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) st
WHERE (tsu.internal_objects_alloc_page_count+tsu.user_objects_alloc_page_count) > 0
ORDER BY (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count)+ (tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) DESC

Checking if tempDB has an active transaction

Examine log_reuse_wait_desc in sys.databases. If you see ACTIVE_TRANSACTION, it could mean there is a long-running active transaction in tempdb that is blocking log flush:

Check for active transaction on tempDB

select log_reuse_wait_desc, * from sys.databases where name = 'tempdb'

select d.name, d.log_reuse_wait_desc, t.* from sys.dm_tran_locks t inner join sys.databases d on t.resource_database_id = d.database_id

A few more docs about optimizing tempdb:
https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database
https://technet.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
https://technet.microsoft.com/en-us/library/ms190768(v=sql.105).aspx

Additionally, the higher the pricing tier, the more tempdb space your database gets.
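
To see how close you actually are to the quota, it can also help to look at the tempdb files themselves. A minimal sketch (size and max_size are reported in 8 KB pages; max_size = -1 means no per-file cap):

SELECT
	name,
	size / 128.0 AS current_size_mb,
	CASE max_size WHEN -1 THEN NULL ELSE max_size / 128.0 END AS max_size_mb
FROM tempdb.sys.database_files;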

SQL Query Store: Compare queries in two periods (execution counts, duration)

If you work with Azure SQL or Microsoft SQL Server 2016+, one of the new features there is the Query Store. It stores useful statistics on queries executed, their execution plans, etc. etc.

One of the scenarios you might find useful is the ability to compare queries executed within two periods:

DECLARE @Period1Start datetime, @Period1End datetime, @Period2Start datetime, @Period2End datetime

SET @Period1Start = '20171212 00:00'
SET @Period1End = '20171213 00:00'
SET @Period2Start = '20171214 00:00'
SET @Period2End = '20171215 00:00'

;WITH period1 AS
(
	SELECT
		query_id,
		AVG(avg_duration) AS avg_duration,
		SUM(count_executions) AS count_executions_total
	FROM sys.query_store_runtime_stats
		INNER JOIN  sys.query_store_runtime_stats_interval ON (query_store_runtime_stats_interval.runtime_stats_interval_id = query_store_runtime_stats.runtime_stats_interval_id)
		INNER JOIN sys.query_store_plan ON query_store_plan.plan_id = query_store_runtime_stats.plan_id
	WHERE
		(sys.query_store_runtime_stats_interval.start_time >= @Period1Start)
		AND (sys.query_store_runtime_stats_interval.end_time <= @Period1End)
	GROUP BY query_id
),
period2 AS
(
	SELECT
		query_id,
		AVG(avg_duration) AS avg_duration,
		SUM(count_executions) AS count_executions_total
	FROM sys.query_store_runtime_stats
		INNER JOIN  sys.query_store_runtime_stats_interval ON (query_store_runtime_stats_interval.runtime_stats_interval_id = query_store_runtime_stats.runtime_stats_interval_id)
		INNER JOIN sys.query_store_plan ON query_store_plan.plan_id = query_store_runtime_stats.plan_id
	WHERE
		(sys.query_store_runtime_stats_interval.start_time >= @Period2Start)
		AND (sys.query_store_runtime_stats_interval.end_time <= @Period2End)
	GROUP BY query_id
)
SELECT
		query_store_query.query_id,
		period1.avg_duration AS period1_avg_duration,
		period2.avg_duration AS period2_avg_duration,
		CASE WHEN period1.count_executions_total IS NOT NULL THEN (period2.avg_duration - period1.avg_duration) * 100.0 / period1.avg_duration ELSE NULL END AS avg_duration_increase_percent,
		period1.count_executions_total AS period1_count_executions_total,
		period2.count_executions_total AS period2_count_executions_total,
		CASE WHEN period1.count_executions_total IS NOT NULL THEN (period2.count_executions_total - period1.count_executions_total) * 100.0 / period1.count_executions_total ELSE NULL END AS count_execution_increase_percent,
		query_sql_text
	FROM period2
		LEFT JOIN period1 ON period1.query_id = period2.query_id
		LEFT JOIN sys.query_store_query ON query_store_query.query_id = period2.query_id
		LEFT JOIN sys.query_store_query_text ON query_store_query_text.query_text_id = query_store_query.query_text_id
	--ORDER BY period2.count_executions_total DESC
	ORDER BY count_execution_increase_percent DESC
	--ORDER BY avg_duration_increase_percent DESC


Azure: How to solve Azure SQL “database is not currently available”

Developers sometimes ask me about the “database is not currently available” error when using Azure SQL and Entity Framework 6. In this article I explain why this error happens and how to solve it easily.

Case Study

Let’s start with a real-life example from one of our customers:

“… we detected an error in our logs: ‘Database ‘XXX’ on server ‘XXX’ is not currently available.’ It usually happens once a week, and sometimes several times during a night. Should we worry about it? We want to move the database from the US region to the EU and move all databases under an elastic pool, so maybe we do not have to solve this issue now…”

Theory about Azure SQL

Azure SQL is a typical PaaS service, so the developer doesn’t have to worry about the platform. OS updates, security updates and other things are automatically handled on the provider side. The problem is that sometimes a SQL Server reconfiguration is needed, and in that case a short downtime occurs. It happens during OS updates or when an issue occurs (a process or load-balancing failure). Most of these downtimes are very short (typically under 60 seconds). Most developers do not know about this behavior, and the few who have experienced the error often do not handle it because it is exceptional.

Solution

If you want to solve this issue, you must implement retry logic. If you use Entity Framework 6 or EF Core, the solution is very easy. Developers at Microsoft know about this behavior and have prepared an easy solution in both ORMs.

Connection Resiliency / Retry Logic (EF 6)

In case of EF 6 you must enable an execution strategy and choose SqlAzureExecutionStrategy. The complete configuration looks like this:

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Retry up to 1 time, waiting at most 30 seconds between retries.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(1, TimeSpan.FromSeconds(30)));
    }
}

You can find the source and further information in the following article:

Connection Resiliency (EF Core)

If you work with EF Core, you can configure your project analogously in the place where you register all services.

// Startup.cs from any ASP.NET Core Web API
public class Startup
{
    // Other code ...
    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // ...
        services.AddDbContext<MyDbContext>(options => // MyDbContext = your application's DbContext type
        {
            options.UseSqlServer(Configuration["ConnectionString"],
            sqlServerOptionsAction: sqlOptions =>
            {
                sqlOptions.EnableRetryOnFailure(
                maxRetryCount: 5,
                maxRetryDelay: TimeSpan.FromSeconds(30),
                errorNumbersToAdd: null);
            });
        });
    }
//...
}

You can find the source and further information in the following article:

As you can see from the examples, the implementation is a piece of cake.

Tip: Microsoft LogParser [Studio] superfast SQL-like querying of any log file

LogParser (download) is a command-line tool from Microsoft which allows you to query any text-based log file using SQL-like syntax. The basic list of supported formats is quite impressive: IISW3C, NCSA, IIS, IISODBC, BIN, IISMSID, HTTPERR, URLSCAN, CSV, TSV, W3C, XML, EVT, ETW, NETMON, REG, ADS, TEXTLINE, TEXTWORD, FS and COM.

I usually use it for querying IIS log files and, believe me, it is super-fast. On my Lenovo X1 (i7/16 GB/SSD) it was able to query 8.97 GB of log files in 2 min 12 sec!

SELECT
    Date,
    TO_INT(COALESCE(EXTRACT_VALUE(cs-uri-query, 'id'), EXTRACT_VALUE(cs-uri-query, 'SouborSablonyID'))) AS SouborID,
    COUNT(*) AS Total
FROM '[LOGFILEPATH]'
WHERE (cs-uri-stem = '/business/sablony/soubor-partner.aspx') OR (cs-uri-stem = '/business/sablony/soubor.aspx')
GROUP BY Date, SouborID
ORDER BY Total DESC
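
To give one more (hypothetical) example of the syntax, the following query lists the ten slowest URLs by average response time (in the IISW3C format, time-taken is in milliseconds):

SELECT TOP 10
    cs-uri-stem,
    COUNT(*) AS Hits,
    AVG(time-taken) AS AvgTimeTakenMs
FROM '[LOGFILEPATH]'
GROUP BY cs-uri-stem
ORDER BY AvgTimeTakenMs DESC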

Output to database

It can not only query the logs, you can also use it to push the results to a SQL database and many other supported data sources (CSV, XML, …), e.g.:

C:\Program Files (x86)\Log Parser 2.2>logparser "SELECT * INTO iisLogs FROM c:\temp\logs\*.log" -i:IISW3C -o:SQL -server:localhost -database:MyLogs -username:sa -password:sa -createTable:ON

Note: If you want a plain import of logs into the DB (without any filtering, projection or aggregation), consider using the Import Flat File… wizard in SQL Server Management Studio for better performance. If you want to use LogParser for feeding your DB, check the transactionRowCount option to batch uploaded rows into a single transaction (e.g. -transactionRowCount:-1).

Is there any GUI for LogParser?

LogParser itself has always been a command-line utility. As an alternative it has a COM API which allows you to use it from your application. This API has been used to produce several GUIs which make the use of LogParser much easier:

  • Microsoft LogParser Studio (download) is a Microsoft product which brings not only the GUI itself but also ships with many (181) pre-defined query templates for different log types.
  • Log Parser Lizard GUI is another free tool (with a paid Pro edition) produced outside Microsoft which might be even more powerful. I haven’t tested it yet, but it looks promising for those of you who need to play with logs on a daily basis.

References

You might find the following links useful when starting to play with LogParser:

SQL: Padding numbers with leading zeros (15 -> 00015)

You might find a better way, but how about:

= REPLACE(STR(15, 5, 0), ' ', '0')

The STR(num, length, decimals) function converts the number to a string of the given length (left-padded with spaces) with the given number of decimal places.

Alternative?

= RIGHT('00000' + CAST(@number AS varchar(5)), 5)

(The CAST matters: with an int @number, '00000' + @number would be evaluated as integer addition and no padding would happen.)
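On SQL Server 2012 and newer there is also FORMAT, probably the most readable option, though as a CLR-based function it is noticeably slower on large row counts:

= FORMAT(15, '00000')	-- returns '00015'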

Remember you should handle formatting in the presentation layer, but sometimes you might find this useful…

SQL DMV: Most Expensive Queries, Missing Indexes

Dynamic Management Views (DMVs) and their usage in SQL performance-tuning scenarios are well known. There are plenty of posts with many different versions of the Most Expensive Queries query. As I don’t want to look for the right one again and again, I archive my favorite one here:

-- Most expensive queries
SELECT TOP 20
	SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
		((CASE qs.statement_end_offset
			WHEN -1 THEN DATALENGTH(qt.TEXT)
			ELSE qs.statement_end_offset
			END - qs.statement_start_offset)/2)+1)
		AS query_text,
    db.name AS [db_name],
    qs.total_elapsed_time/1000 AS total_elapsed_time_ms,
    qs.total_elapsed_time/qs.execution_count/1000 AS average_elapsed_time_ms,
    qs.last_elapsed_time/1000 AS last_elapsed_time_ms,
    qs.execution_count,
    qs.total_worker_time/1000 AS total_worker_time_ms,
    qs.total_worker_time/qs.execution_count/1000 AS average_worker_time_ms,
    qs.last_worker_time/1000 AS last_worker_time_ms,
    qs.last_execution_time,
    qs.total_logical_reads,
	qs.last_logical_reads,
    qs.total_logical_writes,
	qs.last_logical_writes
    --qp.query_plan
    FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        LEFT JOIN sys.databases db ON (qt.dbid = db.database_id)
    -- WHERE db.name='DatabaseName'
    -- ORDER BY qs.total_logical_reads DESC
    -- ORDER BY qs.total_logical_writes DESC
    -- ORDER BY qs.last_worker_time DESC -- CPU time (active)
    -- ORDER BY qs.last_elapsed_time DESC -- clock time (including locks, etc.)
    -- ORDER BY qs.total_worker_time DESC
	ORDER BY qs.total_elapsed_time DESC

…and for the same purpose the one for missing indexes (please don’t add these indexes without further analysis of the overall impact; a quick index-usage check follows the listing):

SELECT
	mid.statement
	  ,migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,OBJECT_NAME(mid.Object_id),
	  'CREATE INDEX [missing_index_' + CONVERT (VARCHAR, mig.index_group_handle) + '_' + CONVERT (VARCHAR, mid.index_handle)
	  + '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
	  + ' ON ' + mid.statement
	  + ' (' + ISNULL (mid.equality_columns,'')
		+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
		+ ISNULL (mid.inequality_columns, '')
	  + ')'
	  + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
	  migs.*, mid.database_id, mid.[object_id]
	FROM sys.dm_db_missing_index_groups mig
		INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
		INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
	WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
	ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
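
Before creating any of the suggested indexes, it may also help to check how the existing ones are actually used; an index that is never read only slows down writes. A minimal sketch using sys.dm_db_index_usage_stats (run in the context of the database you are analyzing):

SELECT
	OBJECT_NAME(s.[object_id]) AS TableName,
	i.name AS IndexName,
	s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
	FROM sys.dm_db_index_usage_stats s
		INNER JOIN sys.indexes i ON (i.[object_id] = s.[object_id] AND i.index_id = s.index_id)
	WHERE s.database_id = DB_ID()
	ORDER BY s.user_updates DESC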