Category Archives: Database

Azure: How to solve the Azure SQL “database is not currently available” error

Developers sometimes ask me about the “database is not currently available” error when using Azure SQL and Entity Framework 6. In this article I explain why this error happens and how to solve it easily.

Case Study

Let’s start with a real-life example from one of our customers:

“… we detected an error in our logs: ‘Database 'XXX' on server 'XXX' is not currently available.’ It usually happens once a week, and sometimes several times during a night. Should we worry about it? We want to move the database from the US region to the EU and move all databases into an elastic pool, so maybe we do not have to solve this issue now…”

Theory about Azure SQL

Azure SQL is a typical PaaS service, so the developer doesn’t have to worry about the platform: OS updates, security patches and other maintenance are handled on the provider side. The catch is that SQL Server sometimes needs to be reconfigured, and in that case a short downtime occurs. This happens during OS updates or when an issue occurs (a process or load-balancing failure). Most of these downtimes are very short (typically under about 60 seconds). Most developers do not know about this behavior, and the few who have experienced the error often do not handle it because it occurs only exceptionally.

Solution

If you want to solve this issue, you must implement retry logic. If you use Entity Framework 6 or EF Core, the solution is very easy: the developers at Microsoft know about this behavior and prepared a ready-made solution in both ORMs.

Connection Resiliency / Retry Logic (EF 6)

With EF 6 you have to enable an execution strategy and choose SqlAzureExecutionStrategy. The complete configuration looks like this:

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // Retry up to 1 time, waiting at most 30 seconds before the retry.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(1, TimeSpan.FromSeconds(30)));
    }
}
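
EF 6 picks this DbConfiguration up automatically when it lives in the same assembly as your DbContext. If you keep it elsewhere, you can point to it explicitly with an attribute; a minimal sketch (MyContext is just an example name):

[DbConfigurationType(typeof(MyConfiguration))]
public class MyContext : DbContext
{
    // Your DbSets ...
}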

You can find the source and more information in the EF 6 documentation article on connection resiliency / retry logic.

Connection Resiliency (EF Core)

If you work with EF Core, you can configure your project analogously in the place where you register your services.

// Startup.cs of an ASP.NET Core Web API
public class Startup
{
    // Other code ...
    public void ConfigureServices(IServiceCollection services)
    {
        // ...
        services.AddDbContext<MyDbContext>(options =>   // MyDbContext = your own DbContext type
        {
            options.UseSqlServer(Configuration["ConnectionString"],
                sqlServerOptionsAction: sqlOptions =>
                {
                    // Retry up to 5 times, waiting at most 30 seconds between retries.
                    sqlOptions.EnableRetryOnFailure(
                        maxRetryCount: 5,
                        maxRetryDelay: TimeSpan.FromSeconds(30),
                        errorNumbersToAdd: null);
                });
        });
    }
    // ...
}

You can find the source and more information in the EF Core documentation article on connection resiliency.

As you can see in the examples, the implementation is a piece of cake.
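
One caveat worth knowing: with an execution strategy enabled, EF cannot retry user-initiated transactions automatically and throws an exception if you begin one directly. The recommended pattern is to let the strategy run the whole unit of work as one retriable operation. A minimal EF Core sketch, assuming a context instance called dbContext:

var strategy = dbContext.Database.CreateExecutionStrategy();
strategy.Execute(() =>
{
    // Everything inside this delegate is retried as a single unit.
    using (var transaction = dbContext.Database.BeginTransaction())
    {
        // ... multiple operations on dbContext ...
        dbContext.SaveChanges();
        transaction.Commit();
    }
});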

Tip: Microsoft LogParser [Studio] – super-fast SQL-like querying of any log file

LogParser (download) is a command line tool from Microsoft which allows you to query any text-based log file using SQL-like syntax. The basic list of supported formats is quite impressive: IISW3C, NCSA, IIS, IISODBC, BIN, IISMSID, HTTPERR, URLSCAN, CSV, TSV, W3C, XML, EVT, ETW, NETMON, REG, ADS, TEXTLINE, TEXTWORD, FS and COM.

I usually use it for querying IIS log files and, believe me, it is super-fast. On my Lenovo X1 (i7/16 GB/SSD) it was able to query 8.97 GB of log files in 2 min 12 sec!

SELECT
    Date,
    TO_INT(COALESCE(EXTRACT_VALUE(cs-uri-query, 'id'), EXTRACT_VALUE(cs-uri-query, 'SouborSablonyID'))) AS SouborID,
    COUNT(*) AS Total
FROM '[LOGFILEPATH]'
WHERE (cs-uri-stem = '/business/sablony/soubor-partner.aspx') OR (cs-uri-stem = '/business/sablony/soubor.aspx')
GROUP BY Date, SouborID
ORDER BY Total DESC
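
If you want to run a saved query like this outside LogParser Studio, note that [LOGFILEPATH] is a LogParser Studio placeholder; replace it with a real path (e.g. c:\temp\logs\*.log) and pass the query file to the command-line tool via the file: prefix. A sketch (the query file name is just an example); -o:DATAGRID displays the results in a simple grid window:

logparser file:TopDownloadedFiles.sql -i:IISW3C -o:DATAGRID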

Output to database

It can not only query the logs; you can also use it to push the results to a SQL database and many other supported data sources (CSV, XML, …), e.g.

C:\Program Files (x86)\Log Parser 2.2>logparser "SELECT * INTO iisLogs FROM c:\temp\logs\*.log" -i:iisw3c -o:SQL -server:localhost -database:MyLogs -username:sa -password:sa -createTable:ON

Note: If you want a plain import of logs into the DB (without any filtering, projection or aggregation), consider using the Import Flat File… wizard in SQL Server Management Studio for better performance. If you want to use LogParser for feeding your DB, check the transactionRowCount option to batch uploaded rows into a single transaction (e.g. -transactionRowCount:-1).

Is there any GUI for LogParser?

LogParser itself has always been a command-line utility. As an alternative, it offers a COM API which allows you to use it from your own application. This API has been used to produce several GUIs which make using LogParser much easier:

  • Microsoft LogParser Studio (download) is a Microsoft product which brings not only the GUI itself but also ships with many (181) pre-defined query templates for different log types.
  • Log Parser Lizard GUI is another free tool (with a paid Pro edition) produced outside Microsoft which might be even more powerful. I haven’t tested it yet, but it looks promising for those of you who need to play with logs on a daily basis.

SQL: Padding numbers with leading zeros (15 -> 00015)

You might find a better way, but how about:

SELECT REPLACE(STR(15, 5, 0), ' ', '0')  -- '00015'

The STR(num, length, decimals) function converts a number to a string of the given length (left-padded with spaces) with the given number of decimals.
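
For clarity, this is the intermediate result that REPLACE then fixes up:

SELECT STR(15, 5, 0)  -- '   15' (left-padded with spaces)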

Alternative?

SELECT RIGHT('00000' + CAST(@number AS VARCHAR(5)), 5)  -- cast first: '00000' + an INT would be numeric addition

Remember, you should normally handle formatting in the presentation layer, but sometimes you might find this useful…
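
If you are on SQL Server 2012 or newer, the built-in FORMAT function is a more readable alternative; just note that it is commonly reported to be noticeably slower than plain string arithmetic, so measure before using it on large result sets:

SELECT FORMAT(15, '00000')  -- '00015'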

SQL DMV: Most Expensive Queries, Missing Indexes

Dynamic Management Views (DMVs) and their usage in SQL performance-tuning scenarios are well known. There are plenty of posts with many different versions of the Most Expensive Queries query. As I don’t want to look for the right one again and again, I archive my favorite one here:

-- Most expensive queries
SELECT TOP 20
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
            END - qs.statement_start_offset)/2)+1)
        AS query_text,
    db.name AS [db_name],
    qs.total_elapsed_time/1000 AS total_elapsed_time_ms,
    qs.total_elapsed_time/qs.execution_count/1000 AS average_elapsed_time_ms,
    qs.last_elapsed_time/1000 AS last_elapsed_time_ms,
    qs.execution_count,
    qs.total_worker_time/1000 AS total_worker_time_ms,
    qs.total_worker_time/qs.execution_count/1000 AS average_worker_time_ms,
    qs.last_worker_time/1000 AS last_worker_time_ms,
    qs.last_execution_time,
    qs.total_logical_reads,
    qs.last_logical_reads,
    qs.total_logical_writes,
    qs.last_logical_writes
    --qp.query_plan
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    LEFT JOIN sys.databases db ON (qt.dbid = db.database_id)
-- WHERE db.name='DatabaseName'
-- ORDER BY qs.total_logical_reads DESC
-- ORDER BY qs.total_logical_writes DESC
-- ORDER BY qs.last_worker_time DESC -- CPU time (active)
-- ORDER BY qs.last_elapsed_time DESC -- clock time (including locks, etc.)
-- ORDER BY qs.total_worker_time DESC
ORDER BY qs.total_elapsed_time DESC

…and, for the same purpose, the one for missing indexes (please don’t add these indexes without further analysis of their overall impact!):

SELECT
    mid.statement,
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    OBJECT_NAME(mid.[object_id]) AS [object_name],
    'CREATE INDEX [missing_index_' + CONVERT(VARCHAR, mig.index_group_handle) + '_' + CONVERT(VARCHAR, mid.index_handle)
        + '_' + LEFT(PARSENAME(mid.statement, 1), 32) + ']'
        + ' ON ' + mid.statement
        + ' (' + ISNULL(mid.equality_columns, '')
        + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
        + ISNULL(mid.inequality_columns, '')
        + ')'
        + ISNULL(' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
    migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
    INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
    INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC