Michael J. Swart

October 3, 2014

Watch Out for Misleading Behaviour From SQL Server

Takeaway: I share a set of statements I like to run when performing tuning experiments, in order to get consistent behaviour from SQL Server.

Inconsistent Behaviour From SQL Server?

I often have conversations where a colleague wants to understand why SQL Server performs faster in some cases and slower in other cases.

The conversation usually starts “Why does SQL Server perform faster when I…” (fill in the blank):

  1. … changed the join order of the query
  2. … added a transaction
  3. … updated statistics
  4. … added a comment
  5. … crossed my fingers
  6. … simply ran it again

What’s Going On?

It can actually seem like SQL Server performs differently based on its mood. Here are some factors that can affect the duration of queries like the ones above:

  • You changed something insignificant in the query. What you may be doing is comparing the performance of a cached plan with a newly compiled plan. Examples 1 – 4 might fall under this scenario. If that’s the case, then you took a broken thing and gave it a good thump. This percussive maintenance may be good for broken jukeboxes, but maybe not for SQL Server.
  • What about those last two? Say you hit F5 to execute a query in Management Studio and wait a minute for your results. You immediately hit F5 again and watch the same query take fifteen seconds. In that case, I like to point out that all that data is probably now cached in memory.

In order to tune queries effectively, we need consistent behaviour from SQL Server, if only to test theories and be able to rely on the results. But SQL Server doesn’t seem to want to give us consistent behaviour…

So Is It Possible To Get Straight Answers?

Best line from all Star Wars

But maybe we can get straight answers from SQL Server. Here’s a test framework that I like to use before all experiments when I want consistent behaviour:

-- Only do this on dev sql servers!
CHECKPOINT 
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO, TIME ON
-- Ctrl+M in Management Studio to include actual query plan

The first two statements are meant to clear SQL Server’s cache of data. Because of write-ahead logging, SQL Server writes log records to disk immediately but may take its time writing changed data pages to disk. Executing CHECKPOINT makes SQL Server write those changed pages immediately. After the checkpoint there should be no dirty buffers, which is why DBCC DROPCLEANBUFFERS will succeed in dropping all data from memory.

The DBCC FREEPROCCACHE command will remove all cached query plans.

These commands give SQL Server a fresh starting point, which makes it easier to compare the behaviour of one query with the behaviour of another.
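If I want to convince myself that the caches really were cleared, a quick sanity check helps (this is my own addition, not part of the framework above). The counts won’t be exactly zero because of background activity, but they should drop dramatically:

-- Rough sanity check (not part of the framework above)
SELECT COUNT(*) AS buffered_pages FROM sys.dm_os_buffer_descriptors; -- data pages still in memory
SELECT COUNT(*) AS cached_plans FROM sys.dm_exec_cached_plans;       -- plans still in the plan cache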

The SET STATISTICS IO, TIME ON statement and the Ctrl+M shortcut are there to gather better information about the performance of the query. Often CPU time, logical I/O, and the actual query plan are more useful when tuning queries than elapsed time.
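If you prefer a script-only alternative to Ctrl+M, SET STATISTICS XML also returns the actual plan as an extra result set (a personal habit, not part of the checklist above):

-- Script-only way to get the actual query plan (alternative to Ctrl+M)
SET STATISTICS XML ON;
-- ... run the query being tuned here ...
SET STATISTICS XML OFF;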

September 9, 2014

Take Care When Scripting Batches

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 1:22 pm

Takeaway: When performing long-running modifications, I’m sure many of you enjoy using batches to increase concurrency. But I want to talk about a pitfall to be aware of. If you’re not careful, the method you use to implement batching can actually worsen concurrency.

... we don't need no stinkin' batches either

Why Use Batches?

Even without an explicit transaction, all SQL statements are atomic – changes are all or nothing. So when you have long-running modifications to make, locks on data can be held for the duration of your query and that can be too long. Especially if your changes are intended for live databases.

But you can make your modifications in several smaller chunks or batches. The hope is that each individual batch executes quickly and holds locks on resources for a short period of time.

But care is needed. I’m going to give an example to show what I mean. The example uses the FactOnlineSales table in the ContosoRetailDW database (available as a download here). The FactOnlineSales table has

  • one clustered index on OnlineSalesKey and no other indexes,
  • 12 million rows,
  • and 46 thousand database pages

Metrics to Use

In this example, I want to know how long each query takes, because duration should tell me roughly how long locks are held. But instead of duration, I’m going to measure logical reads. It’s a little more consistent, and in the examples below it’s nicely correlated with duration.

The Straight Query

Suppose we want to remove sales data from FactOnlineSales for the “Worcester Company” whose CustomerKey = 19036. That’s a simple delete statement:

DELETE FactOnlineSales WHERE CustomerKey = 19036;

This delete statement runs for an unacceptably long time. It scans the clustered index, performs 46,650 logical reads, and I’m worried about concurrency issues.
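If you want to follow along, a minimal way to capture that kind of logical read count is SET STATISTICS IO (the message in the comment below is abbreviated):

SET STATISTICS IO ON;
 
DELETE FactOnlineSales WHERE CustomerKey = 19036;
-- The Messages tab reports a line along the lines of:
--   Table 'FactOnlineSales'. Scan count 1, logical reads 46650, ...
 
SET STATISTICS IO OFF;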

Naive Batching

So I try to delete 1,000 rows at a time. This implementation seems reasonable on the surface:

DECLARE	
	@RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  DELETE TOP (1000) FactOnlineSales
  WHERE CustomerKey = 19036;
 
  SET @RC = @@ROWCOUNT
 
END

Unfortunately, this method does poorly. It scans the clustered index in order to find 1,000 rows to delete. The first few batches complete quickly, but later batches gradually get slower as it takes longer and longer to scan the index to find rows to delete. By the time the script gets to the last batch, SQL Server has to delete rows near the very end of the clustered index and to find them, SQL Server has to scan the entire table.

In fact, this last batch performs 46,521 logical reads (just 129 fewer reads than the straight delete). And the entire script performed 1,486,285 logical reads in total. If concurrency is what I’m after, this script is actually worse than the simple DELETE statement.

Careful Batching

But I know something about the indexes on this table. I can make use of this knowledge by keeping track of my progress through the clustered index so that I can continue where I left off:

DECLARE
	@LargestKeyProcessed INT = -1,
	@NextBatchMax INT,
	@RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  SELECT TOP (1000) @NextBatchMax = OnlineSalesKey
  FROM FactOnlineSales
  WHERE OnlineSalesKey > @LargestKeyProcessed
    AND CustomerKey = 19036
  ORDER BY OnlineSalesKey ASC;
 
  DELETE FactOnlineSales
  WHERE CustomerKey = 19036
    AND OnlineSalesKey > @LargestKeyProcessed
    AND OnlineSalesKey <= @NextBatchMax;
 
  SET @RC = @@ROWCOUNT;
  SET @LargestKeyProcessed = @NextBatchMax;
 
END

The delete statements in this script performed 46,796 logical reads in total but no individual delete statement performed more than 6,363.

Graphically that looks like:

Logical Reads Per Delete Statement

  • The careful batching method runs in roughly the same time as the straight delete statement but ensures that locks are not held for long.
  • The naive batching method has roughly quadratic complexity (compared to the expected linear complexity) and can hold locks just as long as the straight delete statement.
  • This underlines the importance of testing for performance.

April 23, 2014

Removing Comments from SQL

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:20 am

Check out the following deliberately crazy SQL Script:

create table [/*] /* 
  -- huh? */
(
    "--
     --" integer identity, -- /*
    [*/] varchar(20) /* -- */
         default '*/ /* -- */' /* /* /* */ */ */
); 
go

It’s not surprising that my blog’s syntax colorer has trouble with this statement. But SQL Server will run this statement without complaining. Management Studio doesn’t even show any red squiggly lines anywhere. The same statement without comments looks like this:

create table [/*] 
(
    "--
     --" integer identity, 
    [*/] varchar(20) 
         default '*/ /* -- */' 
); 
go

I want a program to remove comments from any valid SQL, and I want it to handle even this crazy example. Below I describe a handy method that lets me do that.

Using C#

  • In your C# project, find and add a reference to Microsoft.SqlServer.TransactSql.ScriptDom. It’s available with SQL Server 2012’s Feature Pack (search for “ScriptDom” and download).
  • Add using Microsoft.SqlServer.TransactSql.ScriptDom; to your “usings”.
  • Then add this method to your class:
    public string StripCommentsFromSQL( string SQL ) {
     
        TSql110Parser parser = new TSql110Parser( true );
        IList<ParseError> errors;
        var fragments = parser.Parse( new System.IO.StringReader( SQL ), out errors );
     
        // clear comments
        string result = string.Join ( 
          string.Empty,
          fragments.ScriptTokenStream
              .Where( x => x.TokenType != TSqlTokenType.MultilineComment )
              .Where( x => x.TokenType != TSqlTokenType.SingleLineComment )
              .Select( x => x.Text ) );
     
        return result;
     
    }

… and profit! This method works as well as I hoped, even on the given SQL example.

Why I Prefer This Method

A number of reasons. By using Microsoft’s own parser, I don’t have to worry about comments in strings, or strings in comments, which are problems with most T-SQL-only solutions. I also don’t have to worry about nested multiline comments, which can be a problem with regex solutions.

Did you know that there’s another SQL parsing library from Microsoft? It’s found at Microsoft.SqlServer.Management.SqlParser.Parser. This was the old way of doing things and it isn’t supported very well. I believe this library is mostly intended for use by features like Management Studio’s Intellisense. The ScriptDom library is better supported and it’s easier to code with.

Let Me Know If You Found This Useful

Add comments below. Be warned though, if you’re a spammer, I will quickly remove your comments. I’ve had practice.

January 30, 2014

Building Concurrency Tests

Testing Concurrency

So last week, I explained different ways to generate concurrent activity. I have my own favorites, but you may have your own. Pick your favorite method; whichever method you pick, it will help you call the procedure s_DoSomething super-often.

Now comes the task of defining the procedure s_DoSomething. It can be whatever you like depending on what functionality you want to test or exercise. I want to demonstrate some patterns that I follow when I define that procedure. Those patterns all start with …

the Basic Definition

To test the concurrency of a single procedure just call it:

use tempdb
GO
 
CREATE PROCEDURE dbo.s_DoSomething AS
 
EXEC Adventureworks2012.dbo.uspGetManagerEmployees 14;

Of course, I could have changed the framework to call my procedure directly, but out of habit I don’t. I always leave s_DoSomething in tempdb hard-coded in the framework.

With Arbitrary Parameter Values

Often the procedures I want to test are defined with parameters. If variety is important, but the parameter values are not, then that’s when the random tricks come in:

ALTER PROCEDURE dbo.s_DoSomething AS
 
DECLARE @someString nvarchar(100) = cast(newid() as nvarchar(100));
DECLARE @someInt int = RAND() * 100;
DECLARE @someDate datetime = dateadd(MINUTE, RAND() * 10000, getdate());
DECLARE @someLongerString nvarchar(1000) = REPLICATE(@someString,20);
 
EXEC Adventureworks2012.dbo.usp_ProcWithParameters
	@someString, @someInt, @someDate, @someLongerString;

With Less Arbitrary Parameter Values

Check out this next example. Pulling a value from the target database is often preferable to calling the procedure with a random integer value.

ALTER PROCEDURE dbo.s_DoSomething AS
 
DECLARE @BusinessEntityId int;
 
SELECT TOP 1 @BusinessEntityId = BusinessEntityID 
FROM AdventureWorks2012.HumanResources.Employee
ORDER BY newid();
 
EXEC AdventureWorks2012.dbo.uspGetEmployeeManagers @BusinessEntityId;

Calling More Than One Procedure

It’s as simple as calling one after the other. But sometimes I want the frequency of the calls “weighted”.

For example, I want DELETE, INSERT and UPDATE statements called 10% of the time each. The remaining 70% of the time I want to call a SELECT statement. Then I have something like:

ALTER PROCEDURE dbo.s_DoSomething AS
 
declare @r int = RAND() * 10;
 
IF (@r = 0)
  -- delete 10% of the time
  DELETE AdventureWorks2012.Person.BusinessEntity
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);
 
IF (@r = 1)
  -- insert 10% of the time
  INSERT AdventureWorks2012.Person.BusinessEntity (rowguid)
  VALUES (newid());
 
IF (@r = 2)
  -- update 10% of the time
  UPDATE AdventureWorks2012.Person.BusinessEntity
  SET rowguid = newid()
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);
 
IF (@r > 2)
  -- select the rest of the time
  SELECT BusinessEntityId, rowguid, ModifiedDate
  FROM AdventureWorks2012.Person.BusinessEntity
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);

Counting Errors Concurrently

I want to track (server side) how often s_DoSomething fails. But I don’t want tracking to be a concurrency bottleneck itself. Here’s a cool trick for that:
First define these procedures:

create procedure dbo.s_IFailed as
go
 
create procedure dbo.s_IPassed as
go
 
alter procedure dbo.s_DoSomething as
 
begin try
    declare @i int = rand() * 10;
    select @i = 1 / @i -- might divide by 0!
    exec dbo.s_IPassed;
end try
begin catch
    exec dbo.s_IFailed;
end catch

This lets me use DMVs to monitor the success rate because I can check the execution count of my dummy procedures. For example,

exec sp_recompile 'dbo.s_IFailed'; -- reset counts from other tests.
exec sp_recompile 'dbo.s_IPassed';
go
 
/* run concurrent test here, or...*/
set nocount on; 
exec dbo.s_DoSomething;
go 10000
 
select object_name(object_id), execution_count 
from sys.dm_exec_procedure_stats
where object_name(object_id) in ('s_IFailed','s_IPassed')
/*
--         count
s_IPassed   9031
s_IFailed    969
*/

This relies on the DMV sys.dm_exec_procedure_stats, which was introduced in SQL Server 2008. It’s like a cheap, do-it-yourself performance counter.

Next

Next week I want to show a demo of this technique in action. I’ll be troubleshooting a common concurrency problem: the deadlock.

January 23, 2014

Generating Concurrent Activity

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:21 am
Testing Concurrency

I’ve discovered that DBAs use different methods to accomplish what amounts to the same thing: Generating concurrent activity on a server. I want to explore a number of methods I’ve seen. For each method, I want to call one particular procedure, many times at once, and often… Oh, and for free.

Why are we doing this?

  • For Testing: The whole point is to generate concurrent activity on the server. Testing with this kind of activity is one of the only ways to avoid resource contention issues.
  • For Demos: Concurrency issues are so common and so varied, that it’s not surprising how often we see demos that depend on this kind of generated database activity.
  • For Troubleshooting: This is my favorite reason. A concurrency issue can’t be fixed well unless it can be reproduced reliably on some test environment. That’s why the methods below have a spot on my troubleshooting toolbelt.

The whole idea is to get many workers active on SQL Server at once.

The exocomps were smart enough to exit on their own

For each method below, look out for the place where I specify these “parameters”:

  • Number of virtual workers (50 in my example)
  • Number of iterations (for instance 10,000)
  • Connection string
  • Procedure to execute (s_DoSomething in my case)

One other thing to watch for is the overhead that this tool puts on the machine. Ideally, the method is light enough to run from the same test environment that the SQL Server sits on, so I want my method to be fairly lightweight. This means that it’s best to handle iterations on SQL Server, which cuts down on the overhead associated with opening connections. So in most cases, instead of

exec dbo.s_DoSomething

I have

declare @i int = 0; while (@i < 10000) begin exec dbo.s_DoSomething; set @i+= 1; end

Notepad + DOS Method

I adapted this method from a clever trick I saw once: Paul Randal giving a demo on tempdb contention. You can find a video of that demo by visiting this newsletter.
It’s a simple idea. You have two batch files; the first is called Run.bat:

echo off
sqlcmd -S MYSERVER\MYINSTANCE -E -Q "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end" > NUL
exit

and the second is called Run50.bat:

start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat

Click Run50.bat and boom: instant parallel activity. Now it is a little sloppy – it opens up 50 command windows – but it does the trick pretty nicely.

Using PowerShell

We should be able to do this in one line, right? Yep. Adapted from a post by Kendra Little, I came up with this:

foreach($worker in 1..50) `
{ `
  Start-Job `
    -ScriptBlock `
      { `
        Import-Module sqlps; `
        Invoke-Sqlcmd `
          -Query "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end" `
          -ServerInstance "MYSERVER\MYINSTANCE" `
          | Out-Null `
      } `
}

Except that I don’t think I got it quite right. Whatever I changed makes this method unsuitable.

The script schedules a job 50 times, but it takes about a minute just to schedule them all. Once they’re scheduled, the jobs take time to start, and not enough of them work in parallel to generate the needed concurrent activity, so I give this method a “skip”. If you’re really comfortable with PowerShell, maybe you can get this to work faster. If you can, let me know.

C#

This is my preferred method. It started out as the program I used to test upsert concurrency at http://michaeljswart.com/go/upsert, but a friend at work showed me that .NET 4.5 has some nifty new asynchronous methods that make it look nicer, perform faster and weigh less.

using System;
using System.Linq;
using System.Data.SqlClient;
using System.Threading.Tasks;
 
namespace AConsoleApplication {
    class Program {
 
        static void Main( string[] args ) {
 
            var cs = new SqlConnectionStringBuilder();
            cs.DataSource = @"MYSERVER\MYINSTANCE";
            cs.InitialCatalog = "tempdb";
            cs.IntegratedSecurity = true;
            cs.AsynchronousProcessing = true;
 
            string connectionString = cs.ToString();
 
            Task[] tasks = Enumerable.Range( 0, 50 )
                .Select( i => RunCommand( connectionString ) )
                .ToArray();
 
            Task.WaitAll( tasks );
        }
 
        private static async Task RunCommand(
                string connectionString
            ) {
 
            string sql = @"s_DoSomething";
 
            SqlConnection conn = new SqlConnection( connectionString );
            conn.Open();
 
            SqlCommand cmd = new SqlCommand( sql, conn );
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
 
            for( int i = 0; i < 10000; i++ ) {
                try {
                    await cmd.ExecuteNonQueryAsync();    
                } catch( Exception ex ) {
                    Console.WriteLine( ex.Message );
                }
            }
        }
    }
}

HammerDB

HammerDB (originally HammerOra) is a free utility that allows users to run benchmarks against various environments. Although it was originally built for running benchmarks on Oracle, the utility now works on Windows and against SQL Server (hence the name change). I was first introduced to the utility by Kendra Little (again).

Follow these links to learn how to use the tool for its typical function, running benchmarks. Then, once you know how to do that, it’s a quick step to repurpose the tool for your own concurrent activity. For example, replace HammerDB’s generated script with this one:

#!/usr/local/bin/tclsh8.6
package require tclodbc 2.5.1
database connect odbc "DRIVER=SQL Server Native Client 11.0;SERVER=MYSERVER\\MYINSTANCE;PORT=1433;TRUSTED_CONNECTION=YES";
odbc "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end"
odbc disconnect

It’s steady and lightweight and works really well.

SQLQueryStress

SQLQueryStress is a tool written by Adam Machanic that you can download and install for free.


It’s fairly idiot proof and avoids a lot of the complexity of the other methods. For my own purposes, I want the utility to be lightweight. So I …

  • remember to turn off “Collect I/O Statistics”
  • remember to turn off “Collect Time Statistics”
  • set “Number of Iterations” to one
  • iterate in the query window, i.e. modify the query to call s_DoSomething 10,000 times (as sketched below)
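For example (my own sketch of that last step), the query pasted into the window can simply be the same server-side loop the other methods use:

set nocount on;
declare @i int = 0;
while (@i < 10000)
begin
  exec tempdb.dbo.s_DoSomething;
  set @i += 1;
end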

My Ranking

  • C# console app (★★★★½): It performs fastest and uses the least resources. Also, because I’m biased, it gets a small bump by not suffering from NIH issues.
  • HammerDB (★★★): Very fast (2nd fastest), but the interface is clumsy and modifications need tclsh experience <yuck>. It’s best to use this for its intended purpose, as a benchmark tool.
  • Notepad and DOS (★★★): Quick and dirty and really fast. However, it’s still pretty clumsy, and interrupting a test part way through is difficult.
  • PowerShell (★½): Yuck. I couldn’t get two runs that looked the same, and it was near impossible to separate PowerShell CPU and memory pressure from SQL Server pressure.
  • SQLQueryStress (★★★★): It does the job well. It was a little difficult to interrupt a test, and it takes some care to keep it lightweight.

Next Week

I’ll show some typical ways I define s_DoSomething.

December 4, 2013

Overly Complex Views, Procedures And Functions

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 4:03 pm

Takeaway: I define a metric which indicates a code-smell for overly complex views, procedures and functions: “FROM” count.

In the programming world, developers have defined metrics for how complex a piece of code is. For example, Visual Studio defines several metrics that are meant to give developers some idea about how complex their code is getting. These measures won’t be perfect because code complexity is not something that can be measured directly. But many think that these measures indicate complexity often enough to be useful.

Code Smells

That’s what a code smell is meant for. My own definition is that a code smell is an anti-rule-of-thumb <cough>cursors!</cough>. It doesn’t necessarily indicate something’s rotten, but like a bad smell, it’s worth checking into.

Measuring Complexity in SQL Modules

And here’s my idea about measuring complexity for SQL. It’s based on the premise that complex procedures or views will have many queries/subqueries and that most queries have FROM clauses. So what are my procedures/views/functions that may be too complex? Let’s find out:

DECLARE @Odor NVARCHAR(30) = N'FROM';
 
with
   L0 as (select 1 as C union all select 1)       --2 rows
  ,L1 as (select 1 as C from L0 as A, L0 as B)    --4 rows
  ,L2 as (select 1 as C from L1 as A, L1 as B)    --16 rows
  ,L3 as (select 1 as C from L2 as A, L2 as B)    --256 rows
  ,L4 as (select 1 as C from L3 as A, L3 as B)    --65536 rows
  ,Nums as (select row_number() over (order by (select 0)) as N from L4)
SELECT 
    OBJECT_SCHEMA_NAME(m.object_id) as SchemaName,
    OBJECT_NAME(m.object_id) as ObjectName, 
    count(1) as OdorCount
FROM Nums
CROSS JOIN sys.sql_modules m
WHERE Nums.N < LEN(m.definition)
    AND SUBSTRING(m.definition, Nums.N, LEN(@Odor)) = @Odor
GROUP BY m.object_id
ORDER BY count(1) desc, object_name(m.object_id) asc

I’ve found the metric slightly better than the simpler “procedure size” metric:

SELECT 
    OBJECT_SCHEMA_NAME(m.object_id) as SchemaName,
    OBJECT_NAME(m.object_id) as ObjectName, 
    LEN(m.definition) as ModuleSize
FROM sys.sql_modules m
ORDER BY LEN(m.definition) desc, object_name(m.object_id) asc

Try it out on your own environments and let me know if it identifies the monster procedures you know are lurking in your database.

Thanks to Adam Machanic for the substring counting syntax.

Performance Improvement!

Update December 5, 2013: In the comments, George Mastros provided a simpler and faster version of this query which does the same thing:

DECLARE @Odor NVARCHAR(30) = N'FROM';
 
Select	OBJECT_SCHEMA_NAME(object_id) As SchemaName, 
        OBJECT_NAME(object_id) As ObjectName, 
        (DataLength(definition) - DataLength(Replace(definition, @Odor, ''))) / DataLength(@Odor) As OdorCount
From	sys.sql_modules
Order By OdorCount DESC;

This is much simpler and much faster. What’s extra interesting is that George’s query itself has an odor count of 2 while my original one had a count of 7. Thanks so much George!

September 19, 2013

My Queries For Top 20 Resource Hogs

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: This post is for me: a handy place to put some queries that I use often. If you find them useful too, that’s wonderful; bookmark this page with http://michaeljswart.com/go/Top20.

These queries will give the top 20 resource consumers for cached queries based on a few different metrics. I’ve posted queries like this before, and others have written many other versions of this query. All these queries are based on sys.dm_exec_query_stats.

But I’ve tweaked my own queries recently based on a couple of things I’ve learned. So you could say that I gave my old queries a new coat of paint.

Here’s what I’ve added to these queries recently:

  • I added OPTION (RECOMPILE) to the query. It prevents these queries from showing up in their own results on quiet systems.
  • Joe Sack pointed out that query_hash can be used to aggregate queries that only differ by literal values. I’m doing that here. What’s extra awesome about this technique, is that it can also be used to aggregate the same queries that were executed in different procedures or in different databases. This is critical if we want to measure the impact of a single query on a server regardless of where the query is called.
  • I’m ignoring (for now) those queries with query_hash equal to 0x0. Again, Joe Sack points out that these are cursor fetches.
  • You may notice that the database name is sometimes null. It’s null when the query is not part of a stored procedure. I suppose if I were keen, I could pick out an arbitrary database name from the accompanying query plan.
  • Update September 27, 2013: Actually, In the comment section of this post, Ian Stirk gave me a really nice way to retrieve the database name from the dmvs. I’ve updated the queries below accordingly.
  • Remember that the stats found in sys.dm_exec_query_stats are only as good as what’s in cache. The cache is not a permanent store. It changes and it gets reset every server restart.

(Download all queries)

Or check them out individually below:

Top 20 Executed Queries

These are queries that run often. Frequent queries can be vulnerable to concurrency problems.

;with frequent_queries as
(
    select top 20 
        query_hash, 
        sum(execution_count) executions
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(execution_count) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.execution_count,
    executions as total_executions_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX), st.text)) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join frequent_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.executions desc,
    fq.query_hash,
    qs.execution_count desc
option (recompile)

Top 20 I/O Consumers

Specifically logical reads and writes. Still my favorite metric.

;with high_io_queries as
(
    select top 20 
        query_hash, 
        sum(total_logical_reads + total_logical_writes) io
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_logical_reads + total_logical_writes) desc
)
select @@servername as servername,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_logical_reads + total_logical_writes as total_io,
    qs.execution_count,
    cast((total_logical_reads + total_logical_writes) / (execution_count + 0.0) as money) as average_io,
    io as total_io_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_io_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.io desc,
    fq.query_hash,
    qs.total_logical_reads + total_logical_writes desc
option (recompile)

Top 20 CPU Consumers

Another popular metric for measuring work done.

;with high_cpu_queries as
(
    select top 20 
        query_hash, 
        sum(total_worker_time) cpuTime
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_worker_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_worker_time as cpu_time,
    qs.execution_count,
    cast(total_worker_time / (execution_count + 0.0) as money) as average_CPU_in_microseconds,
    cpuTime as total_cpu_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_cpu_queries hcq
    on hcq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by hcq.cpuTime desc,
    hcq.query_hash,
    qs.total_worker_time desc
option (recompile)

Top 20 Queries By Elapsed Time

Queries that show up here, but not in the results of the other queries, usually suffer from things like excessive blocking or ASYNC_NETWORK_IO waits.

;with long_queries as
(
    select top 20 
        query_hash, 
        sum(total_elapsed_time) elapsed_time
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_elapsed_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(st.objectid, st.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_elapsed_time,
    qs.execution_count,
    cast(total_elapsed_time / (execution_count + 0.0) as money) as average_duration_in_microseconds,
    elapsed_time as total_elapsed_time_for_query,
    SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
        (CASE 
            WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
            ELSE qs.statement_end_offset + 2
            END - qs.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join long_queries lq
    on lq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by lq.elapsed_time desc,
    lq.query_hash,
    qs.total_elapsed_time desc
option (recompile)

By the way, this post seems to be really popular. So I waive any copyright I have on these four queries. Copy them without attribution wherever you like. Profit if you can. Go nuts.

July 3, 2013

It Took Me Ages To Figure This One Out: A Puzzle

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: I tackled a problem recently and I present a version of the problem here as a challenge. The challenge is to figure out why a query and a stored procedure return different results even though the procedure is defined using the same query.

If no one cracks this one, I’ll leave the answer in the comments in a couple days.

Too Easy

The Setup

Okay, I use AdventureWorks2012 here. I define a table data type IDList and a stored procedure s_GetCustomerProductPurchases:

use Adventureworks2012
go
 
create type IDList as TABLE(id int);
go
 
CREATE PROCEDURE dbo.s_GetCustomerProductPurchases 
	@CustomerIds IDList READONLY,
	@ProductIds IDList READONLY,
	@SearchString NVARCHAR(1000)
AS
SET NOCOUNT ON 
 
DECLARE @ResultList TABLE ( 
	ProductId INT, 
	ProductName Name 
	);
 
insert @ResultList (ProductId, ProductName)
select distinct TOP (2000) p.ProductID, p.Name
from sales.salesorderdetail sod
join sales.salesorderheader soh 
    on soh.SalesOrderID = sod.SalesOrderID
join Production.Product p
    on p.ProductID = sod.ProductID
join Sales.Customer c
    on c.CustomerID = soh.CustomerID
join @CustomerIds CIds
    on CIds.id = c.CustomerID
join @ProductIds PIds
	on PIds.id = p.ProductID
where p.Name like '%' + @SearchString + '%'
	and soh.[Status] = 5
 
IF	@@ROWCOUNT = 0 
	SELECT	0 ProductId, 'No Matching Products' ProductName
ELSE 
	SELECT	 ProductId, ProductName
	FROM	 @ResultList 
	ORDER BY ProductId

Executing the Query

Next let’s run a query against Adventureworks2012. We define some variables that are used as filters in a query. This query populates a temporary results table and then we select from it:

DECLARE 
	@ProductIds IDList,
	@CustomerIds IDList,
	@SearchString Name = 'Red';
insert @ProductIds (id) VALUES (707), (717), (727)
insert @CustomerIds (id) VALUES (11000), (11103), (11201)
 
SET NOCOUNT ON 
 
DECLARE @ResultList TABLE ( 
	ProductId INT, 
	ProductName Name 
	);
 
insert @ResultList (ProductId, ProductName)
select distinct TOP (2000) p.ProductId, p.Name
from sales.salesorderdetail sod
join sales.salesorderheader soh 
    on soh.SalesOrderID = sod.SalesOrderID
join Production.Product p
    on p.ProductID = sod.ProductID
join Sales.Customer c
    on c.CustomerID = soh.CustomerID
join @CustomerIds CIds
    on CIds.id = c.CustomerID
join @ProductIds PIds
	on PIds.id = p.ProductID
where p.Name like '%' + @SearchString + '%'
	and soh.[Status] = 5
 
IF	@@ROWCOUNT = 0 
	SELECT	0 ProductId, 'No Matching Products' ProductName
ELSE 
	SELECT	 ProductId, ProductName
	FROM	 @ResultList 
	ORDER BY ProductId

These are the results I get:

ProductId ProductName
707 Sport-100 Helmet, Red

Executing the Procedure

Now run the procedure which contains the exact same query:

DECLARE 
	@ProductIds IDList,
	@CustomerIds IDList,
	@SearchString Name = 'Red';
insert @ProductIds (id) VALUES (707), (717), (727)
insert @CustomerIds (id) VALUES (11000), (11103), (11201)
 
exec dbo.s_GetCustomerProductPurchases @ProductIds, @CustomerIds, @SearchString

The results we get now look like:

ProductId ProductName
0 No Matching Products

So Why Are These Results Different?

That’s what I want you to figure out. Try to crack this one before looking at the comments.

SQL Fiddle

Have you used SQL Fiddle yet? It’s a site that allows SQL developers to set up different scenarios for use in online discussions. So if you don’t have Adventureworks2012 handy, I’ve set up two SQL Fiddles here and here. Your job then is to follow these two links and explain why the results are different.

September 26, 2012

A Quick SQL Challenge

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:21 pm

So I was recently involved in an issue where a particular Unicode string was causing some problems in an application. The string was coming from a production database, and I was asked to help reproduce the problem. It boiled down to this:

Can you write a SQL script which produces a string containing every possible Unicode character?

That’s every character from 0x0000 (NUL) all the way up to 0xffff (which is not even legal Unicode). If you want a crack at it, start with the code here:

declare @allNCHARS nvarchar(max);
 
/*
Your answer would fit here.
*/
 
UPDATE MYTABLE 
SET Value = @allNCHARS
WHERE Id = 1;

My own answer is in the comments. Maybe you can come up with something more elegant.
But it’s a handy string. If you’re in QA, it can help exercise a lot of different text issues (code pages, encodings, conversions, etc.).
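And if you just want something that works before peeking at the comments, a brute-force loop does the job (hardly elegant, but it builds the full string):

declare @allNCHARS nvarchar(max) = N'';
declare @i int = 0;
 
-- append every code point from 0x0000 through 0xffff
while (@i <= 65535)
begin
    set @allNCHARS += nchar(@i);
    set @i += 1;
end
 
select datalength(@allNCHARS) / 2 as [character count]; -- expect 65536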

In my own case, the string actually helped me. I was able to reproduce the issue on a local developer machine. A colleague of mine then did more detective work and squashed that bug, so it was a bit of a tag team effort there.

September 12, 2012

When I Use Nested Joins

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Tags: — Michael J. Swart @ 12:00 pm

I want to explain how I avoid RIGHT OUTER joins in favor of LEFT OUTER joins and how I avoid OUTER joins in favor of INNER joins.

There's no place bitchin-er than Kitchener

Inner joins have no direction, but outer joins do, so we have three kinds of joins:

  • INNER JOIN (JOIN for short)
  • LEFT OUTER JOIN (LEFT JOIN for short)
  • RIGHT OUTER JOIN (RIGHT JOIN for short)

I leave out FULL OUTER JOINs for now because I never use them. And in fact RIGHT OUTER JOINs can always be rewritten as LEFT OUTER JOINs, so in practice I use just two kinds of joins:

  • INNER JOIN
  • LEFT OUTER JOIN

Now as a rule of thumb, inner joins are more efficient than outer joins, so it would be better to write queries that avoid outer joins. Let’s get to an example. My goal here is to write a query that gives me a list of employees and their director (if any) based on this schema:

CREATE TABLE STAFF
(
    Id NVARCHAR(20) NOT NULL PRIMARY KEY,
    Name NVARCHAR(400) NOT NULL,
    Department NVARCHAR(20),
    Role NVARCHAR(20) NOT NULL
)
 
CREATE TABLE BOSSES
(
    EmployeeId NVARCHAR(20)
        REFERENCES STAFF(Id),
    BossId nvarchar(20)
        REFERENCES STAFF(Id),
    PRIMARY KEY (EmployeeId, BossId)
)

BOSSES is a table that contains not just direct reports, but all direct and indirect reports (making it handy for this query).
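If you’d like to run the queries below yourself, a couple of made-up rows are enough (the names and IDs here are just for illustration; they’re not from the original post):

INSERT STAFF (Id, Name, Department, Role) VALUES
    (N'E1', N'Dana',  N'IT',    N'Director'),
    (N'E2', N'Omar',  N'IT',    N'Developer'),
    (N'E3', N'Priya', N'Sales', N'Developer'); -- Priya reports to no director
 
INSERT BOSSES (EmployeeId, BossId) VALUES
    (N'E2', N'E1'); -- Omar reports (directly or indirectly) to Dana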

Using a RIGHT JOIN

The answer is fairly straightforward. I join the BOSSES table with the STAFF table to give me all the directors and their reports:
Two inner joins

select S_Employee.Name, S_Boss.Name as Director
from STAFF S_Boss
join BOSSES B
    on B.BossId = S_Boss.Id
join STAFF S_Employee
	on B.EmployeeId = S_Employee.Id
where S_Boss.Role = 'Director'

But wait, this isn’t a complete list of employees. What about those in the company who don’t report to any director (not even indirectly)? This is where the right outer join comes in:
An inner join and a right join

select S_Employee.Name, S_Boss.Name as Director
from STAFF S_Boss
join BOSSES B
    on B.BossId = S_Boss.Id
RIGHT join STAFF S_Employee
    on B.EmployeeId = S_Employee.Id
    and S_Boss.Role = 'Director'

Notice that the S_Boss.Role filter can’t belong to the where clause any more. If it did, we’d lose director-less employees again and we’d be back where we started.

Using LEFT JOINS Only

That works… but for the sake of style, let’s only use left joins. I prefer using only left joins in this case because logically, the results are meant to be the set of employees. So I like to start with that list of employees as the first table and then join in the bosses if necessary. The other joined tables aren’t the stars of this query; they’re only there to help provide that director attribute. So I have this:
Two left joins

select S_Employee.Name, S_Boss.Name as Director
from STAFF S_Employee
left join BOSSES B
    on B.EmployeeId = S_Employee.Id
left join STAFF S_Boss
    on B.BossId = S_Boss.Id
    and S_Boss.Role = 'Director'

But notice that I’m now using two left joins… Really I only want the one outer join that I was using in the first example. It turns out I can do that:

Using Nested Joins

Well that just looks like this:
One left join

select S_Employee.Name, S_Boss.Name as Director
from STAFF S_Employee
left join BOSSES B
    join STAFF S_Boss
        on B.BossId = S_Boss.Id
        and S_Boss.Role = 'Director'
    on B.EmployeeId = S_Employee.Id

This is logically equivalent to the right join but it uses left joins instead and only uses one outer join. Great!

A lot of this example is just about style. Maybe you prefer the right join example and think that I’m breaking some grammar rule here. It feels like I’m splitting an infinitive or something. Let me know what your preference is.
