Michael J. Swart

December 4, 2013

Overly Complex Views, Procedures And Functions

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 4:03 pm

Takeaway: I define a metric which indicates a code-smell for overly complex views, procedures and functions: “FROM” count.

In the programming world, developers have defined metrics for how complex a piece of code is. For example, Visual Studio defines several metrics that are meant to give developers some idea about how complex their code is getting. These measures won’t be perfect because code complexity is not something that can be measured directly. But many think that these measures indicate complexity often enough to be useful.

Code Smells

That’s what a code smell is meant for. My own definition is that a code smell is an anti-rule-of-thumb <cough>cursors!</cough>. It doesn’t necessarily indicate something’s rotten, but like a bad smell, it’s worth checking into.

Measuring Complexity in SQL Modules

And here’s my idea about measuring complexity for SQL. It’s based on the premise that complex procedures or views will have many queries/subqueries and that most queries have FROM clauses. So what are my procedures/views/functions that may be too complex? Let’s find out:

DECLARE @Odor NVARCHAR(30) = N'FROM';
 
with
   L0 as (select 1 as C union all select 1)       --2 rows
  ,L1 as (select 1 as C from L0 as A, L0 as B)    --4 rows
  ,L2 as (select 1 as C from L1 as A, L1 as B)    --16 rows
  ,L3 as (select 1 as C from L2 as A, L2 as B)    --256 rows
  ,L4 as (select 1 as C from L3 as A, L3 as B)    --65536 rows
  ,Nums as (select row_number() over (order by (select 0)) as N from L4)
SELECT 
    OBJECT_SCHEMA_NAME(m.object_id) as SchemaName,
    OBJECT_NAME(m.object_id) as ObjectName, 
    count(1) as OdorCount
FROM Nums
CROSS JOIN sys.sql_modules m
WHERE Nums.N < LEN(m.definition)
    AND SUBSTRING(m.definition, Nums.N, LEN(@Odor)) = @Odor
GROUP BY m.object_id
ORDER BY count(1) desc, object_name(m.object_id) asc

I’ve found the metric slightly better than the simpler “procedure size” metric:

SELECT 
    OBJECT_SCHEMA_NAME(m.object_id) as SchemaName,
    OBJECT_NAME(m.object_id) as ObjectName, 
    LEN(m.definition) as ModuleSize
FROM sys.sql_modules m
ORDER BY LEN(m.definition) desc, object_name(m.object_id) asc

Try it out on your own environments and let me know if it identifies the monster procedures you know are lurking in your database.

Thanks to Adam Machanic for the substring counting syntax.

Performance Improvement!

Update December 5, 2013: In the comments, George Mastros provided a simpler and faster version of this query which does the same thing:

DECLARE @Odor NVARCHAR(30) = N'FROM';
 
Select	OBJECT_SCHEMA_NAME(object_id) As SchemaName, 
        OBJECT_NAME(object_id) As ObjectName, 
        (DataLength(definition) - DataLength(Replace(definition, @Odor, ''))) / DataLength(@Odor) As OdorCount
From	sys.sql_modules
Order By OdorCount DESC;

This is much simpler and much faster. What’s extra interesting is that George’s query itself has an odor count of 2 while my original one had a count of 7. Thanks so much George!

October 30, 2013

“You’ve got access, but I’m not sure why”

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:00 am

Takeaway: Use “xp_logininfo” when curious about how and why someone has access to a database.

I had this I.M. conversation with a colleague at work yesterday.

Me: What was that server name again?
Stephen: It was SQL001
Me: Thanks I’m in.
Stephen: eyeballing security… I don’t see you on that list so if you do actually have access I’m confused

To be honest, I didn’t know why I had access either. Normally, to examine access, I look at security logins and look for users or groups that might match my own credentials. I do that here in SQL Server Management Studio’s Object Explorer:

Looking At Logins

The way I used to look at logins

In this case it’s not immediately obvious why I have access and what level of access I have.

Enter xp_logininfo

So I had this exchange with Allen Kinsel (@AllenKinsel) (via #sqlhelp on twitter):

Allen Kinsel saves the day

xp_logininfo works beautifully for what I wanted to do.

It’s a command that’s too obscure. I think xp_logininfo should be more widely known (hence this blog post). And when I run:

exec xp_logininfo 'MYDOMAIN\mswart', 'all'

I get:

account name      type   privilege   mapped login name   permission path
MYDOMAIN\mswart   user   user        MYDOMAIN\mswart     MYDOMAIN\SupportGroup
MYDOMAIN\mswart   user   admin       MYDOMAIN\mswart     BUILTIN\Administrators

This tells me exactly what kind of access I have and why. In this case, it looks like I have access for two reasons: first, because I belong to the support group, and second, because I’m part of the “BUILTIN\Administrators” group. This is so much better than eyeballing a list of logins.
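By the way, xp_logininfo can also go the other direction. If I’m curious about who else gets access through that support group, I can ask for the group’s members (I’m using the group name from my example above; substitute your own):

exec xp_logininfo 'MYDOMAIN\SupportGroup', 'members'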

xp_logininfo: It’s worth remembering.

September 19, 2013

My Queries For Top 20 Resource Hogs

Filed under: SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: This post is for me. This post is a handy place for me to put some queries that I use often. If you find them useful too, that’s wonderful, bookmark this page with http://michaeljswart.com/go/Top20.

These queries will give the top 20 resource consumers for cached queries based on a few different metrics. I’ve posted queries like this before, and others have written many other versions of this query. All these queries are based on sys.dm_exec_query_stats.

But I’ve tweaked my own queries recently based on a couple things I’ve learned. So you could say that I gave my old queries a new coat of paint.

Here’s what I’ve added to these queries recently:

  • I added OPTION (RECOMPILE) to the query. It prevents these queries from showing up in their own results on quiet systems.
  • Joe Sack pointed out that query_hash can be used to aggregate queries that only differ by literal values. I’m doing that here. What’s extra awesome about this technique is that it can also be used to aggregate the same queries that were executed in different procedures or in different databases. This is critical if we want to measure the impact of a single query on a server regardless of where the query is called.
  • I’m ignoring (for now) those queries with query_hash equal to 0x0. Again, Joe Sack points out that these are cursor fetches.
  • You may notice that the database name is sometimes null. It’s null when the query is not part of a stored procedure. I suppose if I was keen, I could pick out an arbitrary database name from the accompanying query plan.
  • Update September 27, 2013: Actually, in the comment section of this post, Ian Stirk gave me a really nice way to retrieve the database name from the DMVs. I’ve updated the queries below accordingly.
  • Remember that the stats found in sys.dm_exec_query_stats are only as good as what’s in cache. The cache is not a permanent store. It changes and it gets reset every server restart.

(Download all queries)

Or check them out individually below:

Top 20 Executed Queries

These are queries that run often. Frequent queries can be vulnerable to concurrency problems.

;with frequent_queries as
(
    select top 20 
        query_hash, 
        sum(execution_count) executions
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(execution_count) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(ST.objectid, ST.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.execution_count,
    executions as total_executions_for_query,
    SUBSTRING(ST.TEXT,(QS.statement_start_offset + 2) / 2,
        (CASE 
            WHEN QS.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),ST.text)) * 2
            ELSE QS.statement_end_offset
            END - QS.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join frequent_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.executions desc,
    fq.query_hash,
    qs.execution_count desc
option (recompile)

Top 20 I/O Consumers

Specifically logical reads and writes. Still my favorite metric.

;with high_io_queries as
(
    select top 20 
        query_hash, 
        sum(total_logical_reads + total_logical_writes) io
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_logical_reads + total_logical_writes) desc
)
select @@servername as servername,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(ST.objectid, ST.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_logical_reads + total_logical_writes as total_io,
    qs.execution_count,
    cast((total_logical_reads + total_logical_writes) / (execution_count + 0.0) as money) as average_io,
    io as total_io_for_query,
    SUBSTRING(ST.TEXT,(QS.statement_start_offset + 2) / 2,
        (CASE 
            WHEN QS.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),ST.text)) * 2
            ELSE QS.statement_end_offset
            END - QS.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_io_queries fq
    on fq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by fq.io desc,
    fq.query_hash,
    qs.total_logical_reads + total_logical_writes desc
option (recompile)

Top 20 CPU Consumers

Another popular metric for measuring work done.

;with high_cpu_queries as
(
    select top 20 
        query_hash, 
        sum(total_worker_time) cpuTime
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_worker_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(ST.objectid, ST.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_worker_time as cpu_time,
    qs.execution_count,
    cast(total_worker_time / (execution_count + 0.0) as money) as average_CPU_in_microseconds,
    cpuTime as total_cpu_for_query,
    SUBSTRING(ST.TEXT,(QS.statement_start_offset + 2) / 2,
        (CASE 
            WHEN QS.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),ST.text)) * 2
            ELSE QS.statement_end_offset
            END - QS.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join high_cpu_queries hcq
    on hcq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by hcq.cpuTime desc,
    hcq.query_hash,
    qs.total_worker_time desc
option (recompile)

Top 20 Queries By Elapsed Time

Results found here, but not in the results of the other queries, usually suffer from things like excessive blocking or ASYNC_NETWORK_IO.

;with long_queries as
(
    select top 20 
        query_hash, 
        sum(total_elapsed_time) elapsed_time
    from sys.dm_exec_query_stats 
    where query_hash <> 0x0
    group by query_hash
    order by sum(total_elapsed_time) desc
)
select @@servername as server_name,
    coalesce(db_name(st.dbid), db_name(cast(pa.value AS INT)), 'Resource') AS [DatabaseName],
    coalesce(object_name(ST.objectid, ST.dbid), '<none>') as [object_name],
    qs.query_hash,
    qs.total_elapsed_time,
    qs.execution_count,
    cast(total_elapsed_time / (execution_count + 0.0) as money) as average_duration_in_microseconds,
    elapsed_time as total_elapsed_time_for_query,
    SUBSTRING(ST.TEXT,(QS.statement_start_offset + 2) / 2,
        (CASE 
            WHEN QS.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),ST.text)) * 2
            ELSE QS.statement_end_offset
            END - QS.statement_start_offset) / 2) as sql_text,
    qp.query_plan
from sys.dm_exec_query_stats qs
join long_queries lq
    on lq.query_hash = qs.query_hash
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan (qs.plan_handle) qp
outer apply sys.dm_exec_plan_attributes(qs.plan_handle) pa
where pa.attribute = 'dbid'
order by lq.elapsed_time desc,
    lq.query_hash,
    qs.total_elapsed_time desc
option (recompile)

By the way. This post seems to be really popular. So I waive any copyright I have on these four queries. Copy them without attribution wherever you like. Profit if you can. Go nuts.

September 11, 2013

Three More Tricky Tempdb Lessons

Filed under: SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: Tempdb contention is a common DBA problem. I want to share three lesser known surprises about everyone’s favorite bottleneck.

If my professional life were made into a series, tempdb contention would have a recurring role as the antagonist. It provides challenges on a regular basis and I finally believe I understand its behavior well enough to have contention licked for good. But even recently, tempdb still holds some surprises for me.

You Can Still Get SGAM Contention After TF1118

You may know what trace flag TF1118 does. It affects the allocation of pages in all databases. But it’s most often used for its effect in tempdb. The idea is that TF1118 will cause SQL Server to store all table data in dedicated extents rather than mixed extents. Without mixed extents, the need to access SGAM pages is reduced and that leads to less contention as described in this KB article.

But the use of SGAM isn’t reduced to zero. What?! How can that be? Who’s still using mixed extents? It turns out that IAM pages are always stored on mixed extents. Every time any object is created (or destroyed), that requires at least one latch on an SGAM page. On an extremely busy system, you can still see contention here.
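If you suspect you’re hitting this, here’s a rough sketch of a query I’d use to look for latch waits on tempdb allocation pages. Resources like 2:1:1 (PFS), 2:1:2 (GAM) and 2:1:3 (SGAM) are the usual suspects:

select session_id,
    wait_type,
    wait_duration_ms,
    resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGELATCH%'
    and resource_description like '2:%'   -- tempdb is database id 2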

Equally Sized Data Files Can Become Unbalanced After Autogrow

That’s not the surprising thing. The surprising thing is that it can impact the effectiveness of having multiple data files and it needs to be kept in mind. Once again from Microsoft:

If data files [have] unequal sizes, the proportional fill algorithm tries to use the largest file more for GAM allocations instead of spreading the allocations between all the files.

This makes multiple data files less effective than they could be when tackling contention. But don’t disable auto-grow; just size your tempdb data files large enough up front (tip: google “instant file initialization”).

Before we move on, if your tempdb data files have auto-grown, do you know what the initial sizes were?
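As a quick check (just a sketch), this compares the current sizes of the tempdb data files. If autogrow has made them unequal, proportional fill will favour the larger ones:

select name,
    size * 8 / 1024 as size_mb,     -- size is reported in 8KB pages
    growth,
    is_percent_growth
from tempdb.sys.database_files
where type_desc = 'ROWS'
order by size desc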

Tempdb Contention Can Be Sudden and Catastrophic

I hadn’t seen this before. Most contention I’m familiar with looks like a “sluggish” database. The response times on particular queries just aren’t what they ought to be – getting gradually worse until the impact is felt hard. But just recently I have seen tempdb contention show up all-of-a-sudden making the database nearly useless. Especially after excessive blocking kicks in. Here’s how that can happen:

  • Imagine a super-frequent procedure creates a temp table.
  • Several of these temp tables are cached. (There can be several execution contexts for the procedure and so several temp tables).
  • A table used in the procedure has enough modified rows to have its statistics auto-updated.
  • The procedure’s query plan is invalidated and is dropped from the cache.
  • The cached temp tables that were used for the procedure are marked to be destroyed.
  • The procedure is recompiled and new temp tables are created.
  • A system process destroys the marked temp tables that need to be cleaned up.

With a very busy system, these things can happen in only a few seconds. In fact, those last two items describe actions where hundreds of threads can get into a fight over allocation pages.

The following query may give a list of query plans for frequently called procedures as described:

with frequentSprocs as 
(
    select top 10 count(1) as [memory objects], cp.plan_handle from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_cached_plan_dependent_objects(cp.plan_handle) do
    join sys.dm_os_memory_objects mo
        on do.memory_object_address = mo.memory_object_address
    where objtype = 'Proc'
    group by cp.plan_handle
    order by 1 desc
)
select fs.*, qp.query_plan
from frequentSprocs fs
cross apply sys.dm_exec_query_plan(fs.plan_handle) qp
option (recompile)

What Next?

I’m lucky to work with software that runs on systems that are crazy powerful and crazy busy. It’s unlikely that you’ll run into tempdb problems in the exact same way that we did. So the typical advice for PFS/GAM/SGAM contention is still valid:

  • Enable trace flag TF 1118
  • Create multiple (equally sized) data files for tempdb
  • Reduce how frequently you use temp tables and table variables
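For reference, here’s a rough sketch of what the first two items look like as commands. The file names, path and sizes below are placeholders; pick values that suit your environment:

-- Trace flag 1118 (also consider adding -T1118 as a startup parameter so it survives restarts)
DBCC TRACEON (1118, -1);
 
-- Size the existing tempdb data file and add a second, equally sized file
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4096MB);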

 

August 15, 2013

A Tuning Technique For Parallel Queries

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: I describe a technique I use to tune parallel queries. I tune the serial version of the query using MAXDOP 1. It’s a simpler query plan. Faster understanding leads to faster solutions.

In a parallel universe, I'm blogging about cartography and projection mapping.

How better to explain than by example!

The Example

For this example, I use AdventureWorks2012 and I beef up the sales order tables using Jonathan Kehayias’s script Enlarging the AdventureWorks Sample Databases. For my example, I want to know which salesperson sold the most red items in their last ten orders (looking at orders with at least one red item).

My first crack at the query gives me this:

SELECT P.FirstName, P.LastName, SUM(OrderQty) Items
FROM Person.Person P
CROSS APPLY (
    SELECT TOP 10 SUM(OrderQty) as OrderQty, H.SalesOrderID
    FROM Sales.SalesOrderDetailEnlarged D
    JOIN Sales.SalesOrderHeaderEnlarged H
        ON D.SalesOrderID = H.SalesOrderID
    JOIN Production.Product Pr
        ON D.ProductID = Pr.ProductID
    WHERE H.SalesPersonID = P.BusinessEntityID
        AND Pr.Color = 'red'
    GROUP BY H.SalesOrderID, H.ShipDate
    ORDER BY H.ShipDate DESC
) AS LastTenOrdersWithRedItems
GROUP BY P.BusinessEntityID, P.FirstName, P.LastName
ORDER BY SUM(OrderQty) DESC

Great! That query gives me the Adventureworks salespeople and the number of red items they’ve sold in their last ten orders. But the execution time is a bit slow, so I decide to tune the query. The query plan looks like this:
The query plan for the previous query

There are a couple of things that jump out at me right away: some of the thick lines tell me that there’s a lot of data getting processed. There’s also a warning that tells me that tempdb was used for a sorting operation. And in general, I notice that there are a lot of operators here, including parallelism operators.

So now I’m going to try to look at a serial query plan for the same query. I have to tell SQL Server that the maximum degree of parallelism it should consider is one thread. That’s done with the MAXDOP query hint like this:

SELECT P.FirstName, P.LastName, SUM(OrderQty) Items
FROM Person.Person P
CROSS APPLY (
    SELECT TOP 10 SUM(OrderQty) as OrderQty, H.SalesOrderID
    FROM Sales.SalesOrderDetailEnlarged D
    JOIN Sales.SalesOrderHeaderEnlarged H
        ON D.SalesOrderID = H.SalesOrderID
    JOIN Production.Product Pr
         ON D.ProductID = Pr.ProductID
    WHERE H.SalesPersonID = P.BusinessEntityID
        AND Pr.Color = 'red'
    GROUP BY H.SalesOrderID, H.ShipDate
    ORDER BY H.ShipDate DESC
) AS LastTenOrders
GROUP BY P.BusinessEntityID, P.FirstName, P.LastName
ORDER BY SUM(OrderQty) DESC
OPTION (MAXDOP 1) -- this line has been added

The hint is in there, but it’s not meant to be a permanent thing. I put it in there in order to look at the serial version of the query plan. Sure enough, it looks simpler and it’s easier to examine:

The query plan for the previous query

The information is still the same, but to me it’s a little clearer where the problem is. I see that SalesOrderHeader is scanned completely and then spooled (one iteration per person) and then sorted by date. I also see that there’s a lot of data coming from Person.Person.

So in this case, I may decide to add an index on SalesOrderHeader(SalesPersonId, ShipDate) like this:

CREATE INDEX IX_SalesOrderHeaderEnlarged_SalesPersonShipDate 
    ON Sales.SalesOrderHeaderEnlarged(SalesPersonId ASC, ShipDate DESC)

A Bonus Optimization
There’s another optimization I’d like to consider. Notice that all 19,972 rows in the Person.Person table are scanned. Well I can do better than that. I can join Person.Person to HumanResources.Employee because I know that all salespeople are employees. The join acts as a filter and it really cuts down on processing. The query now looks like:

SELECT P.FirstName, P.LastName, SUM(OrderQty) Items
FROM Person.Person P
JOIN HumanResources.Employee E
    ON P.BusinessEntityID = E.BusinessEntityID
CROSS APPLY (
    SELECT TOP 10 SUM(OrderQty) as OrderQty, H.SalesOrderID
    FROM Sales.SalesOrderDetailEnlarged D
    JOIN Sales.SalesOrderHeaderEnlarged H
        ON D.SalesOrderID = H.SalesOrderID
    JOIN Production.Product Pr
        ON D.ProductID = Pr.ProductID
    WHERE H.SalesPersonID = P.BusinessEntityID
        AND Pr.Color = 'red'
    GROUP BY H.SalesOrderID, H.ShipDate
    ORDER BY H.ShipDate DESC
) AS LastTenOrders
GROUP BY P.BusinessEntityID, P.FirstName, P.LastName
ORDER BY SUM(OrderQty) DESC

With the new index, this query performs about as well as we can hope. The new plan now looks like this:

The query plan for the previous query

Why this Works For Me

Adding MAXDOP 1 is a quick way to give me a serial version of the query plan and the serial version of the plan is simpler. This trick saves me time. Or it has often enough that it’s worthwhile to try.

Another reason this works for me is because the queries I write and maintain are often required to run sub-second. You might be in the same boat if you write your queries for high volume OLTP systems. This usually means that there’s a high standard for query performance. Acceptably tuned queries are fast enough that SQL Server will rarely decide to consider parallel query plans. And that means that it’s tempting to look for parallelism to find candidates for poorly tuned queries.

Okay, Here Come The Caveats

  • Reading the title, some will assume that I’m claiming a parallel query plan is a bad query plan. Absolutely not. Queries that are executed with parallel plans are queries that do a lot of work. That’s not bad in itself. Sometimes queries need to do a lot of work. The question that should be asked is whether these queries are doing too much work.
  • I’m not claiming MAXDOP 1 is a tuning technique in itself. It’s just a trick I use to look at a query from a different point of view. If your query is a thousand times faster after adding MAXDOP 1 and you don’t know why, then you’re not done yet.
  • Parallel execution plans are not dull or useless! They’re fascinating! I’ve learned that the mental effort spent following a Paul White post on parallelism always pays off.

July 3, 2013

It Took Me Ages To Figure This One Out: A Puzzle

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:00 pm

Takeaway: I tackled a problem recently and I present a version of the problem here as a challenge. The challenge is to figure out why a query and a stored procedure return different results even though the procedure is defined using the same query.

If no one cracks this one, I’ll leave the answer in the comments in a couple days.

Too Easy

The Setup

Okay, I use AdventureWorks2012 here. I define a table data type IDList and a stored procedure s_GetCustomerProductPurchases:

use Adventureworks2012
go
 
create type IDList as TABLE(id int);
go
 
CREATE PROCEDURE dbo.s_GetCustomerProductPurchases 
	@CustomerIds IDList READONLY,
	@ProductIds IDList READONLY,
	@SearchString NVARCHAR(1000)
AS
SET NOCOUNT ON 
 
DECLARE @ResultList TABLE ( 
	ProductId INT, 
	ProductName Name 
	);
 
insert @ResultList (ProductId, ProductName)
select distinct TOP (2000) p.ProductID, p.Name
from sales.salesorderdetail sod
join sales.salesorderheader soh 
    on soh.SalesOrderID = sod.SalesOrderID
join Production.Product p
    on p.ProductID = sod.ProductID
join Sales.Customer c
    on c.CustomerID = soh.CustomerID
join @CustomerIds CIds
    on CIds.id = c.CustomerID
join @ProductIds PIds
	on PIds.id = p.ProductID
where p.Name like '%' + @SearchString + '%'
	and soh.[Status] = 5
 
IF	@@ROWCOUNT = 0 
	SELECT	0 ProductId, 'No Matching Products' ProductName
ELSE 
	SELECT	 ProductId, ProductName
	FROM	 @ResultList 
	ORDER BY ProductId

Executing the Query

Next, let’s run a query against Adventureworks2012. We define some variables that are used as filters in a query. This query populates a temporary results table and then we select from it:

DECLARE 
	@ProductIds IDList,
	@CustomerIds IDList,
	@SearchString Name = 'Red';
insert @ProductIds (id) VALUES (707), (717), (727)
insert @CustomerIds (id) VALUES (11000), (11103), (11201)
 
SET NOCOUNT ON 
 
DECLARE @ResultList TABLE ( 
	ProductId INT, 
	ProductName Name 
	);
 
insert @ResultList (ProductId, ProductName)
select distinct TOP (2000) p.ProductId, p.Name
from sales.salesorderdetail sod
join sales.salesorderheader soh 
    on soh.SalesOrderID = sod.SalesOrderID
join Production.Product p
    on p.ProductID = sod.ProductID
join Sales.Customer c
    on c.CustomerID = soh.CustomerID
join @CustomerIds CIds
    on CIds.id = c.CustomerID
join @ProductIds PIds
	on PIds.id = p.ProductID
where p.Name like '%' + @SearchString + '%'
	and soh.[Status] = 5
 
IF	@@ROWCOUNT = 0 
	SELECT	0 ProductId, 'No Matching Products' ProductName
ELSE 
	SELECT	 ProductId, ProductName
	FROM	 @ResultList 
	ORDER BY ProductId

These are the results I get:

ProductId ProductName
707 Sport-100 Helmet, Red

Executing the Procedure

Now run the procedure which contains the exact same query:

DECLARE 
	@ProductIds IDList,
	@CustomerIds IDList,
	@SearchString Name = 'Red';
insert @ProductIds (id) VALUES (707), (717), (727)
insert @CustomerIds (id) VALUES (11000), (11103), (11201)
 
exec dbo.s_GetCustomerProductPurchases @ProductIds, @CustomerIds, @SearchString

The results we get now look like:

ProductId ProductName
0 No Matching Products

So Why Are These Results Different?

That’s what I want you to figure out. Try to crack this one before looking at the comments.

SQL Fiddle

Have you used SQL Fiddle yet? It’s a site that allows SQL developers to set up different scenarios for use in online discussions. So if you don’t have Adventureworks2012 handy, I’ve set up two SQL Fiddles here and here. Your job then is to follow these two links and explain why the results are different.

April 17, 2013

The Sch-M lock is Evil

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

A necessary evil, but still evil. Why? Because it won’t share with Sch-S, and Sch-S is taken and held for absolutely everything (yes, even your NOLOCK queries). And that can lead to some interesting concurrency problems. Let me explain.

Sch-M

Sch-M is an abbreviation for Schema Modification. It refers to a kind of lock that is taken on a table/index or other object whenever you want to modify that object. SQL Server allows only one Sch-M lock on an object at a time. So if you want to modify a table, your process waits to take a Sch-M lock on that table. Once that lock is granted, the modification is completed and then the lock is released.

Sch-S

Sch-S is an abbreviation for Schema Stability. It is a kind of lock that is taken on an object when a process doesn’t want that object to change its definition. It makes sense. If I’m reading a set of rows through a table, I don’t want a column to disappear on me half way through. SQL Server allows many Sch-S locks on a table.

Sch-S vs. Sch-M

But Sch-S locks are incompatible with Sch-M locks. This means that when you want to modify a table, you’re not granted a Sch-M lock immediately. You have to wait for everyone using that table to finish using it. You’re essentially put on hold until the existing queries complete and their existing Sch-S locks are released. This also means that while you’re waiting, every query that wants to begin using that table is waiting in line behind you too. Basically, “Everybody outta the pool while the lifeguards change shifts.” But that’s usually acceptable, right? Database schema modifications are a big enough change to require a maintenance window.
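When that pile-up happens, you can watch the queue form. Here’s a rough sketch of a query that lists object-level schema lock requests, so you can see who holds the Sch-S locks and who’s waiting on the Sch-M:

select request_session_id,
    db_name(resource_database_id) as database_name,
    object_name(resource_associated_entity_id, resource_database_id) as object_name,
    request_mode,
    request_status     -- GRANT, WAIT or CONVERT
from sys.dm_tran_locks
where resource_type = 'OBJECT'
    and request_mode in ('Sch-S', 'Sch-M')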

Index Rebuilds Are Table Modifications

It’s true, if you have the luxury of maintenance windows for your DB changes, you’ll be alright. But you also have to consider your database maintenance plans (automated or otherwise). Those plans can launch index rebuilds while the database is online. And all index rebuilds also count as table modifications and take Sch-M locks. An index rebuild has syntax like this:

ALTER INDEX [PK_MyTable] ON [MyTable] REBUILD WITH (ONLINE=ON)

Hopefully you’ve remembered that ONLINE=ON part. When you use that part, the index is rebuilt in the background and at the end of that processing time, a Sch-M lock is taken and released very quickly.

But maybe you’re not so lucky. Maybe you’re not running 2012 yet and have an index that includes blobs. Or maybe you’re running on Standard Edition. In those cases you won’t be able to use the ONLINE=ON feature. In that case, the Sch-M lock is taken by the rebuild process and it’s held the entire time that index is rebuilt. During the rebuild, that index is now truly offline. No access for you.

You Can Get Stuck

Just like I did. A while ago, I was asked to help with this exact situation. An index rebuild had been running for hours. It was an offline rebuild, and the Sch-M lock it held was preventing anybody from using or even looking at that table. I was stuck between a rock and a hard place. I had to choose between letting the index rebuild complete (which could take hours) or cancelling the job (whose rollback could take hours). There was nothing I could do to avoid additional hours of downtime. There was another bit of irony in my situation: we didn’t care about the existing data in that table. A truncate table or drop/recreate table would have suited us just fine.

… Like Really Stuck

It occurred to me to try something out. What if I created an identical empty table with a different name? We didn’t need any data in the locked table, so using a new table could work. And because the locked table is only accessed by stored procedures, I could modify those procedures to use the new table instead of the locked one.

Nope!

For some reason, the ALTER PROCEDURE requires a Sch-S lock on the old table, the table it no longer refers to. The sprocs can’t operate or be modified without a Sch-S lock on that locked table. This sketch illustrates my situation.

Make sure your index plans never attempt to rebuild big indexes offline.

Extras

Some relevant DBAReactions gifs:

April 3, 2013

Altering Text Columns: Only a Metadata Change?

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 8:00 am

Say you want to change the type of a text column using the ALTER TABLE … ALTER COLUMN syntax. It is valuable to know how much work SQL Server will have to do to fulfill your request. When your tables are large, it can mean the difference between a maintenance window that lasts five minutes, or one that lasts five hours or more.

I give a list of exactly when you’ll feel that pain and when you won’t.

A joke about char(max)

(BTW, CHAR(MAX) columns are impossible, you’ll get the gag if you figure out why)

When is the Whole Table Processed?

Here are conditions which require processing the entire table:

  • switching from unicode to non-unicode or vice versa.
  • changing a column from nullable to not nullable.
  • going from fixed length field to variable length field or vice versa.
  • decreasing the maximum length of a field.
  • increasing the maximum length of a fixed length field.
  • converting from limited length columns to unlimited or vice versa. (e.g. varchar(10) to varchar(max)).
  • collation modifications on non-unicode columns that change character set or code page. (See Collation Hell Part 3 by Dan Guzman)

On large tables, any of the above alterations will be a heavy hitter and will take time and fill the transaction log (except that a shrink of fixed length fields seems to require only a scan).

What Changes are Metadata Only Changes?

That’s a lot of conditions! What alterations are left?

Not much:

  • Increasing the maximum length of a variable length column.
  • Changing the type from text to varchar(max).
  • Changing the type from ntext to nvarchar(max).
  • Any of the above while making a non-nullable field nullable.
  • Any of the above with a change in collation (with some big caveats, see Collation Hell Part 3 by Dan Guzman).

These changes are metadata only changes which means SQL Server doesn’t have to touch any of the actual data. So the size of the table will not impact the time it takes SQL Server to process the ALTER TABLE command. But see some notes about concurrency below.
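For example (a sketch against a hypothetical dbo.Widgets table with a nullable varchar(100) column), the first statement below is a metadata-only change while the second processes the whole table because it switches character sets:

-- Metadata only: increasing the maximum length of a variable length column
ALTER TABLE dbo.Widgets ALTER COLUMN Description varchar(200) NULL;
 
-- Whole table processed: converting from non-unicode to unicode
ALTER TABLE dbo.Widgets ALTER COLUMN Description nvarchar(200) NULL;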

Some Notes

Some notes about the above:

  • I ignore and make no claims about migrations where text or ntext is the target column type because the exceptions are strange and that scenario seems to fall under “Why would you want to do that?”
  • The above applies only to the versions I’ve tested, specifically 2008 and 2012.
  • The metadata-only changes I described above are not entirely online. There are still concurrency concerns to watch out for. These ALTER statements still request Schema modification (Sch-M) locks on the table, and once granted, only hold them briefly. But if you try to alter a column on a live environment and some long running query blocks your ALTER TABLE statement, then other queries that need access to the table will be blocked as well.
  • Terms I used
    • fixed length: char(x), nchar(x)
    • variable length: varchar(x), nvarchar(x)
    • unlimited length: varchar(max), nvarchar(max), text, ntext
    • unicode: nchar(x), nvarchar(x), nvarchar(max), ntext
    • non-unicode: char(x), varchar(x), varchar(max), text

March 19, 2013

Checking Out Amazon Redshift

In order to refresh my memory about what I learned in university, I’ve been watching a course that UC Berkeley made available on YouTube. It’s been a good course so far. Recently I got to the topic of logical fallacies. And so I’m reminded that the following is not a valid argument: Jimmy advises X; Jimmy profits if you follow advice X; Therefore, Not X. It’s not valid in the academic sense. In the practical sense, I have to remember to not distrust all commercials and marketing videos.

But it’s hard. When Googling for “Big Data” or “Analytics”, I find it very difficult to find anything to read that’s both unbiased and useful. Maybe it’s because I’m still skeptical about any solution promoted by people who stand to profit if I follow their advice (I’m trying not to discount their advice, I just squint at it a little).

So when Amazon came out with a publicly available beta for their new Redshift data warehouse service (accompanied by a slick marketing video), I decided to kick the tires. Here are some notes I made:

Amazon’s Redshift Commercial

Their commercial is here. And I have to say it reminded me of an infomercial. You know the kind where they try to sell fancy mop gadgets by showing how ridiculously clumsy people can be with regular mops. The Amazon commercial is light on technical details but I don’t think it was meant for an audience like me and you. I made a conscious effort not to hold that against them.

Warehouse in the cloud

Having a warehouse in the cloud makes a lot of sense in some ways. Pay-as-you-go pricing is what you want when you prefer operational costs over capital costs. Many businesses that don’t have I.T. as a core competency will find this a pretty attractive alternative to running servers or an appliance themselves. But it can get expensive quickly. Amazon advertises less than $1000 / terabyte / year, but that’s with upfront costs for reserved pricing and a two terabyte minimum, so the smallest rate you can get away with is $3000 per year for three years. In practice it will likely be significantly more. I can’t see anyone choosing Redshift without doing due diligence on the pricing, but it’s probably not going to be dirt cheap.

Connecting to Redshift
Star Trek's Jean Luc Picard says "Very well Data, open a channel"
Connections are made to Redshift only through ODBC (or JDBC) using Postgres drivers. Some challenges:

  • Picking my way through 32 bit vs 64 bit was tricky. Getting and using the right driver took some care.
  • Uploading through this connection is slow. So don’t try it this way. I mean it’s possible, but it’s simply not recommended. I learned this lesson not by reading through Amazon docs, but by attempting to run an SSIS job to funnel data into an ODBC destination. It was actually working, but it was impractical because it was so slow.

Creating the warehouse schema wasn’t too hard: I scripted a subset of tables from an existing warehouse. I stripped indexes, constraints and identities. There were a couple of syntax differences (int becomes integer, bit becomes boolean, and GO becomes semicolon) but it went pretty smoothly. For Redshift, in lieu of indexes, you choose columns on each table for the distribution key and the sort key. Underneath the covers, Redshift uses ParAccel and so if you’re familiar with that, you’ve got a great head start. Otherwise, Amazon’s Developer Guide is a good start. I’d like to hear more from others who know more about modeling for Redshift; it seems like a regular star schema will work well here.
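To give a flavour of what the DDL ends up looking like (the table and columns here are made up), a fact table might be declared something like this, with the distribution key and sort key taking the place of indexes:

create table fact_sales
(
    sale_id      integer        not null,
    customer_id  integer        not null,
    sale_date    date           not null,
    is_return    boolean        not null,
    amount       decimal(18,2)
)
distkey (customer_id)
sortkey (sale_date);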

Loading data is best through S3

I created an SSIS package that takes data from my relational warehouse and dumps it into delimited files (hat tip to @peschkaj for advice on delimiter values). Then I gzipped the whole thing, loaded those gzipped files into Amazon S3, and loaded the data into Redshift from those files. Initially, I was afraid of Amazon S3 (What? I have to sign up for and learn about another service?) but working with Amazon S3 was dead simple.
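The load itself is done with Redshift’s COPY command pointed at the S3 files. Something like this sketch (the bucket, prefix and credentials are placeholders):

copy fact_sales
from 's3://my-bucket/warehouse-extract/fact_sales'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
gzip
delimiter '|';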

Consuming data

I connected a Tableau client to Redshift using an ODBC connection. This Tableau discussion tells me that’s the current way to connect Tableau and Redshift. There are quite a few SQL limitations imposed by the ODBC protocol. So the Tableau experience was not too smooth. Tableau has a new Redshift connector coming out in a matter of weeks which should clear these limitations.
The point is that Amazon claims Redshift integrates seamlessly with a list of reporting apps (including Tableau). I believe it really will by the time Redshift is released; it’s just that right now there’s a pretty big seam.

Next?

I’m going to get around to evaluating Google Big Query and of course Microsoft’s HDInsight. You might notice that lately, I find myself in a phase of learning (fun!) and so this post and the next few will be a description of my exploration.

February 28, 2013

Follow up on Ad Hoc TVP contention

Filed under: SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Last month I posted PAGELATCH Contention on 2:1:103. It described a troubleshooting experience I had: a performance problem that showed up when applications issued very frequent ad hoc queries that used Table Valued Parameters (TVPs).

I’ve recently learned some extra things from the SQL Server team at Microsoft that I wanted to pass on. So this post acts as a kind of update to that one.

I know. I've drawn Columbo before, but it's my blog.

Reproduction

One thing I did was to come up with an application that reproduces the contention on 2:1:103 in the same way as production. I share that here now.

What this reproduction does is launch fifty processes. Each process uses multiple threads to hit a db server with an ad hoc query that uses a TVP. For the curious, here it is: Download AdHocTVPContention.zip and then open readme.txt. This reproduction is a mashup of:

  • A simple batch script which launches an executable 50 times at once. A technique I gleaned from Paul Randal in a demo of his.
  • A multi-threaded executable which sends queries asynchronously. A framework I first developed in a post I wrote on concurrency.
  • The ad hoc TVP query itself. It was a bit of a modification of my example at Table Valued Parameters, A Short Complete Example. The only difference is that the command is a query instead of a stored procedure.

What I’ve Learned Since Then

The Microsoft team helped me understand a few things. The main thing is that the particular version of SQL Server (including service pack level) matters a lot:

SQL Server 2005 and Earlier

TVPs weren’t supported then. (This post isn’t for you).

SQL Server 2008 and R2

In my earlier post, I dismissed a fix provided by Microsoft. Fix: Poor performance in SQL Server 2008 R2 when table-valued functions use many table variables. At the time, I dismissed it for a number of reasons:

  • The workaround listed on that kb article page did not help us.
  • The symptoms that were described at that issue did not match my own scenario (i.e. I wasn’t using table-valued functions).
  • The title mentioned R2 and I thought (mistakenly) that this fix was not applicable to SQL Server 2008. (Update April 5, 2013: The fix used to be available as a hotfix for 2008 SP3, but as of March 2013, this fix is now included in a cumulative update. Upgrade to SP3 CU10 if you’re using 2008 and want this fix. Microsoft has also updated the title of the fix to mention 2008)

Microsoft helped me learn that although the symptoms in that fix differ from mine, the improvement in that fix involves a change to code that also improves my scenario. So with the fix applied, on my test environment, I saw that SQL Server could handle at least 6 times the volume of ad hoc queries than it could before (your mileage may vary).

SQL Server 2012

But there’s one other thing I noticed. When I ran the reproduction on a SQL Server 2012 environment, the process just flew through the workload! Temp table creation was no longer a bottleneck; there was no more contention there at all. The Microsoft engineer I worked with noticed this too and eventually it prompted a CSS blog entry. You can read all about this 2012 performance improvement at Temp table caching improvement for table valued parameters in SQL Server 2012.

Partner vs MVP

I’m very lucky to have many investigation options available to me when I’m digging into a particular quirk of SQL Server behaviour. There’s the public options like #sqlhelp or dba.stackexchange or Microsoft support. And for me there’s some private options like the MVP mailing list. But this month, I feel lucky to work as a Microsoft partner. It’s through that connection that I learned so much.

Too Long; Didn’t Read

If you’re experiencing 2:1:103 contention because of too frequent ad hoc TVP queries then:

  • On SQL Server 2008 or 2008 R2, apply the fix described above (for 2008, it’s included in SP3 CU10).
  • On SQL Server 2012, the temp table caching improvement means this particular bottleneck should largely disappear.

