Michael J. Swart

September 18, 2014

SQL Server Ignores Trailing Spaces In Identifiers

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:27 am

Takeaway: According to SQL Server, an identifier with trailing spaces is considered equivalent to the same identifier with those spaces removed. That was unexpected to me because that’s not how other programming languages work. My investigation was interesting and I describe that here.

The First Symptom

Here’s the setting: I work with a tool developed internally that reads metadata from a database (table names, column names, column types and that sort of thing). Recently the tool told me that a table had an unexpected definition: a column name had an extra trailing space. I expected the column name "Id" (2 characters), but my tool was reporting an actual value of "Id " (notice the blank at the end, 3 characters). That’s what started my investigation.

But that’s really weird. What would lead to a space accidentally getting tacked on to a column name? I couldn’t think of any reason. I also noticed a couple of other things: Redgate SQL Compare was reporting no discrepancies, and the database users weren’t complaining at all; they seemed just fine. A bug in the in-house tool seemed most likely. My hunch was that there was a problem with the way we were collecting or storing these column names (how did a space sneak in there?).

Where Are Column Names Stored?

I wanted to look at the real name of the column – straight from the source – so I ran:

SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME LIKE 'Id%'

It told me that my tool wasn’t wrong: the column was actually named "Id " with the space. So maybe Red Gate is getting its metadata from somewhere else? I know of a few places to get column information. Maybe Red Gate is getting it from one of those? Specifically, I wanted to look closer at these views:

  • sys.columns
  • sys.syscolumns
  • INFORMATION_SCHEMA.COLUMNS

Because these objects are views, I used sp_helptext to learn that all the column names ultimately come from a system table called sys.syscolpars. But you can’t look at the contents of that system table unless you connect to the database using the dedicated administrator connection. And that’s exactly what I did.
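
If you want to try that yourself, here’s roughly what it looks like. sys.syscolpars is only visible over the dedicated administrator connection (connect with sqlcmd -A, or prefix the server name with ADMIN: in Management Studio). The table name below is a placeholder:

-- run this over the DAC; dbo.MyTable is a placeholder
SELECT name, DATALENGTH(name) AS name_bytes -- trailing spaces count toward the stored length
FROM sys.syscolpars
WHERE id = OBJECT_ID('dbo.MyTable');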

I learned that there is only one version of column names, only one place on disk where SQL Server persists the name of the column. It’s interesting because this implies that Red Gate’s SQL Compare trims trailing spaces from identifier names.

But Doesn’t SQL Server Care?

Well, there’s one way to check:

CREATE TABLE [MyTest] ( [id ] INT );
INSERT INTO [MyTest] VALUES (1);
 
SELECT [id ], [id] -- one column name with a space, one column name without
FROM [MyTest]; 
-- returns a dataset with column names as specified in the query.
go

Just like Red Gate’s SQL Compare, it seems like SQL Server doesn’t care about trailing spaces in identifiers either.

Google? Stackoverflow? Want to Weigh In?

A quick search led me to the extremely relevant question “Is SQL Server Naming trailing space insensitive?”

And that question has answers which link to the Books Online page documenting Delimited Identifiers. That page claims that “SQL Server stores the name without the trailing spaces.” Hmmm, they either mean in memory, or the page is inaccurate. I just looked at the system tables a moment ago and the trailing spaces are definitely retained.

The stackoverflow question also led me to a reported defect, the Connect item Trailing space in column names. This item was closed as “by design”. So this behavior is deliberate.

What do other SQL Vendors do?

I want to do experiments on SQL databases from other vendors, but my computer doesn’t have a large number of virtual machines or playground environments. But do you know who does? SQL Fiddle.
It’s very easy to use this site to see what different database vendors do. I just pick a vendor and I can try out any SQL I want. It took very little effort to compile this table:

RDBMS         CREATE TABLE...                SELECT "id ", "id"...
MySQL         Incorrect column name 'id '    (table not created)
Oracle        Success                        "id": invalid identifier
PostgreSQL    Success                        Column "id" does not exist
SQLite        Success                        could not prepare statement (1 no such column: id)
SQL Server    Success                        Returns two columns (ID, ID), both with value 1

And What Does the ANSI standard say?

Look at the variety of behaviors from each vendor. I wonder what the “standard” implementation should be.

Mmmm... SQL Syntax rules.

I googled “ANSI SQL 92” and found its Wikipedia page, and that led me to the SQL-92 Standard itself.

ANSI (paraphrased) says that

<delimited identifier> ::= <double quote><one or more characters><double quote>

And it also says explicitly that delimited identifiers can include spaces.

What About String Comparisons In General?

During my experiments on SQL Server I found myself executing this query:

SELECT *
FROM sys.columns
WHERE name = 'Id'

I was surprised to find out that my three-character "Id " column came back in the results. This means that SQL Server ignores trailing spaces for all string comparisons, not just for identifiers.
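
A quick way to see this padding behavior for yourself (a throwaway example, not from the original post):

SELECT
    CASE WHEN 'Id' = 'Id   ' THEN 'equal' ELSE 'not equal' END AS equality_check, -- 'equal': the shorter string is padded for the comparison
    LEN('Id   ')        AS len_result,        -- 2: LEN also ignores trailing spaces
    DATALENGTH('Id   ') AS datalength_result; -- 5: DATALENGTH counts every character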

I changed my google search and looked for “sql server string comparison trailing space”. This is where I found another super-relevant document from Microsoft: INF: How SQL Server Compares Strings with Trailing Spaces.

Microsoft pointed to the ANSI standard again. I mean they explained exactly where to look; they pointed straight to Section 8.2, General rules #3, which is the section where ANSI explains how the comparison of two character strings is determined. The ANSI standard says that for string comparisons, the shorter string is effectively padded with trailing spaces so that comparisons can always be performed on strings with an equal number of characters. Why? I don’t know.

And that’s where identifier comparisons come in. I found another part of the standard (Syntax rule #11) which tells me that Identifiers are equivalent if they compare as equivalent according to regular string comparison rules. So that’s the link between string comparisons and identifier comparisons.

Summary

There are a number of things I learned about string comparisons. But does any of this matter? Hardly. No one deliberately chooses to name identifiers using trailing spaces. And I could have decided to sum this whole article up in a single tweet (see the title).

But did you figure out the head fake? This blog post is actually about investigation. The investigation is the interesting thing. This post describes the tools I like to use and how I use them to find things out for myself including:

  • Queries against SQL Server itself, the obvious authority on SQL Server behavior.
    • Made use of sp_helptext
    • Made use of the Dedicated Administrator Connection to look at system tables
  • Microsoft’s Books Online (used this twice!)
  • Microsoft Connect
  • Google
  • Stackoverflow
  • SQLFiddle
  • Wikipedia
  • the ANSI Standard

Maybe none of these resources are new or exciting. You’ve likely used many of these in the past. But that’s the point, you can find out about any topic in-depth by being a little curious and a little resourceful. I love to hear about investigation stories. Often how people find things can be at least as interesting as the actual lesson.

June 27, 2014

Trivia about Trivial Plans

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 12:43 pm

Takeaway: I found an example of a query plan which performs better than the “trivial” query plan.

This post is trivia in that it won’t help you do your job as a developer or DBA. But it’s interesting anyway. It offers a look into an interesting part of SQL Server’s query optimizer.

The Setup

I use the 2012 AdventureWorks database and I mess around with the indexes. It’s a setup that Kendra Little developed in order to demonstrate index intersection.

use AdventureWorks2012
GO
 
DROP INDEX Person.Person.IX_Person_LastName_FirstName_MiddleName;
GO
 
CREATE INDEX [IX_Person_FirstName_LastName] ON [Person].[Person] 
( FirstName, LastName ) WITH (ONLINE=ON);
GO
 
CREATE INDEX [IX_Person_MiddleName] ON [Person].[Person] 
( MiddleName ) WITH (ONLINE=ON);
GO

The Trivial Plan

In Management Studio, include the actual query plan and run this query:

SET STATISTICS IO ON
 
SELECT FirstName, MiddleName, LastName
FROM Person.Person
 
-- 19972 rows returned
-- 1 scan, 3820 logical reads
-- optimization level: TRIVIAL
-- estimated cost: 2.84673

With such a simple query – one against a single table with no filters – SQL Server will choose to scan the narrowest covering index and it won’t bother optimizing the plan any further. That’s what it means to say the optimization level is TRIVIAL.
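
Incidentally, the optimization level shows up as the StatementOptmLevel attribute in the plan XML. Here’s a rough sketch (not from the original post) of how you could pull it out of the plan cache; it assumes the statement is still cached:

SELECT TOP (10)
       qp.query_plan.value(
           'declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
            (//p:StmtSimple/@StatementOptmLevel)[1]', 'varchar(25)') AS optimization_level,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;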

For this query, the only index that contains all three columns is the clustered one. So it seems there’s no alternative but to scan it. That sounds reasonable, right? That’s what we see in the query plan; it looks like this:

[Query plan image: Clustered Index Scan]

But notice that SQL Server is doing a lot of reading with this plan choice. The table Person.Person has a column called Demographics. This xml field makes the table very wide, so wide that a typical page in Person.Person can only fit about 5 or 6 rows on average.

The Better-Than-Trivial Plan

Now look at this query:

SELECT FirstName, MiddleName, LastName
FROM Person.Person
WHERE FirstName LIKE '%'
 
-- 19972 rows returned
-- 2 scans, 139 logical reads
-- optimization level: FULL
-- estimated cost: 1.46198

The filter is put in place to have no logical effect.  It complicates things just enough so that SQL Server won’t use a trivial plan. SQL Server fully optimizes the query and the query plan now looks like this:

[Query plan image: index intersection]

Notice that the plan has scans on two nonclustered indexes and a hash join. SQL Server figures (correctly) that scans of two narrow indexes plus a hash join are still cheaper than the single scan of the fat clustered index.

Careful

I don’t think I need to say this, but I do not recommend adding WHERE column like '%' anywhere except maybe in contrived examples for demo purposes.

(MJS — Enjoy the summer, See you in September!)

May 22, 2014

Enabling the New Cardinality Estimator in SQL Server 2014

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 8:33 am

Takeaway: SQL Server 2014 will make use of its newly re-written Cardinality Estimator when the database’s compatibility mode is at least 120. But there’s more to the story.

What’s a Cardinality Estimator (CE)?

Say you’ve been hired to phone everyone on a particular list. If it’s a list of all Americans taller than seven feet, you might manage quite well on your own. But if it’s a list of all Americans shorter than seven feet, you’ll probably need help from others. That’s not surprising because the sizes of the lists are wildly different. One list could have 300 people on it and the other could have 300 million. The expected sizes of the lists influence how you tackle this problem.

[Image caption: This was after phones, but before the do not call list.]

SQL Server does the same thing. It uses statistics to find the best ways to execute queries. To find a good query plan, SQL Server often needs to make many choices (which join type, join order, parallelism etc…) It needs to estimate the cost of each choice and it uses educated guesses to evaluate these costs. That’s what the CE was built to do. It provides educated guesses about the number of rows a query plan has to process. That’s why it’s called the cardinality estimator. The accuracy of these estimates will influence the quality of query plans, and consequently, the performance of queries.

With SQL Server 2014, Microsoft released a rewritten version of SQL Server’s CE. I can’t wait to take advantage of it. I’m looking forward to tuning fewer poorly performing queries. Queries that seem to be written well, but are vulnerable to bad query plans.

Risk of Regressions

The CE is part of the query optimizer, so the rewrite represents a significant change to the database engine. And with any pervasive change, there’s always a risk of regressions. While rare, some workloads are expected to perform worse with the new CE. Joe Sack’s excellent white paper Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator has some essential tips and suggestions on how to assess and deal with these potential regressions.

Some users may want to continue using the legacy CE. And some users may want to decouple the adoption of the new CE from the adoption of SQL Server 2014. Microsoft anticipated this and so they give DBAs a choice. DBAs have the option to either use the new CE or to stick with the legacy CE.

Enabling the New CE – the Official Details

Simply put, CE behavior can be controlled using the compatibility mode and/or trace flags:

  • The new CE is enabled when compatibility mode is 120 and disabled when it is less than that. The compatibility mode of a database is not modified automatically during an upgrade to 2014, so remember to adjust it accordingly.
  • New trace flags are introduced. Trace flag 9481 can force SQL Server to use the legacy CE when it would otherwise use the new one. Conversely, trace flag 2312 can force SQL Server to use the new CE. And if flags 9481 and 2312 are ever both enabled (in any context), then neither flag takes effect. They cancel each other out and the CE behavior is determined only by the compatibility mode.

Just those two things allow you to influence the CE behavior depending on the granularity you require (a syntax sketch follows the list):

  • For a single query – You could use the QUERYTRACEON hint but it’s not a tempting option. Sysadmin privileges or a forced plan are required.
  • Based on your session – Use session trace flags (again, sysadmin privileges are required).
  • Based on the database you’re connected to – Use compatibility mode.
  • For the whole server – Use global trace flags.
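
For reference, here’s a rough sketch of the syntax at each of those levels (the database and table names are placeholders; see Joe Sack’s paper for the authoritative examples):

-- Database scope: the new CE is used when the compatibility level is 120
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 120;
 
-- Single query: force the legacy CE for just this statement (sysadmin required)
SELECT COUNT(*)
FROM dbo.SomeTable
OPTION (QUERYTRACEON 9481);
 
-- Session scope: force the new CE for this session only
DBCC TRACEON (2312);
 
-- Server scope: force the legacy CE globally
DBCC TRACEON (9481, -1);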

Again, Joe Sack’s white paper explains this in more detail. He provides syntax examples and methods to determine which CE was used based on a query plan.

Corner Use Cases

This leads to some surprising behaviors:

Connect to a System Database to Avoid Compatibility Mode issues

For example, this works:

use master -- in SQL Server 2014, master will always be at compatibility mode 120
GO
 
-- any query (regardless of participating tables) will now use the new CE. e.g.:
SELECT COUNT(*) 
FROM Adventureworks2012.Sales.SalesOrderHeader;

But it’s just a trick and not a technique I would recommend. Besides, this trick doesn’t work when calling stored procedures from other databases.

Using a Trace Flag to Cancel Another One

Trace flags 2312 and 9481 don’t play together well. There is no scenario where one takes precedence over the other. If they’re both enabled, then they cancel each other out:

use Adventureworks2012 -- at compatibility mode 110
GO
DBCC TRACEON( 9481 );
 
SELECT COUNT(*) 
FROM Sales.SalesOrderHeader
OPTION( QUERYTRACEON 2312 ); -- 2312 normally enables the new CE 
-- the 2312 hint is canceled by the 9481 trace flag, the legacy CE is still used.

Again, I avoid this scenario so that I don’t need to worry.

How I Plan To Adopt the New CE

I’d like to begin using the new CE as soon as I upgrade to 2014.

But if I wanted to, I would feel comfortable using compatibility mode as a feature toggle for the new CE. There are other behavior differences between compatibility modes 110 and 120. But I don’t use them and won’t encounter them. They’re obscure and easy to review. So for me, I can ignore those other features and use compatibility mode 120 as the CE feature toggle.

The trace flags 2312 and 9481 are new in SQL Server 2014. So if SQL Server is not at version 2014, it will ignore those trace flags. I intend to do the same no matter what version I’m using. I don’t expect to see many queries showing serious regressions with the new CE, but if I encounter any I’m not going to manage them with these trace flags. Instead, I plan to:

  1. Use hints (whether that means index hints, join hints or query hints) to stabilize the plan temporarily (a quick sketch follows this list).
  2. Spend time tuning or rewriting the query so that it performs well without these hints.
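
For illustration, the kinds of hints I mean look roughly like this (the table, index and hint choices are placeholders, not recommendations):

SELECT h.SalesOrderID, d.ProductID
FROM Sales.SalesOrderHeader AS h WITH (INDEX (PK_SalesOrderHeader_SalesOrderID)) -- index hint
INNER HASH JOIN Sales.SalesOrderDetail AS d                                      -- join hint
    ON d.SalesOrderID = h.SalesOrderID
OPTION (MAXDOP 1);                                                               -- query hint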

Further Reading

April 23, 2014

Removing Comments from SQL

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:20 am

Check out the following deliberately crazy SQL Script:

create table [/*] /* 
  -- huh? */
(
    "--
     --" integer identity, -- /*
    [*/] varchar(20) /* -- */
         default '*/ /* -- */' /* /* /* */ */ */
); 
go

It’s not surprising that my blog’s syntax colorer has trouble with this statement. But SQL Server will run this statement without complaining. Management Studio doesn’t even show any red squiggly lines anywhere. The same statement without comments looks like this:

create table [/*] 
(
    "--
     --" integer identity, 
    [*/] varchar(20) 
         default '*/ /* -- */' 
); 
go

I want a program to remove comments from any valid SQL, and I want it to handle even this crazy example. Below I describe a handy method that lets me do that.

Using C#

  • In your C# project, find and add a reference to Microsoft.SqlServer.TransactSql.ScriptDom. It’s available with SQL Server 2012’s Feature Pack (search for “ScriptDom” and download).
  • Add using Microsoft.SqlServer.TransactSql.ScriptDom; to your “usings”.
  • Then add this method to your class:
    public string StripCommentsFromSQL( string SQL ) {
     
        TSql110Parser parser = new TSql110Parser( true );
        IList<ParseError> errors;
        var fragments = parser.Parse( new System.IO.StringReader( SQL ), out errors );
     
        // clear comments
        string result = string.Join ( 
          string.Empty,
          fragments.ScriptTokenStream
              .Where( x => x.TokenType != TSqlTokenType.MultilineComment )
              .Where( x => x.TokenType != TSqlTokenType.SingleLineComment )
              .Select( x => x.Text ) );
     
        return result;
     
    }

… and profit! This method works as well as I hoped, even on the given SQL example.

Why I Prefer This Method

A number of reasons. By using Microsoft’s own parser, I don’t have to worry about comments in strings, or strings in comments which are problems with most T-SQL-only solutions. I also don’t have to worry about nested multiline comments which can be a problem with regex solutions.

Did you know that there’s another SQL parsing library by Microsoft? It’s found at Microsoft.SqlServer.Management.SqlParser.Parser. This was the old way of doing things and it’s not supported very well. I believe this library is mostly intended for use by features like Management Studio’s IntelliSense. The ScriptDom library is better supported and it’s easier to code with.

Let Me Know If You Found This Useful

Add comments below. Be warned though, if you’re a spammer, I will quickly remove your comments. I’ve had practice.

April 11, 2014

Implementing the Recycle Bin Pattern In SQL

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 8:00 am

[Image: Kitchener Ontario, recycling since 1983]
I participated in a week-long hackathon recently. It was great to be able to spend the whole week on a self-directed project. I’m excited to write about what my team accomplished, but actually I want to blog about what another team accomplished. That team implemented a really nice “send to recycle bin” feature and they gave me the green light to write about it here.

The recycle bin feature is ultimately a data-hiding feature. Users don’t necessarily want to destroy data, they just don’t want to look at it right now. There are a lot of ways to implement this feature, but one way is by making a few changes in the database (as opposed to the application).

What Needs To Change?

Surprisingly, not much. Take your table and give it a nullable RecycleDate column. This is all you need to track the recycled rows. Then create a view that filters out recycled items. That’s pretty much it. Afterwards, if you rename the table, the view can take its place. This is what that would look like on AdventureWorks’ Sales.ShoppingCartItem table:

ALTER TABLE Sales.ShoppingCartItem
  ADD RecycleDate DATE NULL
    CONSTRAINT DF_ShoppingCartItem_RecycleDate DEFAULT NULL;
 
GO
 
EXEC sp_rename 'Sales.ShoppingCartItem', 'AllShoppingCartItems'
 
GO
 
CREATE VIEW Sales.ShoppingCartItem
WITH SCHEMABINDING
AS
    SELECT  ShoppingCartItemID ,
            ShoppingCartID ,
            Quantity ,
            ProductID ,
            DateCreated ,
            RecycleDate
    FROM    Sales.AllShoppingCartItems
    WHERE   RecycleDate IS NULL;
 
GO
 
CREATE PROCEDURE Sales.s_RecycleShoppingCartItem
    (
      @ShoppingCartItemId INT
    )
AS 
    UPDATE  Sales.ShoppingCartItem
    SET     RecycleDate = GETDATE()
    WHERE   ShoppingCartItemID = @ShoppingCartItemId;
 
GO

DML Impact

So what’s the impact on other Delete, Insert, Update or Select statements that are executed against your modified table?

  • Delete statements shouldn’t be affected. You’ll notice that recycle bin contents can’t be deleted via the view. That’s okay.
  • Old Insert statements should work as expected with no adjustments, especially if you name your columns in a column list.
  • Update statements? Check, they’ll continue to work.
  • Select statements will also be unaffected. Especially if you’ve avoided SELECT *.

What About Foreign Keys?

Okay, this is where it gets a little tricky. If you don’t use ON DELETE or ON UPDATE clauses with your foreign keys, then you have to be a little careful. I want to show just one example of how things can get a bit messy. Returning to our AdventureWorks example, let’s think about a query that deletes a “shopping cart” as long as it has no items.

DELETE Sales.ShoppingCart
WHERE ShoppingCartId = @ShoppingCartIdToDelete
AND NOT EXISTS
  (
    -- any items in the cart?
    SELECT 1
    FROM Sales.ShoppingCartItem
    WHERE ShoppingCartId = @ShoppingCartIdToDelete
  )

In the old world, this works no problem. But our check for items in the cart now misses items that have been recycled, so this query can fail: recycled rows still reference the cart through the foreign key. You’ll have to remember to find queries like this and update them to check Sales.AllShoppingCartItems instead.
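
A sketch of what that adjustment might look like for this example:

DELETE Sales.ShoppingCart
WHERE ShoppingCartId = @ShoppingCartIdToDelete
AND NOT EXISTS
  (
    -- any items in the cart, including recycled ones?
    SELECT 1
    FROM Sales.AllShoppingCartItems
    WHERE ShoppingCartId = @ShoppingCartIdToDelete
  )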

Data Lifecycle Policy Concerns

You have a policy, right? The lack of one can make it too easy to retain data indefinitely. The concern isn’t necessarily storage, but whether you’re meeting any policies or regulations concerning privacy or other things like that.

The recycle bin feature may make it a little easier to accidentally retain data you didn’t mean to. It may be worth regression testing any delete or purge functionality that you have.

Indexing

Depending on how much data is hidden in the recycle bin, you shouldn’t have to re-evaluate your indexing strategy. Your indexes should probably serve you just as well after this implementation. But if you find yourself storing more than 90% of your data as recycled data, then you may want to re-assess the table’s indexes. You could consider things like filtered indexes, filtered stats and/or partitioned tables. But before you do, see Data Lifecycle Policy Concerns above.
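
For what it’s worth, a filtered index covering only the “live” rows might look something like this (the index name and key column are illustrative):

CREATE NONCLUSTERED INDEX IX_AllShoppingCartItems_ShoppingCartID_Live
ON Sales.AllShoppingCartItems (ShoppingCartID)
WHERE RecycleDate IS NULL;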

Other Things To Watch

Any changes to schema or any code should lead to extra testing and the changes I’m proposing are no different.

You have to know your app and environment. Is your recycle bin against a table that participates in downstream Business Intelligence projects? How about Change-Data-Capture? Service Broker? Notification Services? You know better than I do.

Other Recycling Bin Implementations

There are lots of methods.

For example, you don’t have to implement this pattern using SQL. You can implement it in your application. Hiding recycled data via the application makes a lot of sense, especially if you’re more of a programmer than a SQL developer (by the way, where’d you come from? Who let you in here?).

It’s worth giving this some thought. Without a recycle bin, the demand to retrieve “deleted” data can be great enough to prompt someone to dig through a restored backup. Digging through restored backups actually counts as a recycle bin implementation even if it is an unintentional and painful one.

March 16, 2014

T-SQL Tuesday #52 Round-Up: Argue Against a Popular Opinion

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 1:52 pm

[Image: T-SQL Tuesday logo]
There were thirteen excellent posts this month. (See the invite post here.) I loved reading every one of them. It gave a really good perspective on topics that are typically seen as cut and dried. Thank you to each one of the bloggers for putting in the time to participate this month. Here are the posts, each with a mini-review.

Michael J Swart
(@MJSwart)
I Don’t Mind SELECT * Sometimes

The opinion:
The popular opinion is that “SELECT *” should always be avoided. I argue that there are some cases where there are no downsides.
Did I buy it?
I may be a little biased, but I totally bought it. Truer words were never spoken.
Boris Hristov
(@BorisHristov)
Only My Technical Skills Matter

The opinion:
The popular opinion is that only technical skills matter. Boris argues that the most effective professionals develop other skills (like soft skills).
Did I buy it?
For sure. I would also add to his post. It’s easy to agree that soft skills matter. But I’ve recently learned that (holy crap) do they ever matter!
Tech skills are important.
Communication skills can come with enough practice.
But tech skills + communication skills + wisdom are a lethal combination.
Russ Thomas
(@SQLJudo)
Eye Heart You Dee Effs

The opinion:
There are a lot of UDF-haters out there in SQL land. Not Russ, he loves them.
Did I buy it?
What say I? I say aye! Well done Russ. Totally got into the spirit of this month’s topic. BTW, he gets the Michael J Swart “Best Article Of The Month” award.
Bob Pusateri
(@SQLBob)
Why I Hate Row Compression

The opinion:
Row compression has been an extremely popular feature since its introduction. But Bob warns us of the risk of using it as spackle for sloppy data modeling.
Did I buy it?
Sloppy design causes future problems. I buy that. Row compression can enable sloppy design. I buy that. The Spock in me says row compression can cause future problems. But the Kirk in me has trouble accepting it.
Thanks for writing the most thought-provoking article.
Ken Fisher
(@sqlstudent144)
Keep your head down!

The opinion:
Ken takes a look at people known for their collaborative character and he argues against that behavior with the more isolationist advice: “Keep your head down!”. The sarcasm is called out explicitly and so Ken is actually promoting more collaborative behaviors.
Did I buy it?
His post’s thesis argues against the popular trend of more collaboration. As a blogger myself, thumbs up! It was a really well-written article; the light-hearted style makes for a good read.
Rob Farley
(@rob_farley)
Scans are better than Seeks. Really.

The opinion:
Rob and I had a great discussion in the week leading up to #tsql2sday. He wants us to believe (with no irony) that scans are better than seeks.
Did I buy it?
After reading the article, yes, I buy it. It turns out there’s a lot of subtlety when talking about seeks and scans. Especially when seeks perform RangeScans or when scans operate on filtered indexes.
Brent Ozar
(@BrentO)
Stop Tuning with Wait Stats Percentages

The opinion:
It’s all in the title. Wait statistics are a popular method of tuning SQL Server. Brent argues that you shouldn’t pay much attention to the reported percentages. Or at least not without some careful context.
Did I buy it?
Sure did. My own take is that wait stats are useful when measured against representative load, when there’s something to tune, and when you look at wait rankings rather than percentages.
Jeremiah Peschka
(@peschkaj)
What Use is an Average?

The opinion:
I invited Jeremiah to contribute this month. Jeremiah has a writing style that doesn’t always show in his more technical blog posts. I was not disappointed. The popular opinion he seems to be arguing against is that the “average” statistic is a worthwhile measure (when applied to query durations for example). He wants us to believe it’s not always enough.
Did I buy it?
Yep, it’s not always enough. And he gets creative with his citations. It was crazy enough Jeremiah… it was crazy enough.
Steve Hood
(@SimpleSQLServer)
Why worry about CXPACKET

The opinion:
The opinion Steve Hood tackles is that the wait type “CXPACKET” is a problem and should cause worry. He argues that it just indicates parallelism which is not necessarily a problem.
Did I buy it?
Yes. In the particular case of CXPACKET, it seems that enough people are tempted to treat the symptom rather than the disease. Steve’s article addresses that.
Steve Jones
(@way0utwest)
Arguments

The opinion:
Steve Jones argues against “Never Turn on Auto Shrink”.
Really?
Really Really?
Did I buy it?
I’m a believer. He gave not one but two scenarios where I would use autoshrink as well.
Mike Fal
(@mike_fal)
Stop depending on “it depends”

The opinion:
So this is interesting. Mike Fal is asking us to get off the fence. It’s popular for DBAs to fall back on “it depends” when cornered for an answer. Mike wants us to “stop it”.
Did I buy it?
Yes, you bet. One of the reasons I picked this topic (“argue against a popular opinion”) is to explore exactly what things depend on. One of the best written articles in this list.
Jes Schultz Borland
(@grrl_geek)
Why Back Up System Databases?

The opinion:
Arguing against public opinion. Jes tells us that that could be the title of her autobiography. The opinion she argues against is that it’s super important to back up your system databases. She thinks it’s just not as important as others want us to believe.
Did I buy it?
Yes, Jes makes a good point. She raises something important: whether you back up your system databases or script out something to help rebuild a server from scratch, the process should be practiced at least once.
Tamera Clark
(@tameraclark)
Don’t Be That Guy

The opinion:
Tamera takes a different tack. Rather than an opinion, she argues against Mr. Popular, asking us not to be that guy.
Did I buy it?
She gives 10 pieces of advice to a particular target audience. It’s great advice and it’s a good piece of writing. My only worry is that not enough of her target audience will recognize themselves as the target audience.

March 3, 2014

T-SQL Tuesday #052: Argue Against A Popular Opinion

Filed under: SQLServerPedia Syndication — Tags: — Michael J. Swart @ 10:07 pm

[Image: T-SQL Tuesday logo]

Update: See the round-up post here.
I was asked to host T-SQL Tuesday this month, T-SQL Tuesday #52. This marks my second time hosting and I’m happy to do it again.

Your writing assignment for March 2014 is to

pick a popular opinion and argue against it.

… or at least qualify it. Given any issue, people drift to two kinds of crowds: there’s the “it depends” crowd and there’s the “never ever” crowd. We tend to fall in with one crowd or the other. This month, I want you to find a “never ever” issue and argue for it (or conversely, find an “always” issue and argue against it).

I wonder how this month will go. It takes guts to go against common wisdom.

You don’t necessarily have to argue against a universal opinion, but it should at least be popular. I think that your choice of opinions is practically limitless:

  • Bob Duffy had a list of 10 interview questions that annoy SQL professionals, with some great topics there including GUIDs, cursors, and heaps.
  • Google results for “SQL.Server should.never”
  • Fair’s fair. Here are the results for “SQL.Server should.always”
  • Does anyone want to have a shot at redeeming Microsoft Access?
  • Foreign Keys, SchmoreignKeys.
  • Check it out. SQL Server supports varbinary(max)! Ideal for json documents and xml documents (or both!)
  • Shrinking databases and/or log files (because of the fragmentation! God save us all from fragmentation!)

Here’s a little secret. This month’s topic is not for you. It’s for the readers. It’s a chance for you to give them a more nuanced understanding of a topic that they may not have given much thought to until now. I’m a little curious myself.

The rules are the same as always:

Follow These Rules

  1. The post must go live on your blog between 00:00 GMT Tuesday, March 11, 2014 and 00:00 GMT Wednesday, March 12, 2014.
    In other words, set your sql server date, time and timezone properly and run this script:

    IF GETUTCDATE() BETWEEN '20140311' AND '20140312'
    	SELECT 'You Can Post'
    ELSE
    	SELECT 'Not Time To Post'
  2. Your post has to link back to this post, and the link must be anchored from the logo (found above) which must also appear at the top of the post
  3. Leave a comment here (below) or I won’t be able to find your post.

That’s it! Good luck! Can’t wait to see what you have in store.

Your humble host,
Michael J. Swart

February 13, 2014

Troubleshooting Concurrency – A Demo

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:00 am

Testing Concurrency

So you’ve read about how I generate concurrent activity by calling s_DoSomething super often. And you’ve even seen examples of how I define s_DoSomething. Now, you can watch me put all that stuff in action in this screencast.

Then download the demo scripts and other files by clicking TroubleshootingConcurrency.zip and pressing Ctrl+S to save.

Click play and follow along.

The Transcript:

Intro

Hey everyone,
So I’ve posted a couple articles about ways to troubleshoot concurrency problems.
Now I want to demo what that looks like in action.
So what I’m going to do, I’m going to take a common concurrency problem.
In this case, I’m going to tackle the dreaded deadlock and I’m going to investigate the problem and I want to show you the actions that I take so you can see what I’m doing.
And I’ll be using my concurrency generator to help me test a solution that I come up with.
Ready? Here we go

Identify Deadlock To Troubleshoot

So now what I’m gonna do is I’m going to be troubleshooting a deadlock or a set of deadlocks.
And just to set up the scenario a bit, I would normally attack this problem if I’m handed a set of deadlocks to troubleshoot.
In this case <let’s look in here> what I have is a set of deadlock graphs.
Each graph is a file which contains a whole bunch of information about an instance of a deadlock that SQL Server detected and dealt with.
Lets take a look at one.
I’m going to open up in SQL Server Management Studio and first thing you notice is that there are four ovals and four rectangles.
The ovals are processes and the rectangles are resources.
The ovals are fighting over access to the resources and they happen to be fighting in a circle so they get deadlocked.
No one can do anything.
Everyone’s waiting on everybody else.
And you can actually see that SQL Server detected and chose this process as the deadlock victim.
All right, now that’s four processes.
Because I happen to know that each one of these deadlock graphs indicates the same problem that I’m trying to troubleshoot, I’m going to pick a deadlock graph that’s actually smaller.
I’m going to pick graph 112 and open it.
Yeah, I can deal with this.
This one only has two processes fighting over two resources.
This process owns an exclusive lock on this key lock.
I like to think of key locks as a row, just mentally.
And this one owns an exclusive lock on this row up here.
But they’re both waiting for the other guy to release that so they can get a shared lock on it.
Now shared locks and exclusive locks are mutually exclusive.
They are incompatible and so that’s why we wait.
So let’s look at the specifics.
This resource is a key lock which means it’s a row in the table MyPeople and that table’s in the schema Person which is in the database Adventureworks.
In particular it’s got an exclusive lock on a row in an index called pk_Person_MyPeople.
Because of the naming convention, I happen to know this is a primary key.
And it looks like this resource is the same kind of row, or the same kind of key lock.
And the processes, what do we know about this?
Not much. It’s running a stored procedure inside database 9 which is Adventureworks and the ObjectId of the procedure is given there.
I don’t know much else about that yet.
I do know that this process was running the same procedure.
Great, so I think now it’s time to look a little bit closer at this deadlock.

Find Root Cause

So the way I dig deeper into these deadlock graphs is by opening up these files in an xml editor.
It turns out that these deadlock graphs are simply defined as xml.
So you can open this up in notepad or your favorite xml editor.
I’m going to use Visual Studio, because that’s what I like to use.
There it is. Right away, you notice that it’s got the same information.
There’s the processes, but it’s got a lot more details.
So before we knew that this procedure…
Now we know that not only is this procedure called Adventureworks.dbo.s_Person_MyPeople_Delete, but that it was running that query when it deadlocked.
We also know that the other procedure was running the same thing s_Person_MyPeople_Delete.
I want to find out more about this procedure because it was the thing that was deadlocking.
So I’m going to copy. That’s in the clipboard now. I’m going to use Management Studio to be able to do that.
So (sp_helptext and results-to-text, f5) there’s the definition, because we’re connected to Adventureworks.
It’s a very simple statement. DELETE Person.MyPeople WHERE BusinessEntityId = @BusinessEntityId.
Now it’s very unusual for a procedure this small to participate in a deadlock.
Especially when Person.MyPeople …
Let’s look at this (results to grid alt+f1)
Okay, there’s the indexes… especially… this table has a clustered unique index on BusinessEntityId.
So I know that something else is going on here.
Maybe there’s a trigger
Or maybe there’s an indexed view on Person.MyPeople that needs to be updated every time there’s a modification.
Or maybe there’s a foreign key that needs to be enforced.
Something has to explain why after a modification, it continues to want to read other different rows in the same table.
In order to find out more, I’m actually going to look at the execution plan.
So (let’s copy paste and let’s see) I’m going to highlight that and display estimated execution plan.
Alright there we go, this gives me a lot more information.
Here are the things I notice.
There’s a missing index warning of course.
It looks like my unindexed foreign key theory is looking better.
Cause look, there’s the delete.
It goes into the table spool, the rows that are about to be deleted.
It comes out there and it looks … for each row, it makes sure that it scans the table in order to find … let’s see … Mentorid.
It’s looking for Mentorid.
It wants to assert that there are no rows where Mentorid is pointing to BusinessEntityId.
So my guess is that looking at the missing index details… sure enough it wants an index on MentorId.
It’s pretty clear to me that this table needs to enforce… (results-to-grid, alt+f1)
It needs to enforce this foreign key fk_MyPeople_MyPeople where MentorId references BusinessEntityId.
So it’s a foreign key to itself and it needs to scan the whole table in order to enforce that foreign key.
That’s looking like my root cause and I believe that adding the missing index that’s suggested here will help solve my deadlock problem.
But how do we know for sure? That’s next.
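
(For reference, the index suggested by the missing index details would look roughly like this; the index name is mine.)

CREATE NONCLUSTERED INDEX ix_MyPeople_MentorId
ON Person.MyPeople (MentorId);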

Verify Solution With Concurrency Generator

Okay, where are we?
We looked a little at this procedure s_person_MyPeople_delete and in particular, this query, this delete statement that was giving us some deadlock problems.
And we have a pretty good idea of its root cause.
Basically, this statement needed to scan Person.MyPeople in order to enforce a foreign key, a self referencing foreign key.
And it’s that reason that it’s participating in deadlocks.
So it’s my theory that if we have an index on MentorId that’s suggested in these missing index details.
If we create that index, it will not only speed up this query, but it will help concurrency and avoid deadlocks and that’s the solution I’m recommending.
But before I can say that I nailed this problem, I need to reproduce it and see that my fix solves the problem.
And I can do that using my load generator that I’ve been talking about in my blog lately.
First I want to define an s_DoSomething in tempdb. Let’s create
(use tempdb)
And I’m just going to modify an example from my blog.
And I’m going to alter a procedure s_DoSomething
Now I only care about an integer parameter right?
Because the stored procedure I care about only has the one parameter.
Let’s change it. There that looks pretty good.
But I actually want to modify that value.
In order to do that let’s look at Person.MyPeople
I’m going to look at the range of values. (max(businessentityid) and max(mentorid))
Let’s see how that does.
That’s interesting, so the largest MentorId is 10388.
So i want my integer value to be at least that and the range is that … so that range.
Okay, so just to avoid off by one errors, I’m going to call that nine and I’m going to shrink that range a bit.
So that sounds good. Let’s create it.
Oh, it doesn’t exist yet. There we go. That sounds good.
Now that s_dosomething is defined, I want to call it a lot of times concurrently.
This is done with the utility that I have.
This is the concurrency launcher and you can look at the definition of it, which is in my blog.
And after it’s compiled, I’m going to launch it.
And we wait a little while, and oh look there’s a deadlock and as things start blocking up, I expect a few more.
More blocking, more deadlocks, more, more, yeah, there we go.
Okay, I would say that we reproduced the problem.
I’m going to cancel that.
Let’s launch it again, just to make sure.
Deadlock, deadlock, deadlock deadlock, ahh, very good. Isn’t that great?
I can reproduce this problem at will.
Let’s see if my theory is sound.
My theory is that if I create this index (call it ix_myindex) name it something better later. (Tada)
My theory is that once that’s created, I should no longer see any concurrency problems.
Let’s launch it again.
I’m hoping to see nothing,
I’m hoping to see nothing,
It’s looking pretty good.
Wow, It completed.
So it completed, which tells me that it has executed the procedure 500,000 times.
All concurrently 50 at a time without a single deadlock.
That is encouraging news.
So now I would feel comfortable recommending this index as a solution to the set of deadlocks that I was troubleshooting.

Lemme explain… No there’s no time, Lemme Sum Up

So in conclusion, that’s an example of how I use my utility.
I find it useful enough that I’ve added a shortcut to my windows taskbar.
That gives me one-click concurrent database activity.
It helps me look closer at issues where processes suffer from unlucky timing or other problems that are hard to reproduce because of fussy concurrency conditions.
I hope you found this useful.

Cheers.

January 30, 2014

Building Concurrency Tests

Testing Concurrency

So last week, I explained different ways to generate concurrent activity. I have my own favorites, but you may have your own. So pick your favorite method; whichever method you pick, it will help you call the procedure s_DoSomething super-often.

Now comes the task of defining the procedure s_DoSomething. It can be whatever you like depending on what functionality you want to test or exercise. I want to demonstrate some patterns that I follow when I define that procedure. Those patterns all start with …

the Basic Definition

To test the concurrency of a single procedure just call it:

use tempdb
GO
 
CREATE PROCEDURE dbo.s_DoSomething AS
 
EXEC Adventureworks2012.dbo.uspGetManagerEmployees 14;

Of course, I could have changed the framework to call my procedure directly but I don’t out of habit. I always leave s_DoSomething in tempdb hard-coded in the framework.

With Arbitrary Parameter Values

Often the procedures I want to test are defined with parameters. If variety is important, but the parameter values are not, then that’s when the random tricks come in:

ALTER PROCEDURE dbo.s_DoSomething AS
 
DECLARE @someString nvarchar(100) = cast(newid() as nvarchar(100));
DECLARE @someInt int = RAND() * 100;
DECLARE @someDate datetime = dateadd(MINUTE, RAND() * 10000, getdate());
DECLARE @someLongerString nvarchar(1000) = REPLICATE(@someString,20);
 
EXEC Adventureworks2012.dbo.usp_ProcWithParameters
	@someString, @someInt, @someDate, @someLongerString;

With Less Arbitrary Parameter Values

Check out this next example. Pulling a value from the target database is often preferable to calling the procedure with a random integer value.

ALTER PROCEDURE dbo.s_DoSomething AS
 
DECLARE @BusinessEntityId int;
 
SELECT TOP 1 @BusinessEntityId = BusinessEntityID 
FROM AdventureWorks2012.HumanResources.Employee
ORDER BY newid();
 
EXEC AdventureWorks2012.dbo.uspGetEmployeeManagers @BusinessEntityId;

Calling More Than One Procedure

It’s as simple as calling one after the other. But sometimes I want the frequency of the calls “weighted”.

For example, I want to have DELETE, INSERT and UPDATE statements called 10% of the time each. The remaining 70% of the time I want to call a SELECT statement. Then I have something like:

ALTER PROCEDURE dbo.s_DoSomething AS
 
declare @r int = RAND() * 10;
 
IF (@r = 0)
  -- delete 10% of the time
  DELETE AdventureWorks2012.Person.BusinessEntity
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);
 
IF (@r = 1)
  -- insert 10% of the time
  INSERT AdventureWorks2012.Person.BusinessEntity (rowguid)
  VALUES (newid());
 
IF (@r = 2)
  -- update 10% of the time
  UPDATE AdventureWorks2012.Person.BusinessEntity
  SET rowguid = newid()
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);
 
IF (@r > 2)
  -- select the rest of the time
  SELECT BusinessEntityId, rowguid, ModifiedDate
  FROM AdventureWorks2012.Person.BusinessEntity
  WHERE BusinessEntityID = CAST(RAND()*1000 as INT);

Counting Errors Concurrently

I want to track (server side) how often s_DoSomething fails. But I don’t want tracking to be a concurrency bottleneck itself. Here’s a cool trick for that:
First define these procedures:

create procedure dbo.s_IFailed as
go
 
create procedure dbo.s_IPassed as
go
 
alter procedure dbo.s_DoSomething as
 
begin try
    declare @i int = rand() * 10;
    select @i = 1 / @i -- might divide by 0!
    exec dbo.s_IPassed;
end try
begin catch
    exec dbo.s_IFailed;
end catch

This lets me use DMVs to monitor the success rate because I can check the execution count of my dummy procedures. For example,

exec sp_recompile 'dbo.s_IFailed'; -- reset counts from other tests.
exec sp_recompile 'dbo.s_IPassed';
go
 
/* run concurrent test here, or...*/
set nocount on; 
exec dbo.s_DoSomething;
go 10000
 
select object_name(object_id), execution_count 
from sys.dm_exec_procedure_stats
where object_name(object_id) in ('s_IFailed','s_IPassed')
/*
--         count
s_IPassed   9031
s_IFailed    969
*/

This relies on the DMV sys.dm_exec_procedure_stats, which was introduced in 2008. It’s like a cheap, do-it-yourself performance counter.

Next

Next week I want to show a demo of this technique in action. I’ll be troubleshooting a common concurrency problem: the deadlock.

January 23, 2014

Generating Concurrent Activity

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 10:21 am
Testing Concurrency

I’ve discovered that DBAs use different methods to accomplish what amounts to the same thing: Generating concurrent activity on a server. I want to explore a number of methods I’ve seen. For each method, I want to call one particular procedure, many times at once, and often… Oh, and for free.

Why are we doing this?

  • For Testing: The whole point is to generate concurrent activity on the server. Testing with this kind of activity is one of the only ways to avoid resource contention issues.
  • For Demos: Concurrency issues are so common and so varied, that it’s not surprising how often we see demos that depend on this kind of generated database activity.
  • For Troubleshooting: This is my favorite reason. A concurrency issue can’t be fixed well unless it can be reproduced reliably on some test environment. That’s why the methods below have a spot on my troubleshooting toolbelt.

The whole idea is to get many workers active on SQL Server at once.

[Image caption: The exocomps were smart enough to exit on their own]

For each method below, look out for the place where I specify these “parameters”

  • Number of virtual workers (50 in my example)
  • Number of iterations (for instance 10,000)
  • Connection string
  • Procedure to execute (s_DoSomething in my case)

One other thing to watch for is the overhead that this tool puts on the machine. Ideally, the method is suitable to run from the same test environment that the SQL Server sits on. So I want my method to be fairly lightweight. This means that it’s best to handle iterations on SQL Server, which cuts down on the overhead associated with opening connections. So in most cases, instead of

exec dbo.s_DoSomething

I have

declare @i int = 0; while (@i < 10000) begin exec dbo.s_DoSomething; set @i+= 1; end

Notepad + DOS Method

I adapted this method from a clever trick I saw once. It was Paul Randal giving a demo on tempdb contention.  You can find a video of that demo by visiting this newsletter.
It’s a simple idea. You have two batch files; the first is called Run.bat:

echo off
sqlcmd -S MYSERVER\MYINSTANCE -E -Q "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end" > NUL
exit

and the second is called Run50.bat:

start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat
start run.bat

Click Run50.bat and boom: instant parallel activity. Now it is a little sloppy – it opens up 50 command windows – but it does the trick pretty nicely.

Using PowerShell

We should be able to do this in one line, right? Yep. Adapted from a post from Kendra Little, I came up with this:

foreach($worker in 1..50) `
{ `
  Start-Job `
    -ScriptBlock `
      { `
        Import-Module sqlps; `
        Invoke-Sqlcmd `
          -Query "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end" `
          -ServerInstance "MYSERVER\MYINSTANCE" `
          | Out-Null `
      } `
}

Except that I don’t think I got it quite right. Whatever I changed makes this method unsuitable.

The script schedules a job 50 times, but it takes about a minute just to schedule them all. Once they’re scheduled, the jobs take time to start and not enough of them work in parallel to generate the needed concurrent activity, so I give this method a “skip”. If you’re really comfortable with PowerShell, maybe you can get this to work faster. If you can, let me know.

C#

This is my preferred method. It started out as the program I used to test upsert concurrency at http://michaeljswart.com/go/upsert, but a friend at work showed me that .NET 4.5 has some nifty new asynchronous methods that make it look nicer, perform faster and weigh less.

using System;
using System.Linq;
using System.Data.SqlClient;
using System.Threading.Tasks;
 
namespace AConsoleApplication {
    class Program {
 
        static void Main( string[] args ) {
 
            var cs = new SqlConnectionStringBuilder();
            cs.DataSource = @"MYSERVER\MYINSTANCE";
            cs.InitialCatalog = "tempdb";
            cs.IntegratedSecurity = true;
            cs.AsynchronousProcessing = true;
 
            string connectionString = cs.ToString();
 
            Task[] tasks = Enumerable.Range( 0, 50 )
                .Select( i => RunCommand( connectionString ) )
                .ToArray();
 
            Task.WaitAll( tasks );
        }
 
        private static async Task RunCommand(
                string connectionString
            ) {
 
            string sql = @"s_DoSomething";
 
            SqlConnection conn = new SqlConnection( connectionString );
            conn.Open();
 
            SqlCommand cmd = new SqlCommand( sql, conn );
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
 
            for( int i = 0; i < 10000; i++ ) {
                try {
                    await cmd.ExecuteNonQueryAsync();    
                } catch( Exception ex ) {
                    Console.WriteLine( ex.Message );
                }
            }
        }
    }
}

HammerDB

HammerDB (originally HammerOra) is a free utility that allows users to run benchmarks against various environments. Although it was originally built for running benchmarks on Oracle, the utility now works on Windows and for SQL Server (hence the name change). I was first introduced to the utility via Kendra Little (again):

Follow these links to learn how to use the tool for its typical function, running benchmarks. Then, once you know how to do that, it’s a quick step to repurpose the tool for your own concurrent activity. For example, replace HammerDB’s generated script with this one:

#!/usr/local/bin/tclsh8.6
package require tclodbc 2.5.1
database connect odbc "DRIVER=SQL Server Native Client 11.0;SERVER=MYSERVER\\MYINSTANCE;PORT=1433;TRUSTED_CONNECTION=YES";
odbc "set nocount on; declare @i int = 0; while (@i < 10000) begin exec tempdb.dbo.s_DoSomething; set @i+= 1; end"
odbc disconnect

It’s steady and lightweight and works really well.

SQLQueryStress

SQLQueryStress is a tool written by Adam Machanic that you can download and install for free.

[Image: SQLQueryStress screenshot]

It’s fairly idiot-proof and avoids a lot of the complexity of the other methods. For my own purposes, I want the utility to be lightweight. So I …

  • remember to turn off “Collect I/O Statistics”
  • remember to turn off “Collect Time Statistics”
  • Set “Number of iterations” to one.
  • Iterate in the query window, i.e. modify the query to call s_DoSomething 10,000 times (see the sketch below).
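
For example, the query window might contain something like this (the same loop used in the other methods):

set nocount on;
declare @i int = 0;
while (@i < 10000)
begin
    exec tempdb.dbo.s_DoSomething;
    set @i += 1;
end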

My Ranking

  • C# console app (★★★★½): It performs fastest and uses the least resources. Also, because I’m biased, it gets a small bump by not suffering from NIH issues.
  • HammerDB (★★★): Very fast (2nd fastest), but the interface is clumsy and modifications need tclsh experience <yuck>. It’s best to use this for its intended purpose, as a benchmark tool.
  • Notepad and DOS (★★★): Quick and dirty and really fast. However, it’s still pretty clumsy. Interrupting a test part way through is difficult.
  • PowerShell (★½): Yuck. I couldn’t get two runs that looked the same, and it was near impossible to separate PowerShell CPU and memory pressure from SQL Server pressure.
  • SQLQueryStress (★★★★): It does the job well. It was a little difficult to interrupt a test. It also takes care to remain a lightweight framework.

Next Week

I’ll show some typical ways I define s_DoSomething.
