Michael J. Swart

August 16, 2023

Deploying Resource Governor Using Online Scripts

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:07 pm

When I deploy database changes, I like my scripts to be quick, non-blocking, rerunnable and resumable. I’ve discovered that:

  • Turning on Resource Governor is quick and online
  • Turning off Resource Governor is quick and online
  • Cleaning or removing configuration is easy
  • Modifying configuration may take some care

Turning on Resource Governor

Just like sp_configure, Resource Governor is configured in two steps. The first step is to specify the configuration you want; the second step is to run ALTER RESOURCE GOVERNOR RECONFIGURE.
But unlike sp_configure which has a “config_value” column and a “run_value” column, there’s no single view that makes it easy to determine what values are configured, and what values are in use. It turns out that the catalog views are the configured values and the dynamic management views are the current values in use:

Catalog Views (configuration)

  • sys.resource_governor_configuration
  • sys.resource_governor_external_resource_pools
  • sys.resource_governor_resource_pools
  • sys.resource_governor_workload_groups

Dynamic Management Views (running values and stats)

  • sys.dm_resource_governor_configuration
  • sys.dm_resource_governor_external_resource_pools
  • sys.dm_resource_governor_resource_pools
  • sys.dm_resource_governor_workload_groups

When a reconfigure is pending, these views can contain different information and getting them straight is the key to writing rerunnable deployment scripts.
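
A quick way to see both sets of values side by side is to query each pair of views directly. Here’s a minimal sketch comparing the single configuration row with its running counterpart (both views return exactly one row):

SELECT
	c.classifier_function_id AS configured_classifier_function_id,
	d.classifier_function_id AS running_classifier_function_id,
	c.is_enabled,
	d.is_reconfiguration_pending
FROM sys.resource_governor_configuration c /* configured */
CROSS JOIN sys.dm_resource_governor_configuration d; /* running */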

Turning on Resource Governor (Example)

Despite Erik Darling’s warning, say you want to restrict SSMS users to MAXDOP 1:

Plot a Course

use master;
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_resource_pools
	WHERE name = 'SSMSPool'
)
BEGIN
	CREATE RESOURCE POOL SSMSPool;
END
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_workload_groups
	WHERE name = 'SSMSGroup'
)
BEGIN
	CREATE WORKLOAD GROUP SSMSGroup 
	WITH (MAX_DOP = 1)
	USING SSMSPool;
END
 
IF ( OBJECT_ID('dbo.resource_governor_classifier') IS NULL )
BEGIN
	DECLARE @SQL NVARCHAR(1000) = N'
CREATE FUNCTION dbo.resource_governor_classifier() 
	RETURNS sysname 
	WITH SCHEMABINDING
AS
BEGIN
 
	RETURN 
		CASE APP_NAME()
			WHEN ''Microsoft SQL Server Management Studio - Query'' THEN ''SSMSGroup''
			ELSE ''default''
		END;
END';
	exec sp_executesql @SQL;
END;
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_configuration /* config */
	WHERE classifier_function_id = OBJECT_ID('dbo.resource_governor_classifier') )
   AND OBJECT_ID('dbo.resource_governor_classifier') IS NOT NULL
BEGIN
	ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.resource_governor_classifier); 
END

And when you’re ready, RECONFIGURE:

Make it so

IF EXISTS (
	SELECT *
	FROM sys.dm_resource_governor_configuration
	WHERE is_reconfiguration_pending = 1
) OR EXISTS (
	SELECT *
	FROM sys.resource_governor_configuration
	WHERE is_enabled = 0
)
BEGIN
	ALTER RESOURCE GOVERNOR RECONFIGURE;
END
GO

Turning off Resource Governor

Pretty straightforward: the emergency stop button looks like this:

ALTER RESOURCE GOVERNOR DISABLE;

If you ever find yourself in big trouble (because you messed up the classifier function for example), use the Dedicated Admin Connection (DAC) to disable Resource Governor. The DAC uses the internal workload group regardless of how Resource Governor is configured.

After you’ve disabled Resource Governor, you may notice that the resource pools and workload groups are still sitting there. The configuration hasn’t been cleaned up or anything.

Cleaning Up

Cleaning up doesn’t start out too bad: deal with the classifier function, then drop the groups and pools:

ALTER RESOURCE GOVERNOR DISABLE;
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL); 
DROP FUNCTION IF EXISTS dbo.resource_governor_classifier;
 
IF EXISTS (
	SELECT *
	FROM sys.resource_governor_workload_groups
	WHERE name = 'SSMSGroup'
)
BEGIN
	DROP WORKLOAD GROUP SSMSGroup;
END
 
IF EXISTS (
	SELECT *
	FROM sys.resource_governor_resource_pools
	WHERE name = 'SSMSPool'
)
BEGIN
	DROP RESOURCE POOL SSMSPool;
END

You’ll be left in a state where is_reconfiguration_pending = 1 but since Resource Governor is disabled, it doesn’t really matter.

Modifying Resource Governor configuration

This is kind of a tricky thing and everyone’s situation is different. My advice would be to follow this kind of strategy:

  • Determine if the configuration is correct, if not:
    • Turn off Resource Governor
    • Clean up
    • Configure correctly (plot a course)
    • Turn on (make it so)

Somewhere along the way, if you delete a workload group that some session is still using, then ALTER RESOURCE GOVERNOR RECONFIGURE may give this error message:

Msg 10904, Level 16, State 2, Line 105
Resource governor configuration failed. There are active sessions in workload groups being dropped or moved to different resource pools.
Disconnect all active sessions in the affected workload groups and try again.

You have to wait for those sessions to end (or kill them) before trying again. But which sessions? These ones:

SELECT 
	dwg.name [current work group], 
	dwg.pool_id [current resource pool], 
	wg.name [configured work group], 
	wg.pool_id [configured resource pool],
	s.*
FROM 
	sys.dm_exec_sessions s
INNER JOIN 
	sys.dm_resource_governor_workload_groups dwg /* existing groups */
	ON dwg.group_id = s.group_id
LEFT JOIN 
	sys.resource_governor_workload_groups wg /* configured groups */
	ON wg.group_id = s.group_id
WHERE 
	isnull(wg.pool_id, -1) <> dwg.pool_id
ORDER BY 
	s.session_id;

If you find your own session in that list, reconnect.
Once that list is empty feel free to try again.

January 3, 2023

Can your application handle all BIGINT values?

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:24 pm

In the past I’ve written about monitoring identity columns to ensure there’s room to grow.

But there’s a related danger that’s a little more subtle. Say you have a table whose identity column is an 8-byte bigint. An application that converts those values to a 4-byte integer will not always fail! Those applications will only fail if the value is larger than 2,147,483,647.

If the conversion of a large value is done in C#, you’ll get an OverflowException or an InvalidCastException, and if the conversion is done in SQL Server you’ll get this error message:

Msg 8115, Level 16, State 2, Line 21
Arithmetic overflow error converting expression to data type int.
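
The SQL Server half of that failure is easy to reproduce with a one-liner (a quick sketch, no table required):

-- 2,147,483,648 fits in a BIGINT but overflows an INT
SELECT CAST(CAST(2147483648 AS BIGINT) AS INT);
-- Msg 8115: Arithmetic overflow error converting expression to data type int.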

The danger

If such conversions exist in your application, you won’t see any problems until the bigint identity values are larger than 2,147,483,647. My advice then is to test your application with large identity values in a test environment. But how?

Use this script to set large values on BIGINT identity columns

On a test server, run this script to get commands to adjust bigint identity values to beyond the maximum value of an integer:

-- increase bigint identity columns
select 
	'DBCC CHECKIDENT(''' + 
	QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
	QUOTENAME(object_Name(object_id)) + ''', RESEED, 2147483648);
' as script
from 
	sys.identity_columns
where 
	system_type_id = 127
	and object_id in (select object_id from sys.tables);
 
-- increase bigint sequences
select 
	'ALTER SEQUENCE ' +
	QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
	QUOTENAME(object_Name(object_id)) + ' 
	RESTART WITH 2147483648 INCREMENT BY ' + 
	CAST(increment as sysname) +
	' NO MINVALUE NO MAXVALUE;
' as script
from 
	sys.sequences
where 
	system_type_id = 127;
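
The generated commands look something like this (the object names below are hypothetical placeholders for whatever the script finds in your database):

DBCC CHECKIDENT('[dbo].[MyBigTable]', RESEED, 2147483648);
 
ALTER SEQUENCE [dbo].[MyBigSequence]
	RESTART WITH 2147483648 INCREMENT BY 1 NO MINVALUE NO MAXVALUE;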

Prepared for testing

The identity columns in your test database are now prepared for testing. And hopefully you have an automated way to exercise your application code to find sneaky conversions to 4-byte integers. I found several of these hidden defects myself and I’m really glad I had the opportunity to tackle these before they became an issue in production.

November 25, 2022

Use RCSI to tackle most locking and blocking issues in SQL Server

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 12:54 pm

What’s the best way to avoid most blocking issues in SQL Server? Turn on Read Committed Snapshot Isolation (RCSI). That’s it.

Configuring RCSI

To see if it’s enabled on your database, use the is_read_committed_snapshot_on column in sys.databases like this:

select is_read_committed_snapshot_on
from sys.databases
where database_id = db_id();

To enable the setting alter the database like this:

ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON
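
One caveat: turning on READ_COMMITTED_SNAPSHOT needs a moment where no other sessions have open transactions in the database. On a busy system, a sketch like the following kicks other sessions out rather than waiting (whether rolling back their transactions is acceptable depends on your environment):

ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;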

Is it that easy?

Kind of. For the longest time at work, we ran our databases with this setting off. Mostly because that’s the default setting for SQL Server. As a result, we encountered a lot of blocking and deadlocks. I got really really good at interpreting deadlocks and blocking graphs. I’ve written many blog posts on blocking and I even wrote a handy tool (the blocked process report viewer) to help understand who the lead blocker was in a blocking traffic jam.

Eventually after a lot of analysis we turned on RCSI. Just that setting change probably gave us the biggest benefit for the least effort. We rarely have to deal with blocking issues. I haven’t made use of the blocked process report viewer in years.

Be like Severus Snape

I’m reminded of a note that Snape (from the Harry Potter books) wrote in his textbook on poison antidotes “Just shove a bezoar down their throats.” The idea was that you didn’t have to be good at diagnosing and creating antidotes because a bezoar was simply an “antidote to most poisons”.

In the same way, I’ve found that RCSI is an antidote to most blocking.

September 21, 2022

Batching Follow-Up

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:00 pm

When I wrote Take Care When Scripting Batches, I wanted to guard against a common pitfall when implementing a batching solution (n-squared performance). I suggested a way to be careful. But I knew that my solution was not going to be universally applicable to everyone else’s situation. So I wrote that post with a focus on how to evaluate candidate solutions.

But we developers love recipes for problem solving. I wish it was the case that for whatever kind of problem you got, you just stick the right formula in and problem solved. But unfortunately everyone’s situation is different and the majority of questions I get are of the form “What about my situation?” I’m afraid that without extra details, the best advice remains to do the work to set up the tests and find out for yourself.

Your Own Batches

Despite that, I’m still going to answer some common questions I get. But I’m going to continue to focus on how I evaluate each solution.
(Before reading further, you might want to re-familiarize yourself with the original article Take Care When Scripting Batches.)

Here are some questions I get:

What if the clustered index is not unique?

Or what if the clustered index had more than one column, such that the leading column was not unique? For example, imagine the table was created with this clustered primary key:

ALTER TABLE dbo.FactOnlineSales
ADD CONSTRAINT PK_FactOnlineSales
PRIMARY KEY CLUSTERED (DateKey, OnlineSalesKey)

How do we write a batching script in that case? It’s usually okay if you just use the leading column of the clustered index. The careful batching script looks like this now:

DECLARE
  @LargestKeyProcessed DATETIME = '20000101',
  @NextBatchMax DATETIME,
  @RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  SELECT TOP (1000) @NextBatchMax = DateKey
  FROM dbo.FactOnlineSales
  WHERE DateKey > @LargestKeyProcessed
    AND CustomerKey = 19036
  ORDER BY DateKey ASC;
 
  DELETE dbo.FactOnlineSales
  WHERE CustomerKey = 19036
    AND DateKey > @LargestKeyProcessed
    AND DateKey <= @NextBatchMax;
 
  SET @RC = @@ROWCOUNT;
  SET @LargestKeyProcessed = @NextBatchMax;
 
END

The performance is definitely comparable to the original careful batching script:

Logical Reads Per Delete (chart)

But is it correct? A lot of people wonder if the non-unique index breaks the batching somehow. And the answer is yes, but it doesn’t matter too much.

By limiting the batches by DateKey instead of the unique OnlineSalesKey, we are giving up batches that are exactly 1000 rows each. In fact, most of the batches in my test process somewhere between 1000 and 1100 rows and the whole thing requires three fewer batches than the original script. That’s acceptable to me.

If I know that the leading column of the clustering key is selective enough to keep the batch sizes pretty close to the target size, then the script is still accomplishing its goal.

What if the rows I have to delete are sparse?

Here’s another situation. What if instead of customer 19036, we were asked to delete customer 7665? This time, instead of deleting 45100 rows, we only have to delete 379 rows.

I try the careful batching script and see that all rows are deleted in a single batch. SQL Server was looking for batches of 1000 rows to delete. But since there aren’t that many, it scanned the entire table to find just 379 rows. It completed in one batch, but that single batch performed as poorly as the straight algorithm.

One solution is to create an index (online!) for these rows. Something like:

CREATE INDEX IX_CustomerKey 
ON dbo.FactOnlineSales(CustomerKey) 
WITH (ONLINE = ON);

Most batching scripts are one-time use. So maybe this index is one-time use as well. If it’s a temporary index, just remember to drop it after the script is complete. A temp table could also do the same trick.
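
If you go the temporary index route, the cleanup afterwards is a one-liner (a sketch; the IF EXISTS form needs SQL Server 2016 or later):

-- Remove the temporary index once the batching script is complete
DROP INDEX IF EXISTS IX_CustomerKey ON dbo.FactOnlineSales;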

With the index, the straight query only needed 3447 logical reads to find all the rows to delete:

DELETE dbo.FactOnlineSales WHERE CustomerKey = 7665;

Logical Reads (chart)

Can I use the Naive algorithm if I use a new index?

How does the Naive and other algorithms fare with this new index on dbo.FactOnlineSales(CustomerKey)?

The rows are now so easy to find that the Naive algorithm no longer has the n-squared behavior we worried about earlier. But there is some extra overhead. We have to delete from more than one index. And we’re doing many b-tree lookups (instead of just scanning a clustered index).

Remember the Naive solution looks like this:

DECLARE	@RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  DELETE TOP (1000) dbo.FactOnlineSales
  WHERE CustomerKey = 19036;
 
  SET @RC = @@ROWCOUNT
 
END

But now with the index, the performance looks like this (category Naive with Index):
Logical Reads Per Delete (chart)

The index definitely helps. With the index, the Naive algorithm definitely looks better than it did without the index. But it still looks worse than the careful batching algorithm.

But look at that consistency! Each batch processes 1000 rows and reads exactly the same amount. I might choose to use Naive batching with an index if I don’t know how sparse the rows I’m deleting are. There are a lot of benefits to having a constant runtime for each batch when I can’t guarantee that rows aren’t sparse.

Explore new solutions on your own

There are many different solutions I haven’t explored. This list isn’t comprehensive.

But it’s all tradeoffs. When faced with a choice between candidate solutions, it’s essential to know how to test and measure each one. SQL Server has more authoritative answers about the behavior of SQL Server than me or anyone else. Good luck.

September 7, 2022

This Function Generates UNPIVOT Syntax

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

Just like PIVOT syntax, UNPIVOT syntax is hard to remember.
When I can, I prefer to pivot and unpivot in the application, but here’s a function I use sometimes when I don’t want to scroll horizontally in SSMS.

CREATE OR ALTER FUNCTION dbo.GenerateUnpivotSql (@Sql NVARCHAR(MAX))
  RETURNS NVARCHAR(MAX) AS
BEGIN 
RETURN '
WITH Q AS 
(
  SELECT TOP (1) ' + 
  (
    SELECT 
      STRING_AGG(
        CAST(
          'CAST(' + QUOTENAME(NAME) + ' AS sql_variant) AS ' + QUOTENAME(NAME) 
          AS NVARCHAR(MAX)
        ), ',
    '
      )
    FROM sys.dm_exec_describe_first_result_set(@sql, DEFAULT, DEFAULT)
  ) + '
  FROM ( 
    ' + @sql + '
  ) AS O 
)
SELECT U.FieldName, U.FieldValue
FROM Q
UNPIVOT (FieldValue FOR FieldName IN (' +
  (
    SELECT STRING_AGG( CAST( QUOTENAME(name) AS NVARCHAR(MAX) ), ',
  ' ) 
  FROM sys.dm_exec_describe_first_result_set(@sql, DEFAULT, DEFAULT)
  ) + '
  )) AS U';
END
GO

And you might use it like this:

declare @sql nvarchar(max) ='SELECT * FROM sys.databases WHERE database_id = 2';
declare @newsql nvarchar(max) = dbo.GenerateUnpivotSql (@sql);
exec sp_executesql @sql;
exec sp_executesql @newsql;

to get results like this:
Results

Uses

I find this function useful whenever I want to take a quick look at one row without all that horizontal scrolling. Like when looking at sys.dm_exec_query_stats and other wide dmvs. This function is minimally tested, so caveat emptor.

August 9, 2022

Formatting Binary(10) LSN Values For Use In sys.fn_dblog()

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 3:06 pm

System procedures like sp_replincrementlsn and system functions like fn_cdc_get_min_lsn and fn_cdc_get_max_lsn return values that are of type binary(10).

These values represent LSNs (Log Sequence Numbers), which are an internal way to represent the ordering of records in the transaction log.

Typically as developers, we don’t care about these values. But when we do want to dig into the transaction log, we can do so with sys.fn_dblog which takes two optional parameters. These parameters are LSN values which limit the results of sys.fn_dblog. But the weird thing is that sys.fn_dblog is a function whose LSN parameters are NVARCHAR(25).

The function sys.fn_dblog doesn’t expect binary(10) values for its LSN parameters, it wants the LSN values as a formatted string, something like: 0x00000029:00001a3c:0002.

Well, to convert the binary(10) LSN values into the format expected by sys.fn_dblog, I came up with this function:

CREATE OR ALTER FUNCTION dbo.fn_lsn_to_dblog_parameter(
    @lsn BINARY(10)
)
RETURNS NVARCHAR(25)
AS 
BEGIN
  RETURN
    NULLIF(
      STUFF (
        STUFF (
          '0x' + CONVERT(NVARCHAR(25), @lsn, 2),
          11, 0, ':' ),
        20, 0, ':' ),
      '0x00000000:00000000:0000'
    )
END
GO

Example

I can increment the LSN once with a no-op and get back the lsn value with sp_replincrementlsn.
I can then use fn_lsn_to_dblog_parameter to get an LSN string to use as parameters to sys.fn_dblog.
This helps me find the exact log entry in the transaction that corresponds to that no-op:

DECLARE @lsn binary(10);
DECLARE @lsn_string nvarchar(25)
exec sp_replincrementlsn @lsn OUTPUT;
SET @lsn_string = dbo.fn_lsn_to_dblog_parameter(@lsn);
 
select @lsn_string as lsn_string, [Current LSN], Operation
from sys.fn_dblog(@lsn_string, @lsn_string);

March 18, 2022

UPSERT Requires a Unique Index

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:11 am

To avoid deadlocks when implementing the upsert pattern, make sure the index on the key column is unique. It’s not sufficient that all the values in that particular column happen to be unique. The index must be defined to be unique, otherwise concurrent queries can still produce deadlocks.

Say I have a table with an index on Id (which is not unique):

CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL,
	INDEX IX_UpsertTest CLUSTERED (Id)
)

I implement my test UPSERT procedure the way I’m supposed to like this:

CREATE OR ALTER PROCEDURE dbo.s_DoSomething  
AS 
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION 
	DECLARE @Id BIGINT = DATEPART(SECOND, GETDATE());
	DECLARE @IdString VARCHAR(100) = CAST(@Id AS VARCHAR(100)); 
 
	IF EXISTS ( 
		SELECT * 
		FROM dbo.UpsertTest WITH (UPDLOCK) 
		WHERE Id = @Id 
	) 
	BEGIN 
		UPDATE dbo.UpsertTest 
		SET IdString = @IdString 
		WHERE Id = @Id; 
	END 
	ELSE 
	BEGIN 
		INSERT dbo.UpsertTest (Id, IdString) 
		VALUES (@Id, @IdString); 
	END; 
COMMIT

When I exercise this procedure concurrently with many threads it produces deadlocks! I can use extended events and the output from trace flag 1200 to find out which locks are taken and in what order.
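
For reference, trace flag 1200 is an undocumented flag that prints each lock request as it’s made, and it needs trace flag 3604 to send that output to the client. A rough sketch of how I capture it for a single call (not something to leave enabled on a busy server):

DBCC TRACEON(3604, 1200); -- 3604 routes DBCC output to the client, 1200 prints lock requests
EXEC dbo.s_DoSomething;
DBCC TRACEOFF(3604, 1200);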

What Locks Are Taken?

It depends on the result of the IF statement. There are two main scenarios to look at. Either the row exists or it doesn’t.

Scenario A: The Row Does Not Exist (Insert)
These are the locks that are taken:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (ffffffffffff) which represents “infinity”

    For the Insert statement:

    • Acquire RangeI-N lock on resource (ffffffffffff)
    • Acquire X lock on resource (66467284bfa8) which represents the newly inserted row

Insert Scenario

Scenario B: The Row Exists (Update)
The locks that are taken are:

    For the IF EXISTS statement:

    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update Scenario

Scenario C: The Row Does Not Exist, But Another Process Inserts First (Update)
There’s a bonus scenario that begins just like the Insert scenario, but the process is blocked waiting for resource (ffffffffffff). Once it finally acquires the lock, the next locks that are taken look the same as the other Update scenario. The locks that are taken are:

    For the IF EXISTS statement:

    • Wait for RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (ffffffffffff)
    • Acquire RangeS-U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire RangeX-X lock on resource (66467284bfa8)
    • Acquire RangeX-X lock on resource (ffffffffffff)

Update After Wait Scenario

The Deadlock

And when I look at the deadlock graph, I can see that the two update scenarios (Scenario B and C) are fighting:
Scenario B:

  • Acquire RangeX-X lock on resource (66467284bfa8) during UPDATE
  • Blocked RangeX-X lock on resource (ffffffffffff) during UPDATE

Scenario C:

  • Acquire RangeS-U lock on resource (ffffffffffff) during IF EXISTS
  • Blocked RangeS-U lock on resource (66467284bfa8) during IF EXISTS

Why Isn’t This A Problem With Unique Indexes?

To find out, let’s take a look at one last scenario where the index is unique:
Scenario D: The Row Exists (Update on Unique Index)

    For the IF EXISTS statement:

    • Acquire U lock on resource (66467284bfa8)

    For the Update statement:

    • Acquire X lock on resource (66467284bfa8)

Visually, I can compare scenario B with Scenario D:
Update Two Scenarios

When the index is not unique, SQL Server has to take key-range locks on either side of the row to prevent phantom inserts, but it’s not necessary when the values are guaranteed to be unique! And that makes all the difference. When the index is unique, no lock is required on resource (ffffffffffff). There is no longer any potential for a deadlock.

Solution: Define Indexes As Unique When Possible

Even if the values in a column are unique in practice, you’ll help improve concurrency by defining the index as unique. This tip can be generalized to other deadlocks. Next time you’re troubleshooting a deadlock involving range locks, check to see whether the participating indexes are unique.
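
For the test table above, that just means declaring the clustered index as unique. A sketch of the fixed definition:

CREATE TABLE dbo.UpsertTest(
	Id INT NOT NULL,
	IdString VARCHAR(100) NOT NULL
);
 
-- The same clustered index as before, but now declared UNIQUE
CREATE UNIQUE CLUSTERED INDEX IX_UpsertTest ON dbo.UpsertTest(Id);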

This quirk of requiring unique indexes for the UPSERT pattern is not unique to SQL Server. I notice that PostgreSQL requires a unique index when using their “ON CONFLICT … UPDATE” syntax. This is something they chose to do very deliberately.

Other Things I Tried

This post actually comes from a real problem I was presented. It took a while to reproduce and I tried a few things before I settled on making my index unique.

Lock More During IF EXISTS?
Notice that there is only one range lock taken during the IF EXISTS statement, but there are two range locks needed for the UPDATE statement. Why is only one needed for the EXISTS statement? If extra rows get inserted above the row that was read, that doesn’t change the answer to EXISTS. So it’s technically not a phantom read and so SQL Server doesn’t take that lock.

So what if I changed my IF EXISTS to

IF ( 
	SELECT COUNT(*)
	FROM dbo.UpsertTest WITH (UPDLOCK) 
	WHERE Id = @Id 
) > 0

That IF statement now takes two range locks which is good, but it still gets tripped up with Scenario C and continues to deadlock.

Update Less?
Change the update statement to only update one row using TOP (1)

UPDATE TOP (1) dbo.UpsertTest 
SET IdString = @IdString
WHERE Id = @Id;

During the update statement, this only requires one RangeX-X lock instead of two. And that technique actually works! I was unable to reproduce deadlocks with TOP (1). So it is indeed a candidate solution, but making the index unique is still my preferred method.

February 7, 2022

Five Ways Time Makes Unit Tests Flaky

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 12:21 pm
I explore different sources of test flakiness related to time.

A flaky test is a unit test that sometimes passes and sometimes fails. The causes of these flaky tests are often elusive because they’re not consistently reproducible.

I’ve found that unit tests that deal with dates and times are notorious for being flaky – especially such tests that talk to SQL Server. I want to explore some of the reasons this can happen.

My Setup


All scripts and code samples are available on github.
In the examples I discuss below, I’m using a table defined like this:

CREATE TABLE dbo.MESSAGE_LOG
(
	LogId INT IDENTITY NOT NULL 
		PRIMARY KEY,
	LogMessage NVARCHAR(MAX) NOT NULL,
	LastUpdate DATETIME
		DEFAULT (SYSDATETIME())
)

I also wrote some methods that execute these queries:

AddLogMessage

INSERT dbo.MESSAGE_LOG(LogMessage)
OUTPUT inserted.LogId
VALUES (@Message);

AddLogMessageWithDate
Same method but this allows the application to supply the LastUpdate value

INSERT dbo.MESSAGE_LOG(LogMessage, LastUpdate)
OUTPUT inserted.LogId
VALUES (@Message, @LastUpdate)

UpdateLogMessage

UPDATE dbo.MESSAGE_LOG
SET LogMessage = @Message,
    LastUpdate = SYSDATETIME()
WHERE LogId = @LogId

Sources of Flaky Tests

In no particular order:

Tests Run Too Quick?

The following test checks to see that UpdateMessage updated the LastUpdate column.

[Test]
public void UpdateMessage_DateIsUpdated_1() {
    string message = Guid.NewGuid().ToString();
    int logId = m_data.AddLogMessage( message );
    LogMessageDto? dto = m_data.GetLogMessage( logId );
    DateTime createdDate = dto.LastUpdate;
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    dto = m_data.GetLogMessage( logId );
    DateTime updatedDate = dto.LastUpdate;
 
    // The following assertion may fail! 
    // updatedDate and createdDate are Equal if the server is fast enough
    Assert.Greater( updatedDate, createdDate ); 
}

The test ran so quickly that updatedDate has the same value as createdDate. This test may fail with this error message:

    Failed UpdateMessage_DateIsUpdated [55 ms]
    Error Message:
    Expected: greater than 2022-02-05 15:18:10.33
    But was: 2022-02-05 15:18:10.33

It’s tempting to get around this by adding a Thread.Sleep call between the insert and update. I don’t recommend it. That kind of pattern adds up and really lengthens the time it takes to run all tests.

Another solution might involve changing Greater to GreaterOrEqual but then we can’t verify that the value has actually been updated.

Storing dates using a more precise datetime type like DATETIME2 may help avoid some of these failures, but maybe not all of them.

The Right Way
Ideally we want to set up the test case such that the LastUpdate value is a constant date that’s definitely in the past. I would change this test to use AddLogMessageWithDate instead of AddLogMessage:

    DateTime then = new DateTime(2000, 01, 01);
    int logId = m_data.AddLogMessageWithDate( message, then );

Not All DateTimes Are Created Equal


.Net’s DateTime is different from SQL Server’s DATETIME. Specifically, they have different precisions. DATETIME values in SQL Server are rounded to increments of .000, .003, or .007 seconds. This means that you can’t store a .Net DateTime value in SQL Server and get exactly the same value back. This test demonstrates the problem:

[Test]    
public void StoreDate_ReadItBack() {
    // Store date
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    // Read it back
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // The following assertion may fail! 
    // SQL Server's DATETIME has a different precision than .Net's DateTime
    Assert.AreEqual( now, dto.LastUpdate );
}

It may fail with:

    Failed StoreDate_ReadItBack [101 ms]
    Error Message:
    Expected: 2022-02-04 15:11:20.4474577
    But was: 2022-02-04 15:11:20.447
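
The rounding is easy to see directly in T-SQL (a quick sketch):

-- DATETIME rounds to .000, .003 or .007 seconds; DATETIME2 keeps the extra digits
SELECT CAST(CAST('2022-02-04 15:11:20.4474577' AS DATETIME2) AS DATETIME);
-- returns 2022-02-04 15:11:20.447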

The Right Way
Understanding the resolution limitations of SQL Server’s DateTime is important here. A few solutions come to mind:

  • Maybe use a constant value instead of “now”
  • Modify the database columns to use SQL Server’s DATETIME2 which has a better resolution
  • Just fetch “now” from the database. I like this idea. When I use it again later, I’ll go into more detail.

Time Zones (Of Course)


Running integration tests that talk to a database on a separate server can mean translating times back and forth between both servers. This leads to another common source of flakiness: time zones. It’s not easy to avoid this kind of issue. Both Azure and AWS try to tackle this by using UTC everywhere.

A flaky test might look like this:

public void UpdateMessage_DateIsUpdated_2() {
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // This next assertion can fail if the database is in a different time zone        
    Assert.GreaterOrEqual( dto.LastUpdate, now );
}

It fails like this:

    Failed UpdateMessage_DateIsUpdated_2 [19 ms]
    Error Message:
    Expected: greater than or equal to 2022-02-05 21:06:54.521464
    But was: 2022-02-05 16:06:54.52

Why is this pattern a source of flaky tests? The success of the test depends on the time zones of the test server and the database server. But even if you control both time zones, this particular example is still vulnerable to clock drift as we’ll see later.

The Right Way
Use a constant time or try fetching “now” from the database.

DateTime now = m_nowProvider.Now();

Here I’m using a method I wrote which simply returns the value of SELECT GETDATE(); from the database.

Clock Drift


Related to time zones is clock drift which again causes errors when you compare dates from two different servers.

No server’s clock is perfect and I like to think of each server’s clock as having its own time zone. Windows tells me that my laptop is set at (UTC -05:00) but with clock drift it’s probably something like (UTC -05:00:01.3). You can work at synchronizing clocks, but unless you’re testing that synchronization, you shouldn’t depend on it in your tests.

Just like in the case with time zones, this test may fail when it compares times from two different clocks:

public void UpdateMessage_DateIsUpdated_3() {
    string message = Guid.NewGuid().ToString();
    DateTime now = DateTime.Now;
    int logId = m_data.AddLogMessageWithDate( message, now );
 
    string newMessage = Guid.NewGuid().ToString();
    m_data.UpdateLogMessage( logId, newMessage );
 
    LogMessageDto? dto = m_data.GetLogMessage( logId );
 
    // This next assertion can fail if the clock on the database server is off by a few seconds
    Assert.GreaterOrEqual( dto.LastUpdate, now );
}

The Right Way
Just like before, use a constant value or try fetching “now” from the database.

DateTime now = m_nowProvider.Now();

This way we’re comparing times from only one server’s clock.

Daylight Savings (Of Course)


This next test is flaky because of daylight savings time. It’s not specific to SQL Server but I thought I’d include it because I have been burned by this before:

[Test]    
public void StoreDateInTheFuture() {
    string message = Guid.NewGuid().ToString();
    DateTime inAMonth = DateTime.Now + TimeSpan.FromDays( 30 );        
 
    // ConvertTime may fail because "a month from now" may be an invalid DateTime (with daylight savings)
    inAMonth = TimeZoneInfo.ConvertTime( inAMonth, TimeZoneInfo.Local );
    m_data.AddLogMessageWithDate( message, inAMonth );
    Assert.Pass();
}

I saw a test just like this one fail at 2:18 AM on February 9th, 2018. Adding 30 days to that date brought us to 2:18 AM on March 11th, which was right in the middle of the hour we were skipping for daylight savings time, and that’s what caused the error. This test fails with:

    Failed StoreDateInTheFuture [32 ms]
    Error Message:
    System.ArgumentException : The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid. (Parameter ‘dateTime’)

Summary


Flaky tests come from non-deterministic tests. To quote Martin Fowler, “Few things are more non-deterministic than a call to the system clock”. Try to:

  • Write tests with hard coded dates
  • Avoid comparing dates sourced from two different clocks
  • Consider writing a “NowProvider” (which can be mocked!)
  • Be very deliberate about time zones
  • Be very deliberate about data types (both in C# and SQL Server)

January 19, 2022

Measure the Effect of “Cost Threshold for Parallelism”

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 10:40 am

The configuration setting cost threshold for parallelism has a default value of 5. As a default value, it’s probably too low and should be raised. But what benefit are we hoping for? And how can we measure it?

The database that I work with is a busy OLTP system with lots of very frequent, very inexpensive queries and so I don’t like to see any query that needs to go parallel.

What I’d like to do is raise the configuration cost threshold to something larger and look at the queries that have gone from multi-threaded to single-threaded. I want to see that these queries become cheaper on average. By cheaper I mean consume less cpu. I expect the average duration of these queries to increase.

How do I find these queries? I can look in the cache. The view sys.dm_exec_query_stats can tell me if a query plan is parallel, and I can look into the plans for the estimated cost. In my case, I have relatively few parallel queries. Only about 300 which means the xml parsing piece of this query runs pretty quickly.

Measure the Cost of Parallel Queries

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.last_dop > 1
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
AND execution_count > 10
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);

What Next?

Keep track of the queries you see whose estimated subtree cost is below the new threshold you’re considering. Especially keep track of the query_hash and the average_query_cpu_in_microseconds.
Then make the change and compare the average_query_cpu_in_microseconds before and after. Remember to use the query_hash as the key because the query_plan_hash will have changed.
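
Making the change itself is a quick sp_configure call (a sketch; the value 50 below is just an example, use whatever threshold you settled on):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
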
Here’s the query modified to return the “after” results:

Measure the Cost of Those Queries After Config Change

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT 
	sql_text.[text] as sqltext,
	qp.query_plan,
	xml_values.subtree_cost as estimated_query_cost_in_query_bucks,
	qs.last_dop,
	CAST( qs.total_worker_time / (qs.execution_count + 0.0) as money ) as average_query_cpu_in_microseconds,
	qs.total_worker_time,
	qs.execution_count,
	qs.query_hash,
	qs.query_plan_hash,
	qs.plan_handle,
	qs.sql_handle	
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
CROSS APPLY 
	(
		SELECT SUBSTRING(st.[text],(qs.statement_start_offset + 2) / 2,
		(CASE 
			WHEN qs.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),st.[text])) * 2
			ELSE qs.statement_end_offset + 2
			END - qs.statement_start_offset) / 2)
	) as sql_text([text])
OUTER APPLY 
	( 
		SELECT 
			n.c.value('@QueryHash', 'NVARCHAR(30)')  as query_hash,
			n.c.value('@StatementSubTreeCost', 'FLOAT')  as subtree_cost
		FROM qp.query_plan.nodes('//StmtSimple') as n(c)
	) xml_values
WHERE qs.query_hash in ( /* put the list of query_hash values you saw from before the config change here */ )
AND sys.fn_varbintohexstr(qs.query_hash) = xml_values.query_hash
ORDER BY xml_values.subtree_cost
OPTION (RECOMPILE);

What I Found

August 9, 2021

Find Procedures That Use SELECT *

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

I have trouble with procedures that use SELECT *. They are often not “Blue-Green safe”. In other words, if a procedure has a query that uses SELECT * then the underlying tables can’t change without causing some tricky deployment issues. (The same is not true for ad hoc queries from the application).

I also have a lot of procedures to look at (about 5000) and I’d like to find the procedures that use SELECT *.
I want to maybe ignore SELECT * when selecting from a subquery with a well-defined column list.
I also want to maybe include related queries like OUTPUT inserted.*.

The Plan

  1. So I’m going to make a schema-only copy of the database to work with.
  2. I’m going to add a new dummy-column to every single table.
  3. I’m going to use sys.dm_exec_describe_first_result_set_for_object to look for any of the new columns I created

Any of my new columns that show up were selected with SELECT *.

The Script

use master;
DROP DATABASE IF EXISTS search_for_select_star;
DBCC CLONEDATABASE (the_name_of_the_database_you_want_to_analyze, search_for_select_star);
ALTER DATABASE search_for_select_star SET READ_WRITE;
GO
 
use search_for_select_star;
 
DECLARE @SQL NVARCHAR(MAX);
SELECT 
	@SQL = STRING_AGG(
		CAST(
			'ALTER TABLE ' + 
			QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + 
			'.' + 
			QUOTENAME(OBJECT_NAME(object_id)) + 
			' ADD NewDummyColumn BIT NULL' AS NVARCHAR(MAX)),
		N';')
FROM 
	sys.tables;
 
exec sp_executesql @SQL;
 
SELECT 
	SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name, 
	r.column_ordinal,
	r.name
FROM 
	sys.procedures p
CROSS APPLY 
	sys.dm_exec_describe_first_result_set_for_object(p.object_id, NULL) r
WHERE 
	r.name = 'NewDummyColumn'
ORDER BY 
	p.schema_id, p.name;
 
use master;
DROP DATABASE IF EXISTS search_for_select_star;

Update

Tom from StraightforwardSQL pointed out a nifty feature that Microsoft has already implemented: sys.dm_sql_referenced_entities reports whether a module uses SELECT *. You can use it like this:

select distinct SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name
from sys.procedures p
cross apply sys.dm_sql_referenced_entities(
	object_schema_name(object_id) + '.' + object_name(object_id), default) re
where re.is_select_all = 1

Comparing the two, I noticed that my query – the one that uses dm_exec_describe_first_result_set_for_object – has some drawbacks. Maybe the SELECT * isn’t actually included in the first result set, but in some subsequent result set. Or maybe the result set couldn’t be described at all for one of various reasons.

On the other hand, I noticed that dm_sql_referenced_entities has a couple of drawbacks itself. It doesn’t seem to capture statements that use OUTPUT INSERTED.*, for example.

In practice though, I found the query that Tom suggested works a bit better. In the product I work most closely with, dm_sql_referenced_entities only missed 3 procedures that dm_exec_describe_first_result_set_for_object caught. But dm_exec_describe_first_result_set_for_object missed 49 procedures that dm_sql_referenced_entities caught!
