Michael J. Swart

October 1, 2021

A System-Maintained LastModifiedDate Column

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 11:13 am

I like rowversion columns. I like that they’re system-maintained and that they provide a unique, deterministic way to compare or order changes in a database. But they’re not timestamps (despite the alias).

I also like datetime2 columns called LastModifiedDate. They indicate the date and time that a row was last modified. But I have to maintain the column myself: I either have to remember to update it on every update, or I have to use something like a trigger with its associated overhead.

But maybe there’s another option.

GENERATED ALWAYS AS ROW START
What if I use the columns that were meant to be used for temporal tables, but leave SYSTEM_VERSIONING off?

CREATE TABLE dbo.Test
(
	Id INT IDENTITY NOT NULL,
	Value VARCHAR(100) NOT NULL,
	LastModifiedDate DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
	SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,	
	PERIOD FOR SYSTEM_TIME (LastModifiedDate, SysEndTime),
	CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (Id)
)

It’s system maintained, it’s an actual datetime and I don’t have to worry about the overhead of triggers.
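Here’s a quick check of the behavior (a minimal sketch using the dbo.Test table above):

INSERT dbo.Test (Value) VALUES ('Initial value');
SELECT Id, Value, LastModifiedDate FROM dbo.Test; -- LastModifiedDate reflects the insert time
 
WAITFOR DELAY '00:00:05';
 
UPDATE dbo.Test SET Value = 'Updated value' WHERE Id = 1;
SELECT Id, Value, LastModifiedDate FROM dbo.Test; -- LastModifiedDate now reflects the update time
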
But it’s not a perfect solution:

  • SysEndTime is required but it’s an unused column here. It’s needed to define the PERIOD FOR SYSTEM_TIME and it will always hold DATETIME2’s maximum value. That’s why I made that column hidden. It adds an overhead of 8 bytes per row.
  • The value for LastModifiedDate will be the starting time of the transaction that last modified the row. That might lead to confusing behaviors illustrated by this example. Say that:
    • Transaction A starts
    • Transaction B starts
    • Transaction B modifies a row
    • Transaction A modifies the same row

    After all that, the last modified date will indicate the time that transaction A starts. In fact if I try these shenanigans with system versioning turned on, I get this error message when transaction A tries to modify the same row:

      Msg 13535, Level 16, State 0, Line 16
      Data modification failed on system-versioned table ‘MyDb.dbo.Test’ because transaction time was earlier than period start time for affected records.

Everything is tradeoffs. If you can live with the drawbacks of the system-generated last modified date column, then it might be an option worth considering.

August 9, 2021

Find Procedures That Use SELECT *

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:00 pm

I have trouble with procedures that use SELECT *. They are often not “Blue-Green safe”. In other words, if a procedure has a query that uses SELECT *, then the underlying tables can’t change without causing some tricky deployment issues. (The same is not true for ad hoc queries from the application.)

I also have a lot of procedures to look at (about 5000) and I’d like to find the procedures that use SELECT *.
I want to maybe ignore SELECT * when selecting from a subquery with a well-defined column list.
I also want to maybe include related queries like OUTPUT inserted.*.

The Plan

  1. So I’m going to make a schema-only copy of the database to work with.
  2. I’m going to add a new dummy-column to every single table.
  3. I’m going to use sys.dm_exec_describe_first_result_set_for_object to look for any of the new columns I created.

Any of my new columns that show up were selected with SELECT *.

The Script

use master;
DROP DATABASE IF EXISTS search_for_select_star;
DBCC CLONEDATABASE (the_name_of_the_database_you_want_to_analyze, search_for_select_star);
ALTER DATABASE search_for_select_star SET READ_WRITE;
GO
 
use search_for_select_star;
 
DECLARE @SQL NVARCHAR(MAX);
SELECT 
	@SQL = STRING_AGG(
		CAST(
			'ALTER TABLE ' + 
			QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + 
			'.' + 
			QUOTENAME(OBJECT_NAME(object_id)) + 
			' ADD NewDummyColumn BIT NULL' AS NVARCHAR(MAX)),
		N';')
FROM 
	sys.tables;
 
exec sp_executesql @SQL;
 
SELECT 
	SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name, 
	r.column_ordinal,
	r.name
FROM 
	sys.procedures p
CROSS APPLY 
	sys.dm_exec_describe_first_result_set_for_object(p.object_id, NULL) r
WHERE 
	r.name = 'NewDummyColumn'
ORDER BY 
	p.schema_id, p.name;
 
use master;
DROP DATABASE IF EXISTS search_for_select_star;

Update

Tom from StraightforwardSQL pointed out a nifty feature that Microsoft has already implemented: sys.dm_sql_referenced_entities has an is_select_all column.

You can use it like this:

select distinct SCHEMA_NAME(p.schema_id) + '.' + p.name AS procedure_name
from sys.procedures p
cross apply sys.dm_sql_referenced_entities(
	object_schema_name(object_id) + '.' + object_name(object_id), default) re
where re.is_select_all = 1

Comparing the two, I noticed that my query – the one that uses dm_exec_describe_first_result_set_for_object – has some drawbacks. Maybe the SELECT * isn’t actually included in the first result set, but in some subsequent result set. Or maybe the result set couldn’t be described for one of various reasons.

On the other hand, I noticed that dm_sql_referenced_entities has a couple of drawbacks itself. It doesn’t seem to capture statements that use OUTPUT inserted.* for example.
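Based on that observation, a procedure shaped like this hypothetical one (dbo.AddUser and dbo.Users are made up for illustration) would slip past the is_select_all check even though its result set includes every column of dbo.Users:

CREATE PROCEDURE dbo.AddUser
	@DisplayName VARCHAR(100)
AS
BEGIN
	INSERT dbo.Users (DisplayName)
	OUTPUT inserted.*   -- returns every column, so new columns change this result set
	VALUES (@DisplayName);
END
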

In practice though, I found the query that Tom suggested works a bit better. In the product I work most closely with, dm_sql_referenced_entities only missed 3 procedures that dm_exec_describe_first_result_set_for_object caught. But dm_exec_describe_first_result_set_for_object missed 49 procedures that dm_sql_referenced_entities caught!

August 4, 2021

What To Avoid If You Want To Use MERGE

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 1:06 pm

Aaron Bertrand has a post called Use Caution with SQL Server’s MERGE Statement. It’s a pretty thorough compilation of all the problems and defects associated with the MERGE statement that folks have reported in the past. But it’s been a few years since that post and in the spirit of giving Microsoft the benefit of the doubt, I revisited each of the issues Aaron brought up.

Some of the items can be dismissed based on circumstances. I noticed that:

  • Some of the issues are fixed in recent versions (2016+).
  • Some of the issues that have been marked as won’t fix have been fixed anyway (the repro script associated with the issue no longer fails).
  • Some of the items are complaints about confusing documentation.
  • Some complaints are about issues that are not limited to the MERGE statement (e.g. concurrency and constraint checks).

So what about the rest? In what circumstances might I decide to use a MERGE statement? What do I still need to avoid? In 2019 and later, if I’m using MERGE, I want to avoid:

It’s a shorter list than Aaron’s, but there’s another gotcha. The same way that some of the items get addressed with time, new issues continue to pop up. For example, temporal tables are a relatively new feature that wasn’t a thing when Aaron first posted his list. And so I also want to avoid:

If MERGE has trouble with both old and new features, then it becomes clear that MERGE is a very complicated beast to implement. It’s not an isolated feature, and it multiplies the number of defects that are possible.

Severity of Issues

There’s a large number of issues with a wide variety of severity. Some of the issues are minor annoyances or easily avoidable. Some of them are serious performance issues that are harder to deal with. But a few of the issues can be worse than that! If I ask SQL Server to UPDATE something and SQL Server responds with (1 row affected), then it had better have affected that row! If it didn’t, then that’s a whole new level of severity.

That leads me to what I think is the worst, unfixed bug. To be safe from it, avoid MERGEs with:

The defect is called Merge statement Delete does not update indexed view in all cases. It’s still active as of SQL Server 2019 CU9, and it leaves your indexed view inconsistent with the underlying tables. In my case, I can’t predict what indexed views would be created in the future, so I would shorten my advice to say avoid MERGEs with:

Conclusion

When I first started writing this post, I thought the gist was going to be “MERGE isn’t so bad if you just avoid these things”. But it still is bad. Aaron’s list is a little out of date but his advice is not. If Aaron updated that post today, the list of defects and issues with MERGE would be different and just as concerning to me as the day he wrote it.

So, to simplify my advice to you, avoid:

  • MERGE

April 1, 2021

Only UPDATE Rows That Are Changing, But Do It Carefully

Filed under: Miscelleaneous SQL,Technical Articles,Tongue In Cheek — Michael J. Swart @ 12:20 pm

If you update a column to the exact same value as it had before, there’s still work being done.

Quite obediently, SQL Server takes out its eraser, erases the old value, and writes the same value in its place even though nothing changed!

But it feels like a real change. It has consequences for locking and an impact on the transaction log just as if it were a real change.

So that leads to performance optimizations that look like this:

Original Update Statement:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id;

Only Update When Necessary:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id
AND    DisplayName <> @NewDisplayName;

But Take Care!

Be careful with this kind of optimization. For example, you have to double check that DisplayName is not a nullable column (do you know why?).
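If DisplayName were nullable, a NULL-safe sketch of the guarded update might look something like this (using the same Users example):

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id
AND    (   DisplayName <> @NewDisplayName
        OR (DisplayName IS NULL AND @NewDisplayName IS NOT NULL)
        OR (DisplayName IS NOT NULL AND @NewDisplayName IS NULL) );

The plain DisplayName <> @NewDisplayName predicate evaluates to UNKNOWN when either side is NULL, so without those extra checks a NULL-to-value or value-to-NULL change would never be applied. There are other things to worry about too, mostly side effects: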

Side Effects

This simple update statement can have loads of side effects that can be hard to see. And the trouble with any side effect is that other people can place dependencies on it! It happens all the time. Here is a list of just some of the side effects I can think of; I’m sure it’s not exhaustive.

Triggers: Ugh, I dislike triggers at the best of times, so check out any triggers that might exist on the table. In the original UPDATE statement, the row always appears in the INSERTED and DELETED tables, but in the improved version, it does not necessarily appear there. You have to see if that matters.

RowCount: What if the original update statement was part of a batch that looked like this:

UPDATE Users
SET    DisplayName = @NewDisplayName
WHERE  Id = @Id;
 
IF (@@ROWCOUNT = 0)
    RAISERROR ('Could not find User to update', 16, 1);

At least this side effect has the benefit of not being hidden. It’s located right beside the code that it depends on.

Rowversion: A rowversion value changes every time a row changes. Such a column would get updated in the original UPDATE statement, but not in the improved version. I can think of a number of reasonable use cases that might depend on a rowversion column. ETLs, for example, that only care about changed data. So this might actually be an improvement for that ETL, but then again, maybe the number of “changed” rows was the important part and that number is now changing with the improvement. Speaking of ETLs:

Temporal Tables: Yep, the UPDATE statement is a “change” in the table that gets tracked in temporal history.

Change Data Capture, etc…: I haven’t bothered to set up Change Data Capture to check, but I assume that an UPDATE statement that updates a row to the same value is still considered a change. Right or wrong, the performance improvement changes that assumption.

People Depend On Side Effects

When I see people do this, I start to feel grouchy: Someone’s getting in the way of my performance improvement! But it happens. People depend on side effects like these all the time. I’m sure I do. XKCD pokes fun at this with Workflow where he notices that “Every change breaks someone’s workflow”. And now I’m imagining a case where some knucklehead is using the growth of the transaction log as a metric, like “Wow, business is really booming today, 5GB of transaction log growth and it’s not even noon!”

Although these are silly examples, there are of course more legit examples I could probably think of. And so in a well-functioning organization, we can’t unilaterally bust other people’s workflows (as much as we might like to).

January 26, 2021

Avoid This Pitfall When Using sp_getapplock

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 11:10 am

Takeaway: For most use cases, using sp_releaseapplock is unnecessary, especially when using @LockOwner = 'Transaction' (which is the default).

The procedure sp_getapplock is a system stored procedure that can be helpful when developing SQL for concurrency. It takes a lock on an imaginary resource and it can be used to avoid race conditions.

But I don’t use sp_getapplock a lot. I almost always depend on SQL Server’s normal locking of resources (like tables, indexes, rows etc…). But I might consider it for complicated situations (like managing sort order in a hierarchy using a table with many different indexes).

In that case, I might use it something like this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
exec sp_releaseapplock
    @Resource = @LockResourceName, 
    @LockOwner = 'Transaction';
 
COMMIT

But there’s a problem with this pattern, especially when using RCSI. After sp_releaseapplock is called, but before the COMMIT completes, another process running the same code can read the previous state. In the example above, both processes will think a time slot is available and will try to make the same reservation.

What I really want is to release the applock after the commit. And because I specified the lock owner as 'Transaction', that gets done automatically when the transaction ends! So really what I want is this:

BEGIN TRAN
 
exec sp_getapplock 
    @Resource = @LockResourceName, 
    @LockMode = 'Exclusive', 
    @LockOwner = 'Transaction';
 
/* read stuff (e.g. "is time slot available?") */
/* change stuff (e.g. "make reservation") */
 
COMMIT -- all locks are freed after this commit

January 18, 2021

Pushing SQL Server Higher

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication — Michael J. Swart @ 9:39 am

I sometimes get frustrated with SQL Server. It’s a tricky and challenging product to work with and some days I feel like I’ve only scratched the surface.

But sometimes I’m really impressed with what it can do. It really is a good (if expensive) product.

With the increase in online learning, last week we broke a record for a single SQL Server. 254,000 batch requests per second! That’s the first time we’ve pushed it above the quarter million mark.

(The screen shot was from a day earlier, but you get the idea)

I’d like to hear from others that have reached these kinds of numbers in prod to talk about their experiences. Find me on twitter (@MJSwart).

Find Out More About Our Experience

I’ll be presenting to the Ohio North User Group this February 2nd; the talk is called How High Can We Scale SQL Server? Feel free to sign up!
I have fond memories of presenting to folks in Cleveland. Why they host their SQL Saturdays in February is still a mystery. And so I’m glad to present virtually this February to keep up my streak.

January 8, 2021

Collect Wait Stats Regularly in Order To Report On Them Over Time

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:48 pm

I find wait stats so useful that I’ve got Paul Randal’s SQL Server Wait Statistics (or please tell me where it hurts…) bookmarked and I visit it frequently.

But that gives the total amount of waits for each wait type accumulated since the server was started. And that isn’t ideal when I’m troubleshooting trouble that started recently. No worries, Paul also has another fantastic post Capturing wait statistics for a period of time.

Taking that idea further, I can collect data all the time and look at it historically, or just use it as a baseline. Lots of monitoring tools do this already, but I’ve written my own collection scripts (they’re in the github repo mentioned below).
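The gist is something like this sketch (a minimal version with made-up table and column names, not the actual repo scripts): snapshot sys.dm_os_wait_stats on a schedule and report on the deltas between snapshots later.

CREATE TABLE dbo.WaitStatsHistory
(
    CollectionTime DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    WaitType NVARCHAR(60) NOT NULL,
    WaitingTasksCount BIGINT NOT NULL,
    WaitTimeMs BIGINT NOT NULL,
    SignalWaitTimeMs BIGINT NOT NULL,
    CONSTRAINT PK_WaitStatsHistory PRIMARY KEY (CollectionTime, WaitType)
);
 
-- Run on a schedule (e.g. a SQL Agent job every few minutes):
INSERT dbo.WaitStatsHistory (WaitType, WaitingTasksCount, WaitTimeMs, SignalWaitTimeMs)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
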

Mostly I’m creating these scripts for me. I’ve created a version of these a few times now and for some reason, I can’t find them each time I need them again!

This stuff can be super useful, especially if you combine it with a visualization tool (like PowerBI or even Excel).
For example, here’s a chart I made when we were experiencing the XVB_LIST spinlock issues I wrote about not too long ago. Good visualizations can really tell powerful stories.

A visualization of a spinlock graph

I’m talking here about spins and not waits of course, but the idea is the same and I’ve included the spinlock monitoring scripts in the same repo on github.

Also, a quick reminder that wait stats aren’t everything. Don’t neglect monitoring resources, as Greg Gonzales pointed out last year.

October 28, 2020

Detect Excessive Spinlock Contention on SQL Server

Scaling SQL Server High
The beginning of the school year is behind us and what a semester start! 2020 has been tough on many of us and I’m fortunate to work for a company whose services are in such high demand. In fact we’ve seen some scaling challenges like we’ve never seen before. I want to talk about some of them.

Context

As we prepared to face unprecedented demand this year, we began to think about whether bigger is better. Worried about CPU limits, we looked to what AWS had to offer in terms of their instance sizes.

We were already running our largest SQL Servers on r5 instances with 96 logical CPUs. But we decided to evaluate the pricy u instances which have 448 logical CPUs and a huge amount of memory.

Painful Symptoms

Well, bigger is not always better. We discovered that as we increased the load on the u-series servers, there would come a point where all processors would jump to 100% and stay there. You could say it plateaued (based on the graph, would that be a plateau? A mesa? Or a butte?)

Graph showing CPU plateaued at 100%

When that occurred, the number of batch requests that the server could handle dropped significantly. So we saw more CPU use, but less work was getting done.

The high demand kept the CPU at 100% with no relief until the demand decreased. When that happened, the database seemed to recover. Throughput was restored and the database’s metrics became healthy again. During this trouble we looked at everything including the number of spins reported in the sys.dm_os_spinlock_stats dmv.

The spins and backoffs reported seemed extremely high, especially for the category “XVB_LIST”, but we didn’t really have a baseline to tell whether those numbers were problematic. Even after capturing the numbers and visualizing them, we saw larger-than-linear increases as demand increased, but were those increases excessive?

How To Tell For Sure

Chris Adkin has a post Diagnosing Spinlock Problems By Doing The Math. He explains why spinlocks are useful. It doesn’t seem like a while loop that chews up CPU could improve performance, but it actually does when it helps avoid context switches. He gives a formula to help find how much of the total CPU is spent spinning. That percentage can then help decide whether the spinning is excessive.
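In rough terms (and as implemented in the script below), the percentage works out to:

    % of CPU spent spinning ≈ 100 × (spins in the interval × clock cycles per spin) ÷ (total clock cycles available across all cores during the interval)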

But I made a tiny tweak to his formula and I wrote a script to have SQL Server do the math:

  • You still have to give the number of CPUs on your server. If you don’t have those numbers handy, you can get them from SQL Server’s log. I include one of Glenn Berry’s diagnostic queries for that.
  • There’s an assumption in Chris’s calculation that one spin consumes one CPU clock cycle. A spin is really cheap (because it can use the test-and-set instruction), but it probably consumes more than one clock cycle. I assume four, but I have no idea what the actual value is.
EXEC sys.xp_readerrorlog 0, 1, N'detected', N'socket';
-- SQL Server detected 2 sockets with 24 cores per socket ...
 
declare @Sockets int = 2;
declare @PhysicalCoresPerSocket int = 24;
declare @TicksPerSpin int = 4;
 
declare @SpinlockSnapshot TABLE ( 
    SpinLockName VARCHAR(100), 
    SpinTotal BIGINT
);
 
INSERT @SpinlockSnapshot ( SpinLockName, SpinTotal )
SELECT name, spins
FROM   sys.dm_os_spinlock_stats
WHERE  spins > 0;
 
DECLARE @Ticks bigint
SELECT @Ticks = cpu_ticks 
FROM sys.dm_os_sys_info
 
WAITFOR DELAY '00:00:10'
 
DECLARE @TotalTicksInInterval BIGINT
DECLARE @CPU_GHz NUMERIC(20, 2);
 
SELECT @TotalTicksInInterval = (cpu_ticks - @Ticks) * @Sockets * @PhysicalCoresPerSocket,
       @CPU_GHz = ( cpu_ticks - @Ticks ) / 10000000000.0
FROM sys.dm_os_sys_info;
 
SELECT ISNULL(Snap.SpinLockName, 'Total') as [Spinlock Name], 
       SUM(Stat.spins - Snap.SpinTotal) as [Spins In Interval],
       @TotalTicksInInterval as [Ticks In Interval],
       @CPU_Ghz as [Measured CPU GHz],
       100.0 * SUM(Stat.spins - Snap.SpinTotal) * @TicksPerSpin / @TotalTicksInInterval as [%]
FROM @SpinlockSnapshot Snap
JOIN sys.dm_os_spinlock_stats Stat
     ON Snap.SpinLockName = Stat.name
GROUP BY ROLLUP (Snap.SpinLockName)
HAVING SUM(Stat.spins - Snap.SpinTotal) > 0
ORDER BY [Spins In Interval] DESC;

This is what I see on a very healthy server (r5.24xlarge). The server was using 14% CPU, and 0.03% of that was spent spinning (or somewhere in that ballpark).

A screen shot showing an example of results

More Troubleshooting Steps

So what’s going on? What is that XVB_LIST category? Microsoft says “internal use only”, but I can guess. Paul Randal talks about the related latch class Versioning Transaction List. It’s an instance-wide list that is used in the implementation of features like Read Committed Snapshot Isolation (RCSI), which we do use.

Microsoft also has a whitepaper on troubleshooting this stuff: Diagnose and resolve spinlock contention on SQL Server. They give a technique to collect call stacks during spinlock contention in order to maybe glean some information about what else is going on. We did that, but we didn’t learn too much. We learned that we use RCSI with lots of concurrent queries, something we really can’t give up on.

So Then What?

What We Did

Well, we moved away from the u instance with its hundreds of CPUs and we went back to our r5 instance with only (only!) 96 logical CPUs. We’re dealing with the limits imposed by that hardware and accepting that we can’t scale higher using that box. We’re continuing to do our darnedest to move data and activity out of SQL Server and into other solutions like DynamoDb. We’re also trying to partition our databases into different deployments which spreads the load out, but introduces a lot of other challenges.

Basically, we gave up trying to scale higher. If we did want to pursue this further (which we don’t), we’d probably contact Microsoft support to try and address this spinlock contention. We know that these conditions are sufficient (if not necessary) to see the contention we saw:

  • SQL Server 2016 SP2
  • U-series instance from Amazon
  • Highly concurrent and frequent queries (>200K batch requests per second with a good mix of writes and reads on the same tables)
  • RCSI enabled.

Thank you Erin Stellato

We reached out to Erin Stellato to help us through this issue. We did this sometime around the “Painful Symptoms” section above. We had a stressful time troubleshooting all this stuff and I really appreciate Erin guiding us through it. We learned so much.

October 23, 2020

In Memory OLTP Defeated Our Tempdb Problems

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 10:29 am
Scaling SQL Server High
The beginning of the school year is behind us and what a semester start! 2020 has been tough on many of us and I’m fortunate to work for a company whose services are in such high demand. In fact we’ve seen some scaling challenges like we’ve never seen before. I want to talk about some of them.

At D2L, we’re the perfect candidate customer for In Memory OLTP features, but we’ve held off adopting those features for years. Our servers handle tons of super quick but super frequent queries and so we find ourselves trying to address the same scaling challenges we read about in Microsoft’s customer case studies.

But there’s only one In Memory feature in particular that I care about. It’s the Memory Optimized Table Types. Specifically, I’ve always wanted to use that feature to avoid tempdb object allocation contention. Recently I finally got my chance with a lot of success. So even though I could say I’m happy with In Memory features, I think it’s more accurate to say that I feel relieved at having finally squashed my tempdb issues.

Summary of Article

The Trouble With Tempdb

We use table valued parameters with our procedures a lot (like thousands of times a second). We’re lucky that the table variables are not created on each execution; they’re cached. We rely heavily on the reduced overhead that this gives us. It’s for that reason we much prefer table variables over temp tables.

But when we crank up the demand, we can still run into catastrophic trouble. When tempdb contention hits us, throughput doesn’t just plateau, it drops hard. The contention we see is like a traffic jam where anyone who needs to use tempdb (i.e. everyone) has to wait for it. These tempdb traffic jams are rough. We even created a lighter version of sp_whoisactive that avoids tempdb issues for times like those.

I won’t go on too long about our troubles (I’ve written about tempdb issues a few times already: 1, 2, 3, 4, 5, 6, 7). The usual advice is to increase the number of tempdb data files. We were using 48 data files and really looking hard for other options.

SQL Server 2019 has some promising options. In TEMPDB – Files and Trace Flags and Updates, Oh My! Pam Lahoud points out how SQL Server can use all the PFS pages in the tempdb data files, not just the first available one. But we couldn’t move to 2019 that quickly. So we looked at Memory Optimized Table Types to help us.

Memory Optimized Table Types Can Help

The article Improving temp table and table variable performance using memory optimization tells us how. Our main goal is to avoid tempdb contention, and memory optimized table variables don’t use tempdb at all. As long as we can be sure that the number of rows stored in these table variables is small, it’s all pros and no cons.
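In practice that looks something like this sketch (the type name is made up, and it assumes the database already has an In Memory OLTP filegroup):

CREATE TYPE dbo.IdList AS TABLE
(
    Id BIGINT NOT NULL INDEX IX_IdList
)
WITH (MEMORY_OPTIMIZED = ON);
GO
 
-- Variables of this type are allocated in memory rather than in tempdb:
DECLARE @Ids dbo.IdList;
INSERT @Ids (Id) VALUES (1), (2), (3);
SELECT Id FROM @Ids;
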

But it wasn’t easy for us to implement. In 2017, I wrote about Postponing Our Use Of In Memory OLTP. There were just some challenges that we couldn’t overcome. We’re not alone in struggling with the limitations of In Memory features. But our challenges weren’t the usual limitations that folks talk about and so they’re worth exploring.

The Challenge of Sardines and Whales

We have one product that we deploy to many clients. Each client gets their own database. The big ones (whales) have their own servers but the small ones (sardines) get grouped together.

Sardines and Whales

So the overhead of enabling In Memory on all the sardines was going to cause issues. The In Memory OLTP filegroup requires up to 4GB of disk space, which isn’t easy to handle with hundreds of sardines. So we’re left with this dilemma. We’d like to use In Memory on the biggest whales, but not on the sardines. We tackled that in two ways:

  1. Decide that the choice to add the In Memory filegroup is configuration, not product. This still required some changes though. Our backup and restore processes needed to at least handle the new filegroups, but they couldn’t expect them to be there.
  2. Add an exception to our processes that allows schema drift in the definition of table types. Our plan was to manually alter the table types to be memory optimized, but only on the largest whales. Introducing schema drift is not ideal, but we made this choice deliberately.

This whole challenge could have been avoided if memory optimized table types didn’t require an In Memory OLTP file group. I get the sense that the memory optimized table types don’t actually use that folder because I noticed something interesting. SQL Server 2019 introduces memory optimized tempdb metadata tables without a memory optimized filegroup! How did they pull that off? I’m a bit jealous. I asked Pam Lahoud and it turns out that the In Memory filegroups are still required for memory optimized table types and will continue to be. It turns out that Microsoft can make certain assumptions about the tempdb metadata tables that they can’t with regular table types. 😟
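For reference, adding that required filegroup looks something like this (the database name and path are hypothetical):

ALTER DATABASE BigWhaleDB
    ADD FILEGROUP InMemoryOLTP CONTAINS MEMORY_OPTIMIZED_DATA;
 
-- For memory optimized data, FILENAME points at a folder rather than a file:
ALTER DATABASE BigWhaleDB
    ADD FILE (NAME = 'InMemoryOLTP_Container', FILENAME = 'E:\SQLData\BigWhaleDB_IMOLTP')
    TO FILEGROUP InMemoryOLTP;
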

Some Implementation Surprises

As we implemented our plan, we also encountered some interesting things during testing that might be useful for you if you’re considering In Memory features.

  • The default directory for storing database files should point to a folder that exists on the database server and on any secondary nodes in the same availability group. So if the default location is E:\SQLData then make sure there’s an E drive on every node. SQL Server will need to create an xtp folder in there.
  • When adding the In Memory OLTP file group, the folder that contains it should also exist on all secondary nodes.
  • In SQL Server 2014, I noticed that the addition of the memory optimized filegroup required up to 4 GB of space. In SQL Server 2016, I see that still happens, but the space isn’t taken until I create the first memory optimized table type. That’s also when the xtp folder gets created.
  • Adjusting the table types to be memory optimized was a challenge because we wanted the process to be online. I wrote about how we pulled that off earlier this week in How to Alter User Defined Table Types (Mostly) Online.

Success!

Things worked out really really well for us. Our main goal was to avoid tempdb contention and we succeeded there. But there’s an additional performance boost. When you insert into a regular table variable, that data gets written to tempdb’s transaction log. But that’s not the case for memory optimized tables. So even though I really just care about avoiding contention, the boost in performance is significant and measurable and really nice.

In testing we were finally able to push a 96 CPU machine up to 100% CPU on every core and only then did throughput plateau. No tempdb contention in sight.

In production we also saw the same behavior and we were able to sustain over 200K batch requests per second. No tempdb contention in sight.

Those numbers are nice, but tempdb contention has been such a thorn in my side for so long that it’s a relief to squash that issue once and for all. I now get to focus on the next bottleneck and can leave tempdb contention in the past.

October 19, 2020

How to Alter User Defined Table Types (Mostly) Online

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 2:41 pm

Last year, Aaron Bertrand tackled the question, How To Alter User Defined Table Types. Aaron points out that “There is no ALTER TYPE, and you can’t drop and re-create a type that is in use”. Aaron’s suggestion was to create a new type and then update all procedures to use the new type.

I think I’ve got a bit of an improvement based on sp_rename and sp_refreshsqlmodule. It’s something that works well with:

  • blue-green deployments,
  • both ad-hoc queries and procedures,
  • imperfectly understood schemas, like schemas that may have suffered from a little bit of schema drift.

Example

Say I have… I don’t know, let’s pick an example out of thin air. Say I have a simple table type containing one BIGINT column that I want to make memory optimized:

What I’ve Got

CREATE TYPE dbo.BigIntSet 
  AS TABLE ( 
    Value BIGINT NOT NULL INDEX IX_BigIntSet );

What I Want

CREATE TYPE dbo.BigIntSet 
  AS TABLE ( 
    Value BIGINT NOT NULL INDEX IX_BigIntSet )
  WITH (MEMORY_OPTIMIZED=ON);

I can’t directly ALTER this table type, but I can do this three-card monte trick using sp_rename to put a new BigIntSet in its place.

The Migration Script

IF NOT EXISTS (
  SELECT * 
  FROM sys.table_types
  WHERE name = 'BigIntSet' 
  AND is_memory_optimized = 1
)
BEGIN
 
  CREATE TYPE dbo.BigIntSet_MO 
    AS TABLE ( 
      Value bigint NOT NULL 
      INDEX IX_BigIntSet ) 
    WITH (MEMORY_OPTIMIZED=ON);
 
  -- the switcheroo!
  EXEC sp_rename 'dbo.BigIntSet', 'zz_BigIntSet';
  EXEC sp_rename 'dbo.BigIntSet_MO', 'BigIntSet';
 
  --refresh modules
  DECLARE @Refreshmodulescripts TABLE (script nvarchar(max));
 
  INSERT @Refreshmodulescripts (script)
  SELECT 'EXEC sp_refreshsqlmodule ''' + QUOTENAME(referencing_schema_name) + '.' + QUOTENAME(referencing_entity_name) + ''';'
  FROM sys.dm_sql_referencing_entities('dbo.BigIntSet', 'TYPE');
 
  DECLARE @SQL NVARCHAR(MAX) = N'';
  SELECT @SQL = @SQL + script FROM @Refreshmodulescripts;
 
  EXEC sp_executesql @SQL;
 
END

But Is This Online?

Mostly. All queries that are in progress (whether ad hoc or via procedures) continue to execute with no issues. However, there may be an issue with queries that begin their execution during this migration.

If someone sends a query that uses the table type in the split second between the two sp_rename statements, then the query may fail.

If someone executes a procedure in the time between the first sp_rename and when sp_executesql gets around to refreshing that procedure, then the procedure may fail.

In practice, even on a busy server, I saw no such errors the few times I’ve tried this method, but of course, that’s no guarantee. In my case, even when refreshing close to 300 modules, this script takes about one second with no issues.

I actually tried adding a transaction around this whole migration script, and I did in fact see issues on a busy server. The schema modification lock that needs to be taken and held on all 300 objects was too much. It caused excessive blocking and I had to abandon that approach. In practice, I avoided trouble by ditching the explicit transaction.
