Michael J. Swart

March 12, 2018

My Trip to the West Coast

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 11:22 am

I was at the MVP Global Summit last week in Redmond, Washington. I was excited to go because I hadn’t been to the summit since 2014. Since I was travelling to the west coast anyway, I applied to speak at SQL Saturday Victoria the weekend before and I got accepted. Then I expanded my trip to include a visit to the D2L office near Vancouver, making it a week-and-a-half tour of the west coast.

The bonus for me is that I got to visit British Columbia. I’ve spent my whole life in Canada, but I’ve never been to British Columbia and I’m so glad I went. That is one good looking province.

West Coast D2L

I started my trip by visiting D2L first.

Every single person there was awesome. D2L has a great work culture and the Vancouver office was no different. They treated me like family.

On the first day, we did a Q&A on anything database related. There were some good questions, and I got the feeling that some of the answers were surprising. For example, I don’t advocate for stored procedures over inline SQL.

At the end of the week, I got to practice the session I prepared for SQL Saturday Victoria in front of my colleagues. Now that was interesting. I wrote the talk for a general audience. It’s always useful to imagine an audience member when writing a talk. I imagined someone who wanted to learn more about databases and chose to give up their Saturday to do it. This imaginary audience member chose to come to a talk called “100 Percent Online Migrations” and presumably was interested in schema migrations. So when I gave the talk to my D2L colleagues, it was interesting to be able to get into specifics because I know the business challenges we’re all dealing with. The Vancouver office gave awesome feedback and they didn’t heckle me too much.

Other sights included a Pikachu car, a build-is-broken light (just like home!), and an impromptu tour of the best coffee roaster in Richmond, BC.

Before I left for Victoria, I got to spend a little time in downtown Vancouver.

Downtown Vancouver

Even though I didn’t have a lot of time in downtown Vancouver, I did get to snap a couple photos before the sun went down. Mostly in and around Gastown.

Gassy Jack, Steam Clock, Vancouver Street


Another thing I noticed is that no one seems to smoke tobacco any more.

Before I left, I did manage to find the alley that Bastian ran down when he was chased by bullies in The NeverEnding Story. But the alley hasn’t aged well. It didn’t look like luck-dragons frequented that place any more, so I didn’t take a picture.

Then it was time to head to SQL Saturday Victoria.

Ferry to Victoria

I spent Friday on buses and ferries. Long trips in Ontario are dull. If you’ve seen one corn field, you’ve seen them all. But in British Columbia, the passing scenery is beautiful. Here’s a timelapse video of part of the ferry ride to Swartz Bay (not Swart’s bay).

SQL Saturday Victoria

SQL Saturday in Victoria was great. I was relaxed, and the talk (I thought) went really really well.

Randolph tries not to heckle Michael


I wasn’t super-thrilled with the turnout. There were about ten people there (five were fellow speakers). And I have an uncomfortably long streak of having someone fall asleep during my talks. I know I’m not the most dynamic speaker, but it’s starting to get on my nerves. That streak remains unbroken thanks in part to Erland’s jet lag.

But it remained a special SQL Saturday for me. I discovered that the venue was Camosun College in Victoria. Not only is Camosun College a client of D2L, but they’ve got a beautiful campus. Fun fact: spring has already arrived on campus there:

Spring!

Thanks to Janice, Scott and everyone else who ran a very, very successful event.

On Sunday, Randolph, Mike and I waited in line for breakfast and walked around Victoria until the evening when we took a boat to meet Angela in Seattle, then on to Bellevue just in time for the MVP Global Summit in Redmond!

The 2018 MVP Global Summit

It’s been a while since I attended the MVP Summit. And this one was a good one. Everyone behaved for the most part. In past years, I remember a lot of people looking gift horses in the mouth. The feedback from MVPs to Microsoft sometimes took the form of angry complaints, even if it was about something they didn’t know existed until that day. There was much less of that this year.

Personally, I got to give feedback about some SQL Server features, and I got to hear about new stuff coming up. I’m always very grateful to Microsoft for putting on this event. I returned home with a lot more blue stuff than I started with.

What does this photobooth do?

(Special thanks to Scott, Mike, Angela and Randolph for making this trip extra fun. You’re the best)

January 17, 2018

SHA1 Collisions in SQL Server

Takeaway: It’s been frowned on for a while, but SHA1 is definitely broken for security purposes.

In October of 2010, Michael Coles created a contest on his blog called “Find a Hash Collision, Win $100”. The contest was part of a discussion at the time about whether the SHA1 hash was useful for detecting changes. For what it’s worth, I still think SHA1 is valuable as a consistency check if not for security.
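For example, here’s the kind of consistency check I have in mind. It’s a minimal sketch with made-up table and column names: good for spotting accidental differences, but not to be trusted against anyone who can deliberately craft collisions.

-- Compare two copies of a table by hashing a payload column
-- (dbo.SourceTable and dbo.CopyTable are hypothetical)
SELECT s.Id
  FROM dbo.SourceTable s
  JOIN dbo.CopyTable c
       ON c.Id = s.Id
 WHERE HASHBYTES('SHA1', s.Payload) <> HASHBYTES('SHA1', c.Payload);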

At the time no SHA1 hash collisions were known, but in 2017, the news broke that some researchers finally generated a collision. So I looked up the research paper and downloaded the files. I used OPENROWSET to get the binary strings and I created my entry for Michael Coles’ contest:

--  Begin script
DECLARE @A varbinary(8000),
      @B varbinary(8000),
      @hA binary(20),
      @hB binary(20);
 
-- The binary strings below are taken from the two colliding PDF files
 
SELECT @A = 0x255044462D312E330A25E2E3CFD30A0A0A312030206F626A0A3C3C2F57696474682032203020522F4865696768742033203020522F547970652034203020522F537562747970652035203020522F46696C7465722036203020522F436F6C6F7253706163652037203020522F4C656E6774682038203020522F42697473506572436F6D706F6E656E7420383E3E0A73747265616D0AFFD8FFFE00245348412D3120697320646561642121212121852FEC092339759C39B1A1C63C4C97E1FFFE017F46DC93A6B67E013B029AAA1DB2560B45CA67D688C7F84B8C4C791FE02B3DF614F86DB1690901C56B45C1530AFEDFB76038E972722FE7AD728F0E4904E046C230570FE9D41398ABE12EF5BC942BE33542A4802D98B5D70F2A332EC37FAC3514E74DDC0F2CC1A874CD0C78305A21566461309789606BD0BF3F98CDA8044629A10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000FFFE00FE0000000000000000FFE000104A46494600010101004800480000FFFE00134372656174656420776974682047494D50FFDB00430001010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFDB00430101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFC20011080008000803011100021101031101FFC40014000100000000000000000000000000000008FFC40014010100000000000000000000000000000009FFFE0006FFFE002FFFDA000C03010002100310000001539DC51FFFC4001510010100000000000000000000000000001626FFFE0006FFFE0033FFDA0008010100010502A953FFC4001F1100000309000000000000000000000000141517001316274563658695FFFE0006FFFE0041FFDA0008010301013F019A8AA56D533BB238739E612166B90E605BFFC4001E11000004070000000000000000000000000014151713274563658594FFFE0006FFFE0040FFDA0008010201013F01984E1555C155B66CDC3E04A21A444C40FFC4001E10000101090000000000000000000000001413001215164462648594FFFE0006FFFE0033FFDA0008010100063F02AD9A4DB175DCE6086D743B05BFFFC40014100100000000000000000000000000000000FFFE0006FFFE0012FFDA0008010100013F216001FFFE0006FFFE002BFFDA000C030100020003000000101FFFC40014110100000000000000000000000000000000FFFE0006FFFE0028FFDA0008010301013F106980FFC40014110100000000000000000000000000000000FFFE0006FFFE0028FFDA0008010201013F106BC7FFC40014100100000000000000000000000000000000FFFE0006FFFE0014FFDA0008010100013F10153FFFD9414E4745FFE000104A46494600010101004800480000FFFE00134372656174656420776974682047494D50FFDB00430001010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFDB00430101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFC20011080008000803011100021101031101FFC40014000100000000000000000000000000000009FFC4001501010100000000000000000000000000000607FFDA000C03010002100310000001524A5FFF00FFC40014100100000000000000000000000000000000FFDA00080101000105027FFFC40018110002030000000000000000000000000000F037A7B7FFDA0008010301013F01DFDBFD9FCFFFC40018110002030000000000000000000000000000F036A6B6FFDA0008010201013F01CA3546287FFFC40018100002030000000000000000000000000000F03767A7FFDA0008010100063F02A99C99898FFFC40014100100000000000000000000000000000000FFDA0008010100013F217FFFDA000C030100020003000000101FFFC40014110100000000000000000000000000000000FFDA00080103010
13F100FFFC40014110100000000000000000000000000000000FFDA0008010201013F100FFFC40014100100000000000000000000000000000000FFDA0008010100013F100FFFD90A656E6473747265616D0A656E646F626A0A0A322030206F626A0A380A656E646F626A0A0A332030206F626A0A380A656E646F626A0A0A342030206F626A0A2F584F626A6563740A656E646F626A0A0A352030206F626A0A2F496D6167650A656E646F626A0A0A362030206F626A0A2F4443544465636F64650A656E646F626A0A0A372030206F626A0A2F4465766963655247420A656E646F626A0A0A382030206F626A0A313639330A656E646F626A0A0A392030206F626A0A3C3C0A20202F54797065202F436174616C6F670A20202F5061676573203130203020520A3E3E0A656E646F626A0A0A0A31302030206F626A0A3C3C0A20202F54797065202F50616765730A20202F436F756E7420310A20202F4B696473205B3131203020525D0A3E3E0A656E646F626A0A0A31312030206F626A0A3C3C0A20202F54797065202F506167650A20202F506172656E74203130203020520A20202F4D65646961426F78205B302030203820385D0A20202F43726F70426F78205B302030203820385D0A20202F436F6E74656E7473203132203020520A20202F5265736F75726365730A20203C3C0A202020202F584F626A656374203C3C2F496D302031203020523E3E0A20203E3E0A3E3E0A656E646F626A0A0A31322030206F626A0A3C3C2F4C656E6774682033303E3E0A73747265616D0A710A2020382030203020382030203020636D0A20202F496D3020446F0A510A656E6473747265616D0A656E646F626A0A0A0A0A787265660A30203133200A303030303030303030302036353533352066200A30303030303030303137203030303030206E200A30303030303031383631203030303030206E200A30303030303031383739203030303030206E200A30303030303031383937203030303030206E200A30303030303031393232203030303030206E200A30303030303031393435203030303030206E200A30303030303031393732203030303030206E200A30303030303031393939203030303030206E200A30303030303032303230203030303030206E200A30303030303032303736203030303030206E200A30303030303032313432203030303030206E200A30303030303032333039203030303030206E200A0A747261696C6572203C3C202F526F6F74203920302052202F53697A652031333E3E0A0A7374617274787265660A323339310A2525454F460A,
       @B = 0x255044462D312E330A25E2E3CFD30A0A0A312030206F626A0A3C3C2F57696474682032203020522F4865696768742033203020522F547970652034203020522F537562747970652035203020522F46696C7465722036203020522F436F6C6F7253706163652037203020522F4C656E6774682038203020522F42697473506572436F6D706F6E656E7420383E3E0A73747265616D0AFFD8FFFE00245348412D3120697320646561642121212121852FEC092339759C39B1A1C63C4C97E1FFFE017346DC9166B67E118F029AB621B2560FF9CA67CCA8C7F85BA84C79030C2B3DE218F86DB3A90901D5DF45C14F26FEDFB3DC38E96AC22FE7BD728F0E45BCE046D23C570FEB141398BB552EF5A0A82BE331FEA48037B8B5D71F0E332EDF93AC3500EB4DDC0DECC1A864790C782C76215660DD309791D06BD0AF3F98CDA4BC4629B10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000FFFE00FE0000000000000000FFE000104A46494600010101004800480000FFFE00134372656174656420776974682047494D50FFDB00430001010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFDB00430101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFC20011080008000803011100021101031101FFC40014000100000000000000000000000000000008FFC40014010100000000000000000000000000000009FFFE0006FFFE002FFFDA000C03010002100310000001539DC51FFFC4001510010100000000000000000000000000001626FFFE0006FFFE0033FFDA0008010100010502A953FFC4001F1100000309000000000000000000000000141517001316274563658695FFFE0006FFFE0041FFDA0008010301013F019A8AA56D533BB238739E612166B90E605BFFC4001E11000004070000000000000000000000000014151713274563658594FFFE0006FFFE0040FFDA0008010201013F01984E1555C155B66CDC3E04A21A444C40FFC4001E10000101090000000000000000000000001413001215164462648594FFFE0006FFFE0033FFDA0008010100063F02AD9A4DB175DCE6086D743B05BFFFC40014100100000000000000000000000000000000FFFE0006FFFE0012FFDA0008010100013F216001FFFE0006FFFE002BFFDA000C030100020003000000101FFFC40014110100000000000000000000000000000000FFFE0006FFFE0028FFDA0008010301013F106980FFC40014110100000000000000000000000000000000FFFE0006FFFE0028FFDA0008010201013F106BC7FFC40014100100000000000000000000000000000000FFFE0006FFFE0014FFDA0008010100013F10153FFFD9414E4745FFE000104A46494600010101004800480000FFFE00134372656174656420776974682047494D50FFDB00430001010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFDB00430101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101FFC20011080008000803011100021101031101FFC40014000100000000000000000000000000000009FFC4001501010100000000000000000000000000000607FFDA000C03010002100310000001524A5FFF00FFC40014100100000000000000000000000000000000FFDA00080101000105027FFFC40018110002030000000000000000000000000000F037A7B7FFDA0008010301013F01DFDBFD9FCFFFC40018110002030000000000000000000000000000F036A6B6FFDA0008010201013F01CA3546287FFFC40018100002030000000000000000000000000000F03767A7FFDA0008010100063F02A99C99898FFFC40014100100000000000000000000000000000000FFDA0008010100013F217FFFDA000C030100020003000000101FFFC40014110100000000000000000000000000000000FFDA00080103010
13F100FFFC40014110100000000000000000000000000000000FFDA0008010201013F100FFFC40014100100000000000000000000000000000000FFDA0008010100013F100FFFD90A656E6473747265616D0A656E646F626A0A0A322030206F626A0A380A656E646F626A0A0A332030206F626A0A380A656E646F626A0A0A342030206F626A0A2F584F626A6563740A656E646F626A0A0A352030206F626A0A2F496D6167650A656E646F626A0A0A362030206F626A0A2F4443544465636F64650A656E646F626A0A0A372030206F626A0A2F4465766963655247420A656E646F626A0A0A382030206F626A0A313639330A656E646F626A0A0A392030206F626A0A3C3C0A20202F54797065202F436174616C6F670A20202F5061676573203130203020520A3E3E0A656E646F626A0A0A0A31302030206F626A0A3C3C0A20202F54797065202F50616765730A20202F436F756E7420310A20202F4B696473205B3131203020525D0A3E3E0A656E646F626A0A0A31312030206F626A0A3C3C0A20202F54797065202F506167650A20202F506172656E74203130203020520A20202F4D65646961426F78205B302030203820385D0A20202F43726F70426F78205B302030203820385D0A20202F436F6E74656E7473203132203020520A20202F5265736F75726365730A20203C3C0A202020202F584F626A656374203C3C2F496D302031203020523E3E0A20203E3E0A3E3E0A656E646F626A0A0A31322030206F626A0A3C3C2F4C656E6774682033303E3E0A73747265616D0A710A2020382030203020382030203020636D0A20202F496D3020446F0A510A656E6473747265616D0A656E646F626A0A0A0A0A787265660A30203133200A303030303030303030302036353533352066200A30303030303030303137203030303030206E200A30303030303031383631203030303030206E200A30303030303031383739203030303030206E200A30303030303031383937203030303030206E200A30303030303031393232203030303030206E200A30303030303031393435203030303030206E200A30303030303031393732203030303030206E200A30303030303031393939203030303030206E200A30303030303032303230203030303030206E200A30303030303032303736203030303030206E200A30303030303032313432203030303030206E200A30303030303032333039203030303030206E200A0A747261696C6572203C3C202F526F6F74203920302052202F53697A652031333E3E0A0A7374617274787265660A323339310A2525454F460A;
 
SELECT @hA = HASHBYTES('SHA1', @A),
      @hB = HASHBYTES('SHA1', @B);
 
SELECT CASE WHEN @A = @B
                  THEN '@A Equals @B'
                  ELSE '@A Is Not Equal To @B'
                  END AS AB_Equal,
            CASE WHEN @hA = @hB
                  THEN '@hA Equals @hB'
                  ELSE '@hA Is Not Equal To @hB'
                  END AS Hash_Equal;
-- End script

This gives me the output that wins the contest:

Unfortunately, upon closer inspection, I see that the rules of the contest say that entries must be received prior to midnight U.S. Eastern Standard Time on October 31, 2010.

Rats, 7 years too late!

January 15, 2018

100 Percent Online Deployments: Stage and Switch

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:42 pm
100 Percent Online Deployments
How to deploy schema changes without scheduled downtime

In the first draft of this series, this post didn’t exist. I wanted to show a really simple example of a column switch and include it in the Blue-Green (Details) post. I planned for something simple. But I ran into some hiccups that I thought were pretty instructive, so I turned it into the post you see here.

The Plan

For this demo, I wanted to use the WideWorldImporters database. In table Warehouse.ColdRoomTemperatures I wanted to change the column

ColdRoomSensorNumber INT NOT NULL,

into

ColdRoomSensorLabel NVARCHAR(100) NOT NULL,

because maybe we want to track sensors via some serial number or other code.

The Blue-Green plan would be simple:

The Trouble

But nothing is ever easy. Even SQL Server Data Tools (SSDT) gives up when I ask it to do this change with this error dialog:

Never Easy

There are two things going on here (and one hidden thing):

  1. The first two messages point out that a procedure is referencing the column ColdRoomSensorNumber with schemabinding. The reason it’s using schemabinding is because it’s a natively compiled stored procedure. And that tells me that the table Warehouse.ColdRoomTemperatures is an In-Memory table. That’s not all. I noticed another wrinkle: the procedure takes a table-valued parameter whose table type contains a column called ColdRoomSensorNumber. We’re going to have to replace that type too. Ugh. Part of me wanted to look for another example.
  2. The last message tells me that the table is a system versioned table. So there’s a corresponding archive table where history is maintained. That has to be dealt with too. Luckily Microsoft has a great article on Changing the Schema of a System-Versioned Temporal Table.
  3. One last thing to worry about is an index on ColdRoomSensorNumber. That should be replaced with an index on ColdRoomSensorLabel. SSDT didn’t warn me about that because apparently it can deal with that pretty nicely.

So now my plan becomes:

Blue: The original schema

Aqua: After the pre-migration scripts are run

An extra step is required here to update the new column and keep the new and old columns in sync.

Green: After the switch, we clean up the old objects and our schema change is finished:

Without further ado, here are the scripts:

Pre-Migration (Add Green Objects)

In the following scripts, I’ve omitted the IF EXISTS checks for clarity.

-- Add the four green objects
 
ALTER TABLE Warehouse.ColdRoomTemperatures
ADD ColdRoomSensorLabel NVARCHAR(100) NOT NULL 
    CONSTRAINT DF_Warehouse_ColdRoomTemperatures_ColdRoomSensorLabel DEFAULT '';
GO
 
ALTER TABLE Warehouse.ColdRoomTemperatures
ADD INDEX IX_Warehouse_ColdRoomTemperatures_ColdRoomSensorLabel (ColdRoomSensorLabel);
GO
 
CREATE TYPE Website.SensorDataList_v2 AS TABLE(
    SensorDataListID int IDENTITY(1,1) NOT NULL,
    ColdRoomSensorLabel NVARCHAR(100) NULL,
    RecordedWhen datetime2(7) NULL,
    Temperature decimal(18, 2) NULL,
    PRIMARY KEY NONCLUSTERED (SensorDataListID)
)
GO
 
CREATE PROCEDURE Website.RecordColdRoomTemperatures_v2
    @SensorReadings Website.SensorDataList_v2 READONLY
AS
    -- straightforward definition left as an exercise for the reader
GO

Pre-Migration (Populate and Keep in Sync)

Normally, I would use triggers to keep the new and old column values in sync like this, but you can’t do that with In-Memory tables. So I altered the procedure Website.RecordColdRoomTemperatures to achieve something similar. The only alteration I made was to set the ColdRoomSensorLabel value in the INSERT statement:

ALTER PROCEDURE Website.RecordColdRoomTemperatures
@SensorReadings Website.SensorDataList READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English'
)
    BEGIN TRY
 
        DECLARE @NumberOfReadings int = (SELECT MAX(SensorDataListID) FROM @SensorReadings);
        DECLARE @Counter int = (SELECT MIN(SensorDataListID) FROM @SensorReadings);
 
        DECLARE @ColdRoomSensorNumber int;
        DECLARE @RecordedWhen datetime2(7);
        DECLARE @Temperature decimal(18,2);
 
        -- note that we cannot use a merge here because multiple readings might exist for each sensor
 
        WHILE @Counter <= @NumberOfReadings
        BEGIN
            SELECT @ColdRoomSensorNumber = ColdRoomSensorNumber,
                   @RecordedWhen = RecordedWhen,
                   @Temperature = Temperature
            FROM @SensorReadings
            WHERE SensorDataListID = @Counter;
 
            UPDATE Warehouse.ColdRoomTemperatures
                SET RecordedWhen = @RecordedWhen,
                    Temperature = @Temperature
            WHERE ColdRoomSensorNumber = @ColdRoomSensorNumber;
 
            IF @@ROWCOUNT = 0
            BEGIN
                INSERT Warehouse.ColdRoomTemperatures
                    (ColdRoomSensorNumber, ColdRoomSensorLabel, RecordedWhen, Temperature)
                VALUES (@ColdRoomSensorNumber, 
                        'HQ-' + CAST(@ColdRoomSensorNumber AS NVARCHAR(50)), 
                        @RecordedWhen, 
                        @Temperature);
            END;
            SET @Counter += 1;
        END;
 
    END TRY
    BEGIN CATCH
        THROW 51000, N'Unable to apply the sensor data', 2;
 
        RETURN 1;
    END CATCH;
END;

That keeps the values in sync for new rows. But now it’s time to update the values for existing rows. In my example, I imagine that the initial labels for the sensors are “HQ-1”, “HQ-2”, etc.

UPDATE Warehouse.ColdRoomTemperatures
SET ColdRoomSensorLabel = 'HQ-' + CAST(ColdRoomSensorNumber as nvarchar(50));

Eagle-eyed readers will notice that I haven’t dealt with the history table here. If the history table is large, use batching to update it. Or better yet, turn off system versioning and then turn it back on immediately using a new/empty history table (if feasible).
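Here’s a minimal sketch of what that batching could look like. It assumes the history table is Warehouse.ColdRoomTemperatures_Archive and that system versioning is off (or that you’re otherwise allowed to update the history table) while it runs:

DECLARE @RowCount int = 1;
 
WHILE @RowCount > 0
BEGIN
    -- update the new column one batch at a time to keep each transaction short
    UPDATE TOP (10000) Warehouse.ColdRoomTemperatures_Archive
       SET ColdRoomSensorLabel = 'HQ-' + CAST(ColdRoomSensorNumber AS NVARCHAR(50))
     WHERE ColdRoomSensorLabel = '';
 
    SET @RowCount = @@ROWCOUNT;
END;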

Post-Migration

After a successful switch, the green application is only calling Website.RecordColdRoomTemperatures_v2. It’s time now to clean up. Again, remember that order matters.

DROP PROCEDURE Website.RecordColdRoomTemperatures;
 
DROP TYPE Website.SensorDataList;
 
ALTER TABLE Warehouse.ColdRoomTemperatures
DROP INDEX IX_Warehouse_ColdRoomTemperatures_ColdRoomSensorNumber;
 
ALTER TABLE Warehouse.ColdRoomTemperatures
DROP COLUMN ColdRoomSensorNumber;

January 12, 2018

100 Percent Online Deployments: Keep Changes OLTP-Friendly

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:00 am
100 Percent Online Deployments
How to deploy schema changes without scheduled downtime

Using the Blue-Green deployment method, database changes are decoupled from application changes. That leaves us with one last challenge to tackle. The schema changes have to be performed while the application is online. It’s true that you can’t always write an online script for every kind of schema change you might want.

I got the moves like Jagger

The challenge of writing online schema changes is essentially a concurrency problem and the guiding principle I follow is: Do whatever you need to do, but avoid excessive blocking.

Locks Are Hot Potatoes

You can’t hold them for long. This applies to schema changes too. Logically, if you don’t hold a lock long, you can’t block activity. One exception might be the SCH-M lock, which can participate in blocking chains:

SCH-M locks

There are two main kinds of SQL queries. SELECT/INSERT/UPDATE/DELETE statements are examples of Data Manipulation Language (DML). CREATE/ALTER/DROP statements are examples of Data Definition Language (DDL).

With schema changes – DDL – we have the added complexity of the SCH-M lock. It’s a kind of lock you don’t see with DML statements. DML statements take and hold schema stability locks (SCH-S) on the tables they need. This can cause interesting blocking chains between the two types where new queries can’t start until the schema change succeeds:
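Here’s that kind of chain sketched out with a made-up table name:

-- Session 1: a long-running query takes and holds SCH-S locks
SELECT COUNT(*) FROM dbo.BIG_TABLE; -- still running...
 
-- Session 2: this DDL now waits for a SCH-M lock behind session 1's SCH-S
ALTER TABLE dbo.BIG_TABLE ADD NewColumn int NULL;
 
-- Session 3: even a brand new, short query waits,
-- queued behind session 2's pending SCH-M lock
SELECT TOP (1) * FROM dbo.BIG_TABLE;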

Some suggestions:

  • Don’t rebuild indexes while changing schema
  • Rely on the OLTP workload, which has many short queries. In an OLTP workload, the lead blocker shouldn’t be a lead blocker for long. Contrast that with an OLAP workload with long-running and overlapping queries. With an OLAP workload, you can’t change tables without delays or interruptions.
  • When using Enterprise Edition, use ONLINE=ON for indexes. It takes and holds a SCH-M lock only briefly.
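For example, here’s what that looks like (the table and index names are made up):

-- Enterprise Edition: the SCH-M lock is held only briefly
-- at the start and at the end of the operation
CREATE NONCLUSTERED INDEX IX_SOME_TABLE_SomeColumn
    ON dbo.SOME_TABLE (SomeColumn)
    WITH (ONLINE = ON);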

Changes to Big Tables

Scripts that change schema are one-time scripts. If the size of the table is less than 50,000 rows, I write a simple script and then move on.

If the table is larger, look for metadata-only changes. For example, these are metadata-only changes:
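A couple of contrasting examples, with a made-up table name:

-- Metadata-only: fast no matter how big the table is
ALTER TABLE dbo.BIG_TABLE ADD NewColumn int NULL;
 
-- Size-of-data: every row has to be rewritten
ALTER TABLE dbo.BIG_TABLE ALTER COLUMN ExistingColumn bigint NOT NULL;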

If a table change is not a metadata-only change, then it’s a size-of-data change. Then it’s time to get creative. Look to the other posts in this series for an example of batching and an example of a column switcheroo.

Pragmatism Example

If you think “good enough” is neither, you may want to skip this section. There are some schema changes that are still very difficult or impossible to write online. With some creativity, we’ve always been able to mitigate these issues with shortcuts and I want to give an example which I think is pretty illustrative.

When a colleague asked for a rowversion column on a humongous table, we avoided that requirement by instead creating a datetime column called LastModifiedDate. Since SQL Server 2012, adding a new column with a constant default value has been an online, metadata-only change. So we added the column with a constant default, and then changed the default value to something more dynamic:

alter table dbo.MYTABLE
add LastModifiedDate DATETIME NOT NULL 
    CONSTRAINT DF_TABLE_LastModifiedDate DEFAULT '20000101'; -- constant default: online, metadata-only
 
alter table dbo.MYTABLE
drop CONSTRAINT DF_TABLE_LastModifiedDate; -- drop the constant default
 
alter table dbo.MYTABLE
add CONSTRAINT DF_TABLE_LastModifiedDate 
      DEFAULT GETUTCDATE() for LastModifiedDate; -- dynamic default for new rows

It’s a cool situation because it seems like the column has two defaults: one constant default for rows with missing values, and another definition to be used for new rows:

select pc.default_value, d.definition as [default definition]
from sys.system_internals_partitions p
join sys.system_internals_partition_columns pc 
	on p.partition_id = pc.partition_id
join sys.default_constraints d
	on d.parent_object_id = p.object_id
	and d.parent_column_id = pc.partition_column_id
where p.object_id = object_id('MYTABLE')
and pc.partition_column_id = 2
 
/* Gives 
 
default_value  default definition
-------------  ------------------
2000-01-01     (getutcdate())
 
*/

So be creative and pragmatic. Successful 100% online schema changes involve creativity and close collaboration between everyone involved.

January 10, 2018

100 Percent Online Deployments: Blue Green Details

Filed under: SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:54 am
100 Percent Online Deployments
How to deploy schema changes without scheduled downtime

So now for the nitty-gritty. In my last post, Blue-Green Deployment, I talked about replacing old blue things with new green things as an alternative to altering things. But Blue-Green doesn’t work with databases, so I introduced the Blue-Aqua-Green method. This helps keep databases and other services online 24/7.

The Aqua Database

What does the Aqua database look like? It’s a smaller version of Blue-Green, but only for those database objects that are being modified. Borrowing some icons from Management Studio’s Object Explorer, here’s what one Blue-Aqua-Green migration might look like:

Start with a database in the original blue state:

After the pre-migration scripts, the database is in the aqua state: the new green objects have been created and are ready for traffic from the green application servers. Any type of database object can use the Blue-Green method, even objects as granular as indexes or columns.

Finally when the load has switched over to the green servers and they’re nice and stable, run the post-migration steps to get to the green state.

Blue-Green for Database Objects

How is the Blue-Green method applied to each kind of database object? With care. Each kind of object has its own subtle differences.

PROCEDURES:
Procedures are very easy to Blue-Green. Brand new procedures are added during the pre-migration phase. Obsolete procedures are dropped during the post-migration phase.

If the procedure is changing but is logically the same, then it can be altered during the pre-migration phase. This is common when the only change to a procedure is a performance improvement.

But if the procedure is changing in other ways (for instance, when a parameter is added or dropped, or the resultset is changing), then use the Blue-Green method to replace it: During the pre-migration phase, create a new version of the procedure. It must be named differently, and the green version of the application has to be updated to call the new procedure. The original blue version of the procedure is deleted during the post-migration phase. It’s not always elegant calling a procedure something like s_USERS_Create_v2, but it works.
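In script form, the two phases might look like this. The table and the parameters are hypothetical; only the _v2 naming idea comes from above:

-- Pre-migration phase: create the green version alongside the blue one
CREATE PROCEDURE dbo.s_USERS_Create_v2
    @UserName NVARCHAR(100),
    @DisplayName NVARCHAR(100) -- the new parameter
AS
BEGIN
    INSERT dbo.USERS (UserName, DisplayName)
    VALUES (@UserName, @DisplayName);
END
GO
 
-- Post-migration phase, once the green application servers have all the traffic:
DROP PROCEDURE dbo.s_USERS_Create;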

VIEWS:
Views are treated the same as procedures with the exception of indexed views.
That SCHEMABINDING keyword is a real thorn in the side of Blue-Green and online migrations in general. If you’re going to use indexed views, remember that you can’t change the underlying tables as easily.

Creating an index on a view is difficult because (ONLINE=ON) can’t be used. If you want to get fancy, go look at How to Create Indexed Views Online.

INDEXES:
The creation of other indexes is nice and easy if you have Enterprise Edition, because you can use the (ONLINE=ON) keyword. But if you’re on Standard Edition, you’re a bit stuck. In SQL Server 2016 SP1, Microsoft included a whole bunch of Enterprise features in Standard, but ONLINE index builds didn’t make the cut.

If necessary, the Blue-Green process works for indexes that need to be altered too. The blue index and the green index will exist at the same time during the aqua phase, but that’s usually acceptable.

CONSTRAINTS:
Creating constraints like CHECKS and FOREIGN KEYS can be tricky because they require size-of-data scans. This can block activity for the duration of the scan.

My preferred approach is to use the WITH NOCHECK syntax. The constraint is created and enabled, but existing data is not looked at. The constraint will be enforced for any future rows that get updated or inserted.

That seems kind of weird at first. The constraint is marked internally as not trusted. For peace of mind, you could always run a query on the existing data.
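Here’s what that looks like with hypothetical parent and child tables:

-- The constraint is created and enabled immediately,
-- but the existing rows are not scanned
ALTER TABLE dbo.CHILD_TABLE WITH NOCHECK
ADD CONSTRAINT FK_CHILD_TABLE_PARENT_TABLE
    FOREIGN KEY (ParentId) REFERENCES dbo.PARENT_TABLE (ParentId);
 
-- For peace of mind, check the existing data separately:
SELECT c.ParentId
  FROM dbo.CHILD_TABLE c
 WHERE NOT EXISTS ( SELECT 1
                      FROM dbo.PARENT_TABLE p
                     WHERE p.ParentId = c.ParentId );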

TABLES:
The creation of tables doesn’t present any problems; it’s done in the pre-migration phase. Dropping tables is done in the post-migration phase.

What about altering tables? Does the Blue-Green method work? Replacing a table while online is hard because it involves co-ordinating changes during the aqua phase. One technique is to create a temporary table, populate it, keep it in sync and cut over to it during the switch. It sounds difficult. It requires time, space, triggers and an eye for detail. Some years ago, I implemented this strategy on a really complicated table and blogged about it if you want to see what that looks like.

If this seems daunting, take heart. A lot of this work can be avoided by going more granular: When possible, Blue-Green columns instead.

COLUMNS:
New columns are created during the pre-migration phase. If the table is large, then the new columns should be nullable or have a default value. Old columns are removed during the post-migration phase.

Altering columns is sometimes easy. Most of the time it’s quick, like when the change only involves metadata.

But sometimes it’s not easy. When altering columns on a large table, it may be necessary to use the Blue-Green technique to replace a column. Then you have to use triggers and co-ordinate the changes with the application, but the process is much easier than doing it for a whole table. Test well and make sure each step is “OLTP-Friendly”. I will give an example of the Blue-Green method for a tricky column in the post “Stage and Switch”.

Persisted computed columns can be challenging. Creating them on large tables can lock the table for too long. Sometimes indexed views fill the same need.

DATA:
Technically, data changes are not schema changes, but migration scripts often require data changes too, so it’s important to keep those online as well. See my next post “Keep Changes OLTP-Friendly”.

Automation

Easy things should be easy and hard things should be possible, and this applies to writing migration scripts. Steve Jones asked me on Twitter about “some more complex idempotent code”. He would like to see an example of a migration script that is re-runnable when making schema changes. I have the benefit of some helper migration procedures we wrote at work. So a migration script that I write might look something like this:

declare @Columns migration.indexColumnSet;
 
INSERT @Columns (column_name, is_descending_key, is_included_column)
VALUES ('UserId', 0, 0)
 
exec migration.s_INDEX_AlterOrCreateNonClustered_Online
	@ObjectName = 'SOME_TABLE',
	@IndexName = 'IX_SOME_TABLE_UserId',
	@IndexColumns = @Columns;

We’ve got these helper scripts for most standard changes. Unfortunately, I can’t share the definition of s_INDEX_AlterOrCreateNonClustered_Online because it’s not open source. But if you know of any products or open source scripts that do the same job, let me know. I’d be happy to link to them here.

Where To Next?

So that’s Blue-Green, or more accurately Blue-Aqua-Green. Decoupling database changes from application changes allows instant cut-overs. In the next post Keep Changes OLTP-Friendly I talk about what migration scripts are safe to run concurrently with busy OLTP traffic.

January 8, 2018

100 Percent Online Deployments: Blue-Green Deployment

Filed under: SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:06 pm
100 Percent Online Deployments
How to deploy schema changes without scheduled downtime

The Blue-Green technique is a really effective way to update services without requiring downtime. One of the earliest references I could find for the Blue-Green method is in a book called Continuous Delivery by Humble and Farley. Martin Fowler also gives a good overview of it at BlueGreenDeployment. Here’s a diagram of the typical Blue-Green method (adapted from Martin Fowler).

When using the Blue-Green method, basically nothing gets changed. Instead everything gets replaced. We start by setting up a new environment – the green environment – and then cut over to it when we’re ready. Once we cut over to the new environment successfully, we’re free to remove the original blue environment. The technique is all about replacing components rather than altering components.

IndianaBlueGreen

Before I talk about the database (databases), notice a couple things. We need a router: Load balancers are used to distribute requests but can also be used to route requests. This enables the quick cut-over. The web servers or application servers have to be stateless as well.

What About The Database Switch?

The two databases in the diagram really threw me for a loop the first time I saw this. This whole thing only works if you can replace the database on a whim. I don’t know about you, but this simply doesn’t work for us. The Continuous Delivery book suggests putting the database into read-only mode. Implementing a temporary read-only mode for applications is difficult and rare (I’ve only ever heard of Stack Overflow doing something like this successfully).

But we don’t do that. We want our application to be 100% online for reads and writes. We’ve modified the Blue-Green method to work for us. Here’s how we change things:

Modified Blue-Green: The Aqua Database

Leave the database where it is and decouple the database changes from the applications changes. Make database changes ahead of time such that the db can serve blue or green servers. We call this forward compatible database version “aqua”.

The changes that are applied ahead of time are “pre-migration” scripts. The changes we apply afterwards are “post-migration” scripts. More on those later. So now our modified Blue-Green migration looks like this:

Start with the original unchanged state of a system:

Add some new green servers:

Apply the pre-migration scripts to the database. The database is now in an “aqua” state:

Do the switch!

Apply the post-migration scripts to the database. The database is now in the new “green” state:

Then remove the unused blue servers when you’re ready:

We stopped short of replacing the entire database. That “aqua” state for the database is the Blue-Green technique applied to individual database objects. In my next post, I go into a lot more detail about this aqua state with examples of what these kind of changes look like.

Ease in!

It takes a long time to move to a Blue-Green process. It took us a few years. But it’s possible to chase some short-term intermediate goals which pay off early:

Start with the goal of minimizing downtime. For example, create a pre-migration folder. This folder contains migration scripts that can be run online before the maintenance window. The purpose is to reduce the amount of offline time. New objects like views or tables can be created early, new indexes too.

Process changes are often disruptive and the move to Blue-Green is no different. It’s good then to change the process in smaller steps (each step with its own benefits).

After adding the pre-migration folder, continue adding folders. Each new folder involves a corresponding change in process. So over time, the folder structure evolves:

  • The original process has all changes made during an offline maintenance window. Make sure those change scripts are checked into source control and put them in a folder called offline: (offline)
  • Then add a pre-migration folder as described above: (pre, offline)
  • Next add a post-migration folder which can also be run while online: (pre, offline, post)
  • Drop the offline step to be fully online: (pre, post)

Safety

Automated deployments allow for more frequent deployments. Automated tools and scripts are great at taking on the burden of menial work, but they’re not too good at thinking on their feet when it comes to troubleshooting unexpected problems. That’s where safety comes in. By safety, I just mean that as many risks as possible are mitigated. For example:

Re-runnable Scripts
If things go wrong, it should be easy to get back on track. This is less of an issue if each migration script is re-runnable. By re-runnable, I just mean that the migration script can run twice without error. Get comfortable with system tables and begin using IF EXISTS everywhere:

-- not re-runnable:
CREATE INDEX IX_RUN_ONCE ON dbo.RUN_ONCE (RunOnceId);

The re-runnable version:

-- re-runnable:
IF NOT EXISTS (SELECT * 
                 FROM sys.indexes 
                WHERE name = 'IX_RUN_MANY' 
                  AND OBJECT_NAME(object_id) = 'RUN_MANY' 
                  AND OBJECT_SCHEMA_NAME(object_id) = 'dbo')
BEGIN
    CREATE INDEX IX_RUN_MANY ON dbo.RUN_MANY (RunManyId);
END

Avoid Schema Drift
Avoid errors caused by schema drift by asserting the schema before a deployment. Unexpected schema definitions lead to one of the largest classes of migration script errors: errors that surprise us, like “What do you mean there’s a foreign key pointing to the table I want to drop? That didn’t happen in staging!”

Schema drift is real and almost inevitable if you don’t look for it. Tools like SQL Compare are built to help you keep an eye on what you’ve got versus what’s expected. I’m sure there are other tools that do the same job. SQL Compare is just a tool I’ve used and like.
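Even without a dedicated tool, a migration script can assert its own assumptions. Here’s a sketch of the foreign key example above (the table name is made up):

-- Before dropping a table, make sure no unexpected foreign keys
-- still reference it
IF EXISTS ( SELECT *
              FROM sys.foreign_keys
             WHERE referenced_object_id = OBJECT_ID('dbo.TABLE_TO_DROP') )
BEGIN
    THROW 50000, 'Schema drift: unexpected foreign keys reference dbo.TABLE_TO_DROP', 1;
END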

Schema Timing
When scripts are meant to be run online, duration becomes a huge factor, so it needs to be measured.

When a large number of people contribute migration scripts, it’s important to keep an eye on the duration of those scripts. We’ve set up a nightly restore and migration of a sample database to measure the duration of those scripts. If any script takes a long time and deserves extra scrutiny, then it’s better to find out early.

Measuring the duration of these migration scripts helps us determine whether they are “OLTP-Friendly”, which I elaborate on in Keep Changes OLTP-Friendly.
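The measurement itself doesn’t have to be fancy. Here’s a sketch where the logging table and the script name are made up:

DECLARE @start datetime2(7) = SYSUTCDATETIME();
 
-- run one migration script against the restored sample database
EXEC migration.s_SOME_MIGRATION_SCRIPT;
 
INSERT dbo.MIGRATION_TIMINGS (ScriptName, DurationMs, MeasuredWhen)
VALUES ('s_SOME_MIGRATION_SCRIPT',
        DATEDIFF(MILLISECOND, @start, SYSUTCDATETIME()),
        SYSUTCDATETIME());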

Tackle Tedious Tasks with Automation

That’s a lot of extra steps and it sounds like a lot of extra work. It certainly is and the key here is automation. Remember that laziness is one of the three great virtues of a programmer. It’s the “quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs…”. That idea is still true today.

Coming Next: Blue-Green Deployment (Details).

January 5, 2018

100% Online Deployments

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 10:01 am
100 Percent Online Deployments
How to deploy schema changes without scheduled downtime

There’s increasing pressure to keep software services available all the time. But there’s also pressure to deploy improvements frequently. How many of us would love to reduce the duration of migration windows or, better yet, eliminate them entirely? It’s always been challenging to make changes safely without interrupting availability, especially database schema changes.

And when responsibilities are split between different groups – DBAs responsible for availability, developers responsible for delivering improvements – it causes some tension:

You truly belong with us in the cloud.

So I’m beginning a series describing the “Blue-Green” continuous delivery technique. It’s a technique that really works well where I work. It helps us manage thousands of databases on hundreds of servers with monthly software updates and zero downtime.

Outline

Blue-Green
We use the Blue-Green deployment techniques described in Continuous Delivery by Humble and Farley and made popular by Martin Fowler in Blue Green Deployment. I’ll describe what the Blue-Green deployment technique is and how we use it.

We actually don’t follow the book perfectly. I’ll describe what we do differently, and why.

OLTP-Friendly
With some effort and creativity, we can break our database migrations into small chunks and deploy them while the application is still live.

Many changes to schema will lock tables for longer than we can tolerate. Often, the schema changes will need to take a brief SCH-M lock on certain objects and so this technique works best with OLTP workloads (workloads that don’t send many long-running queries).

I explore ways to make schema changes that run concurrently with an OLTP workload. What kinds of changes are easy to deploy concurrently, and what kinds of changes are enemies of concurrency?

Co-ordination is Key

This series is meant to help people investing in automation and other process improvements. It takes careful co-ordination between those responsible for uptime and those responsible for delivering improvements. So in your organization, if Dev Vader and DBA Calrissian are on the same team (or even the same person) then this series is especially for you.

I know this topic really well. It should be fun.

Coming Next: Blue-Green Deployment

December 20, 2017

When Measuring Timespans, try DATEADD instead of DATEDIFF

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 11:25 am

Recently I tackled an issue where a DateTime field was getting updated too often. The query looked something like this:

UPDATE dbo.SOMETABLE
SET MyDateTime = GETUTCDATE()
WHERE SomeTableId = @SomeTableId;

All those repeated writes caused unnecessary contention on that row. So I decided to give up accuracy for concurrency. Specifically, I decided to only update MyDateTime if the existing value was set more than a second ago.

First Attempt: Use DATEDIFF With SECOND

My first attempt looked like this:

UPDATE dbo.SOMETABLE
SET MyDateTime = GETUTCDATE()
WHERE SomeTableId = @SomeTableId
AND DATEDIFF(SECOND, MyDateTime, GETUTCDATE()) > 1;

But I came across some problems. I assumed that the DATEDIFF expression I wrote worked this way: subtract the two dates to get a timespan value and then return the number of seconds (rounded somehow) in that timespan.

But that’s not how it works. The docs for DATEDIFF say:

“Returns the count (signed integer) of the specified datepart boundaries crossed between the specified startdate and enddate.”

There’s no rounding involved. It just counts the ticks on the clock that are heard during a given timespan.

Check out this timeline. It shows three timespans and the DATEDIFF values that get reported:
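You can demonstrate the same effect directly:

-- Only 200 milliseconds apart, but one second boundary is crossed:
SELECT DATEDIFF(SECOND, '2017-12-20 10:00:00.900', '2017-12-20 10:00:01.100'); -- returns 1
 
-- 800 milliseconds apart, but no boundary is crossed:
SELECT DATEDIFF(SECOND, '2017-12-20 10:00:00.100', '2017-12-20 10:00:00.900'); -- returns 0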

But that’s not the behavior I want.

How About DATEDIFF with MILLISECOND?

Using milliseconds gets a little more accurate:

UPDATE dbo.SOMETABLE
SET MyDateTime = GETUTCDATE()
WHERE SomeTableId = @SomeTableId
AND DATEDIFF(MILLISECOND, MyDateTime, GETUTCDATE()) > 1000;

And it would be good for what I need, except that DATEDIFF using MILLISECOND will overflow for any timespan of about 25 days or more (the int result maxes out at 2,147,483,647 milliseconds). For example,

SELECT DATEDIFF (millisecond, '2017-11-01', '2017-12-01')

gives this error:

Msg 535, Level 16, State 0, Line 1
The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.

SQL Server 2016 introduced DATEDIFF_BIG to get around this specific problem. But I’m not there yet.
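For anyone who is on 2016, it’s a one-word change:

-- Same arguments as DATEDIFF, but the result is a bigint
SELECT DATEDIFF_BIG(millisecond, '2017-11-01', '2017-12-01'); -- 2592000000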

Use DATEADD

I eventually realized that I don’t actually need to measure a timespan. I really just need to answer the question “Does a particular DateTime occur before one second ago?” And I can do that with DATEADD:

UPDATE dbo.SOMETABLE
SET MyDateTime = GETUTCDATE()
WHERE SomeTableId = @SomeTableId
AND MyDateTime < DATEADD(SECOND, -1, GETUTCDATE());

Update: Adam Machanic points out another benefit to this syntax. The predicate AND MyDateTime < DATEADD(SECOND, -1, GETUTCDATE()) is SARGable (unlike the DATEDIFF examples). Even though there might not be a supporting index, or SQL Server might not choose to use such an index in this specific case, I prefer this syntax even more.

So How About You?

Do you use DATEDIFF at all? Why? I'd like to hear what you use it for, especially if you rely on the datepart-boundary-crossing behavior.

November 10, 2017

Postponing Our Use of In Memory OLTP

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 9:30 am

Devops, Agile and Continuous Delivery

Sometimes I get asked about work and our software development practices. Often these questions use words like agile, devops, or continuous delivery as in “Are you agile?” or “Do you do continuous delivery?”. But these questions rarely have yes or no answers. I always want to answer “It’s a work in progress”.

One of the things I like best about the PASS Summit is the opportunity to talk to people to find out how they do development. After speaking with others, I get the sense that everyone’s processes are always going to be works in progress. But I’ve come to realize that at D2L, we’re pretty far along. Here are just some of the things we do:

Things We Do

  • Deployments to production are scheduled and almost completely automatic. This lets us deploy way more often than we used to.
  • All code – including procedures and table definitions – is checked in.
  • Automated tests are run on every pull request and merge.
  • Safety is a huge concern. We use feature flags and other techniques, but it remains difficult to maintain large complicated systems safely.
  • We use blue-green deployments for zero downtime.
  • The database layer is an exception to this blue-green technique, so it’s very important to be able to roll back any changes or configuration.

Sardines and Whales

This means we must also support thousands of copies of our main database. They’re used for client sites, test sites, QA sites, or whatever. So that leads to a variety of server configurations that I refer to as sardines and whales:

Look at those sardines. They’re quite happy where they are. The server can handle up to a thousand databases when there’s almost no activity.

But that whale is on a huge server and is extremely busy. Because of the high volume of transactions, we sometimes encounter tempdb contention due to our frequent use of table-valued parameters. One technique I’ve been looking forward to evaluating is using memory-optimized table types.

Maybe We Can Use In Memory OLTP?

I’m actually not very interested in memory-optimized tables. I’m much more interested in memory-optimized table types. Those types can be used for table-valued parameters. I can’t tell you how excited I was that they might solve my tempdb pet peeve.
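For example, a memory-optimized table type looks almost the same as a regular one. This is a sketch with made-up names; it requires a MEMORY_OPTIMIZED_DATA filegroup in the database, which turns out to matter below:

CREATE TYPE dbo.SensorIdList AS TABLE (
    SensorId int NOT NULL,
    INDEX IX_SensorIdList_SensorId NONCLUSTERED (SensorId)
) WITH (MEMORY_OPTIMIZED = ON);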

But our dreams for the feature died.
Memoriam

We’re leaving the feature behind for a few reasons. There’s an assumption we relied on for the sardine servers: databases that contain no data and serve no activity should not require significant resources like disk space or memory. However, when we turned on In Memory OLTP by adding the filegroup for the memory-optimized data, we found that the database began consuming memory and disk (about 2 gigabytes of disk per database). This required extra resources for the sardine servers. So, for example, 1,000 databases × 2 GB = 2 TB for a server that should be nearly empty.
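For reference, “turning it on” amounts to something like this (the database name and path are made up), and it can never be removed afterwards:

ALTER DATABASE SardineDb
ADD FILEGROUP SardineDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
 
ALTER DATABASE SardineDb
ADD FILE (NAME = 'SardineDb_mod', FILENAME = 'D:\Data\SardineDb_mod')
TO FILEGROUP SardineDb_mod;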

Another reason is that checkpoints began to take longer. Checkpoints are not guaranteed to be quick, but on small systems they take a while, which impacts some of our Continuous Integration workflows.

At the PASS Summit, I talked to a Hekaton expert panel. I also talked to a couple people in the Microsoft SQL Server clinic about some of my issues. They all recommended that we upgrade to SQL Server 2016 (which we can’t yet). Maybe I didn’t phrase my questions well, but I didn’t come away with any useful strategy to pursue.

I later talked to Speaker Idol contestant Brian Carrig (@briancarrig) after hearing him talk briefly about his experiences with In Memory OLTP. He mentioned his own hard-fought lessons with In Memory OLTP, including some uncomfortable outages.

The final nail in the coffin, as it were, is that once you turn on In Memory OLTP, you can’t turn it off. Once the In Memory OLTP filegroup is added, it’s there for good. Like I said, safety is a huge concern for us so we’re giving up on the feature for now.

Resurrecting the Feature?

The feature was designed for whales, not sardines. Maybe someday we will try to get those sardine servers to not fail with In Memory OLTP. Until then, the feature goes back on the shelf.

August 2, 2017

Problem With Too Many version_ghost_records

Filed under: Miscelleaneous SQL,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 8:00 am

We encountered a CPU issue which took over a month to understand and fix. Since then, it’s been another couple months and so I think it may be time for a debrief.

The cause was identified as a growing number of ghost records that SQL Server would not clean up no matter what. Our fix was ultimately to restart SQL Server.

When there's somethin' strange in your neighborhood

Symptoms and Data Collection

Here’s what we found.

  • At the time marked ‘A’ on the graph, we noticed that CPU increased dramatically. It was hard not to notice.
  • We used sp_whoisactive to identify a frequent query that was taking a large amount of CPU. That query had never been a problem before. It was a select against a queue table – a table whose purpose was to store data for an implementation of a work queue. This table had a lot of churn: many INSERTS and DELETES. But it was small, never more than 100 rows.
  • So next, we ran the query manually in Management Studio. SET STATISTICS IO, TIME ON gave us a surprise. A simple COUNT(*) of the table told us there were 30 rows in the table, but reading it took 800K logical page reads!
  • What pages could it be reading? It’s impossible for a table to be that fragmented; that would mean less than one row per page. To look at the physical stats we ran:
    select 
           sum(record_count) as records,
           sum(ghost_record_count) as ghost_records,
           sum(version_ghost_record_count) as version_ghost_records
      from sys.dm_db_index_physical_stats(db_id(), object_id('<table_name>'), default, default, 'detailed')
     where index_id = 1
           and index_level = 0

    And that gave us these results:


    Interesting. The ghost records that remain are version_ghost_records, not ghost_records. That sounds like we’re using some sort of snapshot isolation (which we’re not), or online reindexing (which we are), or something else that uses row versions.

  • Over time, those version_ghost_records would constantly accumulate. This ghost record accumulation was also present in all other tables, but it didn’t hurt as much as it hurt the queue table, which had the most frequent deletes.

Mitigation – Rebuild Indexes

Does an index rebuild clean these up? In this case, yes. An index rebuild reduced the number of version ghost records for the table. At the time marked ‘B’ in the timeline, we saw that an index rebuild cleaned up these records and restored performance. But only temporarily. The version_ghost_records continued to accumulate gradually.

At time ‘C’ in the timeline, we created a job that ran every 15 minutes to rebuild the index. This restored performance to acceptable levels.

More investigation online

Kendra Little – Why Is My Transaction Log Growing In My Availability Group?
This is a great video. Kendra describes a similar problem. Long running queries on secondary replicas can impact the primary server in an Availability Group (AG). But we found no long running queries on any replica. We looked using sp_whoisactive and DBCC OPENTRAN. We didn’t see any open transactions anywhere.

Amit Banerjee – Chasing the Ghost Cleanup in an Availability Group
Amit also mentions that log truncation would be prevented in the case of a long-running query on a replica. However, in our case, log truncation was occurring.

Uwe Ricken – Read Committed Snapshot Isolation and high number of version_ghost_record_count
Uwe Ricken also blogged recently about a growing number of version_ghost_records. He talked about looking for open transactions that use one of the snapshot isolation levels. Unfortunately it didn’t apply to our case.

Bitemo Erik Gergely – A very slow SQL Server
Another example of a long running query keeping version_ghost_records around.

dba.stackexchange – GHOST_CLEANUP Lots of IO and CPU Usage
This stackexchange question also describes a problem with lots of CPU and an inefficient, ineffective ghost cleanup task for databases in an AG. There’s an accepted answer there, but it’s not really a solution.

Calling Microsoft Support

So we called Microsoft support. We didn’t really get anywhere. We spoke with many people over a month. We generated memory dumps and PSSDiag sessions, and we conducted a couple of screen-sharing sessions. Everyone was equally stumped.

After much time and many diagnostic experiments, here’s what we did find.

  • Availability Groups with readable secondaries are necessary (but not sufficient) to see the problem. This is where the version_ghost_records come from in the first place. Readable secondaries make use of the version store.
  • We ran an extended event histogram on the ghost_cleanup event. There was a ridiculous number of events (millions per minute), but they weren’t actually cleaning up anything:
    CREATE EVENT SESSION ghostbusters ON SERVER 
    ADD EVENT sqlserver.ghost_cleanup( ACTION( sqlserver.database_id ) )
    ADD TARGET package0.histogram( SET filtering_event_name=N'sqlserver.ghost_cleanup', source=N'sqlserver.database_id' )
  • Microsoft let me down. They couldn’t figure out what was wrong. Each new person on the case had to be convinced that there were no open transactions. They couldn’t reproduce our problem. And on our server, we couldn’t avoid the problem.

Resolution

Ultimately the time came for an unrelated maintenance task. We had to do a rolling upgrade for some hardware driver update. We manually failed over the Availability Group and after that, no more problem!

It’s satisfying and not satisfying. Satisfying because the problem went away for us and we haven’t seen it since. After the amount of time I spent on this, I’m happy to leave this problem in the past.

But it’s not satisfying because we didn’t crack the mystery. And restarting SQL Server is an extreme solution for a problem associated with an “Always On” feature.

If you’re in the same boat as I was, go through the links in this post. Understand your environment. Look for long running queries on all replicas. And when you’ve exhausted other solutions, mitigate with frequent index rebuilds and go ahead and restart SQL Server when you can.
