Loading Into Columnstore: Avoid Trickle Loads

I’m going to tell this one in story format, so here’s the short of it up front:

tl;dr — Clustered columnstore indexes don’t like the combination of wipe-and-replace with multi-threaded trickle loaders.  Avoid that pattern.

The Setup

In the olden days, we had a large fact table with a standard clustered index and some standard non-clustered indexes.  Because the primary use of this fact table was to aggregate fairly large amounts of data, performance was less than stellar.

Then came SQL Server 2014, with its promises of updatable, clustered columnstore indexes.  I jumped on this immediately and replaced my large fact table with one built off of a clustered columnstore index.  Initial testing looked great, giving me a 2-5X performance gain depending upon the query, and that was good enough for our PM to authorize moving forward with the project.

After that project went live, all was well…for about a week.  Then things started slowing down.  It took a while before we were able to tell that there was a problem and, from there, find the root cause.

The Root Cause

To understand the problem, let’s talk a little about our infrastructure.  We load the warehouse on a per-client, per-day basis and we have a number of processes which load data concurrently.  In other words, one process loads data for Client A on June 1st while another process may load data for Client B on June 1st and a third process loads data for Client C on June 2nd, all at the same time.  Loading data includes two steps:  deleting current data and loading new data.  The first step of deleting current data happens because it turned out to be much more efficient in our scenario to delete old records and then insert new records rather than trying to merge data in (either using the MERGE keyword or combination INSERT/UPDATE/DELETE statements).
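As a rough sketch, each per-client, per-day load does something like the following.  The table and column names here are hypothetical stand-ins (the real work happens inside our ETL packages), but the delete-then-insert shape is the important part:

DECLARE @CustomerWK INT = 78296,
	@DateWK INT = 20160601;

BEGIN TRANSACTION;

--Step 1:  wipe the existing slice for this customer-day combination.
DELETE FROM dbo.FactTable
WHERE CustomerWK = @CustomerWK
	AND DateWK = @DateWK;

--Step 2:  load the freshly-processed records for that same slice.
INSERT INTO dbo.FactTable (CustomerWK, DateWK, Revenue, Cost)
SELECT s.CustomerWK, s.DateWK, s.Revenue, s.Cost
FROM dbo.WarehouseStaging s
WHERE s.CustomerWK = @CustomerWK
	AND s.DateWK = @DateWK;

COMMIT TRANSACTION;

Multiply that by several concurrent loader processes and you have the recipe for the problems below.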

There are two major issues that we experienced with this pattern against a clustered columnstore index in SQL Server 2014.  First, there was no way to reorganize or rebuild the index online in SQL Server 2014, meaning that the only way I could clean up deleted records would be to rebuild an entire partition.  Given that our hot load partitions are also hot access partitions (specifically, the current and prior months) and we’re a 24/7 company, rebuilding those partitions is pretty much out of the question.  This means that I wouldn’t be able to clean out partitions full of deleted records, leaving my compressed columnstore rowgroups riddled with soft-deleted rows and woefully under-populated with live data.

At the same time, we experienced large numbers of open rowgroups in the deltastore, many of which contained just a few records.  My best understanding of why this happened is as follows:  when a process goes to delete records for a customer-day combination, that process can lock multiple deltastore rowgroups.  If other processes are trying to insert data into the deltastore while that first process tries to delete records, they’ll open new rowgroups because the current ones are locked.  After a while, we’d end up with hundreds or thousands of open rowgroups in the deltastore, many of which contained well under 10,000 rows apiece but which added up to tens of millions of records in total.  Because the deltastore holds uncompressed rowstore data, having to scan tens of millions of uncompressed rows made our queries slower.  The worst part is that because these rowgroups tended not to grow much in size, the tuple mover wouldn’t do anything with them, so they’d just accumulate as new deltastore rowgroups got created and populated.
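If you want to see the problem for yourself, the sys.column_store_row_groups DMV (available starting in SQL Server 2014) shows the state and row count of every rowgroup.  Something like the following query (with a placeholder table name) makes the issue obvious once the OPEN rowgroup count runs into the hundreds or thousands:

SELECT
	csrg.partition_number,
	csrg.state_description,
	COUNT(*) AS NumberOfRowgroups,
	SUM(csrg.total_rows) AS TotalRows,
	SUM(csrg.deleted_rows) AS DeletedRows
FROM sys.column_store_row_groups csrg
WHERE csrg.object_id = OBJECT_ID('dbo.FactTable')
GROUP BY csrg.partition_number, csrg.state_description
ORDER BY csrg.partition_number, csrg.state_description;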

SQL Server 2016 gave me the ability to reorganize indexes online, which was a great addition as it allowed us to keep those columnstore tables online while reorganizing the partitions and smashing together all of those open rowgroups and combine together the mostly-empty rowgroups.  But that’s not a real solution to the problem; it just buys a bit of time and masks the symptoms.
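For reference, the SQL Server 2016 cleanup looks something like the following, with placeholder index and partition names.  The COMPRESS_ALL_ROW_GROUPS option forces open deltastore rowgroups into compressed columnstore format, and a follow-up reorganize merges together undersized compressed rowgroups:

--Compress the open deltastore rowgroups in a hot partition, online.
ALTER INDEX [CCI_FactTable] ON dbo.FactTable
	REORGANIZE PARTITION = 47
	WITH (COMPRESS_ALL_ROW_GROUPS = ON);

--Then let a second reorganize pass merge the small compressed rowgroups together.
ALTER INDEX [CCI_FactTable] ON dbo.FactTable
	REORGANIZE PARTITION = 47;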

Possible Solutions

Now we’re in the architecture portion of today’s post.  There are three potential solutions I want to bring up, two of which I’ve implemented in production at one time or another.

Rowstore Front-End

The first architecture involves putting a rowstore table in front of the columnstore.

(Image: rowstore front-end architecture)

In this design, I have data coming from the transactional system, undergoing some ETL processing, and going into a staging table on the warehouse.  From there, I perform the remainder of the ETL work and insert into a rowstore table.  This rowstore table has the same attribute names and data types as the columnstore table, but instead of having a clustered columnstore index, it has a standard B-tree clustered index and can have additional non-clustered indexes.  From there, I expose the combination using a view which simply unions the two sets of data, so the application doesn’t have to care whether it is reading from the rowstore or the columnstore table.
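Here is a sketch of that view, with hypothetical table names.  Assuming the migration process ensures that any given row lives in exactly one of the two tables, UNION ALL is safe and avoids an expensive de-duplication step:

CREATE VIEW dbo.CombinedFactTable
AS
SELECT f.CustomerWK, f.DateWK, f.Revenue, f.Cost
FROM dbo.FactTable f              --Clustered columnstore index
UNION ALL
SELECT r.CustomerWK, r.DateWK, r.Revenue, r.Cost
FROM dbo.RowstoreFactTable r;     --Rowstore front-end with a B-tree clustered index
GO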

To move data from the rowstore to the columnstore, I have an external migration process.  This migration process waits until one of two conditions is met:  either there are at least 250,000 records in a single partition, or there is data from at least 4 partitions ago.  In other words, for the last three months (including the current one), I’d hold off on migrating data until I hit the magic number of 250K records, so that I could migrate that as a single batch and bulk insert the results, bypassing the deltastore altogether.  For older data, my expectation was that these are typically one-off or smaller data moves, so waiting for 250K records was folly, as that batch might never come.  Instead, I move those immediately to keep the rowstore table compact.  The migration process I wrote looked at data by partition, so I could pull in data from 6 months ago while still waiting for the current partition to accumulate enough records to make the move worthwhile.
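Below is a simplified version of that migration logic, using month as the partition unit and hypothetical object names; the real procedure was partition-aware and handled batching:

--Months entirely before this cutoff (i.e., older than the last three months) move immediately;
--newer months wait until they accumulate at least 250,000 rows.
DECLARE @CutoffDateWK INT = CONVERT(INT, CONVERT(CHAR(8),
	DATEADD(MONTH, DATEDIFF(MONTH, 0, GETUTCDATE()) - 2, 0), 112));

BEGIN TRANSACTION;

SELECT r.DateWK / 100 AS MonthWK
INTO #MonthsToMigrate
FROM dbo.RowstoreFactTable r
GROUP BY r.DateWK / 100
HAVING COUNT(*) >= 250000
	OR MAX(r.DateWK) < @CutoffDateWK;

--Large batches can load directly into compressed rowgroups, bypassing the deltastore.
INSERT INTO dbo.FactTable WITH (TABLOCK) (CustomerWK, DateWK, Revenue, Cost)
SELECT r.CustomerWK, r.DateWK, r.Revenue, r.Cost
FROM dbo.RowstoreFactTable r
	INNER JOIN #MonthsToMigrate m
		ON r.DateWK / 100 = m.MonthWK;

DELETE r
FROM dbo.RowstoreFactTable r
	INNER JOIN #MonthsToMigrate m
		ON r.DateWK / 100 = m.MonthWK;

COMMIT TRANSACTION;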

Advantages
  1. I get immediate access to the data once it makes its way into the rowstore.  This gets me access to the data earlier than the other alternatives.
  2. It solves one of the pain points I expressed above.  Hot records don’t go into the deltastore, so we don’t see a proliferation of open rowgroups.  Also, depending upon how quickly we reload data, it might solve the other problem as well:  if data doesn’t get reloaded very frequently, letting it sit in the rowstore for a day or so means that when we delete and reinsert it, we aren’t deleting from the clustered columnstore index.
Disadvantages
  1. This is a relatively complex design to implement, especially with a zero down-time release and a significant amount of existing code looking for those tables.
  2. If a huge number of rows get into the rowstore table, query performance won’t be that great because we’re unioning rowstore values and columnstore values and performing calculations later (which negates some of the effect of having a columnstore index).
  3. If I have to reprocess months’ worth of data, that columnstore gets hit hard.
  4. It seems harder to follow.  I had other database engineers regularly ask about these tables and some of our software engineers and testers found them vexing.
  5. You have to keep two tables in sync, so whenever I add a column to one side, I have to add the column to the other side and to the migration procedure and to the view.

I used this model for SQL Server 2014, but then removed it after we moved to SQL Server 2016 and went back to direct columnstore insertion.  My hope was that I would not need to re-implement something for 2016, but that ended up not being the case.

Staging Table Direct Insertion

Once I started experiencing the same problems in SQL Server 2016, I had to act.  Instead of once more putting a rowstore in front of my columnstore table, I decided to increase complexity in the ETL process to simplify application querying.  To wit, I created a two-step load process.  The first step of the load process, moving data from our transactional system into the warehouse, remains a trickle load, inserting records for a customer-date combination into a memory-optimized staging table.  Once that load process is complete, the ETL process inserts a row into a warehouse queue table, letting the next step know that this customer-day combo is ready to go.

(Image: staging table direct insertion architecture)

From there, I let the staging table grow a bit and run a job periodically to bulk move the staging table rows into the columnstore.  Now I delete and insert larger chunks of rows, usually in the 250-800K range.  This means that I avoid the deltastore completely and get a pretty decent row count in each compressed columnstore rowgroup.
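Here is a rough sketch of that bulk move job, again with hypothetical object names.  I’m assuming the memory-optimized staging table is referenced with a SNAPSHOT hint so that it can participate in the explicit transaction alongside the disk-based tables:

BEGIN TRANSACTION;

--Figure out which customer-day combinations the first ETL step has marked as ready.
SELECT q.CustomerWK, q.DateWK
INTO #ReadySlices
FROM dbo.WarehouseQueue q;

--Wipe the existing slices from the columnstore...
DELETE f
FROM dbo.FactTable f
	INNER JOIN #ReadySlices r
		ON f.CustomerWK = r.CustomerWK
		AND f.DateWK = r.DateWK;

--...and replace them in one large batch from the memory-optimized staging table.
INSERT INTO dbo.FactTable (CustomerWK, DateWK, Revenue, Cost)
SELECT s.CustomerWK, s.DateWK, s.Revenue, s.Cost
FROM dbo.InMemoryStagingFact s WITH (SNAPSHOT)
	INNER JOIN #ReadySlices r
		ON s.CustomerWK = r.CustomerWK
		AND s.DateWK = r.DateWK;

--Clean up the staging and queue tables for the slices we just moved.
DELETE s
FROM dbo.InMemoryStagingFact s WITH (SNAPSHOT)
	INNER JOIN #ReadySlices r
		ON s.CustomerWK = r.CustomerWK
		AND s.DateWK = r.DateWK;

DELETE q
FROM dbo.WarehouseQueue q
	INNER JOIN #ReadySlices r
		ON q.CustomerWK = r.CustomerWK
		AND q.DateWK = r.DateWK;

COMMIT TRANSACTION;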

Advantages
  1. Seamless post-ETL process.  No new tables and no views.
  2. Calculations don’t need to hit the rowstore table, so I can take full advantage of columnstore aggregations.
Disadvantages
  1. Data loads are no longer “instantaneous.”  I have to wait a bit longer for the second step of the process to commit before records show up in the warehouse.
  2. Additional ETL complexity means there are more things that could break.
  3. There is a possibility of weird data issues.  For example, if I’m trying to load a customer-day combo while the first phase of the ETL process is trying to re-insert that data, I could get inconsistent results.  I needed to add in checks preventing this from happening.

This is my current architecture.  I’m happier with it than with the original rowstore table architecture.

Bulk Insertion

The third architecture is simple:  don’t do trickle load at all.  For many companies, it’s okay to do a major load of data once a day or once every several hours.  If that’s the case, I’d recommend doing a bulk insertion over trying to implement a trickle load.  Just like the other two methods, bulk insertion bypasses the deltastore when you’re loading enough records.

Ideally, this bulk insertion would be a straight insert, never updating or deleting data.  If you can get away with it, the ideal pattern would be something like this:

(Image: bulk load with partition swap)

If I need to load data for July 5th, I’m going to load all of the data for the partition which contains July 5th into a new table with that partition, and then I’m going to swap out the corresponding partition on the columnstore side.  I would want to partition by the normal load unit—for example, if we load data monthly, I’d partition by month; if we load data daily, I’d partition by day if possible.
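Here is a sketch of the swap, assuming both tables are built on the same partition scheme with identical structures and clustered columnstore indexes, and that partition 7 happens to be the one containing July 5th (all names and numbers are placeholders):

--Load the full partition's worth of data into the standalone load table.
INSERT INTO dbo.FactTable_Load WITH (TABLOCK) (CustomerWK, DateWK, Revenue, Cost)
SELECT s.CustomerWK, s.DateWK, s.Revenue, s.Cost
FROM dbo.WarehouseStaging s
WHERE s.DateWK BETWEEN 20160701 AND 20160731;

--Optionally rebuild the freshly-loaded partition for maximum rowgroup density; nothing reads this table yet.
ALTER INDEX [CCI_FactTable_Load] ON dbo.FactTable_Load REBUILD PARTITION = 7;

BEGIN TRANSACTION;

--Swap the old partition out to a holding table, then swap the new data in.
ALTER TABLE dbo.FactTable SWITCH PARTITION 7 TO dbo.FactTable_Old PARTITION 7;
ALTER TABLE dbo.FactTable_Load SWITCH PARTITION 7 TO dbo.FactTable PARTITION 7;

COMMIT TRANSACTION;

--dbo.FactTable_Old now holds the replaced data and can be truncated once we confirm the swap.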

Advantages
  1. No deletes from the columnstore means no fragmentation.
  2. Columnstore rowgroups are as packed as possible.  If there’s any doubt, we can run an index rebuild on the new partition before swapping it in, as nothing reads that table yet.
  3. Just like the staging table direct insert, I don’t need to make any application changes or create new supporting objects outside of ETL.
Disadvantages
  1. Data loading must be periodic, and will probably be slower than trickle loads.  You’re probably loading once a day or once a month at this point.
  2. If just one row changes, you have to rebuild the entire partition.  That can be a time sink when partitions get very large.

This is the “ideal” solution, but making it work when customers expect data in less than 20 minutes is tricky.  The staging table direct insert architecture seems to be a reasonable compromise between spray loading data and waiting a long time for data.

Use SQL Server 2016

Regardless of the architecture, SQL Server 2016 is a must-have for clustered columnstore indexes.  The ability to reorganize indexes online is a life-saver.  These reorganizations can still block queries for short periods of time, but being able to run them without taking the table offline is a huge benefit if you do find yourself deleting a lot of data in the columnstore.

Conclusion

In this post, we looked at three architectures for loading data into columnstore indexes, with a special focus on trickle loading data.  The common denominator across all of them is a good staging table:  absorb the first wave of changes there and then move data into the columnstore in bulk.

R For The DBA: Graphing Rowcounts

Something I am trying to harp upon is that R isn’t just a language for data analysts; it makes sense for DBAs to learn the language as well.  Here’s a really simple example.

The Setup

I have a client data warehouse which holds daily rollups of revenue and cost for customers.  We’ve had some issues with the warehouse lately where data was not getting loaded due to system errors and timeouts, and our services team gave me a list of some customers who had gaps in their data due to persistent processing failures.  I figured out the root cause behind this (which will show up as tomorrow’s post), but I wanted to make sure that we filled in all of the gaps.

My obvious solution is to write a T-SQL query, getting some basic information by day for each customer.  I could scan through that result set, but the problem is that people aren’t great at reading tables of numbers; they do much better looking at pictures.  This is where R comes into play.

The Solution

My solution is just a few lines of R code, as well as a few lines of T-SQL.  I’m using SQL Server 2016 but we don’t use SQL Server R Services (yet!), so I’m doing this the “old-fashioned” way by pulling data across the wire.  Here’s the code:

install.packages("ggplot2")
install.packages("RODBC")
library(RODBC)
library(ggplot2)

conn <- odbcDriverConnect("Driver=SQL Server;Server=MYSERVER;Initial Catalog=MYDATABASE;Provider=SQLNCLI11.1;Integrated Security=SSPI")

wh <- sqlQuery(conn, "SELECT fc.CustomerWK, fc.DateWK, COUNT(1) AS NumRecords, SUM(fc.Revenue) AS Revenue, SUM(fc.Cost) AS Cost FROM MYDATABASE.dbo.FactTable fc WHERE fc.CustomerWK IN (78296,80030,104098,104101,104104,108371) AND fc.DateWK > 20160901 GROUP BY fc.CustomerWK, fc.DateWK;")

wh$CustomerWK <- as.factor(wh$CustomerWK)
wh$DateWK <- as.Date(as.character(wh$DateWK), "%Y%m%d")
str(wh)

ggplot(wh, aes(x=DateWK, y=NumRecords, colour=CustomerWK, group=CustomerWK)) +
  geom_line() +
  xlab("Date") +
  ylab("Number Of Records")

Let’s walk through this step by step.  After installing and loading the two relevant packages (RODBC to connect to SQL Server and ggplot2 to help us create a pretty graph), I open a connection to my server (with the name replaced to protect the innocent).

Next, I create a data frame called wh and populate it with the results of a query to the warehouse.  This is a pretty simple SQL query which gets the number of rows by customer by day, and also shows me revenue and cost.  I’m not using revenue and cost in this graph, but did look at them as part of my sanity checks.

Next up, I want to “fix” a couple of data types.  CustomerWK is an int and represents the customer’s surrogate key.  Although this is stored as an integer, it’s really a factor:  I have a small, distinct set of categories, and there’s no mathematical relationship between CustomerWK 1 and CustomerWK 2.  Anyhow, I replace CustomerWK with this new, factorized attribute.

After taking care of the CustomerWK factor, I convert DateWK to a date type.  DateWK is an integer representation of the date in ISO format, so January 15th, 2016 would be represented as 20160115.  I need to convert this from an integer to a character string, and then I can convert it to a date.  I replace the DateWK value with this date type.  I included the str(wh) call to show that my data frame really does have the correct types.

Finally, I call ggplot, passing in my warehouse data frame.  I create an aesthetic, which tells the graphing engine what I want to see on the screen.  I want to see the number of records in the fact table per day for each customer, so my Y coordinate is defined by NumRecords, my X coordinate by DateWK, and my group by CustomerWK.  To make it easier to read, I color-code each customer.

After creating the aesthetic, I plot the results as a line graph using the geom_line() function, and then give meaningful X and Y axis labels.

The Results

What I get in return is a decent-enough looking graph:

(Image: number of records by day, by customer)

I can easily see that customer 108371 experienced a major dropoff sometime in mid-October, and fell off the cliff in early November.  The other customers have been fairly stable, leading me to believe that just one customer (in this cohort) has an issue.  I was able to investigate the issue and determine the root cause of the falloff—that the customer stopped sending us data.

Conclusion

This is another example where knowing a little bit of R can be very helpful.  Even if you aren’t building predictive models or performing advanced regressions, the ability to throw results into a plot and quickly spot outliers makes the job much easier.  If I had to discern results from a Management Studio result set, I could still do the job, but I might have been thrown for a loop with customer 78296, whose counts fluctuated over a fairly wide band.

SSIS And HDInsight

Not too long ago, I decided to try connecting to HDInsight using SSIS.  One of the Integration Services updates for SQL Server 2016 is Hadoop support.  This Hadoop support comes via the Hadoop Connection Manager, which allows for two connection methods:  WebHCat and WebHDFS.  My preference is WebHDFS, but it appears that we cannot use WebHDFS at all with HDInsight.  So that leaves WebHCat.

What Is WebHCat?

WebHCat is a web-based REST API for HCatalog, a table and storage management layer for data in HDFS.  If you’re looking for configuration settings for WebHCat, you’ll generally want to look for “templeton” in config files, as Templeton was the project’s name before it became WebHCat.  In Ambari, you can go to the Hive configs and look at webhcat-site.xml for configuration settings.  For WebHCat, the default port in HDInsight is 30111, which you should find in the templeton.port configuration setting.

(Image: templeton.port configuration setting in Ambari)

Connect To Port 30111

Now that we know this, it’s as easy as putting your cluster name into the WebHCat host, putting in the Hadoop user account, and hitting the Test Connection button:

(Image: Hadoop Connection Manager test against WebHCat port 30111)

And, if you’re following along at home, prepare to wait about 2 minutes and have it fail.

It turns out that HDInsight clusters are accessible through port 443 via URL rewrites, which is pretty cool:  you always go through SSL connections, and based on the URL you go to, you’ll get forwarded on to the correct service.  In practice, this makes it fairly easy to connect to WebHCat via curl.

Port 443

Given that we need to connect on port 443 via an HTTPS connection (thus SSL), it seems that we should set the WebHCat port to 443, turn on HTTPS, and be good to go:

(Image: Hadoop Connection Manager test against port 443 with HTTPS enabled)

This also failed to connect when I hit Test Connection, giving me a weird System.AggregateException error at that (though I don’t have a screenshot of it).  So what do we do now?

Capture Those Packets!

Wireshark is your friend, assuming that you’re allowed to install it in your environment.  Make sure of it first; people have been fired for less…

So let’s fire up a packet capture and see what exactly is happening.  I’ve removed any IP addresses or other identifiers, as it’s going out to a public IP, but we can easily see what’s going on here.

(Image: Wireshark capture of the failed connection attempts)

The red boxes show me trying to connect directly to port 30111 and we can see why we get errors:  we’re sending SYN packets (to try to open a connection) but not receiving an ACK (acknowledgement) packet from the server.  After a while, we have a connection timeout.  Our HDInsight cluster is not listening to public requests on port 30111.

The blue box shows what’s happening when I use port 443 instead of 30111.  First, we have SYN and ACK packets, which establish the connection.  Next, SSIS tries to connect to the server on port 443 and…wait, it sends an HTTP request instead of HTTPS!  Notice that we try to re-transmit a few times, assuming that the server simply lost track of time or got busy or left the phone on vibrate or something, when in reality, port 443 is studiously ignoring our packet because it’s not over SSL.  Eventually our client gets the hint and sends a FIN packet to tell the server that we’re just going to shaddup about that whole HTTP request thing.

How Should It Look?

By contrast, I’ve taken a screenshot of what a successful SSL connection looks like when I use my browser to connect to the cluster:

(Image: Wireshark capture of a successful HTTPS connection)

We start with SYN and ACK, followed by an SSL Hello, which the server acknowledges.  Then we have a key exchange and the server is happy, so it responds to our request and sends the data we want.  That’s how it should look.

Fixes And Workarounds

Right now, there is no fix for Test Connection.  At PASS Summit, I had a chance to discuss this issue with members of the SSIS team and it sounds like they are aware of the bug and are working on a fix.

In the meantime, there is a workaround:  if you hit OK and create the connection manager anyhow, the SSIS processes will actually connect via SSL.

PASS Summit Follow-Up

This was quite a busy Summit…  My original plan was to write one PASS Summit-related post per day (hence the Wednesday through Friday gap last week), but there was too much to do and not enough time.

The good news is that PASS Summit just defined the next three months for me.

Polybase

I already have several posts on Polybase, but this year’s Summit convinced me that I need to dig deeper into it.  Much deeper.  My plan is to have a comprehensive series on Polybase, covering not just on-prem Hadoop (like I’ve done so far), but also other experiences like HDInsight, Azure Blob Storage, and Azure Data Warehouse.

Re-Introducing Hadoop

Barely two years ago, I wrote my Much Ado About Hadoop presentation and ran it through 2015.  Looking back at this talk, I still think it was a good intro for its time, but two years is huge in the Hadoop space.  For 2017, I plan to rebuild that talk and focus on a true overview.  I think it made sense to discuss the concepts behind MapReduce and Hive then, but today, I’m going to start with a higher-level overview of streaming and integration technologies which weren’t generally available at that point.

This series will probably run alongside my Polybase posts.

Driving User Group And SQL Saturday Attendance

Hi, I’m Kevin, and I’m terrible at marketing!  Despite that, I’m going to look for ways to engage people in the Raleigh-Durham community, getting them interested in going to local user groups (like the SQL Server User Group) and attending events (like SQL Saturday).  I picked up some great ideas talking to people at PASS Summit and I want to implement them.  In the course of this, the plan is to figure out good metrics and see what drives those metrics.  I also plan to dig through Andy Warren’s posts, as he is a font of knowledge.

This includes a two-pronged approach:  first, I want to reach out to DBAs and data engineers who either have never heard of the user group or who haven’t really given it much thought.  Second, I want to reach out to people who deal with SQL but don’t think of themselves as DBAs:  software developers, QA engineers, system administrators, etc.

Of course, if you have ideas, I absolutely want to hear them.

Restoring An Azure SQL Database

Not too long ago, I had a chance to restore a database in Azure SQL Database, as someone had accidentally run an UPDATE statement without a WHERE clause.  That happens to the best of us, and it’s one of many reasons why we keep database backups around.  To correct the mistake, I had to restore a backup and run an update statement to set some values back to their prior values.  Overall, the process was pretty easy, but I figured it was worth a blog post.

First, log in to the Azure portal.  Once you’re in there, go to the SQL Databases option and select the database you want to restore.  Once you’ve chosen your database, hit the Restore button in the top bar.

(Image: Restore button in the Azure portal)

Clicking that button brings up a new blade which allows you to choose a new name for your restored database.  Note that you cannot (at least as of the writing of this blog post) restore over an existing database; you can only restore to a new database.

(Image: choosing a name for the restored database)

You will need to select your restore point as well.  In this case, I decided to restore back to midnight UTC on a particular date.  Note that the dates are UTC rather than your local timezone!

After selecting your restore point, you pick the target server and can decide a couple of things.  First, you can put this database into an elastic database pool, letting it share resources with the other databases in that pool.  Second, you can choose a different pricing tier.  Because I only needed this database for a few minutes, keeping it at P2 Premium was fine; the total restore time meant that we spent less than a dollar bringing the data back to its pristine condition.

Once you’ve selected the name, restore point, and database size, hit the OK button and prepare to wait.  And wait.  And wait.  Azure database restorations can be slow.  And the latest version (v12) had some issues with restoration performance, although in the comments, the Azure SQL Database Product Manager notes that they’ve improved performance times since that post.

Anyhow, it took somewhere between 5 and 10 minutes for my 40 MB (yeah just 40 MB) database to restore.  We weren’t in crisis mode or anything—we’re using Azure Redis cache, so customers were hitting the cached values during this window—but it’s an important part of the restoration process to set expectations so people have a good idea of when that server will be available (or fixed) again.

SQL Saturday Raleigh

SQL Saturday Raleigh 2017 is official.  The event will be Saturday, March 11th at William Peace University in Raleigh, North Carolina.  SQL Saturdays are free, one-day training events for database administrators, database developers, and anybody in the broader data platform space.  Attendance is free, and there is an optional $15 charge for lunch.  Our call for speakers is open through Tuesday, January 10th, so sign up!

For this year’s SQL Saturday, I hope to get a fairly broad range of topics, covering plenty of core SQL Server administration and development topics, but also topics like Hadoop and R, bringing in some groups of people who might not normally think of going to a SQL Saturday.

TIL: Database Mirroring, TDE, And Encryption

I recently had to build database mirroring for a database with Transparent Data Encryption enabled. Ahmad Yaseen had a very nice walkthrough which covered most of what I needed to know. There were a couple things I needed to do to get everything working, so I figured it was worth a blog post.

Set Up Certificates

The first thing I had to do was set up a master key and certificate on my primary instance:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Some Master Key Password';
GO
CREATE CERTIFICATE TDECertificate WITH SUBJECT = 'Some TDE Certificate';
GO

Then we need to turn encryption on for the database, which is a two-step process:

USE [SomeDatabase]
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECertificate;
GO
ALTER DATABASE [SomeDatabase] SET ENCRYPTION ON;

At this point, TDE is on for the primary instance.
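If you want to confirm that encryption has finished, sys.dm_database_encryption_keys shows the state and progress; an encryption_state of 3 means the database is fully encrypted.

SELECT
	DB_NAME(dek.database_id) AS DatabaseName,
	dek.encryption_state,
	dek.percent_complete,
	dek.key_algorithm,
	dek.key_length
FROM sys.dm_database_encryption_keys dek;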

Backups

Now it’s time to take some backups. First, let’s back up the various keys and certificates:

USE [master]
GO
--Back up the service master key
--Note that the password here is the FILE password and not the KEY password!
BACKUP SERVICE MASTER KEY TO FILE = 'C:\Temp\ServiceMasterKey.key' ENCRYPTION BY PASSWORD = 'Service Master Key Password';
GO
--Back up the database master key
--Again, the password here is the FILE password and not the KEY password.
BACKUP MASTER KEY TO FILE = 'C:\Temp\DatabaseMasterKey.key' ENCRYPTION BY PASSWORD = 'Database Master Key Password';
GO
--Back up the TDE certificate we created.
--We could create a private key with password here as well.
BACKUP CERTIFICATE [TDECertificate] TO FILE = 'C:\Temp\TDECertificate.cert'
	WITH PRIVATE KEY (FILE = 'C:\Temp\TDECertificatePrivateKey.key', ENCRYPTION BY PASSWORD = 'Some Private Key Password');
GO

Then we want to take a database backup and log file backup. I’ll let you take care of that part.
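For completeness, a minimal version of those backups, using the same file names the restore scripts below expect:

--Database mirroring requires the full recovery model, so make sure the database is not in simple recovery.
BACKUP DATABASE [SomeDatabase] TO DISK = 'C:\Temp\SomeDatabase.bak' WITH INIT;
GO
BACKUP LOG [SomeDatabase] TO DISK = 'C:\Temp\SomeDatabase.trn' WITH INIT;
GO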

Now I want to get mirroring set up.

Mirror Certificates

On the mirror instance, let’s restore the various certificates. I’m assuming that this is a true mirroring instance and that you haven’t created any keys. I also moved the keys, certificates, and backups over to the mirroring instance’s C:\Temp folder.

USE [master]
GO
--Test restoration of the keys and certificate.
RESTORE SERVICE MASTER KEY FROM FILE = 'C:\Temp\ServiceMasterKey.key' DECRYPTION BY PASSWORD = 'Service Master Key Password';
GO
--For the master key, we need to use the file decryption password and then the original password used for key encryption.  Otherwise,
--your restoration attempt will fail.
RESTORE MASTER KEY FROM FILE = 'C:\Temp\DatabaseMasterKey.key'
	DECRYPTION BY PASSWORD = 'Database Master Key Password'
	ENCRYPTION BY PASSWORD = 'Some Master Key Password' FORCE;
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'Some Master Key Password';
GO
CREATE CERTIFICATE [TDECertificate] FROM FILE = 'C:\Temp\TDECertificate.cert'
WITH PRIVATE KEY (FILE = 'C:\Temp\TDECertificatePrivateKey.key', DECRYPTION BY PASSWORD = 'Some Private Key Password');
GO

I needed to use the FORCE directive when restoring the master key. Otherwise, this part went smoothly.

Database And Log Restoration

Before restoring the database and log backups, I needed to open the master key.

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'Some Master Key Password';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
RESTORE DATABASE [SomeDatabase] FROM DISK = 'C:\Temp\SomeDatabase.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG [SomeDatabase] FROM DISK = 'C:\Temp\SomeDatabase.trn' WITH NORECOVERY, REPLACE;

Now we have the mirror database in Restoring mode, and it’s using our service master key, so TDE is ready to work on this database as well.

From there, I was able to finish setting up mirroring. Note that I didn’t need to do anything special to the witness server–it doesn’t need any certificates or keys to do its job.
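For reference, the mirroring setup itself was the standard sequence; here is a compressed sketch with placeholder server names, assuming Windows authentication between the instances:

--On each instance (principal, mirror, and witness), create a database mirroring endpoint.
CREATE ENDPOINT [Mirroring]
	STATE = STARTED
	AS TCP (LISTENER_PORT = 5022)
	FOR DATABASE_MIRRORING (ROLE = ALL);
GO
--On the mirror instance, point the database at the principal.
ALTER DATABASE [SomeDatabase] SET PARTNER = 'TCP://PrincipalServer.domain.local:5022';
GO
--On the principal instance, point at the mirror and then (optionally) the witness.
ALTER DATABASE [SomeDatabase] SET PARTNER = 'TCP://MirrorServer.domain.local:5022';
ALTER DATABASE [SomeDatabase] SET WITNESS = 'TCP://WitnessServer.domain.local:5022';
GO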

Additional Links

In addition to Ahmad’s post, I read through this post on the topic. I also read Don Castelino’s post on mirroring configuration failure scenarios. I ran into a couple of these errors, so it was a helpful article.

Finally, I want to leave you with this Simon McAuliffe post on the uselessness of Transparent Data Encryption. I’m not sure I’m 100% in agreement, but it’s a thought-provoking post.