Building a Docker Container of a SQL Server Database

Today, we’re going to go through the process of turning a database you’ve built into a Docker container. Before we get started, here are the expectations:

  1. I want a fully running copy of SQL Server with whatever database I’m using, as well as key components installed.
  2. I want this not to be on a persistent volume. In other words, when I destroy the container and create a new one from my image, I want to reset back to the original state. I’m using this for technical demos, where I want to be at the same starting point each time.
  3. I want this to be as easy as possible for users of my container. I consider the use of a container here as not particularly noteworthy in and of itself, so the more time and thought it takes to set up my demo environment, the more likely it is that people will simply give up.

With that preamble aside, let’s get to work!

Step One: Have a Database

This might seem a little obvious, but I want to make it clear that we need the database set up exactly how we want it. This includes user configuration (usually with SQL authentication, given that we’re using Linux containers and passing them out to who-knows-where), database objects like tables and procedures, and whatever else we need. After that, I will take a database backup and save it to my local disk.

For this example, I’m going to talk through how I containerized the demos for my Approaching Zero talk. This is a fairly straightforward database with some pre-staged data and objects, but no features such as PolyBase, SQL Agent, or Machine Learning Services to throw a spanner in the works. Incidentally, if you do use those types of features, I was able to get them working in my Native Scoring demo, so it’s possible to do and the process actually isn’t that far removed from what we’re doing here.

Step Two: Spin Up a Container

It takes a container to beat a container. Well, not exactly, but this is the route that I like to take.

I’m going to spin up a container. Here’s the command I run:

docker run -d -e MSSQL_PID=Developer -e ACCEPT_EULA=Y -e SA_PASSWORD=YourPasswordGoesHereButDontUseThisOneBecauseItIsntAGood1! --name approaching-zero-db -p 51433:1433 mcr.microsoft.com/mssql/server:2019-latest

Taking this step by step, I want to run a Docker container in the background (that’s what the -d flag means). I’m going to set several environment variables: MSSQL_PID (to tell SQL Server which edition to use), ACCEPT_EULA (to promise Microsoft we won’t sue them), and SA_PASSWORD (to throw off our enemies with deceptively bad passwords). My container will be named approaching-zero-db. You can, of course, name it whatever you’d like, and it doesn’t even have to be the same as what we’re going to push out to Docker Hub, so get creative if you really want.

The -p flag says that we’d like the container’s port 1433 to be represented on our host as port 51433. I already have SQL Server running on my host machine, so I’m selecting a totally different, unused port for the container. But as far as the container is concerned, it is listening on its port 1433, so it has that going for it.

Microsoft has its own container repository for SQL Server (mcr.microsoft.com/mssql/server), and we’re getting the version tagged as 2019-latest, which is CU4 as of the time of this blog post going live. Downloading this may take a while, so try not to do it on a really slow internet connection.
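
If you find yourself fiddling with the password or port, it can help to parameterize the command. Here’s a quick bash sketch; it only assembles and prints the command string (the image path is Microsoft’s registry path for SQL Server), so you can copy the output or pipe it to a shell once you’re happy with the values:

```shell
# Assemble the docker run command from variables so the password, port,
# and container name live in one place. This sketch only builds and
# prints the command; it does not start anything by itself.
SA_PASSWORD='YourPasswordGoesHereButDontUseThisOneBecauseItIsntAGood1!'
HOST_PORT=51433
CONTAINER_NAME=approaching-zero-db
IMAGE=mcr.microsoft.com/mssql/server:2019-latest

RUN_CMD="docker run -d -e MSSQL_PID=Developer -e ACCEPT_EULA=Y \
  -e SA_PASSWORD=${SA_PASSWORD} \
  --name ${CONTAINER_NAME} -p ${HOST_PORT}:1433 ${IMAGE}"

echo "${RUN_CMD}"
```

Change HOST_PORT once and every later step that talks to the container can reuse the same variable.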

Incidentally, you might get the following error:

C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: Ports are not available: listen tcp bind: An attempt was made to access a socket in a way forbidden by its access permissions.

If you get this error, it means that something else is already listening on port 51433 on your host, so you’ll have to use a different port instead. Maybe 52433 or 51434 or something. You’ll need to run docker rm approaching-zero-db to clean up the mess before you try again, but don’t shed too many tears over the containers we have to slaughter along the path to glory.
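
If you’d rather check a port before Docker yells at you, here’s a small bash sketch. It uses bash’s /dev/tcp pseudo-device (so Linux, macOS, or WSL; not cmd), which is an assumption on my part about where you’re running this:

```shell
# Report whether anything is already listening on a candidate host port.
# The /dev/tcp redirect only succeeds if something accepts the connection;
# a refused connection means the port is available for our container.
port_status() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in-use"
  else
    echo "free"
  fi
}

port_status 51433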

Step Three: Restore the Dauphin

We have a container and a database backup, so let’s do what we do best: fitting square pegs into round holes.

To do this, first I will make a backup directory within our container:

docker exec -it approaching-zero-db mkdir /var/opt/mssql/backup

This command is pretty straightforward: we’re going to execute a command in interactive mode (-i) and allocate a pseudo-TTY (-t). Don’t know what a pseudo-TTY is? If it really matters, learn all about tty and then come back here.

Our backup has a home, so now let’s move it in and make it feel comfortable.

docker cp D:\SQLServer\Backup\ApproachingZero.bak approaching-zero-db:/var/opt/mssql/backup/ApproachingZero.bak

We’re using Docker to copy the backup from my local directory into the container. You would, of course, modify this to fit your system.

After the backup is up there, I like to run the following command in PowerShell. The prior commands you could run in cmd, but this one’s a lot easier to read as a multi-liner:

docker exec -it approaching-zero-db /opt/mssql-tools/bin/sqlcmd -S localhost `
    -U SA -P YourPasswordGoesHereButDontUseThisOneBecauseItIsntAGood1! `
    -Q "RESTORE DATABASE ApproachingZero FROM DISK =
        '/var/opt/mssql/backup/ApproachingZero.bak'
        WITH MOVE 'ApproachingZero' TO '/var/opt/mssql/data/ApproachingZero.mdf',
        MOVE 'ApproachingZero_Log' TO '/var/opt/mssql/data/ApproachingZero.ldf'"
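
Your database’s logical file names will almost certainly differ from mine (RESTORE FILELISTONLY against the .bak file will tell you what they are), so here’s a bash sketch that assembles the RESTORE statement from variables. It only builds the string; you’d still feed it to sqlcmd as above:

```shell
# Assemble the RESTORE statement from variables. The logical names used
# here (ApproachingZero, ApproachingZero_Log) are this demo's; check
# yours with RESTORE FILELISTONLY and substitute accordingly.
DB=ApproachingZero
BAK="/var/opt/mssql/backup/${DB}.bak"
DATA=/var/opt/mssql/data

RESTORE_SQL="RESTORE DATABASE ${DB} FROM DISK = '${BAK}'
  WITH MOVE '${DB}' TO '${DATA}/${DB}.mdf',
       MOVE '${DB}_Log' TO '${DATA}/${DB}.ldf'"

echo "${RESTORE_SQL}"
```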

Some builds of SQL Server containers don’t have mssql-tools installed, so you might need to install them separately. Let’s talk about that in a sub-header.

Step Three Point A: What to Expect when you Expected mssql-tools

If, for some reason, your container does not have mssql-tools installed, that’s okay. As long as you have an internet connection, you can get this done.

We’re first going to open up a shell on the container:

docker exec -it approaching-zero-db /bin/bash

Why bash and not ksh? Because I’m not hardcore enough to live my life in vi. I’m a frequent vi tourist but not a native.

Next up, we’re going to install some stuff. Unlike in the Microsoft instructions, we are Master and Commander of this container (we’re already running as root), so sudo isn’t necessary. These are the standard Microsoft commands for installing mssql-tools on Ubuntu; if your container is based on a different Ubuntu release, adjust the 16.04 in the second URL to match.

curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | tee /etc/apt/sources.list.d/msprod.list

apt-get update
ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev

And now we have mssql-tools. Type exit to leave the container’s shell, and we can return to Step Three, already in progress.

Step Four: Perform Sanity Tests

After you’ve restored the database, connect to it in SQL Server Management Studio, Azure Data Studio, or your tool of choice. Make sure the objects are there, that you can log in with the accounts you need, etc. You might need to create logins for those SQL authenticated accounts, for example—unless you’re like me and giving away sa access like it’s free candy at Halloween.

Here’s the important thing, though: any changes you make here will be on the container’s permanent record, so if you want a pristine experience every time you reconstitute that container, you’ll want to handle this step lightly.

Step Five: Admit Defeat on Sanity, Get Committed

We’ve made sure that the database is what we expected, that all of the pieces are there, and that queries are running as we expect. From there, it’s time to push this to Docker Hub.

The first step in pushing a container is to commit the current container as a new image with the docker commit command:

docker commit -m "Creating the Approaching Zero database." -a "{Your Name Here}" approaching-zero-db approaching-zero-db

We are running the commit command and passing in a message with -m, setting the author with -a, selecting the running container to commit (approaching-zero-db) and naming the local repository to which we want to commit this thing (approaching-zero-db).

At this point, you can spin up a local container off of your approaching-zero-db model, but nobody else can. So let’s fix that.

Step Six: Bust Out and Let the Whole World See You

We want to push this new image to Docker Hub. If you have not already, you’ll probably need to log in:

docker login

You might be able to authenticate with cached credentials, but if not you’ll enter your username and password and be good to go.

Now, we want to tag our local image and tell Docker what it will represent up in Docker Hub:

docker tag approaching-zero-db feaselkl/presentations:approaching-zero-db

This takes my local repository named approaching-zero-db (with the default latest tag) and ties it back to Docker Hub, where in my feaselkl account I have a presentations repository and a tag called approaching-zero-db.
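
To make the account/repository:tag structure explicit, here’s the same name built up piece by piece in bash (these are my names; substitute your own Docker Hub account):

```shell
# Docker Hub image references decompose as account/repository:tag.
# These values are the ones used in this post.
ACCOUNT=feaselkl
REPOSITORY=presentations
TAG=approaching-zero-db

REMOTE="${ACCOUNT}/${REPOSITORY}:${TAG}"
echo "${REMOTE}"
```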

My last step is to push the fruits of my labor into Docker Hub for all to see.

docker push feaselkl/presentations:approaching-zero-db

Step Seven: Using the Image

Now that I have the image up in Docker Hub, I can run the following commands from any machine with Docker installed and spin up a container based on my image:

docker pull feaselkl/presentations:approaching-zero-db

docker run -d --name approaching-zero-db -p 51433:1433 feaselkl/presentations:approaching-zero-db

And that’s it. I connect to port 51433 on localhost and authenticate with my extra-great sa username and password, and can run through whatever I need to do. When I’m done, I can stop and remove the container whenever I’d like:

docker stop approaching-zero-db
docker rm approaching-zero-db


In this post, we learned how to take an existing database in SQL Server—whether that be for Windows or Linux—and create a SQL Server container which includes this database.

Coda: Discourses and Distractions

I wanted to save a couple of notes here to the end in order not to distract you with too many tangents in the main body of the post.

Let’s do this as Q&A, as I haven’t done that in a while.

Couldn’t you just use Dockerfiles for all of this?

Short answer, yes. Long answer, I didn’t want to. Dropping a Dockerfile in my GitHub repo makes it easier for me, sure, but then makes it more difficult for people to follow along. As an experiment, I did include all of the steps in my Map to Success repository. Compare the section on “Run Docker Images” to “Build Docker Images” and tell me which one is easier.

If I were putting together a training on Docker, then it would make perfect sense. But I’m using containers as a background tool, and I want to get past it as soon as possible and with as little learner friction as possible.

Can’t you just script all of these steps?

Short answer, yes. Long answer, yes you can.

The only thing to watch out for in scripting is a timing issue I noticed between when you copy the database backup to the container and when sqlcmd recognizes that the backup is there. I wasn’t able to get it all working in the same PowerShell script, even when I did hacky things like adding Start-Sleep. But maybe you’ll have better luck.
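
If you do take a run at scripting it, a retry loop tends to be sturdier than a fixed sleep. Here’s a bash sketch of a generic retry helper; in a real script the retried command would be the sqlcmd RESTORE from Step Three, but to keep this self-contained it’s demonstrated with a stand-in that fails twice before succeeding:

```shell
# Retry a command up to a fixed number of attempts, pausing between
# tries, and return success as soon as the command succeeds.
retry() {
  attempts="$1"; shift
  delay="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Stand-in for the sqlcmd RESTORE: fails twice, then succeeds on the
# third invocation, simulating the backup file not being visible yet.
COUNTER_FILE="$(mktemp)"
echo 0 > "$COUNTER_FILE"
flaky_restore() {
  n=$(($(cat "$COUNTER_FILE") + 1))
  echo "$n" > "$COUNTER_FILE"
  [ "$n" -ge 3 ]
}

if retry 5 0 flaky_restore; then
  echo "restore succeeded"
fi
```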

Why did you label the local repository as approaching-zero-db when you knew you were going to push this as presentations:approaching-zero-db?

That’s a really good question.

Why not expose a volume in the container, put the database backup there, and commit the image without the database backup once you’re done?

Credit Bartosz Ratajczyk for the great question.

The short version is that I actually forgot about volumes when I was writing this up. But using docker cp isn’t the worst thing, especially when copying one file. Bartosz makes a great point though that we should remove the backup files before committing, regardless of the technique we use.

So why would you actually use this technique in real life?

Andragogy is real life.

Aside from that, this technique is quite useful for building automated test environments. Create a pre-defined database with a known beginning state, spin up a container, hammer that database with all of your automated tests, and destroy the container when you’re done. Unlike most cases, where you want to save the data permanently, these sorts of tests cry out for ephemeral databases.

If you’re looking at containers for most other purposes (like, say, production databases), you’d definitely be interested in persisted volumes and I’d hand you off to Andrew Pruski for that.

Why would you want to restore the Dauphin? You don’t seem like an Armagnac.

Yeah, if you pressed me on it I’d probably admit to Burgundian sympathies, but that’s mostly because of the role Burgundy played in the subsequent Dutch republic. But if you pressed me even further, I’d admit that I was just rooting for injuries in that whole tête-à-tête.