Approaching Zero: Wrapping Up

This is part nine in a series on near-zero downtime deployments.

What Have We Learned?

Let’s summarize what we’ve learned in this series. I’ll do that with a series of headers and way too short text bits. Like this one. And links. Like this one to the introduction.

Is This Something You NEED?

I think this is the first question you have to ask yourself. Are you in an environment where extended downtime isn’t an option? I’ve been in places where the last person touching a server was usually done by 8 or 9 PM local time, so I’d have 8-10 hours to futz about before anybody noticed.

In that kind of scenario, you can make a reasonable argument that no, you should not follow these practices. These practices extend the amount of time developers spend writing database code to get the same amount of end product done, so from a developer’s perspective it can feel like wasted effort.

Get Your Pieces in Place First

There are a few things you really want to have in place before you pare down to near-zero downtime:

  • Code in source control with a repeatable release process and some semblance of continuous integration.
  • Most of your data access going through stored procedures. This doesn’t need to be 100% but it needs to be way higher than 0%.
  • You have acknowledgement from the business side that this is a trade-off: they’re trading developer time on certain business features for product reliability (which is itself a feature). The things developers need to do to reduce downtime windows necessarily mean slower delivery of features compared to the alternative.

Think in Terms of Steps

Even if you work in a pure continuous deployment environment, I think it’s still good to think in terms of individual steps: pre-release, database release, code release, database post-release. They don’t need to be cordoned-off blocks of time on a calendar (though it’s fine if they are). They need to be concepts to keep in mind. Pre-release is all about preparation to move from one state to the next. Database release gets you ready for code release by ensuring that old and new versions of code can play nice with the same database. Database post-release gets rid of the cruft you’ve built up.

Procedure Changes are Easy

Working with stored procedures makes near-zero downtime deployments easy. Working with ad hoc code makes it difficult to the point where certain changes may become practically impossible.
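As a sketch of why procedures are the easy case: replacing a procedure is a quick metadata operation, so you can deploy a new version in a single statement. The procedure and table names below are hypothetical, just for illustration.

```sql
-- CREATE OR ALTER (SQL Server 2016 SP1+) replaces the procedure in one
-- near-instantaneous metadata operation, so callers flip from the old
-- definition to the new one atomically.
CREATE OR ALTER PROCEDURE dbo.GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.OrderID, o.OrderDate, o.TotalAmount
    FROM dbo.Orders o
    WHERE o.CustomerID = @CustomerID;
END;
```

Because every caller goes through the procedure, you can change the underlying query (or even the underlying tables) without touching application code.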

Table Changes are Usually Easy, as Are Index Changes

Changing tables is usually pretty easy if you have stored procedures fronting your database code. The downside is that “usually” doesn’t mean “always” and some types of table changes might require extended foresight. For example, changing a column type from NTEXT to NVARCHAR(MAX) is not a one-step operation if you care about blocking.
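One way to make that NTEXT change without a long blocking operation is to spread it across the release phases. This is a sketch with made-up table and column names, not a script from the series:

```sql
-- Pre-release: add the replacement column.
ALTER TABLE dbo.Notes ADD NoteTextNew NVARCHAR(MAX) NULL;

-- Pre-release: backfill in small batches to limit blocking.
-- (Run in a loop until no rows remain; one batch shown for brevity.)
UPDATE TOP (1000) dbo.Notes
SET NoteTextNew = CONVERT(NVARCHAR(MAX), NoteText)
WHERE NoteTextNew IS NULL
  AND NoteText IS NOT NULL;

-- Database release: swap the columns with quick metadata renames,
-- once procedures keep both columns in sync.
EXEC sp_rename 'dbo.Notes.NoteText', 'NoteTextOld', 'COLUMN';
EXEC sp_rename 'dbo.Notes.NoteTextNew', 'NoteText', 'COLUMN';

-- Post-release: drop the old column once nothing references it.
ALTER TABLE dbo.Notes DROP COLUMN NoteTextOld;
```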

Index changes independent of constraint changes are also pretty easy, with a minor schema modification lock at the time the index becomes available for use.
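For example, an online index build keeps the table available for reads and writes while the index is created; the table and column names here are hypothetical:

```sql
-- ONLINE = ON (Enterprise Edition and Azure SQL Database) builds the
-- index without long-held blocking; a brief schema modification (Sch-M)
-- lock is still taken at the end, when the index becomes available.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate)
WITH (ONLINE = ON);
```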

Constraint Changes are Usually NOT Easy

Most constraint changes are tricky. Creating primary and unique keys works like creating indexes: minor blocking for a very short time frame. The story is not as nice with foreign key or check constraints. For those, we cause blocking throughout the entire operation—for foreign key constraints, we block both the table we create the key on and the table we reference. With those constraints, you’re going to want to create a new table, backfill, and swap names at the end.
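That create-backfill-swap pattern might look like the following sketch. All table, column, and constraint names are hypothetical:

```sql
-- Pre-release: create the replacement table with the constraints baked in.
CREATE TABLE dbo.OrderLinesNew
(
    OrderLineID INT NOT NULL,
    OrderID INT NOT NULL,
    Quantity INT NOT NULL,
    CONSTRAINT PK_OrderLinesNew PRIMARY KEY (OrderLineID),
    CONSTRAINT FK_OrderLinesNew_Orders
        FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
);

-- Pre-release: backfill in batches, skipping rows already copied.
INSERT INTO dbo.OrderLinesNew (OrderLineID, OrderID, Quantity)
SELECT ol.OrderLineID, ol.OrderID, ol.Quantity
FROM dbo.OrderLines ol
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.OrderLinesNew n
    WHERE n.OrderLineID = ol.OrderLineID
);

-- Database release: swap names in a transaction so callers only ever
-- see one table named dbo.OrderLines.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.OrderLines', 'OrderLinesOld';
EXEC sp_rename 'dbo.OrderLinesNew', 'OrderLines';
COMMIT TRANSACTION;
```

The expensive validation happens against the new table before the swap, so the blocking window shrinks to the renames themselves.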

Identity Column Changes are Sort of Easy

Some kinds of identity column changes, such as reseeding values, are easy. We also learned about some techniques that make adding an identity value after the fact or changing the increment pretty easy. If you want to add a primary key on top of that, you’ll have more process but can pull it off.
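Reseeding is the simplest of those operations; it's a one-liner. The table name here is illustrative:

```sql
-- Check the current identity value without changing anything.
DBCC CHECKIDENT ('dbo.Orders', NORESEED);

-- Reseed so the next inserted row gets seed + increment (here, 1000001
-- with the default increment of 1).
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000000);
```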

Parting Thoughts

The upside to getting into near-zero downtime is that you really get a stronger understanding of how the database engine you’re using works with respect to locking and changes. Being able to reason through changes with limited customer impact is a key consideration for a great brownfield developer and at the end of the day, most work is brownfield. If you want a specific case study of brownfield development, here you go.

This wraps up the Approaching Zero series of posts. If you liked it, good for you. If you hated it, good for you as well, but I have to wonder why you’d take the time to read tens of thousands of words of something you hated. Still, I appreciate that commitment to hatred.

If you have something to add, I’d love to hear about it in the comments or as a series of passive-aggressive sub-tweets.


