Last time around, I walked through some basics of database testing. Now I’d like to dig into it a little bit more, especially focusing on tools we can use to help us out.
At the end of the last post, I mentioned that “natural” database testing—in which the starting point for your tests is the set of queries you used to develop the procedure in the first place—can be a viable mechanism for understanding what and how to test your stored procedures, but it comes with a potential maintenance cost. If your table schema changes regularly, the cost of maintaining all of these tests increases. The major trade-off here is that you can have a rigid table schema but flexible data, or a flexible table schema but rigid data. Let’s look at an example to illustrate what I mean.
In the previous post, I built a simplistic test, getting the expected number of records from tables a and b based on a filter.
DECLARE @ExpectedNumberOfRecords INT;

SELECT
    @ExpectedNumberOfRecords = COUNT(1)
FROM a
    INNER JOIN b
        ON a.keyb = b.keyb
WHERE
    b.y = @y;
If this table structure changes such that attribute y moves from table b up to table a, I need to re-write this portion of the test; otherwise it will fail with an invalid column error. The benefit of doing this, however, is that I make no assumptions about the underlying data other than its structure: we might have 1 record come back or 1 million; either way, my test will succeed.
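To make that maintenance cost concrete, here is roughly what the setup query would have to become once y lives on table a. This is only a sketch; depending on what the procedure actually returns, the join may or may not still be necessary.

DECLARE @ExpectedNumberOfRecords INT;

SELECT
    @ExpectedNumberOfRecords = COUNT(1)
FROM a
    INNER JOIN b
        ON a.keyb = b.keyb
WHERE
    a.y = @y;  -- the filter now references a rather than b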
On the other hand, let’s say that I don’t want to formalize my table structure in quite this way—I want to leave it a bit more flexible so that I don’t need to re-write my tests every time I make a schema change. In that case, I need to get a bit less flexible with data: I need to hard-code the results in my test, telling the test that I expect 3 rows back. This means that my underlying data can’t change (or at least the subset I’m testing), but it does maximize schema flexibility.
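In code, that hard-coding is as simple as replacing the setup query with a literal, with 3 standing in for whatever the known data set should return:

-- The known development data set should always produce exactly 3 matching rows.
DECLARE @ExpectedNumberOfRecords INT = 3;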
Ideally, however, I’d like the best of both worlds: data flexibility within my development environment, but also schema flexibility within my tests. I know that I don’t want to modify my stored procedures to force in dependency injection. Instead, we can use a testing framework called tSQLt to help us out here. tSQLt acts as an isolation and unit testing framework, and one of the things it does is help us fake tables. That way, we can set up our own version of the table with a known data set and execute our normal, static stored procedure against our faked table. If you have a Pluralsight subscription, you can check out Dave Green’s series on tSQLt. I also recommend his article on tips and tricks around using tSQLt. You might also be interested in Red Gate’s SQL Test product.
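To give a flavor of what faking a table looks like, here is a minimal sketch of a tSQLt test. The class, table, and procedure names (TestWidgets, dbo.Widget, dbo.GetWidgetsByY) are hypothetical stand-ins; the tSQLt procedures themselves (NewTestClass, FakeTable, AssertEqualsTable, Run) are the framework's standard calls.

EXEC tSQLt.NewTestClass 'TestWidgets';
GO
CREATE PROCEDURE TestWidgets.[test GetWidgetsByY returns only matching rows]
AS
BEGIN
    -- Arrange: swap dbo.Widget for an empty, constraint-free copy and load known rows.
    EXEC tSQLt.FakeTable 'dbo.Widget';
    INSERT INTO dbo.Widget(Id, Something, y)
    VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'X');

    CREATE TABLE #Expected (Id INT, Something CHAR(1));
    INSERT INTO #Expected(Id, Something) VALUES (1, 'A'), (3, 'C');

    -- Act: execute the real, unmodified procedure against the faked table.
    CREATE TABLE #Actual (Id INT, Something CHAR(1));
    INSERT INTO #Actual(Id, Something)
    EXEC dbo.GetWidgetsByY @y = 'X';

    -- Assert: the two result sets must match exactly.
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO
EXEC tSQLt.Run 'TestWidgets';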
My main problem with tSQLt and other database unit testing platforms is that they leave a lot of cruft in my database. I know that it’s all in its own schema (tSQLt by default), but I already have thousands of stored procedures I have to dig through; hundreds more don’t sound very appealing to me. That’s one reason why I like the database test project Microsoft provides in Visual Studio 2013. I can create all of my tests in Visual Studio and execute them whenever I want, knowing that they won’t leave objects in the database. The downside is that this is not as comprehensive as tSQLt or SQL Test, as I cannot fake objects or do many of the things you would expect to do with real unit testing. Microsoft’s database unit tests are more akin to integration tests, but I’m generally okay with that.
Regardless of the tools you use—even if your tool is nothing more than a folder with a bunch of .sql files that you run in SQL Server Management Studio—there are a few things we can do to make writing tests easier. The first thing we can do is choose on which side of the flexibility problem we’ll live: a flexible table schema with relatively fixed data, or flexible data with a relatively fixed table schema. Because the advice for these two approaches is nearly a mirror image, I want to go over each in turn.
So first, let’s look at having a flexible table schema with fixed data. In this case, we want to have data scripts which populate the development database. We’ll need to keep those up to date as the table schema changes, and those data scripts should have relevant data for all database unit tests. If you have a good data architect modeling your databases (and who has time to do this!), you may want to go down this path. The largest benefit here is that you can remove table dependencies from your unit tests altogether by hard-coding expected values and result sets. You won’t need to build up the expected results from an SQL query, which makes your unit tests more robust to table refactoring. This also allows you to take advantage of certain features within Visual Studio database projects, such as asserting actual values: either scalar values, such as “the fourth column of the third row of the second result set should have a value equal to 6,” or a checksum of the entire result set. Your database tests will also run more quickly, as you have fewer queries to execute per test.
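Here is a sketch of what a fixed-data test body can look like. The procedure name dbo.GetWidgetsByY and the literal expectations are hypothetical; in a Visual Studio database unit test you could instead let a scalar-value or checksum test condition do this assert for you.

-- With a curated data script, the expectations are literals rather than queries.
DECLARE @Actual TABLE (Id INT, Something CHAR(1));

INSERT INTO @Actual(Id, Something)
EXEC dbo.GetWidgetsByY @y = 'X';

-- Hard-coded expectations: exactly 3 rows, and row 1 must look like this.
IF (SELECT COUNT(*) FROM @Actual) <> 3
    OR NOT EXISTS (SELECT 1 FROM @Actual WHERE Id = 1 AND Something = 'A')
    RAISERROR('Result set did not match the hard-coded expectations.', 16, 1);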
But I’ll be honest with you: keeping this kind of data script up to date is hard. Most of us don’t have data architects with the time to maintain appropriate data scripts for our projects, and when you’re working with legacy code, you may not even know what is “appropriate” when it comes to data collections. So let’s look at the alternative now: flexible data with a fixed table schema. In this case, instead of hard-coding expected values, we determine the expected values based on what relevant data is currently in the database. If we have a stored procedure whose end result set is supposed to have a certain shape, we first run a SELECT statement which gets either the expected result set or some bits of information which confirm that yes, we have the correct data—for example, the number of rows and the sum of revenue. Then we execute the stored procedure, save its results in a table variable, and compare those results to the expected results. When I go down this route, I tend to end up with two result sets: my expected results and my actual results. Rather than having Visual Studio assert something about the two result sets, I combine the result sets together. In the following example, we’ll say that @a is my actual result set and @e my expected result set. Let’s pretend that I executed a stored procedure to build up @a (rather than simply inserting records).
--Arrange
DECLARE @a TABLE
(
    Id INT,
    Something CHAR(1)
);
DECLARE @e TABLE
(
    Id INT,
    Something CHAR(1)
);

--Act
INSERT INTO @a(Id, Something) VALUES(1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO @e(Id, Something) VALUES(1, 'D'), (2, 'B'), (3, 'C');

--Assert
(
    SELECT Id, Something FROM @a
    EXCEPT
    SELECT Id, Something FROM @e
)
UNION ALL
(
    SELECT Id, Something FROM @e
    EXCEPT
    SELECT Id, Something FROM @a
)
If our assertion returns zero rows, we know that the two result sets are exactly the same and the test passes. If any rows come back (as they will in this example), we know that the test failed and we can investigate further. This means that, with flexible data (in contrast to fixed data), we want to use “empty result set” tests in Visual Studio more than checksums. Another difference between the two is that I have now introduced implementation details into my database tests, meaning that I will probably need to update those tests more often.
Sometimes, you may need a combination of fixed-data and flex-data tests. For example, there may be “stale” data which you can use as quasi-fixed data, as well as “fresh” data which you would use as flexible data. This comes with its own set of advantages and disadvantages—the biggest disadvantage is that your “stale” data might change, either due to garbage collection or somebody going back and updating data that you assumed was fixed.
Regardless of whether you end up with a fixed-data or flexible-data scheme for your tests (or some combination of the two), a few practices are common to database testing in general. The first is banal but important: your database tests should be idempotent. No matter how many times you run a test, given the same inputs, it should return the same outputs and leave the data unchanged. Secondly, you should strive to keep the inputs the same by making your tests leave as small a footprint as possible on the data: every test should begin with BEGIN TRANSACTION and end with ROLLBACK TRANSACTION. This helps us with the next rule: test order should not affect results. You should not depend upon test 14 being run before test 15; if your test order does affect things, you have a serious problem and need to re-think your test strategy. Is it possible that tests 14 and 15 have common setup code that you can refactor out? Or, at the very least, can you repeat the setup code in each of the two tests?
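As a minimal sketch of that pattern (the table and procedure names here are placeholders), every test body wraps its work in a transaction so that nothing it touches survives the run:

BEGIN TRANSACTION;

-- Arrange: any scratch data the test needs exists only inside this transaction.
INSERT INTO dbo.Widget(Id, Something, y) VALUES (999, 'Z', 'Q');

-- Act and assert: execute the procedure and check its results here.
EXEC dbo.GetWidgetsByY @y = 'Q';

-- Leave no footprint on the development data, pass or fail.
ROLLBACK TRANSACTION;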
Finally, coming up with test cases is hard. You might be able to come up with a few dozen good test cases per stored procedure, meaning that if you have 5000 stored procedures in your environment, you may be looking at 60,000 tests. That’s a lot of test writing, and even if you can bang out 10 per hour (which is generous), you’ll be working on this for 3 years straight, assuming 40-hour work weeks and 50-week years. So instead, we have to pick and choose our testing battles. There are a couple of ways that help us pick and choose. The most important way is to create test cases based on bugs. Whenever a bug ticket shows up in your tracking system, the first step in development is to try to replicate the issue. Once you have this issue replicated, create a test case around that issue. That way, you know when you have the bug fixed and you can also prevent that bug from re-appearing in the future. If the bug turns out to be complex, it may require several test cases—that’s fine; create all of the test cases you need to cover this bug.
Aside from bugs, another source of good test cases is production logs. If you have error logging or call logging for parts of your production workload, you have known samples of how end users actually use your application. Converting these into unit tests should be easy—you may need to change some input parameters—but for an extremely flexible stored procedure, this might be the best way to test the common paths. For example, suppose we have a stored procedure which generates dynamic SQL and supports different clauses as inputs: the columns in the SELECT statement, filters (in the WHERE and HAVING clauses), and sort order. Even calculating the number of possible combinations for a relatively complex query is time-consuming, and creating a full set of covering cases could take years for one procedure. Instead of trying to define a priori what people “probably” will select, go look and see what they actually are selecting and build your tests off of that. If you don’t have this type of logging information, it might make sense to invest in it, either by using SQL Server Extended Events to capture procedure calls and their parameters, or by modifying your procedures to start capturing this information somewhere.
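If you go the Extended Events route, a capture session can be as simple as the following sketch. The session name, database name, and file name are placeholders; rpc_completed fires once per completed procedure call and records the full RPC text, including parameter values.

-- Hypothetical capture session: log completed procedure calls so we can mine
-- real parameter combinations for test cases later.
CREATE EVENT SESSION [CaptureProcCalls] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.database_name, sqlserver.username)
    WHERE sqlserver.database_name = N'MyAppDb'
)
ADD TARGET package0.event_file
(
    SET filename = N'CaptureProcCalls.xel', max_file_size = (50)
);
GO
ALTER EVENT SESSION [CaptureProcCalls] ON SERVER STATE = START;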
Once you have tests for production data and bug cases, fleshing out other tests gets easier: any time you change a stored procedure, write at least one test which covers that change (and up to as many tests as you need to verify that your change works). Over time, you’ll build a suite of test cases which you can re-run to protect against regression failures. This will let you increase your velocity of code change and support the refactoring that we will cover in the rest of our series.