A Steve Jones editorial from a few weeks ago got me thinking about In-Memory OLTP. We implemented this in our data warehouse about a month ago with mixed success. Here are my notes from that forum post, along with a couple of areas where I figured I could expand upon my comments there.
We use memory-optimized objects in two places: one staging table to load data warehouse fact tables, and various memory-optimized table-valued parameters as part of warehouse loading (streaming certain dimension rows from .NET code into our warehouse).
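As a rough sketch of those two patterns, the DDL looks something like the following. All object names here are hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before either statement will succeed:

```sql
-- Memory-optimized table type, passed as a TVP from .NET
-- (SqlDbType.Structured) to stream dimension rows into the warehouse.
-- Memory-optimized types require at least one index; bucket count is a guess.
CREATE TYPE dbo.DimCustomerRows AS TABLE
(
    CustomerKey  INT            NOT NULL,
    CustomerName NVARCHAR(100)  NOT NULL,
    INDEX ix_CustomerKey NONCLUSTERED HASH (CustomerKey)
        WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Staging table for fact loads.  SCHEMA_ONLY durability skips logging the
-- row data, which suits transient staging rows that can be re-queued.
CREATE TABLE dbo.FactSalesStaging
(
    SaleID      BIGINT         NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerKey INT            NOT NULL,
    SaleAmount  DECIMAL(18, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

SCHEMA_ONLY durability is what removes the staging table from transaction log contention, which is a large part of the performance win for load scenarios like this.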
My original plan was to use memory-optimized queue tables in our OLTP system to feed memory-optimized TVPs into memory-optimized staging tables, and possibly convert a couple of our more heavily-read OLTP tables to become memory-optimized in the process. That fell apart for a few reasons.
The best (i.e., most nefarious) problem that we’ve had with In-Memory OLTP is XTP_CHECKPOINT issues. This happened even before we had a single memory-optimized object; we just had the filegroup. The first time this happened in production, we suffered a 10-hour downtime on a warehouse node and had to re-queue a lot of data because the transaction log had filled up and we couldn’t do a thing due to this XTP_CHECKPOINT. We ended up recovering to shortly before the database failure and didn’t lose any data, but needless to say, causing a database failure before even a single memory-optimized object was created was…problematic…

We opened a support case with Microsoft, and that case is still open. As best as we can tell, if the transaction log fills up and autogrows at any point in time, the XTP checkpoint mechanism stops and never recovers, meaning that the In-Memory checkpoint mechanism never actually checkpoints and your log continues to fill up until you take the database offline and force it to reconcile the log. Our operations team has been on the lookout for these situations since then and has prevented another disaster, but it’s one more thing the team needs to consider.
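The checks our operations team runs amount to something like the following sketch. The thresholds are entirely site-specific, and the useful columns in the checkpoint DMV vary by build, so treat this as a starting point rather than a recipe:

```sql
-- Log space used per database; alert well before autogrowth kicks in,
-- since autogrowth is what appears to wedge the XTP checkpoint.
DBCC SQLPERF(LOGSPACE);

-- In-Memory OLTP checkpoint state for the current database (SQL Server 2014).
-- If log use keeps climbing while these counters stop moving, that matches
-- the stalled-checkpoint symptom described above.
SELECT *
FROM sys.dm_db_xtp_checkpoint_stats;
```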
Aside from that, the hard limitation of roughly 8,000 bytes per row and the lack of LOB support meant that some of the places where I might have wanted to use a memory-optimized TVP (like TVPs for some of the larger dimensions) wouldn’t work out.
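To make the limitation concrete, here is a hypothetical table type for a wide dimension that SQL Server 2014 would reject: the two NVARCHAR(4000) columns alone can require up to 16,000 bytes, well past the ~8,060-byte row limit, and switching them to NVARCHAR(MAX) is not an option because LOB types aren’t allowed on memory-optimized objects in 2014 at all.

```sql
-- Sketch of a wide-dimension TVP that fails to create in SQL Server 2014:
-- the declared row size exceeds the ~8,060-byte limit for memory-optimized
-- tables, and (N)VARCHAR(MAX) is not a permitted workaround.
CREATE TYPE dbo.WideDimensionRows AS TABLE
(
    DimKey          INT            NOT NULL,
    LongDescription NVARCHAR(4000) NOT NULL,
    Notes           NVARCHAR(4000) NOT NULL,
    INDEX ix_DimKey NONCLUSTERED HASH (DimKey) WITH (BUCKET_COUNT = 65536)
)
WITH (MEMORY_OPTIMIZED = ON);
-- Fails with a row-size error at CREATE time.
```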
We ended up not pushing the memory-optimized filegroup out to production on our OLTP instances because you cannot take a database snapshot if the database has a memory-optimized filegroup, and we use database snapshots for some system processes. On the OLAP side, we did implement memory-optimized objects and saw significant performance improvements. The TVPs were significantly faster than their on-disk counterparts, and our staging table, which previously had been one of the top sources of deadlocks, zooms. Unfortunately, even with these performance benefits, the limitations have been too strict for adoption across our platform.

Even if I got the go-ahead to move memory-optimized objects to our production OLTP system (and not just our OLAP system), there would have been another problem preventing general adoption: the inability, within an explicit transaction, to use memory-optimized tables alongside tables in other databases on the same instance. We use replication to push static data around in our environment, and all of that static data goes into one database on each instance. Many of the queries whose performance I wanted to improve join to this static data database, and I did not want to put all of the in-memory work in a separate transaction from the static data work.
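The cross-database restriction looks like this in practice (hypothetical object names; the exact join doesn’t matter, only that the two databases mix in one user transaction):

```sql
-- Sketch of the cross-database restriction described above.
BEGIN TRANSACTION;

SELECT s.SaleID, r.RegionName
FROM dbo.SalesStaging AS s          -- memory-optimized, current database
JOIN StaticData.dbo.Region AS r     -- on-disk table in the replicated static-data database
    ON s.RegionID = r.RegionID;
-- Fails in SQL Server 2014: a user transaction that accesses
-- memory-optimized tables cannot access more than one user database
-- (error 41317).

COMMIT TRANSACTION;
```

Outside a user transaction the individual statements can be split up, but that defeats the point when the in-memory work and the static-data work need to commit or roll back together.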
I’m still hopeful that 2016 will solve at least some of these problems and make memory-optimized objects practicable, but to be honest, even coming in knowing most of the code-writing limitations of memory-optimized objects and natively compiled stored procedures (of which we have zero), I think In-Memory OLTP really burned me, and even if 2016’s version is a lot better, I’d have to think long and hard before building a project around it.