I’ve just recently implemented the first memory-optimized tables at our company. My original plan was to create a few memory-optimized staging tables on our transactional systems, one memory-optimized staging table on the warehouse side, and several memory-optimized TVPs to help with shuttling data around.
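For reference, the sort of memory-optimized TVP I had in mind looks something like this minimal sketch (names and bucket counts are illustrative, not our real objects; on SQL Server 2014 the database also needs a MEMORY_OPTIMIZED_DATA filegroup before this will work):

```sql
-- Minimal memory-optimized table type (illustrative names and sizes).
-- The database must already have a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TYPE dbo.StagingRowTVP AS TABLE
(
    RowID   INT            NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    Payload NVARCHAR(1000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
```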
This plan quickly fell apart. Here’s a quick summary of the problems I ran into during the sprint. I knew about many of them going in, but there were a couple of fascinating new issues I had not yet seen.
- Memory-optimized tables are limited to 8,060 bytes per row. Some of my dimensions have LOB data (a memory-optimized no-no) and other constructs that are not allowed. That meant that a couple of my TVPs were going to stay on-disk. The first sketch after this list shows the kind of table type that gets rejected.
- I created my memory-optimized queue tables for staging, and we use triggers to load these queue tables. This isn’t great overall (I’m eventually going to replace the triggers with Service Broker messages), but it’s what I have to deal with. Anyhow, I ran into a very interesting error in stored procedures that eventually fire triggers that insert into the memory-optimized tables: “A user transaction that accesses memory optimized tables or natively compiled procedures cannot access more than one user database or databases model and msdb, and it cannot write to master.” Our transactional procedures often join to a replicated database to get static data, and that cross-database access within the same transaction raises the error; the second sketch after this list shows the pattern. The workaround is to split the work into separate transactions, which I don’t really want to do. This ruled out memory-optimized tables on our OLTP systems.
- On one of our instances with a very small, fixed-size transaction log, we found a very interesting problem: the transaction log was not truncating due to XTP_CHECKPOINT, and because the log was full, we could not make any alterations to the database. Oh, and “any alterations” includes expanding the log size… There is an open Connect item which basically says that the message may or may not be a lie. One person mentioned that taking a full backup fixed the problem, but in our case, we had to restart the database engine service and re-size the log. The third sketch after this list shows where XTP_CHECKPOINT turns up as the log-reuse blocker.
- As a side note, you cannot take database snapshots if you have a memory-optimized filegroup, and you cannot drop that filegroup without dropping the entire database. In practice, adding a memory-optimized filegroup is a one-way decision that limits your available DBA options; the final sketch below illustrates the one-way door.
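Here’s the first sketch, showing the row-size/LOB limitation in action (illustrative names; the exact error text depends on the offending column, but the type is rejected at creation time):

```sql
-- Fails on SQL Server 2014: LOB types such as NVARCHAR(MAX) are not allowed
-- in memory-optimized tables or table types, and rows must fit in 8,060 bytes.
CREATE TYPE dbo.DimProductTVP AS TABLE
(
    ProductKey  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 65536),
    Description NVARCHAR(MAX) NOT NULL  -- LOB column: rejected at CREATE time
)
WITH (MEMORY_OPTIMIZED = ON);
```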
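The second sketch strips the cross-database problem down to its essentials (illustrative names; the real procedures are messier). The trigger pulls the memory-optimized table into the transaction, and the join to the replicated database adds a second user database, which raises the error quoted above:

```sql
-- dbo.OrdersQueue is a memory-optimized table (DURABILITY = SCHEMA_AND_DATA).
-- This trigger routes new rows into it:
CREATE TRIGGER dbo.tr_Orders_Queue
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    INSERT INTO dbo.OrdersQueue (OrderID, QueuedAt)
    SELECT i.OrderID, SYSUTCDATETIME()
    FROM inserted AS i;
END;
GO

-- Any transaction that fires the trigger AND touches a second user database
-- fails with the "cannot access more than one user database" error:
BEGIN TRANSACTION;

INSERT INTO dbo.Orders (OrderID, RegionCode)
SELECT s.OrderID, r.RegionCode
FROM dbo.StagedOrders AS s
JOIN ReplicatedDB.dbo.Regions AS r  -- second user database in the transaction
    ON r.RegionID = s.RegionID;

COMMIT TRANSACTION;
```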
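The third sketch is a quick diagnostic for the log-truncation issue: log_reuse_wait_desc in sys.databases is where XTP_CHECKPOINT shows up as the reason the log can’t truncate (database name is illustrative):

```sql
-- Shows why the transaction log cannot truncate; XTP_CHECKPOINT means the
-- In-Memory OLTP checkpoint is the blocker.
SELECT name,
       log_reuse_wait_desc
FROM sys.databases
WHERE name = N'WarehouseDB';  -- illustrative database name
```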
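And the final sketch, for the side note: adding the filegroup takes two statements, but there is no supported way back out on 2014 (names and path are illustrative):

```sql
-- Adding a memory-optimized filegroup: a one-way operation on SQL Server 2014.
ALTER DATABASE WarehouseDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" here is actually a directory that holds checkpoint file pairs.
ALTER DATABASE WarehouseDB
    ADD FILE (NAME = N'imoltp_data', FILENAME = N'D:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

-- There is no reverse path: REMOVE FILE / REMOVE FILEGROUP fail even after
-- every memory-optimized object has been dropped. Dropping the database is
-- the only way to get rid of the filegroup.
```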
All of this has made In-Memory OLTP in SQL Server 2014 less than ideal for a real production environment. I’m hoping that 2016 has significant enough improvements that we can expand our use of In-Memory OLTP, but for now, our initial scenario has collapsed down to a few memory-optimized TVPs and one staging table on our warehouse instances.