Ranking the Star Wars games I’ve played

As an ode to the wonderful guide provided by PC Gamer, here is a ranked list of all of the Star Wars games I’ve played.

Dishonorable mention: Force Commander. I played it for about five minutes because the camera controls are wretchedly terrible. It’s an RTS — the camera should be the least complicated part of the game, by far.

10. Rogue Squadron — At the time, I considered it superior to X-Wing vs. TIE Fighter, mostly because my two weakest game genres are flight simulators and racing games. I just wanted to blow shit up in space, and this game most certainly lets you blow shit up in space.

9. Dark Forces — I’m still not convinced I like the game. I recognize it for its merits as a DOOM-clone that went so much farther, and the story is great, but the gameplay itself was lacking.

8. Force Unleashed/Force Unleashed II — I consider these one game, especially since the latter doesn’t add much to the core formula. This is the Rogue Squadron of lightsaber games, and very God of War-esque, but I liked them.

7. X-Wing vs. TIE Fighter — I don’t remember if I ever actually played either X-Wing or TIE Fighter, but I do remember this one. I know, objectively, that it’s better than Rogue Squadron. It’s also much, much harder. Still, it’s probably the best space-fighting game Star Wars has produced to date, and if you have a flight stick, it still holds up.

6. Knights of the Old Republic II — The story was absolutely incredible, the game itself a buggy mess, but loads of fun. Characters actually changed as you changed, reacted to you, instead of being fixed archetypes.

5. Jedi Knight — The non-lightsaber parts are brutal and little more than a somewhat nicer version of Dark Forces. Blasters are the least fun part of Star Wars, people. Once you get the lightsaber, though, it’s very solid, and I personally liked the FMV (even if lots of other people didn’t).

4. Mysteries of the Sith — An expansion to #5, it introduced new Force powers and a compelling protagonist in Mara Jade, a character drawn from the novels. A better version of Jedi Knight; I only wish the new powers were retroactive.

3. Jedi Knight II — You don’t start out with a lightsaber, but getting it is more fun, and the sections where you don’t have one are miles better than Dark Forces or Jedi Knight. A fitting end to the Kyle Katarn saga; he’s somebody I hope one day makes it into a movie.

2. Jedi Knight III — As close to an open world lightsaber adventure as you’re going to get, you could play as multiple races, and the lightsaber combat was brilliant.

1. Knights of the Old Republic — This. Game. It is amazingly incredible, holds up well (even if widescreen support is somewhat buggy), and is still one of the best BioWare RPGs ever. One of the most shocking twists I can remember in a video game. If you don’t have a copy, I recommend reading John Walker’s “Bastard of the Old Republic.” It’s a tribute to the kinds of moral choices KOTOR offers that no other game has matched.

Get Row Counts For Partitioned Tables

I’m adding this query to my SQL utility belt.  This query tells you, for a specific table, all of the partitions, the boundary value for that partition’s range, and the (estimated) number of records in that partition.

SELECT
	OBJECT_SCHEMA_NAME(o.object_id) AS SchemaName,
	o.name AS TableName,
	ddps.partition_number AS PartitionNumber,
	prv.value AS EndValue,
	ddps.row_count AS [RowCount]
FROM sys.indexes i
	INNER JOIN sys.objects o
		ON i.object_id = o.object_id
	INNER JOIN sys.dm_db_partition_stats AS ddps
		ON i.object_id = ddps.object_id
		AND i.index_id = ddps.index_id
	INNER JOIN sys.partition_schemes ps
		ON i.data_space_id = ps.data_space_id
	INNER JOIN sys.partition_range_values prv
		ON ps.function_id = prv.function_id
		AND ddps.partition_number = prv.boundary_id
WHERE
	i.index_id < 2
	AND o.is_ms_shipped = 0

This query has a couple of good uses.  The first is that you can monitor a backfill with it.  In my scenario, I’m loading several years’ worth of data into a new fact table.  SSIS is loading data one day at a time so that I don’t leave a connection open so long that it gets killed.  Running this query shows me approximately how far along I am, as I know the valid date range for data in this table and I can watch the partitions as they fill up.

Another use for this is to check that your partitions are not skewed too heavily.  For example, my partitions are pretty tightly defined:  all of them (except for the current partition) are within 15-20% of the average, meaning that they’re pretty close to a uniform distribution.  That really helps in cases in which I can eliminate partitions:  when the optimizer can throw away 47 out of 48 partitions, it throws away about 47/48ths of the data, regardless of which partition remains.  I would consider that a good partitioning layout.  At the opposite extreme, where almost all data is stored in a single partition, you get zero benefit from partitioning but still have to deal with the DBA overhead involved, and such a table might be a candidate for re-thinking its partitioning strategy or eliminating partitioning altogether.
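To put a number on that skew, here is a sketch of a query built on the same DMVs as above.  It computes, per partitioned table, how far the largest partition deviates from a uniform distribution; the 1.0 multiplier just forces decimal math, and any threshold you alert on is your own judgment call.

```sql
-- Sketch: flag partitioned tables whose largest partition holds far more
-- than its "fair share" of rows.
;WITH PartitionCounts AS
(
	SELECT
		OBJECT_SCHEMA_NAME(ddps.object_id) AS SchemaName,
		OBJECT_NAME(ddps.object_id) AS TableName,
		ddps.partition_number,
		ddps.row_count
	FROM sys.dm_db_partition_stats ddps
		INNER JOIN sys.indexes i
			ON ddps.object_id = i.object_id
			AND ddps.index_id = i.index_id
		INNER JOIN sys.partition_schemes ps
			ON i.data_space_id = ps.data_space_id
	WHERE i.index_id < 2
)
SELECT
	SchemaName,
	TableName,
	COUNT(*) AS NumberOfPartitions,
	SUM(row_count) AS TotalRows,
	MAX(row_count) AS LargestPartitionRows,
	-- With a uniform distribution this ratio approaches 1.0; values well
	-- above 1.0 mean one partition dominates and elimination buys less.
	MAX(row_count) * 1.0 * COUNT(*) / NULLIF(SUM(row_count), 0) AS SkewRatio
FROM PartitionCounts
GROUP BY SchemaName, TableName
ORDER BY SkewRatio DESC;
```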

Thoughts On In-Memory OLTP

A Steve Jones editorial from a few weeks ago got me thinking about In-Memory OLTP.  We implemented this in our data warehouse about a month ago with mixed success.  Here are my notes from that forum post, as well as a couple of areas in which I figured I could expand upon my comments there.

We use memory-optimized objects in two places: one staging table to load data warehouse fact tables, and various memory-optimized table-valued parameters as part of warehouse loading (streaming certain dimension rows from .NET code into our warehouse).
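For reference, a memory-optimized table type looks something like this sketch; the type name and columns here are made up, and the BUCKET_COUNT is a placeholder you would size to the expected number of distinct keys.

```sql
-- Sketch of a memory-optimized table type for streaming dimension rows;
-- all names are illustrative.
CREATE TYPE dbo.DimensionRowList AS TABLE
(
	NaturalKey INT NOT NULL,
	SomeAttribute NVARCHAR(100) NOT NULL,
	-- Memory-optimized table types require at least one index.
	INDEX IX_DimensionRowList_NaturalKey
		NONCLUSTERED HASH (NaturalKey) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON);
```

From .NET, you pass this exactly like an on-disk TVP: a Structured SqlParameter whose TypeName matches the table type.  Only the type definition changes.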

My original plan was to use memory-optimized queue tables in our OLTP system to feed memory-optimized TVPs into memory-optimized staging tables, and possibly convert a couple of our more heavily-read OLTP tables to become memory-optimized in the process. That fell apart for a few reasons.

The best (i.e., most nefarious) problem that we’ve had with In-Memory OLTP is XTP_CHECKPOINT issues.  This happened even before we had a single memory-optimized object; we just had the filegroup.  The first time this happened in production, we suffered a 10-hour downtime on a warehouse node and had to re-queue a lot of data because the transaction log had filled up and we couldn’t do a thing due to this XTP_CHECKPOINT.  We ended up recovering to shortly before the database failure and didn’t lose any data, but needless to say, causing a database failure before even a single memory-optimized object was created was…problematic…  We opened a support case with Microsoft, and that case is still open.  As best as we can tell, if the transaction log fills up and autogrows at any point in time, the XTP checkpoint mechanism stops and never recovers, meaning that the In-Memory checkpoint mechanism never actually checkpoints and your log continues to fill up until you take the database offline and force it to reconcile the log.  Our operations team has been on the lookout for these situations since then and has prevented another disaster, but it’s one more thing the team needs to consider.
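For anyone watching for the same failure mode, a minimal check is to look at each database’s log reuse wait: XTP_CHECKPOINT showing up there for long stretches while the log keeps growing is the red flag.  This is just a sketch of the kind of check an ops team could schedule; thresholds and alerting are up to you.

```sql
-- Minimal sketch: databases whose log cannot be truncated because of the
-- In-Memory OLTP checkpoint mechanism.
SELECT
	name,
	log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc = N'XTP_CHECKPOINT';

-- Pair it with log space usage to see how fast each log is filling.
DBCC SQLPERF (LOGSPACE);
```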

Aside from that, the hard limit of 8,060 bytes per row and the lack of LOB support meant that some of the places where I might have wanted to use a memory-optimized TVP (like TVPs for some of the larger dimensions) wouldn’t work out.

We ended up not pushing the memory-optimized filegroup out to production on our OLTP instances because you cannot take a database snapshot if the database has a memory-optimized filegroup, and we use database snapshots for some system processes. On the OLAP side, we did implement memory-optimized objects and saw significant performance improvements. The TVPs were significantly faster than their on-disk counterparts, and our staging table, which previously had been one of the top sources of deadlocks, zooms. Unfortunately, even with these performance benefits, the limitations have been too strict for adoption across our platform. Even if I got the go-ahead to move memory-optimized objects to our production OLTP system (and not just our OLAP system), there would have been another problem preventing general adoption: the inability, within an explicit transaction, to use memory-optimized tables as well as tables in other databases on the same instance. We use replication to push static data around in our environment, and all of that static data goes into one database on each instance. Many of the queries whose performance I wanted to improve join to this static data database, and I did not want to put all of the in-memory work in a separate transaction from the static data work.
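To make that cross-database limitation concrete, here is a hypothetical sketch; the database, table, and column names are all invented.  A transaction that touches a memory-optimized table cannot also touch a second user database, so the pattern below fails at the cross-database statement with an error stating that a transaction accessing memory-optimized tables cannot access more than one user database.

```sql
BEGIN TRANSACTION;

	-- Memory-optimized staging table in the local database
	-- (hypothetical name).
	INSERT INTO dbo.FactStaging (FactKey) VALUES (1);

	-- Read from replicated static data in a second user database
	-- (hypothetical name). This statement raises the cross-database
	-- error inside the same explicit transaction.
	SELECT sl.LookupValue
	FROM StaticData.dbo.SomeLookup sl;

COMMIT TRANSACTION;
```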

I’m still hopeful that 2016 will solve at least some of these problems and make memory-optimized objects practicable, but to be honest, even coming in knowing most of the code-writing limitations of memory-optimized objects and natively compiled stored procedures (of which we have zero), I think In-Memory OLTP really burned me, and even if 2016’s version is a lot better, I’d have to think long and hard before building a project around it.

A few random thoughts (to prove I’m still alive)

I’ve been writing like a fiend at this stage to get this dissertation done, but I’ve still had some time for gaming, mostly PS4 at this point. I finished Infamous Second Son, getting my platinum for that. I liked it, particularly the innovative Paper Trail DLC, but I liked Cole better as a character. It seemed harder as an Evil character until you got the Video powers. “Evil” Delsin was more of a dick than he was genuinely a supervillain (until the very end).

Right now, I’m playing through Sleeping Dogs: Definitive Edition, in which you are an undercover cop trying to infiltrate the Hong Kong triads. It’s a GTA clone, but there’s a different sense to it. It takes itself pretty seriously, more so than Saints Row, for example. You can’t be a maniac, most of the time, because you’re a cop. I do like it, except for the cheap ass bike races.

After I finish with that, I’m not sure. I spend a lot of time in front of the PC writing, so I don’t enjoy playing PC games as much these days, but I’ve got a pretty good backlog there. I have three other PS4 games, but one is The Last of Us and I’m not sure I could play that again. Watch_Dogs is something I’d like to finish, for sure, and I have Shadow of Mordor too. I’m also watching Red vs. Blue again on Netflix to pass the time.

Upcoming Events

Over the next six weeks, I will be speaking four times.  All of these talks are free and open to the public, although SQL Saturdays do charge for lunch should you choose to purchase it.


TriNUG Data SIG

On Wednesday, June 17th, I will speak at the TriNUG Data SIG (link forthcoming).  My topic is Working Effectively with Legacy SQL.

SQL Saturday Chattanooga

I will speak at SQL Saturday Chattanooga on Saturday, June 27th.  My session begins at 4:15 PM, and the topic is Much Ado About Hadoop.

SQL Saturday Columbus

I will speak at SQL Saturday Columbus on Saturday, July 11th.  My session begins at 1 PM and I’m also giving my Hadoop talk there.

TriNUG F#/Analytics Group

On Tuesday, July 28th, I will give a modified version of my Hadoop talk to the F#/Analytics group.