On offense in Madden NFL 16

With a full season, two preseasons, and an absolute jolly stomping of the Ravens in game 1 of Season 2 behind me (71-10 — and the score shouldn’t have even been that close), I wanted to add a little more about the passing game and the offense I run with Cleveland.

The new passing mechanics make it difficult for quarterbacks to play too far out of their comfort zone — which means no deep balls for Johnny Football. He has decent, but not great, arm strength (89), and I’ve developed both his short and medium yardage accuracy to the point where he’s well above average at both (80+). His real value comes from his speed and toughness; on one play, he broke a defensive lineman’s tackle and hit a tight end fifteen yards down the field.

The secret to playing well with Manziel is the West Coast offense. That means relying on running the ball, and I’ve found Isaiah Crowell to be a great power back, behind a Cleveland offensive line that’s only gotten better with my #1 pick, a right tackle, and a surprisingly great blocking tight end that I drafted as a pass catcher (see below).

When I do throw the ball, I rely on slants (which are not the world beaters they used to be; DBs can and will jump the ball if you get complacent), crossing routes, curls, and the occasional screen pass. Corner post routes are great in man to man — you’ll always find a mismatch — but regular corner routes or double moves are still more or less broken because of how long they take to develop.

My biggest weapon is that I have not one, not two, but three amazing tight ends. I will actually pass out of goal line sets or singleback jumbo, which makes the computer weep if it blitzes. Antonio Gates is my starter, and we all know about him. Emmanuel Bibbs (you can read his real life scouting report here) is a TE Cleveland signed as a UDFA before this season. He isn’t very fast, but his hands are amazing (80+), making him a reliable possession target. Rounding out the group is Neil Weatherford, the TE I drafted in the third round. He’s extremely fast (83 speed), strong (70+), a wonderful run blocker, and has hands that would make Shannon Sharpe weep. Weep because they suck. (This is why he was a third rounder.) He’s also dumb as a post and a bad route runner. However, these are fixable flaws, and he’s such a top-notch athlete that I think he’ll move up the depth chart very fast once those flaws are corrected.

When I do have to throw deep, I use my other secret weapon: Dwayne Bowe. He is tall and big, so when Manziel throws one of his dying quail “bombs,” Bowe will fight any DB for the ball, making some great hands-type catches. Josh Gordon is the superior athlete, but for whatever reason, he doesn’t hang onto the ball like Bowe. Gordon does have lower awareness, perhaps his greatest weakness, and doesn’t jump quite like Bowe. Still, Gordon could be as good as Bowe, if not much better.

The last secret weapon (the SUPER secret weapon) is Duke Johnson, though anyone who watched Cleveland play San Diego will admit he’s no real secret anymore. Johnson is the fastest back on my team (90 speed) and has great hands (70+). Crowell might not be able to run sweeps or pitches, but Johnson can.

My offense does have one fatal flaw: if a team can stop me from running, especially if it can do so without blitzing, I’m in trouble. One thing my offense lacks is speed, which means that every deep ball is a jump ball. One-on-one, either Gordon or especially Bowe can and will fight for the ball, but if the defense gets safety help, Manziel doesn’t have the power to throw them open. A stopped run game also takes away play action and bootlegs, both of which Manziel excels at. My new QB, Clay, whom I took in the second round, is as good as Manziel in every passing category, but he lacks speed and can’t salvage a dead play. He’s a great system QB, and he shined in the preseason, but he would be even more doomed on a day when the running game fails.

My new kicker is tremendous, both because of his kicking and because he is as white as I am yet wears Ricky Williams-style dreadlocks. It never ceases to make me laugh.

Tomorrow, you can read about how I run my defense!

ChuWi Vi10 Thoughts

About a month ago, I purchased a ChuWi Vi10 tablet.  Now that I’ve had the device for a bit, here are my thoughts.

First of all, dual booting is not a joke, nor is it a novelty.  It really runs Windows, and it runs Windows pretty doggone well, even with just 2 GB of RAM.  The tablet does have a quad-core processor, which helps some.  Windows does stutter and drag every once in a while, and I haven’t pushed the tablet beyond running one or two applications at a time.  Normally, I’ll run Firefox and Windows Live Writer, but not much else.  It obviously won’t play real games, but Twitch streams are smooth and the graphics are good enough for my low-rent web surfing.

In Android, things are pretty zippy, but the screen is blurry and out of focus.  That’s something you can ignore when page zoom is relatively high, but if you’re trying to read small print, you’ll notice the weakness.  Fortunately, you can root the device and fix the display problem.  After following these instructions, my screen is noticeably crisper and I can read ebooks in Android.

As far as size and form factor go, this is a landscape tablet; it’s not something you want to try to carry around and read in portrait mode.  This means that it’s good for webpages and video, but not as good for technical books.  I’ll admit that I haven’t done much reading with this device, especially because I have a 7” tablet which is much better designed for reading.

The big benefit to the device is its keyboard.  With the keyboard attached, the Vi10 is basically a low-price Surface Pro.  It’s not as powerful as a Surface Pro 3, not by any stretch, but it’s also about a quarter of the price, and because I don’t have laptop money to throw at a tablet, I’d much rather have the ChuWi.  As for the keyboard itself, I’ve had problems with the touchpad.  It sits right in the middle of the keyboard area, and it’s easy to brush it while typing, moving the mouse and clicking somewhere you didn’t mean to.  I end up turning the touchpad off when I start typing blog posts or longer messages and instead use the touch screen.

Touch screen precision is OK, but I’ve had some problems.  My fingers aren’t particularly fat, but I do seem to need to tap two or three times because I’m ever so slightly off.  The tablet runs at 1366×768, and I’m sure lowering the resolution would fix a lot of the problems; alternatively, bumping magnification to 125% might be the smart play.  I’m going to try that out and see if it’s the answer to my problems or if losing that much screen real estate isn’t worth it.  For precision work, I just use a mouse.

My biggest concern with this device is that the keyboard scratches up the glass—I see scratches all along the edges of my screen.  There haven’t been any deleterious effects and I haven’t tried buffing them out, but it does leave me concerned about the tablet’s longevity and whether it can withstand my…rough habits with devices.

All in all, if you can get this tablet for $150 or less—and right now you can—it could be a very good purchase.  There are rumors of more powerful ChuWi tablets coming out soon, so you might want to wait, but I’m glad I made this purchase.

Presentation Redundancy

As a presenter, it’s hard enough getting up in front of a group of people and talking about a topic.  We run the risk of failed demos, disengaged audiences, and broken equipment.  Practice and preparation can help with the first two, but sometimes stuff just breaks.  This week’s SQL Saturday Pittsburgh provided a good example.

As I prepared for my early-morning session, I set up my laptop just like usual and plugged in my HDMI-to-VGA adapter.  The adapter worked…sort of.  It would make a connection, drop it within a couple of seconds, and then re-connect.  This connect-disconnect cycle obviously wasn’t going to fly, so I needed to do something about it.  I checked my laptop in another room and found that my VGA connection worked fine there, so I went to the help desk technician.  He and I tried to troubleshoot the setup, but somehow, during the process, things ended up getting worse—now I couldn’t connect at all with my adapter, even with a new VGA cable.  I didn’t have a backup adapter, and most speakers are moving to Thunderbolt or DisplayPort adapters, whereas I’ve got HDMI.  Even when I tried a different, working adapter, I just got back to a flickering screen.  I ended up giving my talk on somebody else’s laptop, which didn’t have SQL Server installed—just Management Studio.  I was glad that I could give the talk at all, but honestly, I should have done better for the people who woke up early on a rainy Saturday and came to watch me speak.

To fix this, I’m going full-bore with redundancy.  Here’s what I have:

  1. VMs on a separate USB drive.  I had this before, so no major change here.  This means that if I have another computer with VMware installed, I can swap PCs and be up and running without missing a beat.  Of course, I might need to scale back my VM’s resources and reboot it if I’m on less-powerful equipment.
  2. Two separate computers available for presentation.  I have my presentation laptop, but I’m also going to start bringing my tablet.  The tablet is pretty weak but it can run SQL Server Management Studio and is powerful enough for me to do some of my talks.  I couldn’t do the Hadoop talk on this tablet, but I should be able to do the rest of them.
  3. Spare adapters and cables.  The failure on Saturday showed me that I had a single point of failure with respect to adapters.  Even if it had been a simple adapter failure, I might not have found somebody else with the right adapter.  I ended up purchasing two HDMI-to-VGA adapters in addition to the one I have now.  I’m going to test my current adapter with a VGA projector I have at home to see if it’s still functional; if so, I’ll have three adapters at my disposal.  I also purchased two Micro-HDMI-to-HDMI adapters.  My tablet uses Micro-HDMI, so if I end up needing to use it, I’ll need the right adapters.
  4. Portable projector.  This is an emergency projector, not something I’m planning to use very often.  For that reason, I decided to go cheap—I don’t get paid to speak, after all.  I picked up an AAXA LED Pico projector.  It’s about the size of a smartphone and fits nicely into my presenter bag.  It also has a built-in battery which should be good enough for a one-hour presentation with some time to spare.  The downside is that it has a ridiculously weak bulb, putting out just 25 lumens.  This means that my presentation room would need to be more or less dark for people to see the screen clearly, but again, this is a worst-case emergency scenario in which the alternative is not presenting at all.
  5. Azure VM.  I have an Azure subscription, so it’d make sense to grab all of my code and have a VM I can start up before presentations just in case my laptops fail.  That way, I can at least run the presentation remotely.  That Azure VM will have Management Studio and look very similar to my on-disk VM, but probably will be a lot less powerful.  It should be just powerful enough to do my Hadoop presentation.
  6. Phone with data plan.  In case I need to get to my Azure VM and can’t get an internet connection at my presentation location, I need a backup data plan.  Fortunately, I already have this.  Unfortunately, the app I’m using for tethering requires installation on the PC.  I might wait until I get a new phone before buying software which turns my phone into a wireless access point.

With all of these in place, I’ll have redundancy at every level and hopefully will not experience another scenario like I did in Pittsburgh.  I’m grateful that my reviews were generally good and people I respect said I did a good job recovering, but I’d rather prevent the need to recover quickly.  This isn’t as important as protecting corporate assets, but the principles are the same:  defense in depth, redundancy in tools, and preparation.

Windows Live Writer Is Still Alive

Not too long ago, I decided to start blogging regularly once more.  To do this, I want a tool which allows me to write blog posts offline.  The WordPress editor is fine when you’re online, but sometimes I’ll be on an airplane or in a location without ready Internet access.  When I started researching blog editors, I landed on Windows Live Writer.  Although the product is in a dormant state, it’s still popular, and for good reason.  It integrates with a number of services, gives you a pretty good idea of how your blog posts will look, lets you add images and links extremely easily (even more easily than WordPress’s editor does), has seamless publishing, and lets you work offline.  It also lets you use one interface to publish to different blogs, although I only have this blog, so that benefit doesn’t do much for me.

The biggest problem with Windows Live Writer is finding a working download link.  The Hanselman link above has it, but I also want to include the Windows Live Writer download link here.  I’ve confirmed that it works just fine with Windows 10.

I might look for something that works well with Android and Linux when I’m using tablets or laptops running those operating systems, but at least I have a workable product with my Windows tablet.

Season 1 of Madden finished

I have completed my first full franchise season in Madden. It was a roaring success: I went 13-3 and won the Super Bowl. I eventually progressed to 12-minute quarters with the accelerated clock burning off 25 seconds. The game pace feels about right, stats look more realistic, etc.

Gameplay-wise, now that I’ve played about 30 games of Madden (including the super-short Draft Champions mode), just the tiniest bit of bloom has come off the rose. Two areas need attention, in my opinion. First, returning kicks is still all but impossible because the blocking AI sucks (this is less true for punts, but I’ve still never run one all the way back). Second, there has to be a faster way of making defensive changes on the fly. Ideally, I should be able to tell my entire defense to concentrate on one play in just a couple of button presses — instead it’s more like four or five, which means it’s the only change I can make before the snap. The UI for this has to be streamlined.

I’m also thinking of going back to All-Pro, or maybe just tweaking some sliders to make Pro harder. Once I got back into my Madden rhythm, I was dominating most games. Cleveland has an awesome offensive line in real life, so my running successes aren’t that surprising, and I do feel properly limited when Manziel has to throw deep. It’s much harder to upgrade QBs now (which is good; I turned Manziel into a 99 in, I think, two seasons in Madden 15), so you have to be smart about their weaknesses. You can no longer get ungodly numbers of sacks either; I’d like to see slightly more intelligent AI play on defense, but it’s not a deal breaker. I might play one more season on Pro before making any other decisions.

Why did I lose three games? An annoying feature of Online Franchise mode. Every other PS4 game lets you suspend a game: I can start one, watch Netflix with my wife, then come back having lost nothing. With Madden, you get disconnected for inactivity (even in a single-player franchise)… which means your game is meaningless, even if you reconnect. To be fair, the game does warn you, but for some reason I hoped it didn’t actually apply to me. Lesson learned. I lost a game I had originally won (by a lot) and simmed the rest of the season until the playoffs.

As far as the offseason is concerned, some things haven’t changed much. Guys do, indeed, re-sign in the offseason for reasonable sums, which leads me to believe that there’s something wrong with the mid-season re-signing logic. I was able to keep most of the players I really wanted without much fuss. Free agency is something of a crapshoot; free agents want LOTS of signing bonus money, and I spent some of my excess funds (from which you pay those signing bonuses) to improve my stadium instead. The trials and tribulations of the NFL owner. I did grab Antonio Gates on a cheap two-year deal because the tight ends I have now need some development time before they’ll be ready.

Which leads me to the draft. As I mentioned in an earlier article, during the season (and some of the offseason) you scout players. The offseason adds combine data. You don’t get the Madden numbers (95 speed or what have you), but you do get the drill results and where players rank in the class. I drafted the sixth-fastest halfback (by 40 time) in the third or fourth round and ended up with an 86-speed back. I could have looked harder and maybe gotten a gem, but I didn’t.

When you make a pick, you get an immediate reaction telling you whether the player was a reach, an okay pick, a good pick, or an excellent pick, based on the OVR numbers of everyone in the draft. You get all the stats right after the pick is made, so you can see how good (or not) the player is. I miss having to go through the preseason to find out, which was a nice touch in Maddens past. Overall, I had a good draft. As the champ, I got lousy draft positions, but I came away with a starting right tackle, a really good backup QB who could replace Manziel if he gets greedy, a kicker (which the game REALLY liked, calling it my best pick), and a project tight end who’s extremely fast (82 speed), along with some other pieces and parts. Scouting didn’t help me find much in the way of amazing Brady-esque picks (unless you count the 97-power kicker), but it does help you avoid busts. There was a third-round receiver who graded out as “undrafted” once I unlocked his top three stats. Keep in mind that Madden weighs awareness heavily in OVR, so you could find a great project player who’s dumb as a post for cheap.

All of my picks graded out in the 70s, except one in the high 60s and one awful player the game auto-drafted for me because I hit the wrong option. I don’t know if steals (80+ players in the later rounds) are possible or not; I was filling holes more than taking the best player available. My right tackle was a tremendous pick, despite his high-70s OVR, because of his tremendous strength (94) and good across-the-board blocking ratings (80s). His awareness will need work, and I’m sure he’ll whiff on his share of blocks until he gets smarter, but he’s a keeper.

I’m eager to play the preseason and see how these new parts fit together. I will report on any new lessons I pick up from season 2!

The Search For A Better Browser

Browsers and I have a long and somewhat inimical relationship.  Here is what I want in a browser:

  1. Fast.  The browser should load faster than Netscape 4 did.  This means you, Firefox.
  2. Secure.  I want to turn off JavaScript by default and turn it on when necessary.  Firefox has NoScript, which is great for that.  Chrome has historically tried to avoid adding that functionality, but ScriptSafe used to be a good alternative.  A couple of months ago, however, ScriptSafe started breaking Google searches, so I moved on to uMatrix (also available on Firefox).  I’ve liked that experience so far, especially because you can set domain-specific privileges, so I could allow third-party YouTube scripts on one domain but not another.
  3. Convenient.  Remember my settings, bring me back to where I left off in case I reboot my PC, and make it so that I don’t have to fight your UI.  Chrome is the worst about this:  by default, it doesn’t re-open tabs if you close the browser, meaning that you could lose a bunch of tabs if, say, Windows decides to reboot your computer overnight.

Every single browser on the market seems to fail me in various ways.  Here’s my current (and definitely not comprehensive) complaint list:

  • Edge:  I like how fast it is and how well it handles HTML5, but you cannot right-click and save!  Seriously, who ships a modern browser that doesn’t let you choose to download things?
  • Internet Explorer:  Yeah, I’ve heard that IE 10 and 11 don’t suck nearly as much as IE used to, but you burned that bridge with me years ago, Microsoft.
  • Firefox:  NoScript is cool, but Firefox seems to get more and more bloated, slower and slower, more and more memory-intensive.  Just like Mozilla did.  Just like Netscape did.  It’s about time for another group to blow up the browser and start over; maybe it’ll be good for 2-3 versions like these other browsers were.
  • Chrome:  When I’m on a touchscreen device and I have a keyboard attached, I don’t want the on-screen keyboard to show up whenever I click on an input box.  I have a device which provides input already.  Chrome should know that, because Firefox and Edge don’t behave this way.  And what’s the advice Chrome gives?  Shut off the on-screen keyboard…which is terrible advice for someone who has a tablet.  Chrome has also felt more bloated over time, and it soaks up memory.
  • Safari:  I’ll admit that I don’t use Safari for Windows.  I tried it a few years back, but it was a horrible knockoff of the Apple version.  If I wanted a horrible knockoff browser, I’d reinstall Konqueror.
  • Opera:  Nope.

Are there any browsers on the market which don’t suck?  I’ll take Linux or Windows browsers.  Over on Android, I’m OK with Dolphin Browser because of its LastPass integration, tabbed browsing experience, and decent speed.

Warehousing On The Cheap

Decision Support Systems (DSS) are excellent mechanisms for getting important information to business users quickly and efficiently.  These warehouses serve three vital purposes.  First, they reduce strain on Online Transaction Processing (OLTP) systems.  OLTP systems are designed for efficient insertion, updating, and deletion of data, as well as quick retrieval of a limited number of rows.  When a user wants to build reports off of a large number of rows, however, that can cause blocking on the OLTP system, preventing other users from modifying data quickly.

Second, warehouses allow you to collect data from disparate systems.  Oftentimes, users need to pull data from one system and mix it with data from others.  The finance department may take employee data from an HR system, budgeting data from a finance system, and expense data from the accounting system, not to mention hidden Excel spreadsheets or Access databases which contain vital business information.  A well-designed and well-maintained warehouse can pull from all of these systems, conform the data to business standards, and present it to end users as a single, unified source.  This makes building effective reports much easier.

Finally, warehouses simplify the view of the data for business users.  Well-designed warehouses (following the Kimball model) use schemas which minimize the number of joins necessary for reporting, and those joins make intuitive sense to end users.  An end user doesn’t need to understand bridge tables (which are how we model many-to-many relationships in a transactional system); can ignore business-inessential metadata like created & modified times and users; and can easily understand that the EmployeeID key ties to the Employee dimension, which contains all essential employee information.
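
To make this concrete, here’s a minimal sketch of the kind of query an end user might write against a Kimball-style model; the fact table, dimension, and columns are all hypothetical:

-- One intuitive join from the fact table to the Employee dimension; no bridge
-- tables or audit metadata in sight.
SELECT
    de.EmployeeName,
    de.Department,
    SUM(fs.SaleAmount) AS TotalSales
FROM dbo.FactSales fs
    INNER JOIN dbo.DimEmployee de
        ON fs.EmployeeID = de.EmployeeID
GROUP BY
    de.EmployeeName,
    de.Department;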

At this point, most companies are sold on having warehouses, but not every company has the people, time, and money to do it “right.”  The ideal would be to have a separate warehousing team which has significant resources (including C-level support) available to build and maintain these separate systems.  I’ve never worked at a company of that scale; historically, I’ve worked at companies in which one or two database administrators or database engineers are responsible for “taking care of the warehouse” in addition to normal job duties.  And right now, I’m working at a client with no dedicated database people and little domain knowledge, meaning that even low-intensity warehousing may be out of the question.  In this blog post, I’m going to talk about a few ideas for how to put together a warehousing solution which requires relatively little maintenance.

The first thing to think about is tooling.  I love SQL Server Integration Services (SSIS) and will happily use it for warehousing ETL.  I also acknowledge that SSIS is not trivial to learn and can be difficult to master.  I’m pretty sure I would not want the average web developer to be responsible for maintaining an SSIS project, as there are just too many things that can go wrong, too many hidden options to toggle and properties to set to get it right; and developing an SSIS project from scratch sounds even scarier.  In an enterprise scenario, I’ll happily recommend SSIS, but for a small-scale company, that’s overkill.

The good news about most small-scale companies is that one of the big reasons for warehousing—collecting data from disparate systems—typically doesn’t apply.  Startups and small businesses tend to have relatively few data systems and almost no need to connect them together.  Yes, you might make the finance guy (who may also be the CEO) slightly more efficient, but these companies normally aren’t pushing the boundaries of software, and Excel or some standalone product might actually be the best solution, even if it means double-entering some data.  They also tend not to need as many types of reports as people in a larger company might require, so you can scale down the solution.

Instead, for a smaller business, the main benefit to having a warehouse tends to come in improving application performance.  In the scenario I’m dealing with now, the company’s flagship application needs to perform an expensive calculation on a significant number of pages.  This calculation comes from aggregating data from several disparate tables, and their data model—although not perfect—is reasonably well-suited for their OLTP system.  To me, this says that a warehousing solution which pre-calculates this expensive calculation would improve application performance significantly.  But without any dedicated database people to support a warehouse, I want to look for things which are easy to implement, easy to debug, and easy to maintain as the application changes.  What I’m willing to trade off is ease of connecting multiple data sources and writing to a separate server—if I can improve query performance, the current production hardware is capable of handling the task without a dedicated warehouse instance.

Given this, I think the easiest solution is to build out a separate reporting schema with warehouse tables.  You don’t need to use a separate schema, but I like to take advantage of SQL Server’s schemas as a way of logically separating database functionality.  As far as design goes, I can see three potentially reasonable solutions:

  1. A miniature Kimball-style data model inside the reporting schema.  You create facts and dimensions and load them with data.  The upside is that this is the most extensible option, but the downside is that it requires the most maintenance, as you’ll need to create ETL processes for each fact and dimension and keep those processes up to date as the base tables change.
  2. A single table per report.  For people just starting out with reporting, or for people who only need one or two report tables, this could be the best option.  You would get the report results and store that data denormalized in a single table.  The major downside to this is extensibility.  If you start getting a large number of reports, or if several reports use the same base tables, you quickly duplicate the report table loading process and this can be disastrous for performance when modifying data.
  3. A miniature quasi-Inmon-style data model inside the reporting schema.  You only include information relevant to the reports, and your data model might be a bit more denormalized or munged-together than the base tables.  For example, suppose that you have a report which is the union of three separate tables.  Instead of storing those three separate tables, you would store their common outputs in a manner which is somewhat normalized, but not perfectly so.  Let’s say that we’re dealing with travel information.  You want to report on core details like departure and arrival times, method of travel, etc.  In the transactional system, we might care about details specific to airline flights (was a drink offered?  Which seat number?) that won’t apply to travel by taxi or train.  To solve this problem in the transactional system, we probably have child tables to store data for air flights, taxi rides, and train rides.  But in our reporting schema, we may only have a single Travel table which includes all of the necessary data (see the sketch after this list).  The advantage is that this is extensible like the Kimball-style model while still being a simplification of the OLTP data model.  The downside is that it is more difficult to maintain than a single reporting table, and changes to the OLTP system may necessitate more reporting-system changes than the single-report model would.
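
To make option #3 a bit more concrete, here’s a rough sketch of that munged-together Travel table; it assumes the reporting schema created below, and every name is a hypothetical stand-in:

CREATE TABLE reporting.Travel
(
    TravelID INT NOT NULL PRIMARY KEY,
    TravelMethod VARCHAR(20) NOT NULL,   -- 'Flight', 'Taxi', or 'Train'
    DepartureTime DATETIME2(0) NOT NULL,
    ArrivalTime DATETIME2(0) NOT NULL,
    DepartureLocation VARCHAR(100) NOT NULL,
    ArrivalLocation VARCHAR(100) NOT NULL
    -- Flight-only details (drink service, seat number) stay behind in the
    -- OLTP child tables; the reports don't need them.
);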

None of these is the wrong solution; it all depends upon requirements.  In my case, I really only need a single reporting table, so option #2 is probably the best for me.  If it turns out that I need more tables in the future, I can migrate the data model to a proper Kimball-style model, and hopefully by that time, my client will have at least one data professional on staff to support the design.
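
Here’s a minimal sketch of what option #2 might look like:  one reporting schema and one denormalized table holding the pre-calculated results.  The table and columns are hypothetical stand-ins for my client’s actual calculation:

CREATE SCHEMA reporting;
GO

-- One row per entity, holding the expensive calculation the application needs.
CREATE TABLE reporting.PageCalculation
(
    EntityID INT NOT NULL PRIMARY KEY,
    CalculatedValue DECIMAL(19, 4) NOT NULL,
    LastUpdateTime DATETIME2(0) NOT NULL
        CONSTRAINT DF_PageCalculation_LastUpdateTime DEFAULT (SYSUTCDATETIME())
);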

Now that we’ve selected a model, it’s time to figure out the best way to get data into that reporting table.  I’ve struggled with this and I only see three realistic options:

  1. Have a separate ETL process which runs periodically (see the sketch after this list).  If users are okay with relatively stale data, this solution can work well.  If you only need to update once every six hours or once a day, your ETL process could query the transactional tables, build today’s results, and load the reporting table.  This is a simple process to understand, a simple process to maintain, and a simple process to extend.  The big disadvantage is that you won’t get real-time reporting, so end users really need to be okay with that.
  2. Update stored procedures to dual-write.  The advantage to this is that you can easily see what is happening.  The disadvantage is that procedures are now writing out to two separate sources, meaning that procedures will take longer to write data and it is still possible for warehouse readers to block writers.  Also, if somebody creates a procedure (or has ad hoc code) which does not write to the warehouse, we lose those updates and our reporting table is now inconsistent.  Depending upon the size of data we’re dealing with, we might be able to put in compensating controls—like a job which runs regularly and synchronizes reporting data with transactional data—but that’s kind of a hacky solution and end users can see bad data in the meantime, making the application less trustworthy for users.
  3. Use database triggers to write.  The advantage is that code which inserts into the relevant OLTP tables does not need to know of the existence of these reporting tables, so there is no chance of accidentally missing an important insertion.  The biggest disadvantage is that database triggers are certainly not easily maintainable—it’s hard to remember that they exist, and debugging them can be painful.  Also, with database triggers inserting into reporting tables, it is possible for report readers to block writers.  Allen White has an excellent presentation in which he uses Service Broker in conjunction with database triggers to populate warehouses, and that’s a great way to solve this problem.  I fully support using Service Broker, but for a company without data professionals on hand, maintaining Service Broker might be a bit too much.
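
As promised in option #1, here’s a minimal sketch of a load procedure that a SQL Agent job could run every few hours.  It rebuilds the reporting table from the base tables wholesale; the source table and aggregation are hypothetical stand-ins for the real calculation:

CREATE PROCEDURE reporting.LoadPageCalculation
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

    -- Wholesale rebuild:  simple to understand, maintain, and extend, but
    -- readers only see data as fresh as the job's last run.
    TRUNCATE TABLE reporting.PageCalculation;

    INSERT INTO reporting.PageCalculation
        (EntityID, CalculatedValue, LastUpdateTime)
    SELECT
        t.EntityID,
        SUM(t.Amount),
        SYSUTCDATETIME()
    FROM dbo.SomeTransactionalTable t
    GROUP BY
        t.EntityID;

    COMMIT TRANSACTION;
END;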

In my case, we need real-time data and so I’m going to use database triggers.  I’m not sure yet if I’ll use Service Broker or not; I really want to, but I don’t want to get late-night troubleshooting calls asking me why the reporting table is out of date and things are failing.  I need a solution that web developers can maintain without significant domain expertise.
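
Here’s a rough sketch of what one of those triggers might look like, writing to the reporting table sketched above.  A real implementation would also need UPDATE and DELETE triggers, plus an insert path for brand-new entities; all names remain hypothetical:

CREATE TRIGGER dbo.tr_SomeTransactionalTable_Reporting
ON dbo.SomeTransactionalTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Recalculate the stored value for each entity touched by this insert.
    UPDATE pc
    SET
        pc.CalculatedValue = calc.CalculatedValue,
        pc.LastUpdateTime = SYSUTCDATETIME()
    FROM reporting.PageCalculation pc
        INNER JOIN
        (
            SELECT
                t.EntityID,
                SUM(t.Amount) AS CalculatedValue
            FROM dbo.SomeTransactionalTable t
            WHERE t.EntityID IN (SELECT i.EntityID FROM inserted i)
            GROUP BY
                t.EntityID
        ) calc
            ON pc.EntityID = calc.EntityID;
END;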

Your environment may differ, but a combination of stored procedures (if you can afford reporting latency) or database triggers (if you cannot), together with SQL Agent jobs or Service Broker, can give a small IT team with limited SQL Server troubleshooting knowledge the ability to scale its reporting.  In my client’s case, we are looking at cutting almost 90% of the server’s resource requirements by going from nested table-valued and scalar functions to a simpler-to-use, simpler-to-maintain solution.