In the first post in this series, I described the basic problem: we need to fit fundamental, architectural design into an agile toolkit. The primary benefit of being agile is that a development team can respond to external events more quickly than a team following a longer pipeline. That team can see changes, react to circumstances, and be more likely to survive than a competitor with a six-month turnaround time. Let me give you a good example of how this can succeed, based on a recent work experience.
Our development team was tasked with implementing a specific type of Google advertisement. We already support several Google advertisement types, so the foundation was in place. We groomed the story, planned out the work, and broke actual development out across several sprints. The first sprint was primarily spent tying the Google advertisements into our system: importing their records into our database and making sure that our application could see those advertisements. After finishing this work, we learned that almost none of our customers were using this particular advertisement type, meaning that supporting it would have been a net cost. We killed the epic after our first sprint; although that sprint was negative-ROI work, we at least took advantage of some technical debt clean-up along the way. Just as importantly, we saved another two sprints' worth of negative-ROI work. This ability to stop and pivot a development team is a key feature of agile work.
Unfortunately, a lot of people take this key feature and turn it upside-down: because you can pivot a development team from sprint to sprint, there’s no need for longer-term thought. After all, all plans fail, so why plan at all? Why not just go with the flow?
The clichéd response to this is the trope "Failing to plan is planning to fail," but I want to dig into it a little more. Even in our story, more information early on would have saved us a full sprint of work (although I did get a lot of tech debt resolved with this story, so I'm not complaining). Our research and design were more along the lines of "How do we implement this feature?" rather than "Should we implement this feature?" If our team had focused on the latter question, or if our product owner had received the relevant information earlier, we could have spent that time on more productive avenues of work.
Good design is critical in the same way. A development team needs a strong application foundation to do its best work, and the more you fight with your code base, the worse off everybody is in the long run. Unfortunately, the pattern in most "agile" shops basically says, "Let's take a shortcut here and we'll fix the problem later; we'll call this technical debt to clean up in the future." Rather than take the time to get the design right and improve the odds of long-term success, management and developers tend to look for the easy way out. Throwing code together without a good design even has short-run advantages: without all that time spent on design or thinking about the future, you can get more work done in the present. A good development team can probably ride that wave for about a year before things start getting bad and developer-written hacks result in lost revenue, either because the code base is too buggy or rickety to support important features (leaving developers whose sole job is maintaining convoluted application code) or because developers spend all their time fighting the code base to implement anything new.
So let’s say that we agree that tight development iterations are a good thing, and also that taking the time for design is a good thing. These two things sound like they’re mutually exclusive, so how do we reconcile them? I believe the answer is in constant iteration. In my next post, I will talk about iterative design, showing how you can reconcile long-term strategy with short-term implementation.