Value Added Thoughts
Dorota Parad's blog

Quality or speed - why not both?

[Illustration: a hand-drawn stick figure putting a slice of what looks like strawberry cake in their mouth, while simultaneously holding another slice of the same cake]

One of the most annoying myths in software development that just won’t die is the supposed tradeoff between speed and quality. The common thinking goes: quality, speed, cost - pick 2. And because time is money, this in turn gets simplified to speed OR quality. It’s one of those things that makes me wonder if I live in some parallel universe with different laws of physics, surrounded by people with magical powers. It’s especially frustrating when people brag that their LLMs generate code fast AND of decent quality, as if it’s some sort of Christmas miracle.

I’m going to be blunt. If you believe that the quality in software costs you extra, either money or time, you’re doing software development wrong.

I understand where this madness comes from. The primary source is our infatuation with simple terms explaining complex phenomena, even when things get simplified beyond usefulness. That’s what happened here: we simplified “quality, speed, cost - pick 2” down to “speed or quality” because in software, time implies developer time, which is very much money. But “quality, speed, cost - pick 2” is itself a simplification of the iron triangle concept from project management, where the quality of a project is constrained by scope, time, and cost. Notice that the triangle itself doesn’t include quality; quality is what’s defined by trading off those three constraints. The triangle emerged quite a while back, outside of the software context, and was quickly recognized as too simplistic and replaced by the theory of constraints. So our quality vs speed tradeoff is a simplification of a simplification of a simplification. And it’s not even true to begin with! But hey, it seems plausible and saying it makes you appear clever…

In reality, when building software, quality and speed go hand in hand. They’re inseparable, like a young couple who just signed a joint mortgage (in this economy...). When quality goes down, so does speed. High speed correlates with high quality. I’m not making this up - this has been a consistent, albeit often overlooked, result of each year’s DORA report. And while the report has its problems, this particular finding is hard to fudge. Teams that report short lead time for changes and frequent deployments (that’s your speed) also report less rework and faster time to resolution (that’s your quality). Miracle! Surely they must cost a lot. Except we also know the opposite is true - it’s the big projects suffering from quality issues that end up expensive. Time is indeed money.
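To make the DORA terms concrete, here’s a minimal sketch of how a team might compute its own lead time and rework rate from deployment records. The data and field names are entirely made up for illustration; real measurement would pull from your version control and incident tooling.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: when the change was committed, when it
# reached production, and whether it later needed rework (bug fix, rollback).
deploys = [
    {"committed": datetime(2024, 5, 1, 9, 0),
     "deployed": datetime(2024, 5, 1, 15, 0), "needed_rework": False},
    {"committed": datetime(2024, 5, 2, 10, 0),
     "deployed": datetime(2024, 5, 2, 13, 0), "needed_rework": True},
    {"committed": datetime(2024, 5, 3, 11, 0),
     "deployed": datetime(2024, 5, 3, 12, 0), "needed_rework": False},
]

# Speed: median time from commit to running in production.
lead_times = [d["deployed"] - d["committed"] for d in deploys]
median_lead_time = median(lead_times)

# Quality: share of deployments that had to be reworked
# (roughly what DORA calls the change failure rate).
rework_rate = sum(d["needed_rework"] for d in deploys) / len(deploys)

print(median_lead_time)       # median commit-to-deploy interval
print(f"{rework_rate:.0%}")   # fraction of changes needing rework
```

The point of the DORA finding is that these two numbers tend to move together: teams with short lead times also tend to show low rework rates, not high ones.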

If we, as an industry, collectively know there is no tradeoff between speed and quality, then why won’t this myth die? I have a theory. We’re human: we believe we’re rational and follow logic, but we’re really prone to biases and distortions. The fact that speed and quality correlate when making software is, paradoxically, responsible for the belief that they need to be traded off.

To unpack it, let’s examine the factors that make individual engineers slow to deliver working software. The two most obvious ones are the sheer scope of the work and a lack of familiarity with the technology or the problem domain.

Now, most people without a deep understanding of how software is made - and this includes a lot of company executives out there - tend to stop here. They start looking for solutions to these two very obvious problems. So we end up with training and hiring. But mostly hiring. We hire people who are supposed to be skilled in a given technology and problem space, and we hire more people to spread the scope across more individuals. Does that work? To an extent, it does. And it causes other problems. That’s because scope and lack of familiarity are just the tip of the iceberg. Everyone who has done even a bit of hands-on work in software knows there are more things that slow us down: dependencies on other people, requirements that keep changing, accumulated tech debt, out-of-band requests and interruptions, a code base that keeps growing.

If we take all of this in, it’s a lot - a lot more than the obvious scope and lack of familiarity we started with. It seems impossible to do any work without hitting at least a few of these obstacles. And what’s especially tragic: trying to remediate any of these points in isolation usually makes the others worse. Adding people to handle the scope introduces more dependencies, since now they have to coordinate. Freezing requirements means accumulating tech debt, since the world still changes and we’ll have to change the software in the future. Limiting out-of-band requests may leave people less familiar with the domain and with the broader code base when future work comes. This is how most companies end up so slow - they look at each problem in isolation, trying to solve them one by one, performing a series of local optimizations that in turn generate more problems elsewhere.

So that’s the speed. What about quality? What are the factors in software development that result in lower quality?

…hold on! Is that the same list as the one above? Yes, yes it is. Anything that increases uncertainty and complexity will lead to more mistakes. In fact, I could have reduced the list to just those two - uncertainty and complexity - but where’s the fun in that? Any mess in software development is due to one of those two, and to our tragic attempts to mitigate or control them.

Here we can clearly see that the mistakes caused by the same factors that slow us down will force us to choose: do we spend time fixing the problems, or simply move on to save time? This is the illusory tradeoff between quality and speed. It’s illusory because it’s a vicious cycle. Bugs and tech debt slow down adding new features and making changes, as well as fixing future bugs and removing future tech debt. But once we’re already terribly slow, we perceive the time spent improving quality as time not spent on delivery, and therefore as yet another factor slowing us down. We start seeing quality as a time sink that could be traded off for a little more speed. Sadly, sacrificing quality is a local, short-term optimization that buys us nothing on the scale of our software’s lifetime.

But I have good news. As an industry, we have already figured out how to address the core issues that both slow us down and reduce the quality of our software. We already know how to mitigate the uncertainty and complexity that are at the root of it.

All of this should be very familiar to you. It’s the whole agile movement (which is over 20 years old at this point!) plus DevOps, which is its natural extension (and not much younger than agile). If you sprinkle in microservices (roughly 15 years old), I won’t be mad - that’s DevOps plus modular architecture plus a business-centric approach to scope breakdown.

Notice how most of these solutions assume that the unit producing output is a software team rather than an individual engineer. That changes a lot. As soon as the scope is bigger than one person, we really should ditch trying to measure the output of individuals, except perhaps to adjust who’s on which team. Focusing on individuals means hyperlocal optimizations that always lead to suboptimal results.

Another thing to notice in the setup above is that you get quality pretty much for free. In fact, you’d need a special talent to write buggy software under these conditions. (Aside: I’ve had the misfortune of encountering one or two such talented engineers in my past, but even then, the impact of their special talent was reduced to zero by their team.) And instead of quality, you can insert any -ility: reliability, security, even accessibility - it’s easy to plug any of these into the system at no extra cost and very little slowdown, if any. That’s because, as we’ve established, low quality and slow delivery share the same root causes - complexity and uncertainty in their many flavors, as described earlier - so solutions that address those root causes will dissolve both the slowness and the low quality. That’s the beauty of systemically fixing root causes instead of symptoms.

Ok then, if we have all these wonderful tools at our disposal to make software teams go faster and deliver higher quality, why isn’t every software organization working this way? Why do startups still talk as if quality comes at the cost of speed and you’re supposed to break stuff if you want to move fast? The problem is that the remedies I’ve listed are an all-or-nothing proposition. Adopting them piecemeal doesn’t make things better; on the contrary, it often introduces more complexity by adding friction at the boundaries. But if you already have a lot of people, each with their own vision and political standing, who have invested in locally optimized solutions to deal with uncertainty and complexity, it’s really hard to adopt all these practices wholesale. The same goes for when you’re starting a new company and feel immense pressure not to waste any time. In that case, stopping to think about how to structure your work, and investing in anything that pays off long term at a minor short-term speed loss, seems perilous. Which means the majority of software companies will stay stuck with a suboptimal setup, paying the price in slowness and quality and struggling to break free of the vicious cycle.

I see some hope on the horizon, though. At the risk of sounding like another one of those kooky, out-of-touch tech executives, I see a chance that AI will save us. Not in the way current AI companies peddle to investors, but indirectly. Much like COVID forced many crusty organizations to digitize at a previously unthinkable speed, AI may well force other crusty organizations to adopt the modern ways of working I’ve described, because otherwise they won’t see its benefits. But that feels like another article I should probably write, so let me stop here.