Analogies For Complexity

So, we've looked at some measures of software structure complexity: we can say "this is more complex than that" for a given piece of code or structure. We've also looked at three ways to manage it: Abstraction, Modularisation and Dependencies.

However, we've not really said why complexity entails Risk. So let's address that now by looking at three analogies: Mass, Technical Debt and Mess.

Complexity is Mass

The first way to look at complexity is as Mass: a software project with more complexity has greater mass than one with less complexity.

Newton's Second Law states:

F = ma (Force = Mass × Acceleration)

That is, in order to move your project somewhere new and make it do new things, you need to give it a push. The more mass it has, the more Force you'll need to move (accelerate) it.

You could stop here and say that the more lines of code a project contains, the greater its mass. And that makes sense because in order to get it to do something new you're likely to need to change more lines.

But there is actually some underlying sense in which this is true in the real, physical world too, as discussed in a Veritasium video. To paraphrase:

"Most of your mass you owe due to E=mc², you owe to the fact that your mass is packed with energy because of the interactions between these quarks and gluon fluctuations in the gluon field... what we think of as ordinarily empty space... that turns out to be the thing that gives us most of our mass." - Your Mass is NOT From the Higgs Boson, Veritasium

I'm not an expert in physics at all, so there is every chance that I am pushing this analogy too hard. But, by substituting pieces of software for quarks and gluons, we can (in a very hand-wavy way) say that more connected software has more interactions going on, and therefore has more mass than simple software.

If we want to move fast we need simple code-bases.

At a basic level, Complexity Risk heavily impacts Schedule Risk: more complexity means you need more force to get things done, which takes longer.

Technical Debt

The most common way we talk about Complexity Risk in software is as Technical Debt:

"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organisations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise." - Ward Cunningham, 1992, Wikipedia, Technical Debt

Building a low-complexity first-time solution is often a waste: in the first version, we're usually interested in reducing Feature Fit Risk as fast as possible. That is, putting working software in front of users to get feedback. We would rather carry Complexity Risk than take on more Schedule Risk.

So a quick-and-dirty, over-complex implementation mitigates the same Feature Fit Risk and allows you to Meet Reality faster.

But having mitigated the Feature Fit Risk this way, you are likely exposed to a higher level of Complexity Risk than would be desirable. This "carries forward" and means that in the future, you're going to be slower. As in the case of a real debt, "servicing" the debt incurs a steady, regular cost.
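To make this "interest" concrete, here is a toy model of the debt analogy. Everything here is invented for illustration (the capacity, the rates, the very idea that debt servicing is a fixed fraction of accumulated complexity): it simply shows how carrying complexity forward eats into each subsequent iteration.

```python
# Illustrative only: a toy model of "interest" on technical debt.
# All numbers and names are invented for the sake of the analogy.

def features_delivered(iterations, capacity=10.0, debt=0.0,
                       debt_per_feature=0.1, interest_rate=0.2):
    """Each iteration, servicing existing debt consumes part of the
    team's capacity; the remainder delivers features, and each
    feature adds a little more complexity (debt) to carry forward."""
    total = 0.0
    for _ in range(iterations):
        servicing = debt * interest_rate          # capacity lost to working around complexity
        productive = max(capacity - servicing, 0.0)
        total += productive
        debt += productive * debt_per_feature     # each feature adds a little complexity
    return total

# A debt-free team would deliver 10 * 10 = 100 units over ten
# iterations; carrying debt forward, the same team delivers less.
print(features_delivered(10))
```

The point of the sketch is the shape of the curve, not the numbers: because the "interest" is charged on everything built so far, the drag on each iteration grows steadily unless some capacity is spent paying the debt down.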

Kitchen Analogy

It’s often hard to make the case for minimising Technical Debt: it can feel like there are more important priorities, especially when technical debt can be “swept under the carpet” and forgotten about until later. (See Discounting.)

One helpful analogy I have found is to imagine your code-base is a kitchen. After preparing a meal (i.e. delivering the first implementation), you need to tidy up the kitchen. This is just something everyone does as a matter of basic sanitation.

Now of course, you could carry on with the messy kitchen. When tomorrow comes and you need to make another meal, you find yourself needing to wash up saucepans as you go, or working around the mess by using different surfaces to chop on.

It's not long before someone comes down with food poisoning.

Complexity Risk and its implications

We wouldn't tolerate this behaviour in a restaurant kitchen, so why put up with it in a software project? This state of affairs is illustrated in the above diagram. Not only does Complexity Risk slow down future development, it can also be a cause of Operational Risks and Security Risks.

Feature Creep

In Brooks' essay "No Silver Bullet - Essence and Accident in Software Engineering", a distinction is made between:

  • Essence: the difficulties inherent in the nature of the software.
  • Accident: those difficulties that attend its production but are not inherent. - Fred Brooks, No Silver Bullet

The problem with this definition is that it treats the features of our software as essential, as a given.

Applying Risk-First, if you want to mitigate some Feature Fit Risk then you have to pick up Complexity Risk as a result. But, that's a choice you get to make.

Mitigating Feature Risk

Therefore, Feature Creep (or Gold Plating) is a failure to observe this basic equation: instead of considering the trade-off, you're building every feature possible. This will increase Complexity Risk.

Sometimes, feature-creep happens because either managers feel they need to keep their staff busy, or the staff decide on their own that they need to keep themselves busy. This is something we'll return to in Agency Risk.

Complexity Dead Ends

Imagine a complex software system composed of many sub-systems. Let's say that the Accounting sub-system needed password protection (so you built this). Then the team realised that you needed a way to change the password (so you built that). Then, you needed to have more than one user of the Accounting system so they would all need passwords (OK, fine).

Finally, the team realises that actually authentication would be something that all the sub-systems would need, and that it had already been implemented more thoroughly by the Approvals sub-system.

At this point, you realise you're in a Dead End:

  • Option 1: Continue. You carry on making minor incremental improvements to the accounting authentication system (carrying the extra Complexity Risk of the duplicated functionality).
  • Option 2: Merge. You rip out the accounting authentication system and merge in the Approvals authentication system, consuming lots of development time in the process, due to the difficulty in migrating users from the old to new way of working. There is Implementation Risk here.
  • Option 3: Remove. You start again, trying to take into account both sets of requirements at the same time, again, possibly surfacing new hidden Complexity Risk due to the combined approach. Rewriting code can seem like a way to mitigate Complexity Risk but it usually doesn't work out too well. As Joel Spolsky says:

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. - Things You Should Never Do, Part 1, Joel Spolsky

Whichever option you choose, this is a dead end because, with hindsight, it would probably have been better to implement authentication in a common way once. But it's hard to see these dead ends up-front because of the complexity of the system in front of you.
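The duplication at the heart of this dead end can be sketched in miniature. This is a hypothetical illustration (the class and method names are invented, not taken from any real system): two sub-systems each grew their own copy of the same authentication logic, and the "Merge" option amounts to extracting one shared component that both depend on.

```python
# Hypothetical sketch of the dead end above. All names are invented.

class AccountingAuth:
    """Grew incrementally: passwords, then change-password, then multi-user."""
    def __init__(self):
        self._passwords = {}
    def check(self, user, password):
        return self._passwords.get(user) == password

class ApprovalsAuth:
    """Implemented separately, and more thoroughly, by another team -
    duplicating the logic above and carrying extra Complexity Risk."""
    def __init__(self):
        self._passwords = {}
    def check(self, user, password):
        return self._passwords.get(user) == password

# The "Merge" option: one shared service every sub-system depends on,
# removing the duplicated functionality (at a migration cost).
class SharedAuth:
    def __init__(self):
        self._passwords = {}
    def register(self, user, password):
        self._passwords[user] = password
    def check(self, user, password):
        return self._passwords.get(user) == password

auth = SharedAuth()
auth.register("alice", "s3cret")
assert auth.check("alice", "s3cret")
```

Of course, in a real system the two implementations would have diverged in subtle ways, which is exactly why the Merge option carries Implementation Risk: migrating users from one to the other is rarely as mechanical as this sketch suggests.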

Avoiding Dead Ends

Working in a complex environment makes it harder to see developmental dead ends.

Sometimes, the path across the Risk Landscape will take you to dead ends, and the only benefit to be gained is experience. No one deliberately chooses a dead end. Often you can take an action that doesn't pay off, but frequently the dead end appears from nowhere: it's a Hidden Risk. The source of a lot of this hidden risk is the complexity of the risk landscape.

Version Control Systems like Git are a useful mitigation for dead ends, because using them means that, at the very least, you can go back to the point where you made the bad decision and take a different path. Additionally, they mitigate the Reliability Risk of hard-disk failure.