Coding is the main practice that identifies us as working on a software project: actually entering instructions in a language the computer will understand, be it Java, C++, Matlab, Excel or whatever. It is the transfer of the ideas in your head into steps the computer can follow, or, as Wikipedia has it:
“…actual writing of source code.” – Wikipedia, Computer Programming
Often this is also called “programming”, “hacking” or “development”, although that last term tends to connote more than just programming work, such as Requirements Capture or Documentation - but we consider those separately on different pages.
In Development Process we introduced the following diagram to show what is happening when we do some coding. Let’s generalize a bit from this diagram:
… and so on. As usual, the advice is to reduce risk in the most meaningful way possible, all the time. This might involve coding or it might not.
Since the focus of this site is on software methodologies, you shouldn’t be surprised to learn that all of the methodologies use Coding as a central focus.
Most commonly, the reason we are Coding is the same as the one in the Development Process page: we want to put features in the hands of our customers.
That is, we believe our clients don’t have the features they need to see in the software, and we have Feature Risk.
By coding, we are mitigating Feature Risk in exchange for Complexity Risk in terms of the extra code we now have on the project, and Schedule Risk, because by spending time or money coding we now have less time or money to do other things. Bill Gates said:
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” - Bill Gates
And you can see why this is true: the more code you write, the more Complexity Risk you now have on the project, and the more Dead End Risk you’ve picked up in case it’s wrong. This is why The Agile Manifesto stresses:
“Simplicity - the art of maximizing the amount of work not done - is essential.” - Agile Manifesto
Users often have trouble conceiving of what they want in software, let alone explaining that to developers in any meaningful way.
Let’s look at how that can happen.
Imagine, for a moment, that there were such a thing as The Perfect Product, and a User wants to build it with a Coder:
The problem here is that this is a very protracted feedback loop. This is mitigated by prototyping, because that’s all about shortening the feedback loop as far as possible:
One assumption of Prototyping is that Users can iterate towards The Perfect Product. But it might not be so: the Conceptual gap between their own ideas and what they really need might prove too great.
After all, bridging this gap is the job of the Designer:
“It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” — Steve Jobs
The SkunkWorks approach is one small step up from Prototyping. Wikipedia describes this as:
“A group within an organization given a high degree of autonomy and unhampered by bureaucracy, with the task of working on advanced or secret projects.” - Wikipedia, Skunkworks Project
To give some idea of the Conceptual Integrity Risk involved: initially the team was building a tablet using the multi-touch technology that the iPhone introduced to the world, but it pivoted towards the phone after the failure of the “Apple Phone” collaboration with Motorola.
Scott Forstall picked a small, secret team from within the ranks of Apple. By doing this, he mitigated Communication Risk and Coordination Risk within his team, but having fewer people in the room meant more Throughput Risk.
By having more people involved, the feedback loop will be longer than the two-man prototyping team, but that’s the tradeoff you get for mitigating those other risks.
In large companies, this is called Silo Mentality - the tendency for lines of business to stop communicating and sharing with one another. As you can imagine, this leads to a more Complex and bureaucratic structure than would be optimal.
But this can happen within a single coding team, too: by splitting up and working on different pieces of functionality within a project, the team specialises and becomes expert in the parts it has worked on. This means the team members have different Internal Models of the codebase.
This is perfectly normal: we need people to have different opinions and points-of-view. We need specialisation: it's how humanity has ended up on top. It's better to have a team who, between them all, know a codebase really well, than a group of people who each know it poorly.
The downside of specialization is Coordination Risk:
Pair Programming, however, combines review with the process of coding: there are now two heads at each terminal. What does this achieve?
But, conversely, there is a cost to Pair Programming:
Mob Programming goes one stage further and suggests that we can write better software with even more people around the keyboard. So, what’s the right number? Clearly, the usual trade-off applies: are you mitigating more risk than you’re gaining?
Pairing and Mobbing as mitigations to Coordination Risk are easiest when developers are together in the same room. But it doesn't always work out like this: teams spread across different locations and timezones naturally have less communication bandwidth, and you will see more issues with Coordination Risk.
In the extreme, I’ve seen situations where the team at one location has decided to “suck up” the extra development effort themselves rather than spend time trying to bring a new remote team up-to-speed. More common is for one location to do the development, while another gets the Support duties.
There are some mitigations here: video chat, moving staff from location to location for face-time, frequent show-and-tells, or simply modularizing across geographic boundaries, in respect of Conway's Law:
“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” - M. Conway
When we add Outsourcing into the mix, we also have to consider Agency Risk: the consultancy you've hired is definitely more interested in keeping itself solvent than in solving your business problems.
As team sizes grow, Coordination Risk grows fast.
To see this, let’s consider a made-up situation where all the developers are equal, and we can mitigate Coordination Risk at the cost of a 1-hour presentation each per week.
How many man-hours of presentations do we need?
| Team Size | Hours Of Presentations | Man-Hours In Presentations |
|-----------|------------------------|----------------------------|
| 2         | 2                      | 4                          |
| 3         | 3                      | 9                          |
| 4         | 4                      | 16                         |
| 5         | 5                      | 25                         |
| 6         | 6                      | 36                         |
| 7         | 7                      | 49                         |
Adding the 7th person to the team (ceteris paribus) does absolutely nothing for productivity - it makes matters worse. Assuming everyone works a 40-hour week, the 49 man-hours now spent in presentations exceed the 40 hours the new team member contributes: we're 9 hours worse off than before.
This is a toy example, but is reality better or worse than this? If the new developers are arriving on an existing project, then one hour per week of training by the existing team members might well be conservative.
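The arithmetic of the toy example is easy to sketch. Assuming, as above, that each developer gives a one-hour weekly presentation which the whole team attends, the overhead grows with the square of the team size (the function names here are invented for illustration):

```python
def presentation_cost(team_size: int) -> int:
    """Man-hours spent in presentations per week: each of the
    team_size developers gives a 1-hour talk attended by everyone."""
    return team_size * team_size


def net_weekly_hours(team_size: int, working_week: int = 40) -> int:
    """Productive man-hours left after the presentation overhead."""
    return team_size * working_week - presentation_cost(team_size)


if __name__ == "__main__":
    for n in range(2, 9):
        print(n, presentation_cost(n), net_weekly_hours(n))
```

On this model, seven developers spend 49 man-hours a week in presentations - 9 more than the entire 40-hour week the seventh person contributes.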
This is why we get Brooks’ Law:
“adding human resources to a late software project makes it later”. - Fred Brooks, The Mythical Man-Month
Sometimes, you have too many developers on a project. This is not a blessing. As with Student Syndrome, having too many resources means that:
“Work expands so as to fill the time available for its completion” - Parkinson’s Law
One of the reasons for this is that Developers love to develop and it is, after all, their job. If they aren’t developing, then are they still needed? This is Agency Risk: people who are worried about their jobs will often try to look busy, and if that means creating some drama on the project, then so be it.
Sadly, this usually occurs when a successful project is nearing delivery. Ideally, you want to be decreasing the amount of change on a project as it gets closer to key Delivery Dates. This is because the risk of Missing the Date is greater than the risk of some features not being ready.
This can require some guts to do: you have to overcome your own ego (wanting to run a big team) for the sake of your project.
One of the key ways to measure whether your team is doing useful work is to look at whether, in fact, it can be automated. And this is the spirit of DevOps - the idea that people in general are poor at repeatable tasks, and anything people do repeatedly should be automated.
Since this is a trade-off, you have to be careful about how you weigh the Process Risk: unlike the cost of automating, which is paid now, the Process Risk extends into the future. You are making a bet that acting now will pay off in decreased Process Risk over the lifetime of the project, and this is a hard thing to judge.
In general, unless the problem is somehow specific to your circumstances, it may well be better to skip direct coding and pick up some new tools to help with the job.
Tools in general are good and worth using if they offer you a better risk return than you would have had from not using them.
But this is a low bar - some tools offer amazing returns on investment. The Silver Bullets article describes some of these in general terms:
A really good tool offers such advantages that not using it becomes unthinkable: Linux is heading towards this point. For Java developers, the JVM is there already.
Picking new tools and libraries should be done very carefully: you may be stuck with your choices for some time. Here is a short guide that might help.
The term “refactoring” probably stems from the mathematical concept of [Factorization](https://en.wikipedia.org/wiki/Factorization). Factorizing polynomial equations or numbers means identifying and making clear their distinct components.
Most coders use the phrase “refactoring” and intuitively understand what it means. It shouldn't have been hard to find a clear explanation for this page, but sadly it was: there are some very woolly definitions of “refactoring” around, such as:
“Refactoring (n): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.” - Refactoring.com
What do “easier to understand” (which makes sense) and “cheaper to modify” actually mean? Let's try to be more specific about what Refactoring is aiming at.
On Refactoring, Kent Beck says:
“If a programmer sees a one-minute ugly way to get a test working and a ten-minute way to get it working with a simpler design, the correct choice is to spend the ten minutes.” - Kent Beck, Extreme Programming Explained
This is a bold and not necessarily true assertion - how does that ratio stack up when applied to hours or days? But you can see how it's motivated: Kent is saying that the nine extra minutes of Schedule Risk are nothing compared to the carrying cost of Complexity Risk on the project.
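To make the factorization analogy concrete, here is a minimal sketch (the pricing functions and their rules are invented for illustration, not taken from any real codebase): two near-duplicate functions share a hidden common component, and refactoring "factors it out" without changing observable behaviour:

```python
# Before: the same discount rule is buried in two places.
def price_with_vat(net: float) -> float:
    return round(net * 0.9 * 1.2, 2) if net > 100 else round(net * 1.2, 2)


def price_without_vat(net: float) -> float:
    return round(net * 0.9, 2) if net > 100 else round(net, 2)


# After: the shared component is factored out and made explicit.
def apply_discount(net: float) -> float:
    """Orders over 100 get a 10% bulk discount."""
    return net * 0.9 if net > 100 else net


def price(net: float, vat_rate: float = 0.2) -> float:
    return round(apply_discount(net) * (1 + vat_rate), 2)


# Observable behaviour is unchanged:
assert price(200) == price_with_vat(200)
assert price(200, vat_rate=0) == price_without_vat(200)
assert price(50) == price_with_vat(50)
```

Like factorizing a polynomial, nothing the caller sees has changed, but the distinct components of the calculation are now visible and individually modifiable - which is what "cheaper to modify" is getting at.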