What Is It
Coding is the main practice that identifies us as working on a software project: actually entering instructions in a language that the computer will understand, be it Java, C++, MATLAB, Excel or whatever. It is transferring the ideas in your head into steps that the computer can understand or, as Wikipedia has it:
“…actual writing of source code.” – Wikipedia, Computer Programming
Often, this is also called “programming”, “hacking” or “development”, although the latter term tends to connote more than just programming work, such as Requirements Capture or Documentation; we’re considering those separately on different pages.
How It Works
In Development Process we introduced the following diagram to show what is happening when we do some coding. Let’s generalize a bit from this diagram:
- We start with a Goal In Mind to implement something.
- We build an Internal Model of how we’re going to meet this goal (through coding, naturally).
- Then, we find out how well our idea stands up when we Meet Reality and try it out in our code-test-run-debug cycle.
- As we go, the main outcome is that we change reality, and create code, but along the way, we discover where our Internal Model was wrong, in the form of surfacing Hidden Risks.
There are lots of reasons why we might want to code:
- To Build or improve some features which our clients will find useful. - Feature Risk
- To Automate some process that takes too long or is too arduous. - Process Risk
- To Explore how our tools, systems or dependencies work (also called Hacking). - Dependency Risk
- To Refactor our codebase, to reduce complexity. - Complexity Risk
- To Clarify our product, making our software more presentable and easier to understand. - Communication Risk
… and so on. As usual, the advice is to reduce risk in the most meaningful way possible, all the time. This might involve coding or it might not.
Where It’s Used
Since the focus of this site is on software methodologies, you shouldn’t be surprised to learn that all of the methodologies use Coding as a central focus.
Most commonly, the reason we are Coding is the same as the one in the Development Process page: we want to put features in the hands of our customers.
That is, we believe our clients don’t have the features they need to see in the software, and we have Feature Risk.
By coding, we are mitigating Feature Risk in exchange for Complexity Risk in terms of the extra code we now have on the project, and Schedule Risk, because by spending time or money coding we now have less time or money to do other things. Bill Gates said:
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” - Bill Gates
And you can see why this is true: the more code you write, the more Complexity Risk you now have on the project, and the more Dead End Risk you’ve picked up in case it’s wrong. This is why The Agile Manifesto stresses:
“Simplicity - the art of maximizing the amount of work not done - is essential.” – Agile Manifesto
Users often have trouble conceiving of what they want in software, let alone explaining that to developers in any meaningful way.
Let’s look at how that can happen.
Imagine, for a moment, that there is such a thing as The Perfect Product, and a User wants to build it with a Coder:
- The Perfect Product might be conceptually elusive, and it might take several attempts for the User to find its form. Conceptual Integrity Risk
- It might be hard for the User to communicate the idea of it in writing or words: where do the buttons go? What do they do? What are the key abstractions? Communication Risk
- It might be hard too, for the Coder to work with this description. Since his Internal Model is different from the User’s, they have different ideas about the meaning of what the User is communicating. Communication Risk
- Then, implementing the idea of whatever is in the Coder’s Internal Model takes effort, and therefore involves Schedule Risk.
- Finally, we have a feedback loop, so the User can improve their Internal Model and see the previously unforeseen Hidden Risks.
- Then, you can go round again.
The problem here is that this is a very protracted feedback loop. This is mitigated by prototyping, because that’s all about shortening the feedback loop as far as possible:
- By working together, you mitigate Communication Risk.
- By focusing on one or two elements (such as UI design), you can minimize Schedule Risk.
- By having a tight feedback loop, you can focus on iteration, try lots of ideas, and work through Conceptual Integrity Risk.
One assumption of Prototyping is that Users can iterate towards The Perfect Product. But it might not be so: the Conceptual gap between their own ideas and what they really need might prove too great.
After all, bridging this gap is the job of the Designer:
“It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” — Steve Jobs
The SkunkWorks approach is one small step up from Prototyping. Wikipedia describes this as:
A group within an organization given a high degree of autonomy and unhampered by bureaucracy, with the task of working on advanced or secret projects
To give some idea of the Conceptual Integrity Risk involved: initially, the team were building a tablet using the multi-touch technology that the iPhone later introduced to the world, but pivoted towards a phone after the failure of the “Apple Phone” collaboration with Motorola.
Scott Forstall picked a small, secret team from within the ranks of Apple. By doing this, he mitigated Communication Risk and Coordination Risk within his team, but having fewer people in the room meant more Throughput Risk.
By having more people involved, the feedback loop is longer than with the two-person prototyping team, but that’s the trade-off you make for mitigating those other risks.
In large companies, this is called Silo Mentality - the tendency for lines of business to stop communicating and sharing with one another. As you can imagine, this leads to a more Complex and bureaucratic structure than would be optimal.
But this can happen within a single coding team, too: by splitting up and working on different pieces of functionality within a project, the team specialises and becomes expert in the parts it has worked on. This means the team members have different Internal Models of the codebase.
This is perfectly normal: we need people to have different opinions and points-of-view. We need specialisation, it’s how humanity has ended up on top. It’s better to have a team who, between them all, know a codebase really well, than a group of people who know it poorly.
The downside of specialization is Coordination Risk:
- If your payroll expert is off ill for a week, progress on that stops.
- Work is rarely evenly spread out amongst the different components of a project for long.
- If work temporarily dries up on a specific component, what do the component owners do in the meantime?
- What if the developer of a particular component makes the wrong assumptions about other parts of the system or tool-set?
Pair Programming / Mob Programming
Pair Programming, however, combines review with the process of coding: there are now two heads at each terminal. What does this achieve?
- Clearly, we mitigate Key-Man Risk as we’ve got two people doing every job.
- Knowledge is transferred at the same time, too, mitigating Specialist Risk.
- Proponents also argue that this mitigates Complexity Risk, as the software will be better quality.
- Since the pair spend so much time together, the communication is very high bandwidth, so this mitigates Communication Risk.
But, conversely, there is a cost to Pair Programming:
- Having two people doing the job one person could do introduces Schedule Risk.
- Could the same Complexity Risk be mitigated just with more regular Code Reviews?
- Sometimes, asking members of a team to work so closely together is a recipe for disaster. Team Risk
- Not every pair programmer “shares” the keyboard time evenly, especially if ability levels aren’t the same.
- There is only one Feedback loop, so despite the fact you have two people, you can only Meet Reality serially.
Mob Programming goes one stage further and suggests that we can write better software with even more people around the keyboard. So, what’s the right number? Clearly, the usual trade-off applies: are you mitigating more risk than you’re gaining?
Offshoring / Remote Teams
Pairing and Mobbing as mitigations to Coordination Risk are easiest when developers are together in the same room. But it doesn’t always work out like this. Teams spread in different locations and timezones naturally don’t have the same communication bandwidth and you will have more issues with Coordination Risk.
In the extreme, I’ve seen situations where the team at one location has decided to “suck up” the extra development effort themselves rather than spend time trying to bring a new remote team up-to-speed. More common is for one location to do the development, while another gets the Support duties.
There are some mitigations here: video chat, moving staff from location to location for face-time, frequent show-and-tells, or simply modularizing across geographic boundaries, in respect of Conway’s Law:
“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” - M. Conway
When we add Outsourcing into the mix, we also have to consider Agency Risk: the consultancy you’ve hired is definitely more interested in keeping themselves solvent than solving your business problems.
As team sizes grow, Coordination Risk grows fast.
To see this, let’s consider a made-up situation where all the developers are equal, and we can mitigate Coordination Risk at the cost of each developer giving a one-hour presentation to the whole team every week.
How many man-hours of presentations do we need?
|Team Size|Hours Of Presentations|Man-Hours In Presentations|
|---|---|---|
|1|1|1|
|2|2|4|
|3|3|9|
|4|4|16|
|5|5|25|
|6|6|36|
|7|7|49|
Adding the 7th person to the team (ceteris paribus) does nothing for productivity; in fact, it makes matters worse. Assuming everyone works a 40-hour week, the seventh person adds 40 hours of capacity, but the team now spends 49 man-hours in presentations: we’re 9 hours worse off than before.
This is a toy example, but is it better or worse than this in reality? If the new developers are arriving on an existing project, then 1 hour-per-week of training by the existing team members might be conservative.
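The toy model can be sketched in a few lines of Python (a hypothetical calculation, using the n-squared presentation cost the arithmetic above implies):

```python
# Toy model: each of n developers gives a one-hour presentation to the
# whole team every week, so coordination costs n * n man-hours weekly.
def coordination_cost(team_size: int) -> int:
    """Weekly man-hours the whole team spends in presentations."""
    return team_size * team_size

def hours_gained(week_hours: int = 40) -> int:
    """Gross capacity one extra developer adds per week."""
    return week_hours

for n in range(1, 8):
    print(f"team of {n}: {coordination_cost(n)} man-hours of presentations")

# The 7th developer adds 40 hours, but the presentation bill is now
# 49 man-hours: 9 hours worse off, as the text says.
print(hours_gained() - coordination_cost(7))  # -9
```

The quadratic growth is the point: the cost of coordination rises much faster than the capacity each new team member brings.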
This is why we get Brooks’ Law:
“adding human resources to a late software project makes it later”. - Fred Brooks, The Mythical Man-Month
Too Many Cooks
Sometimes, you have too many developers on a project. This is not a blessing. As with Student Syndrome, having too many resources means that:
“Work expands so as to fill the time available for its completion” - Parkinson’s Law
One of the reasons for this is that Developers love to develop and it is, after all, their job. If they aren’t developing, then are they still needed? This is Agency Risk: people who are worried about their jobs will often try to look busy, and if that means creating some drama on the project, then so be it.
Sadly, this usually occurs when a successful project is nearing delivery. Ideally, you want to be decreasing the amount of change on a project as it gets closer to key Delivery Dates. This is because the risk of Missing the Date is greater than the risk of some features not being ready.
This can require some guts to do: you have to overcome your own ego (wanting to run a big team) for the sake of your project.
One of the key ways to measure whether your team is doing useful work is to look at whether, in fact, it can be automated. And this is the spirit of DevOps - the idea that people in general are poor at repeatable tasks, and anything people do repeatedly should be automated.
Since this is a trade-off, you have to be careful about how you weigh the Process Risk: it is carried into the future, week after week, for as long as the process survives.
You are making a bet that acting now will pay off in decreased Process Risk over the lifetime of the project. This is a hard thing to judge:
- How much Process Risk are we carrying, week-to-week? (A good way to answer this is to look at past failures).
- How much Complexity Risk will we pick up?
- How much Schedule Risk (in spent developer effort) will we pick up?
- How long will the mitigation last before the process changes again?
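One way to make that judgment concrete is a simple break-even estimate. All the numbers and names below are made-up assumptions for illustration, not figures from the text:

```python
def break_even_weeks(build_hours: float,
                     manual_hours_per_week: float,
                     automated_hours_per_week: float = 0.0) -> float:
    """Weeks until the effort spent automating pays for itself."""
    saved_per_week = manual_hours_per_week - automated_hours_per_week
    if saved_per_week <= 0:
        raise ValueError("the automation saves no time at all")
    return build_hours / saved_per_week

# e.g. spending 20 hours scripting a 2-hour weekly release chore
# pays for itself after 10 weeks - if the process lasts that long.
print(break_even_weeks(20, 2))  # 10.0
```

The last question in the list above is the dangerous one: if the process changes again before the break-even point, the automation effort was Schedule Risk spent for nothing.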
In general, unless the problem is somehow specific to your circumstances, it may well be better to skip direct coding and pick up some new tools to help with the job. But tools bring their own risks:
- New Dependency Risk on the new tool.
- Communication Risk because now the team has to understand the tool.
- Schedule Risk in the time it takes to learn and integrate the tool.
- Complexity Risk because your project necessarily becomes more complex for the addition of the tool.
Tools in general are good and worth using if they offer you a better risk return than you would have had from not using them.
But this is a low bar - some tools offer amazing returns on investment. The Silver Bullets article describes some of these in general terms:
- Garbage Collection
- Type Systems
- Build Tools
A really good tool offers such advantages that not using it becomes unthinkable: Linux is heading towards this point. For Java developers, the JVM is there already.
Picking new tools and libraries should be done very carefully: you may be stuck with your choices for some time. Here is a short guide that might help.
The term “refactoring” probably stems from the mathematical concept of [Factorization](https://en.wikipedia.org/wiki/Factorization). Factorizing polynomial expressions or numbers means identifying their distinct components and making them explicit: for example, x^2 + 3x + 2 factorizes as (x + 1)(x + 2).
Most coders use the term “refactoring” and intuitively understand what it means. Finding a clear definition for this page should have been easy, but sadly it wasn’t. There are some very woolly definitions of “refactoring” around, such as:
“Refactoring (n): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior” – Refactoring.com
“Easier to understand” makes sense, but what does “cheaper to modify” mean? Let’s try to be more specific. With Refactoring, we are trying to:
- Mitigate Communication Risk by making the intent of the software clearer. This can be done by breaking down larger functions and methods into smaller ones with helpful names, and naming elements of the program clearly, and
- Mitigate Complexity Risk by employing abstraction and modularization to remove duplication and reduce cross-cutting concerns. By becoming less complex, the code has less Inertia.
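As a minimal sketch of both aims (the function and field names here are invented for illustration):

```python
# Before: the intent is buried in one anonymous loop.
def process(orders):
    result = []
    for o in orders:
        if o["total"] > 100 and o["status"] == "paid":
            result.append(o)
    return result

# After: smaller, well-named pieces state the intent (Communication
# Risk) and give the business rule a single home (Complexity Risk),
# while the observable behaviour stays exactly the same.
def is_large_paid_order(order, threshold=100):
    return order["total"] > threshold and order["status"] == "paid"

def large_paid_orders(orders):
    return [o for o in orders if is_large_paid_order(o)]
```

Both versions return the same result for any input; that unchanged observable behaviour is what distinguishes a refactoring from a functional change.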
On Refactoring, Kent Beck says:
“If a programmer sees a one-minute ugly way to get a test working and a ten-minute way to get it working with a simpler design, the correct choice is to spend the ten minutes.” – Kent Beck, Extreme Programming Explained
This is a bold, not-necessarily-true assertion. How does that ratio stack up when applied to hours or days? But you can see how it’s motivated: Kent is saying that the nine extra minutes of Schedule Risk are nothing compared to the carry cost of Complexity Risk on the project.
Risks Mitigated / Attendant Risks
- Do the riskiest bits first.
- Try and map out the risk landscape
- Examine Boundary Risk and Dead End Risk issues: is this decision going to limit you