Now, let’s look at the everyday process of developing a new feature on a software project, and see how our risk model informs it.
Let’s ignore for now the specifics of what methodology is being used - we’ll come to that later. Let’s say your team has settled on a process something like the following:
Is this a good process? It’s probably not that great: you could add code review, a pilot phase, integration testing, and so on.
Also, the methodology being used might be Waterfall, it might be Agile. We’re not going to commit to specifics at this stage.
For now though, let’s just assume that it works for this project and everyone is reasonably happy with it.
We’re just analyzing what this process gives us.
I am going to argue that this entire process is informed by software risk:
We could skip all those steps above and just do this:
We can all see this might end in disaster, but why?
Let’s look at how our process should act to prevent these risks materializing by considering an unhappy path, one where at the outset, we have lots of Hidden Risks. Let’s say a particularly vocal user rings up someone in the office and asks for new Feature X to be added to the software. It’s logged as a new feature request, but:
The diagram above shows how this plays out.
This is a slightly contrived example, as you’ll see. But let’s follow our feature through the process and see how it slowly meets reality, and how the Hidden Risks are discovered:
The first stage of the journey for the feature is that it meets the Business Analyst (BA). The purpose of the BA is to examine new goals for the project and try to integrate them with reality as they understand it. A good BA might take a feature request and vet it against their Internal Model, saying something like:
In the process of doing this, the BA is turning the simple feature request idea into a more consistent, well-explained specification or requirement which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our Internal Model, we can say that the BA is responsible for:
In surfacing these risks, there is another outcome: while Feature X might be flawed as originally presented, the BA can “evolve” it into a specification, and tie it down sufficiently to reduce the risks. The BA does all this by simply thinking about it, talking to people and writing stuff down.
This process of evolving the feature request into a requirement is the BA’s job. From our Risk-First perspective, it is taking an idea and making it Meet Reality. Not the full reality of production (yet), but something more limited.
The next stage for Feature X is that it gets coded and some tests get written. Let’s look at how our Goal In Mind meets a new reality: this time, the reality of a pre-existing codebase, which has its own internal logic.
As the developer begins coding the feature in the software, they will start with an Internal Model of the software, and how the code fits into it. But, in the process of implementing it, they are likely to learn about the codebase, and their Internal Model will develop.
At this point, let’s stop and discuss the visual grammar of the Risk-First Diagrams we’ve been looking at. A Risk-First diagram shows what you expect to happen when you Take Action. The action itself is represented by the shaded, sign-post-shaped box in the middle. On the left, we have the current state of the world, on the right is the anticipated state after taking the action.
The round-cornered rectangles represent our Internal Model, and these contain our view of Risk, whether those are the risks we face right now or the Attendant Risks we expect after taking the action. In the diagram above, taking the action of “coding and unit testing” is expected to mitigate the risk of “Duplicating Functionality”.
Beneath the internal models, we are also showing real-world tangible artifacts. That is, the physical change we would expect to see as a result of taking action. In the diagram above, the action will result in “New Code” being added to the project, needed for the next steps of the development process.
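To make the “coding and unit testing” action concrete, here is a minimal, entirely hypothetical sketch (the `format_price` function and its behavior are invented for illustration, not taken from the example project). The unit test acts as a net for Hidden Risks: edge cases the developer’s Internal Model didn’t originally include.

```python
# Hypothetical sketch: a small piece of "New Code" plus a unit test.
# The refund (negative amount) case is the kind of Hidden Risk that
# only surfaces when the Goal In Mind meets the reality of testing.

def format_price(pence: int) -> str:
    """Format a price in pence as a pounds string, e.g. 1999 -> '£19.99'."""
    sign = "-" if pence < 0 else ""
    pounds, pp = divmod(abs(pence), 100)
    return f"{sign}£{pounds}.{pp:02d}"

def test_format_price():
    assert format_price(1999) == "£19.99"
    assert format_price(5) == "£0.05"
    # Refunds were not in the original Internal Model - the test is
    # the net that catches this Hidden Risk before production does.
    assert format_price(-250) == "-£2.50"

test_format_price()
```

The point isn’t the code itself: writing the test forces the developer’s Internal Model to meet a small, cheap piece of reality before the expensive ones arrive later in the process.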
Integration is where we run all the tests on the project, and compile all the code in a clean environment, collecting together the work from the whole development team.
So, this stage is about meeting a new reality: the clean build.
At this stage, we might discover the Hidden Risk that we’d break Feature Y.
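In practice, this “clean environment” step is usually automated with a Continuous Integration pipeline. A hypothetical GitHub-Actions-style configuration might look like the following (the job name and `make` targets are assumptions for illustration, not part of the example project):

```yaml
# Hypothetical CI configuration: every push triggers a build of the
# whole codebase in a fresh environment, running the whole team's tests.
name: integration
on: [push]
jobs:
  clean-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fresh checkout - nothing from any one developer's machine leaks in
      - run: make build             # assumed build target: compile everything together
      - run: make test              # run all the tests, including Feature Y's
```

Because the build starts from scratch and runs everyone’s tests together, it is exactly the kind of net that catches the “we’d break Feature Y” risk before users do.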
Next, User Acceptance Testing (UAT) is where our new feature meets another reality: actual users. I think you can see how the process works by now. We’re just flushing out yet more Hidden Risks.
First, the people setting up the development process didn’t know about these exact risks, but they knew the shape that the risks take. The process builds “nets” for the different kinds of Hidden Risks without knowing exactly what they are.
Second, are these really risks, or are they problems we just didn’t know about? I am using the terms interchangeably, to a certain extent. Even when you know you have a problem, it’s still a risk to your deadline until it’s solved. So, when does a risk become a problem? Is a problem still just a schedule-risk, or cost-risk? We’ll come back to this question presently.
Third, the real take-away from this is that all these risks exist because we don’t know 100% how reality is: we don’t (and can’t) have a perfect view of the universe and how it will develop. Reality is just reality; the risks exist only in our heads.
Fourth, hopefully you can see from the above that all of this work is risk management, and that all work is testing ideas against reality.
In the next section, we’re going to look at the concept of Meeting Reality in a bit more depth.