Internal Model Risk
Part Of
Reduced By Practices
- Analysis: The deliberate work of building a better Internal Model.
- Demo: Demos and prototypes are a way of learning about a particular solution to a problem.
- Documentation: Detailed documentation helps manage and understand complex systems.
- Pair Programming: Facilitates knowledge sharing and learning.
- Retrospectives: Examining what went wrong in the past improves the Internal Model of risk for the future.
- Review: Reviews and audits can uncover unseen problems in a system.
- Stakeholder Management: Talking to stakeholders helps to share and socialise Internal Models.
- Training: Provides necessary education to help team members get up to speed.
- User Acceptance Testing: As a feedback mechanism, UAT helps improve understanding of users and their requirements.
Attendant To Practices
- Automated Testing: Unit Testing and code coverage can give false assurances about how a system will work in the real world.
- Automation: Automation of reporting and statuses can lead to false confidence about a system's health.
- Measurement: Focusing on the wrong measures can blind you to what's important.
- Performance Testing: Performance testing might give a false confidence and not reflect real-world scenarios.
As we discussed in the Communication Risk section, our understanding of the world is informed by abstractions we create and the names we give them. Our Internal Models of the world are constructed from these abstractions and their relationships.
So there is a translation going on here: observations about the arrangement of atoms in the world are communicated to our Internal Models and stored as patterns of information (measured in bits and bytes). Therefore, we face Internal Model Risk because we base our behaviour on our Internal Models rather than reality itself. This is "Confusing the Map for the Territory", attributed to Alfred Korzybski:
"Polish-American scientist and philosopher Alfred Korzybski remarked that "the map is not the territory" and that "the word is not the thing", encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people do confuse maps with territories, that is, confuse models of reality with reality itself." - Map-Territory Relation, Wikipedia
Worked Example
A large software firm struggles to hire effectively and feels it spends too many cycles trying to attract the best candidates.
The firm decides to apply machine learning to its hiring process, as shown in the diagram above, figuring that this will cut out much of the manual effort of screening CVs and application forms. It uses internal historical data to score candidates based on who was hired in the past, and then prioritises the highest-scoring candidates for interview. (A minimal sketch of this approach appears after the list below.)
But this is flawed, for the following reasons:
- The training data reflects past hiring decisions, which may not represent the types of candidate the firm wants to attract in the future.
- The approach lacks validation: the new system is never run alongside the manual process to check that the two produce comparable results.
- The firm is likely to suffer diversity problems and hire people mismatched to their roles.
- There may even be reputational or legal consequences as a result.
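To make the flaw concrete, here is a minimal, hypothetical sketch of that screening pipeline in Python. The feature names, data and choice of scikit-learn model are all illustrative assumptions, not a description of any real system:

```python
# A hypothetical sketch of the screening pipeline described above. The
# feature names, data and model choice are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Historical candidates as [years_experience, attended_same_university],
# labelled 1 = hired, 0 = rejected. Note that past hiring correlates
# perfectly with the second feature and not at all with experience.
X_history = [[2, 1], [4, 1], [6, 1], [2, 0], [4, 0], [6, 0]]
y_history = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_history, y_history)

# New applicants are ranked purely by the model's score: the highly
# experienced outsider [8, 0] ranks below the inexperienced insider [2, 1].
applicants = [[8, 0], [2, 1]]
scores = model.predict_proba(applicants)[:, 1]
for candidate, score in sorted(zip(applicants, scores), key=lambda p: -p[1]):
    print(candidate, round(score, 2))
```

The model has faithfully learned the bias in its training labels, and nothing in the pipeline checks its rankings against real-world hiring outcomes.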
Specific Threats
1. Flawed Assumptions
Threat: As the worked example above shows, flawed assumptions can work their way into software systems, and the damage done by Internal Model Risk is amplified when the stakes are higher.
In 1973, Fischer Black and Myron Scholes published their ground-breaking paper describing what became the Black-Scholes-Merton model for pricing options (contracts giving the holder the right, but not the obligation, to buy or sell an asset at a fixed price on a future date). Pricing options had previously been hugely problematic, so a model that appeared to do it correctly was a huge step forward and earned Merton and Scholes the 1997 Nobel Prize in Economics (Black had died in 1995 and was thus ineligible).
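The model itself is compact enough to sketch. Below is the standard closed-form price for a European call option, implemented with only the Python standard library; parameter names follow the usual convention (S = spot price, K = strike, T = years to expiry, r = risk-free rate, sigma = volatility):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    # Closed-form Black-Scholes price for a European call option.
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))  # 10.45
```

The formula's elegance is part of the danger: it assumes constant volatility, lognormally distributed returns and frictionless, liquid markets, and those assumptions are exactly the parts of the Internal Model that can quietly diverge from the territory.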
Long-Term Capital Management (LTCM) was founded in 1994 and was, for a while, a hugely successful hedge fund. Scholes and Merton sat on the board, which, along with incredible returns, lent the organisation a strong reputation. However, the models underlying those impressive returns were faulty: they relied on historical correlations (which might not hold in the future) and made optimistic assumptions about market liquidity.
In 1998, a confluence of market conditions (the aftermath of the Asian Financial Crisis and the Russian debt default) exposed these weaknesses and the firm lost over $4bn, around 90% of its value, prompting the Federal Reserve Bank of New York to organise a bail-out by a consortium of banks.
The star-studded team at LTCM were victims of Internal Model Risk due to their own hubris, over-confidence in their models and their dismissal of the warning signs from the markets around them.
2. Data Quality
Threat: Reliance on out-of-date information, incomplete sources or erroneous data.
In the headline above, taken from the Telegraph newspaper, the driver trusted his SatNav to such an extent that he ignored the road signs around him and ended up getting stuck.
This wasn't born of stupidity, but of experience: SatNavs are usually reliable, and this one had been right so many times before that the driver stopped questioning whether it could be wrong.
There are two Internal Model Risks here:
- The Internal Model of the SatNav contained information that was wrong: the track had been marked up as a road, rather than a path.
- The Internal Model of the driver was wrong: his abstraction of "the SatNav is always right" turned out to be only mostly accurate.
3. Operational Threats
Threat: Models that are not evaluated against real-world outcomes will accumulate errors unnoticed. (See Retrospectives.)
Threat: Agents within the system may have a conflict of interest and subvert the model to their own ends.
4. Model Drift
Threat: Model Drift occurs when a model becomes less relevant over time, because the original assumptions and training data no longer hold as strongly as they once did.
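One way to reduce this threat is to keep measuring the model against reality. The sketch below is illustrative only (the threshold, window and data are invented): it compares a model's recent accuracy with its accuracy at deployment time and flags possible drift:

```python
def accuracy(predictions, actuals):
    # Fraction of predictions that matched what actually happened.
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    return correct / len(predictions)

def check_drift(baseline, recent_preds, recent_actuals, tolerance=0.05):
    # Flag drift when recent accuracy falls more than `tolerance`
    # below the accuracy measured at deployment time.
    recent = accuracy(recent_preds, recent_actuals)
    if baseline - recent > tolerance:
        print(f"Possible model drift: accuracy {baseline:.0%} -> {recent:.0%}")
    return recent

# The model scored 90% at deployment; on the latest window it scores 60%.
check_drift(0.90,
            [1, 0, 1, 1, 0, 1, 0, 1, 1, 1],
            [1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
```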
5. Adaptation
Threat: The existence of a model of the world changes the world too - behaviours adapt to compensate for the new model.
A good example of this is PageRank, which was initially a successful way of determining the relevance of a web page. However, SEO practices adapted to "game" the algorithm, diminishing its usefulness over time.
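To see why PageRank invited this adaptation, here is a toy version of the iteration in Python (the pages and link structure are hypothetical). A page's score rises with its inbound links, so manufacturing inbound links via "link farms" became the obvious way to game it:

```python
def pagerank(links, iterations=20, d=0.85):
    # `links` maps each page to the list of pages it links to.
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page shares its rank equally among its outgoing links,
        # damped by `d`; the remainder is spread evenly over all pages.
        new_rank = {p: (1 - d) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += d * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# "farm1" and "farm2" exist only to link to "spam", inflating its rank
# relative to the genuinely-referenced "home" and "about" pages.
web = {"home": ["about"], "about": ["home"],
       "farm1": ["spam"], "farm2": ["spam"], "spam": ["farm1", "farm2"]}
print({page: round(score, 3) for page, score in pagerank(web).items()})
```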