Systemic Threats

Broken systems of control are a common source of Internal Model Risk. Let's examine a few cases.

1. Hierarchical Organisations

Internal Model Risk "trickles down" through an organisation. The higher levels have an out-sized ability to pervert the incentives at lower levels because once an organisation begins to pursue a "bullshit objective" the whole company can align to this.

The Huffington Post paints a brilliant picture of how Volkswagen managed to get caught faking their emissions tests. As they point out:

"The leadership culture of VW probably amplified the problem by disconnecting itself from the values and trajectory of society, by entrenching in what another executive in the auto industry once called a “bullshit-castle”... No engineer wakes up in the morning and thinks, OK, today I want to build devices that deceive our customers and destroy our planet. Yet it happened. Why? Because of hubris at the top. " - Otto Scharmer, Huffington Post.

This article identifies the following process:

  • De-sensing: VW executives ignored the Territory of society around them (such as the green movement), ensuring their Maps were out of date. The top-down culture made it hard for reality to propagate back up the hierarchy.
  • Hubris/Absencing: they pursued their own metrics of volume and cost, rather than seeking out others (à la the Availability Heuristic). That is, they focused on their own Map, which is easier than checking the Territory.
  • Deception: backed into a corner, engineers had no choice but to find "creative" ways to meet the metrics.
  • Destruction: eventually, the truth came out, to the detriment of the company, the environment and the shareholders. As the article's title summarizes: "A fish rots from the head down".

2. Markets

We've considered Internal Model Risk for individuals, teams and organisations. Inadequate Equilibria, by Eliezer Yudkowsky, looks at how perverse incentive mechanisms break not just departments, but entire societal systems. He highlights one example involving academics and grant-makers in academia:

  • It's not very apparent which academics are more worthy of funding.
  • One proxy is what they've published (scientific papers) and where they've published (journals).
  • Universities want to attract research grants, and the best way to do this is to have the best academics.
  • Because "best" isn't measurable, they use the publications proxy.
  • Therefore immense power rests in the hands of the journals, since they can control this metric.
  • Therefore journals are able to charge large amounts of money to universities for subscriptions.

"Now consider the system of scientific journals... Some journals are prestigious. So university hiring committees pay the most attention to publications in that journal. So people with the best, most interesting-looking publications try to send them to that journal. So if a university hiring committee paid an equal amount of attention to publications in lower-prestige journals, they’d end up granting tenure to less prestigious people. Thus, the whole system is a stable equilibrium that nobody can unilaterally defy except at cost to themselves." - Inadequate Equilibria, Eleizer Yudkovsky

As the book points out, so long as everyone persists in using an inadequate abstraction, the system stays broken. However, getting everyone to stop doing it this way would require Coordination, which is hard work. (Maps are easier to fix in a top-down hierarchy.)
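
To make the "publications proxy" point concrete, here is a minimal sketch in Python. It is not from the book: the candidate pool, noise levels and selection sizes are all invented for illustration. It selects the "top" candidates by an easily-observed but noisy proxy score, and compares them with the selection you would make if the underlying quality were directly visible.

```python
import random

random.seed(42)

# Toy model: "true quality" is what hiring committees actually care about but
# cannot observe directly; "proxy score" (prestige-journal publications) is
# what they can see. All numbers here are made up purely for illustration.
candidates = []
for _ in range(100):
    true_quality = random.gauss(0, 1)
    proxy_noise = random.gauss(0, 1)
    proxy_score = true_quality + 2 * proxy_noise   # visible, but mostly noise
    candidates.append((true_quality, proxy_score))

top_by_proxy = sorted(candidates, key=lambda c: c[1], reverse=True)[:10]
top_by_truth = sorted(candidates, key=lambda c: c[0], reverse=True)[:10]

def average_quality(group):
    return sum(c[0] for c in group) / len(group)

print(f"hiring on the proxy:  average true quality = {average_quality(top_by_proxy):.2f}")
print(f"hiring on the truth:  average true quality = {average_quality(top_by_truth):.2f}")
```

Running it typically shows the proxy-selected group scoring well below the directly-selected one. Yet, as the quote above explains, because the proxy is the only shared, measurable Map, no single hiring committee can abandon it without losing out.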

Scientific journals are a single example taken from a closely argued book investigating lots of cases of this kind. It's worth taking the time to read a couple of the chapters on this interesting topic. (Like Risk-First, it is available to read online.)

3. Cognitive Biases

In the example of the SatNav, we saw how the nature of Internal Model Risk differs between people and machines. Whereas people should be expected to show skepticism towards new (unlikely) information, our databases accept it unquestioningly. Forgetting is an everyday, usually benign part of our human Internal Model, but for software systems it is a production crisis involving 3am calls and backups.
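
To illustrate the contrast, here is a small hypothetical sketch (the class names and the one-degree threshold are invented, not taken from any real system). The naive store swallows whatever it is told, while the skeptical variant applies the kind of plausibility check a human navigator performs without thinking.

```python
# Hypothetical sketch: a naive data store accepts any update unquestioningly,
# while a "skeptical" one rejects implausible jumps instead of silently
# corrupting its Internal Model.

class NaivePositionStore:
    """Whatever arrives becomes the truth - no skepticism at all."""
    def __init__(self):
        self.position = None          # (latitude, longitude)

    def update(self, lat, lon):
        self.position = (lat, lon)


class SkepticalPositionStore(NaivePositionStore):
    """Flags unlikely readings rather than absorbing them."""
    MAX_PLAUSIBLE_JUMP_DEGREES = 1.0  # arbitrary threshold, for illustration only

    def update(self, lat, lon):
        if self.position is not None:
            jump = max(abs(lat - self.position[0]), abs(lon - self.position[1]))
            if jump > self.MAX_PLAUSIBLE_JUMP_DEGREES:
                raise ValueError(f"Implausible jump of {jump:.1f} degrees - rejecting update")
        super().update(lat, lon)


store = SkepticalPositionStore()
store.update(51.5, -0.1)               # London: accepted
try:
    store.update(40.7, -74.0)          # "New York", moments later: rejected
except ValueError as e:
    print(e)                           # the bad reading is flagged, not absorbed
```

Neither approach is free: the skeptical store needs someone to decide what "implausible" means, and a way to handle the readings it rejects. The point is that, unlike humans, software has no such check unless we deliberately build one.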

For Humans, Internal Model Risk is exacerbated by cognitive biases:

"Cognitive biases are systematic patterns of deviation from norm or rationality in judgement, and are often studied in psychology and behavioural economics." - Cognitive Bias, Wikipedia

There are lots of cognitive biases. But let's just mention some that are relevant to Internal Model Risk:

  • Availability Heuristic: people overestimate the importance of knowledge they have been exposed to.
  • The Ostrich Effect: dangerous information is ignored or avoided because of the emotions it will evoke.
  • Bandwagon Effect: people like to believe things that other people believe. (Could this be a factor in the existence of the Hype Cycle?)

As usual, this section forms a grab-bag of examples from a complex topic. But it's time to move on: there is one last stop we have to make on the Risk Landscape, and that is to look at Operational Risk.