Message Risk
Although Shannon's Communication Theory is about transmitting Messages, messages are really encoded Ideas and Concepts from an Internal Model. Let's break down some of the risks associated with this:
Internal Model Risk
When we construct messages in a conversation, we have to make judgements about what the other person already knows. For example, if I talk to you about a new JDBC Driver, this presumes that you know what JDBC is: the message has a dependency on prior knowledge. Conversely, talking to young children is often hard work because they assume you already know everything that they know.
This is called Theory Of Mind: the appreciation that your knowledge is different from other people's, and the adjustment of your messages accordingly. When teaching, this shows up as The Curse Of Knowledge: teachers have difficulty understanding students' problems because they already understand the subject.
Message Dependency Risk
A second, related problem is actually Dependency Risk, which is covered more thoroughly in a later section. Often, to understand a new message, you need to have followed everything that came before it.
The same Message Dependency Risk exists in computer software: if state is being replicated between instances of an application and one of the instances misses some messages, you end up with a "Split Brain" scenario, in which later messages can't be processed because they refer to application state that doesn't exist. For example, a message saying:
Update user 53's surname to 'Jones'
only makes sense if the application has previously processed the message
Create user 53 with surname 'Smith'
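As a minimal sketch of this in Java (the UserReplica class and its methods are hypothetical, not taken from any particular framework), a replica that applies these messages in order has no way to process the update if it missed the create:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a replica that applies user messages in the order
// they arrive. If the "create" message is missed, the later "update"
// refers to state that doesn't exist on this replica.
public class UserReplica {

    private final Map<Integer, String> surnames = new HashMap<>();

    // "Create user 53 with surname 'Smith'"
    public void createUser(int id, String surname) {
        surnames.put(id, surname);
    }

    // "Update user 53's surname to 'Jones'"
    public void updateSurname(int id, String surname) {
        if (!surnames.containsKey(id)) {
            // The message depends on an earlier message we never received.
            throw new IllegalStateException(
                "Unknown user " + id + " - was an earlier message missed?");
        }
        surnames.put(id, surname);
    }

    public static void main(String[] args) {
        UserReplica replica = new UserReplica();
        // replica.createUser(53, "Smith");  // if this message is missed...
        replica.updateSurname(53, "Jones");  // ...this one cannot be processed.
    }
}
```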
Misinterpretation
For people, nothing exists unless we have a name for it. The world is just atoms, but we don't think like this. The name is the thing.
"The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it's just a representation, is it not? So if I had written on my picture “This is a pipe”, I'd have been lying!" - Rene Magritte, of The Treachery of Images
People don't rely on rigorous definitions of abstractions in the way that computers do; we make do with fuzzy definitions of concepts and ideas. We rely on Abstraction to move between the name of a thing and the idea of a thing.
This brings about Misinterpretation: names are not precise, and concepts mean different things to different people. We can't be sure that other people attach the same meaning to a name that we do.
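As an illustration (the classes here are hypothetical), the same name can be bound to quite different concepts in different parts of a system, and a message that uses the name picks up whichever concept the receiver already holds:

```java
// Hypothetical sketch: two teams both say "account",
// but the name binds to two different concepts.
public class Misinterpretation {

    // To the billing team, an "account" is something money flows through.
    static class BillingAccount {
        long balanceInCents;
    }

    // To the identity team, an "account" is something a person logs in with.
    static class LoginAccount {
        String username;
        String passwordHash;
    }

    public static void main(String[] args) {
        // A request like "close the customer's account" is ambiguous:
        // which concept did the sender have in their Internal Model?
        System.out.println("Which 'account' did you mean?");
    }
}
```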