Rationality: The science of winning, Part II

This is part II of a little series on the importance of rationality and applied rational decision-making (click here for part I).

Optimal decision-making

Perhaps the misconception that rationality means being a robot-like Mr. Spock comes from holding unrealistic standards. The normative model of rationality describes ideal decision-making. Humans are far from ideal thinkers, so it would be foolish to try to follow the very same decision procedures a perfect reasoner would use.

Nevertheless, studying these normative models is important because it can help us understand what improvements to our own thinking would look like. Ideal decision-making follows the laws of logic, Bayesian probability theory and rational choice theory.

  • Logic: A rational agent avoids holding contradictory beliefs and correctly deduces conclusions.
  • Bayesian probability theory: A rational agent’s beliefs always come with a corresponding credence – a degree of confidence that the belief is true. These credences are coherent and get updated according to Bayes’ theorem whenever new relevant information comes in.
  • Rational choice theory: A rational agent always chooses the option that gives her the highest expected degree of personal goal-fulfillment (utility). (A toy sketch of these last two norms follows this list.)
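
To make the Bayesian and rational-choice norms concrete, here is a minimal sketch in Python of one toy decision. The scenario, probabilities and utilities are all invented for illustration; the point is only the shape of the calculation: update your credence with Bayes’ theorem, then pick the option with the highest expected utility.

```python
# Toy decision: should I take an umbrella? All numbers are made up for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Belief: "it will rain today", prior credence 30%.
# Evidence: dark clouds -- 80% likely if rain is coming, 20% likely otherwise.
p_rain = bayes_update(prior=0.3, likelihood_if_true=0.8, likelihood_if_false=0.2)

# Rational choice: pick the option with the highest expected utility.
# Invented utilities: dry with umbrella 8, dry without 10, wet with umbrella 7, soaked 0.
expected_utility = {
    "take umbrella":  p_rain * 7 + (1 - p_rain) * 8,
    "leave umbrella": p_rain * 0 + (1 - p_rain) * 10,
}
best = max(expected_utility, key=expected_utility.get)
print(f"P(rain | clouds) = {p_rain:.2f}, best option: {best}")
```

The arithmetic itself is easy; the hard part in practice is coming up with sensible probabilities and utilities in the first place.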

Humans: systematically irrational

It should come as no surprise that humans don’t always act rationally. Our brains, admittedly huge for an animal of our size, are only finite. We simply lack the intelligence and computing power to make decisions as well as an ideal decision-maker would.

This part is trivial. Much more interesting is something that was only discovered as recently as the 1970s: humans aren’t just irrational; they are often systematically irrational. This means we tend to make the same kinds of mistakes over and over in the same kinds of situations. These systematic deviations from ideal decision-making are called cognitive biases. Some examples include:

  • Status quo bias: We tend to irrationally favour the current state of affairs, even when we’d have good reasons to prefer specific changes. This leads to inertia in domains where new policies could have tremendous benefits.
  • Confirmation bias: We tend to weigh supporting evidence more strongly than evidence that contradicts our current beliefs. This leads to people changing their minds less often than they rationally should (see the sketch after this list).
  • Scope insensitivity: At some point, when the stakes get higher and higher, everything intuitively feels the same to us. Something that could affect millions of people doesn’t remotely feel a thousand times as important as something that could affect thousands of people. This leads to worst-case scenarios being comparatively ignored.
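
To see what “rationally should” means here, the sketch below contrasts an ideal Bayesian with a hypothetical confirmation-biased updater who discounts evidence that speaks against the hypothesis she already believes. The prior, the likelihoods and the discount factor are invented purely for illustration.

```python
# Toy comparison: ideal Bayesian vs. a confirmation-biased updater.
# All numbers (prior, likelihoods, discount factor) are invented for illustration.

def update(prior, likelihood_if_true, likelihood_if_false, weight=1.0):
    """Bayes update via the likelihood ratio; weight < 1 artificially discounts the evidence."""
    likelihood_ratio = (likelihood_if_true / likelihood_if_false) ** weight
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Perfectly mixed evidence: two pieces support the hypothesis, two contradict it,
# each equally strong (likelihood ratio 3:1 in its own direction).
evidence = [(0.75, 0.25), (0.25, 0.75), (0.75, 0.25), (0.25, 0.75)]

ideal = biased = 0.6  # both start mildly convinced the hypothesis is true
for l_true, l_false in evidence:
    ideal = update(ideal, l_true, l_false)  # weighs every piece of evidence fully
    discount = 1.0 if l_true > l_false else 0.3  # biased: shrugs off contrary evidence
    biased = update(biased, l_true, l_false, weight=discount)

print(f"ideal Bayesian: {ideal:.2f}, biased updater: {biased:.2f}")
# Output: ideal Bayesian: 0.60, biased updater: 0.87
# Balanced evidence leaves the ideal agent exactly where she started, while the
# biased updater ends up noticeably more confident than the evidence warrants.
```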

The full list of cognitive biases is much longer.

The discovery of biases was revolutionary because it implies that there is easily accessible room for improvement. If we were irrational in a random way, we’d have a hard time fixing anything – we’d have to redesign the entire brain in order to make progress. On the other hand, if we’re predictably irrational, we can try to learn under which conditions human decision-making goes astray and then come up with methods to correct it.