Don’t fall for the Monte Carlo Fallacy

A Chief Technology Officer recently wrote publicly in the FT that “State of the art language models predict a single step with less than 90% accuracy, leaving little hope for predicting a dozen in a row without error.”
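For context, the arithmetic behind the quoted claim is simple compounding: if every step were an independent prediction that is correct 90% of the time, the chance of getting a dozen in a row right drops to roughly a quarter. A minimal sketch, assuming that independence:

```python
# Arithmetic behind the quoted claim, assuming each step is an
# independent prediction that is correct 90% of the time.
single_step_accuracy = 0.90
steps = 12  # "a dozen in a row"

# Probability of getting every one of the 12 steps right,
# if the errors really were independent.
chance_all_correct = single_step_accuracy ** steps
print(f"P(all {steps} steps correct) = {chance_all_correct:.2f}")  # ~0.28
```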

Sad to say, he has fallen prey to a common thinking error surrounding probability. Humans look for patterns; it is our way of assigning structure, of assigning destiny. True randomness is rejected as chaotic (though chaos is better defined as a breakdown of order, whereas randomness is unpredictable from the start). The case of Spotify’s Fisher-Yates shuffle, re-engineered from its original true randomness into a ‘smoothly randomised’ arrangement of music after listeners perceived a genuinely random shuffle as non-random, is a case in point.
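For reference, the unmodified Fisher-Yates shuffle that Spotify started from is only a few lines. A minimal sketch in Python, illustrating only the original unbiased algorithm (Spotify’s later ‘smoothing’ adjustments are not reproduced here):

```python
import random

def fisher_yates_shuffle(items):
    """Shuffle a list in place so that every ordering is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        # Pick a random index from the not-yet-fixed prefix [0, i] and swap.
        j = random.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items

playlist = ["track_1", "track_2", "track_3", "track_4", "track_5"]
print(fisher_yates_shuffle(playlist))
```

Because each position is swapped with a uniformly chosen earlier position, runs of the same artist can and do occur, which is exactly what listeners mistook for a broken shuffle.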

Today, computers are predictable in that they will follow their instructions and provide the desired outcome; any degree of unpredictability or randomness is accounted for in the programming given to that computer. And some ‘unknown’ randomness is entirely necessary in privacy and security processes, so that encryption cannot be reverse-engineered.
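As an illustration of that ‘unknown’ randomness, the sketch below contrasts seeded pseudo-randomness with Python’s standard secrets module, which draws from the operating system’s cryptographically secure source; this is a general example rather than any particular product’s implementation:

```python
import random
import secrets

# Seeded pseudo-randomness is reproducible: anyone who knows the seed
# can regenerate every "random" value, so it is useless for secrets.
random.seed(42)
print(hex(random.getrandbits(128)))

# secrets draws from the operating system's cryptographically secure
# source, suitable for keys, tokens and nonces.
print(secrets.token_hex(16))
```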

The common thinking error around probability is the belief that there is a dependency on what has come before. This error in judgement is called the Monte Carlo Fallacy, defined as the mistaken belief that ‘the probability of any particular outcome… is inversely dependent upon the previous outcomes’. It is named after the expectation in a casino that red will be followed by black, odd by even. It is not the case. In fact, physics (the angle of the toss, the speed of the roulette wheel…) has far more impact.
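A quick simulation makes the independence point concrete. The sketch below assumes an idealised wheel with no zero pocket, purely for illustration, and checks how often red follows a run of three blacks:

```python
import random

# Idealised wheel with no zero pocket: each spin is red or black with
# probability 0.5, regardless of what came before.
spins = [random.choice("RB") for _ in range(1_000_000)]

after_three_blacks = 0
reds_after = 0
for i in range(3, len(spins)):
    if spins[i - 3:i] == ["B", "B", "B"]:
        after_three_blacks += 1
        if spins[i] == "R":
            reds_after += 1

# Stays close to 0.5: a run of blacks does not make red "due".
print(f"P(red | three blacks) ≈ {reds_after / after_three_blacks:.3f}")
```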

We must be careful to avoid this thinking trap and to actively remove dependency in language programming where it is not necessary. We humans have our own cognitive biases, and the expectation of error is one of them. The hope is that language models, with human intelligence kept in the loop, offer the best approach to reducing both error and bias.
