Bayesian reasoning is perhaps the most important thinking tool that almost no one learns in school. Named after Reverend Thomas Bayes, it provides a mathematical framework for updating beliefs based on evidence.
The core idea is simple: start with a prior probability (how likely you think something is before seeing new evidence), then update that probability based on how likely the evidence would be if the hypothesis were true versus false. The result is a posterior probability — your updated belief.
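The update rule described above is Bayes' theorem. As a minimal sketch (the function name and example numbers are illustrative, not from the original):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Start at 50%; the evidence is twice as likely if H is true (0.8 vs 0.4).
print(posterior(0.5, 0.8, 0.4))  # → 0.666...
```

The posterior rises from 50% to about 67%: the belief moves, but only as far as the evidence's diagnosticity warrants.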
Why does this matter? Because most people reason about evidence incorrectly. When a medical test comes back positive, most people (including many doctors) dramatically overestimate the probability of actually having the disease. They focus on the test's accuracy and ignore the base rate — how common the disease is in the population. A test with 99% sensitivity and a 1% false positive rate, for a disease that affects 1 in 10,000 people, still produces far more false positives than true positives.
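Running the numbers for that medical test makes the point concrete (a sketch using the figures above):

```python
prevalence = 1 / 10_000          # base rate: 1 in 10,000 people have the disease
sensitivity = 0.99               # P(positive | disease)
false_positive_rate = 0.01       # P(positive | no disease)

# Total probability of a positive result, across sick and healthy people.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")  # → 1.0%
```

Despite the "99% accurate" test, a positive result means only about a 1% chance of disease, because healthy people vastly outnumber sick ones.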
Base rate neglect is one of the most consequential cognitive errors, and Bayesian reasoning is the cure.
Example: A city has 100 terrorists and 999,900 non-terrorists. A surveillance system correctly identifies 99% of terrorists and has a 1% false positive rate. The system flags someone. What's the probability they're actually a terrorist?
Most people say 99%. The actual answer is about 1%. Why? The system correctly flags 99 of 100 terrorists. But it also falsely flags 1% of 999,900 innocent people — that's 9,999 false alarms. So 99 true positives out of 10,098 total flags = 0.98%. The vast majority of flagged individuals are innocent.
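The arithmetic in this example can be checked directly with the counts given (variable names are illustrative):

```python
terrorists = 100
non_terrorists = 999_900

true_positives = 0.99 * terrorists        # 99 terrorists correctly flagged
false_positives = 0.01 * non_terrorists   # 9,999 innocent people flagged

# Of everyone the system flags, what fraction is actually a terrorist?
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.2%}")  # → 0.98%
```

The flag count is dominated by the 1% error rate applied to a huge innocent population, which is exactly why the intuitive "99%" answer is off by two orders of magnitude.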
This isn't academic. It applies to medical screening, criminal profiling, fraud detection, hiring algorithms, and any system that tries to identify rare events. Without Bayesian reasoning, you systematically overestimate the significance of positive signals and make worse decisions as a result.
Bayesian reasoning also teaches proportional updating. Strong evidence should move your beliefs a lot; weak evidence should move them a little. But most people update in binary: they either ignore evidence entirely or swing to 100% certainty.
If you believe there's a 50% chance a startup will succeed, and you learn the founders have relevant industry experience, that should increase your estimate — maybe to 60%. It shouldn't jump to 95%. Conversely, learning they have no revenue yet should decrease your estimate — maybe to 40%. Not to 5%.
The discipline is: how much should this evidence move me, given my prior and how diagnostic this evidence is? A friend saying "I heard they're great" is weak evidence. Seeing audited financial statements is strong evidence. Both should update your beliefs, but by very different amounts.
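One way to formalize "how diagnostic is this evidence" is the odds form of Bayes' theorem, where each piece of evidence carries a likelihood ratio. A minimal sketch (the specific ratios for "friend's opinion" and "audited financials" are illustrative assumptions, not from the original):

```python
def update(prior_prob, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
    likelihood_ratio = P(evidence | H) / P(evidence | ~H)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Starting from a 50% prior that the startup succeeds:
print(update(0.5, 1.5))  # weak evidence (friend's opinion)  → 0.6
print(update(0.5, 10))   # strong evidence (audited numbers) → ~0.91
```

Both updates move the belief in the same direction, but the strong evidence moves it much further — proportional updating in one line of arithmetic.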
This prevents two common errors: under-updating (dismissing evidence that challenges your beliefs) and over-updating (swinging to certainty based on a single data point).
Tip: Before reading a news article, estimate the probability of the headline's claim. After reading, adjust. Track how much evidence actually warrants changing your mind versus how much your mind wants to change.
Bayesian reasoning provides a framework for updating beliefs proportionally to evidence. Most people commit base rate neglect — ignoring how common something is before evaluating test results. The discipline is proportional updating: strong evidence moves beliefs a lot, weak evidence moves them a little, and prior probabilities always matter.