Probability in the Real World: Beyond Mathematics

This article is an excerpt from the Shortform book guide to "Fooled By Randomness" by Nassim Nicholas Taleb. Shortform has the world's best summaries and analyses of books you should be reading.

Are you a good judge of probability? Do you take into account probability when making decisions that involve an element of risk?

According to former options trader Nassim Taleb, most people have a poor grasp of real-world probability, and as a result, they misunderstand the likelihood of rare events and consequently don’t plan for risk appropriately. In his book Fooled by Randomness, he cites three reasons for our difficulty in understanding how probability plays out in the real world: 1) we don’t properly interpret the past, 2) we can’t predict the future, and 3) we don’t insure against risk properly.

We’ll explore each of these three reasons below.

Why We’re Bad at Assessing Probability and Randomness

People find it hard to properly assess the likelihood, or probability, of random events for several reasons:

  • We read the past wrongly, misunderstanding the probabilities that drove events that have already happened. 
  • We find it hard to apply lessons of the past to the future. 
  • We don’t plan for risk appropriately.

1. We Don’t Properly Interpret the Past

One reason we’re bad at assessing and preparing for risk and random events is that we are not good at learning from the past. We mistakenly believe that because something has never happened before, it can’t happen now. We then defend our lack of planning accordingly: “That had never happened before!” 

A longer-term examination of history shows that rare events of all kinds do, indeed, happen. The very definition of a rare event is its unpredictability. History is littered with events that had no precedent when they occurred. If the past was full of surprises for the people who lived through it, why should our own future be any different?

Even when we do remember a past rare event, we tend to falsely believe that we now understand the events that led up to it, and we therefore think we can “predict” it; that is, if it were to happen again, we wouldn’t be taken by surprise. We’d be more prepared for, and therefore less exposed to, any negative fallout from a similar rare event. 

We also tend to falsely believe that mistakes of the past that led to these events have been resolved, making it even more unlikely that they would happen again. For example, people know that 1929 proved that stock markets can crash, but they often chalk that up to specific causes of that time. They believe, in other words, that the event is contained and non-repeatable.

We thus like to imagine that if we were to live through certain historical events, such as the stock market crash of 1929, we would recognize the signs and wouldn’t be taken by surprise in the way that people at the time were. This is the “hindsight bias,” otherwise known as the “I knew it all along” claim. However, seeing something clearly after the fact is much easier than seeing it clearly in real time. 

In the same way, a manager taking over a trading department might analyze the previous year's trades and find that only a small percentage were profitable. She might then point out that the solution is simply to make more of the profitable trades and fewer of the losing ones. Unfortunately, such a statement of the obvious doesn't provide any usable guidance for future trading decisions.

We Can’t Understand the World Through Observations Alone

One reason it's hard for us to understand real-world probability and anticipate the risk of rare events is that when we examine the past, we are looking at the specific set of events that actually did happen, while ignoring the possible events that might have happened. This points to a fundamental problem with inductive reasoning. Deductive reasoning starts from a general theory and works out what specific conclusions must follow from it; inductive reasoning works in the opposite direction: you observe data, detect patterns, and formulate a general theory based on those observations.

The problem with induction is that when your theories of the world are based on only what you can personally observe, you’ll miss rare events. For example, you may look at thousands of swans, see that each one is white, and conclude that all swans are white. However, some swans are black. They are rare, and if you don’t live in Australia you might never see one, but they do exist and they disprove your conclusion. 

You can use observations to disprove a theory, but you can’t use them alone to prove one. Just one counterexample is enough to disprove a theory based on millions of observations. The same thing holds for observations of history. You can point to historical events to refute a conclusion, but you can’t make a sound conclusion based on a lack of historical events. For example, before September 11, 2001, you might have said that airplanes are never purposefully flown into skyscrapers. Your theory, based on a lack of comparable events, would have been disproven by the events of that one date. 

Therefore, if you base your understanding of the market on what you have observed, you do not allow for the rare event, and you leave yourself vulnerable to it. You might say, "The market never drops by thirty percent in any four-month period," and point to the fact that it's never happened as proof of your theory. But "it has never dropped that much" is different from "it never drops that much."

We Misunderstand Absence of Evidence

One reason we misapply inductive reasoning is that we confuse absence of evidence with evidence of absence. This faulty reasoning shows up across industries. For example, imagine a drug trial shows a 2 percent improvement in outcomes, but the researcher decides that her sample size was too small and the improvement too slight to be decisive. She might report that she has found no evidence yet of improved outcomes. Other doctors might then read that and conclude she means, "This medicine does not help."

In the same way, looking at history and saying “this hasn’t happened” is often interpreted as “this cannot happen,” an attitude that blinds people to potential risks.
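
To make the drug-trial example above concrete, here is a minimal sketch in Python. All of the numbers are hypothetical, not from the book: they simply show how a small study can observe a 2-percentage-point improvement whose confidence interval still includes zero, so the honest report is "no evidence yet of improvement," not "evidence of no improvement."

```python
# Hypothetical illustration of "absence of evidence" vs. "evidence of absence".
# A 2-percentage-point improvement can be real yet statistically invisible
# when the sample is small. All numbers below are made up for this sketch.

import math

def recovery_ci(n_treat, rec_treat, n_ctrl, rec_ctrl, z=1.96):
    """95% confidence interval for the difference in recovery rates
    (treatment minus control), using a normal approximation."""
    p_t = rec_treat / n_treat
    p_c = rec_ctrl / n_ctrl
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return diff, (diff - z * se, diff + z * se)

# Small trial: 100 patients per arm, 52% vs. 50% recovery.
diff, (lo, hi) = recovery_ci(100, 52, 100, 50)
print(f"Small trial: +{diff:.0%} observed, 95% CI ({lo:+.1%}, {hi:+.1%})")
# The interval spans zero: "no evidence yet of improvement," which is
# not the same as "evidence of no improvement."

# The same 2-point improvement with 10,000 patients per arm.
diff, (lo, hi) = recovery_ci(10_000, 5_200, 10_000, 5_000)
print(f"Large trial: +{diff:.0%} observed, 95% CI ({lo:+.1%}, {hi:+.1%})")
# Now the interval excludes zero: the same effect becomes detectable.
```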

2. We Can’t Predict the Future

Not only do we incorrectly interpret the past, but we also have trouble using lessons of the past to predict future rare events—no matter how hard we try—for three primary reasons:

  1. Noise prevents clarity: Only the passage of time allows us to properly judge which pieces of information end up being consequential. 
  2. Structures change: The structure of the past can be so different from the structure of the future that lessons learned from history might fit today’s world only clumsily.
  3. The future is changeable: If everyone could predict the future, the future would then change, and would once again become unpredictable.

These three reasons are explored further below. 

Reason #1: Noise Prevents Clarity

When people think they could have correctly predicted past rare events, they assume they can also correctly predict current or future rare events. However, we typically gain clarity on an event only with the passage of time; in real time, there's usually too much "noise" to judge what's consequential and what's not.

“Noise” is the overwhelming deluge of facts that bombards us from newspapers, television, online outlets, and so on. It includes up-to-the-minute stock fluctuations, daily explanations of market moves, and endless analyses of companies—most of which will be out of business within a decade. 

Small changes in the market are most likely random, and paying close attention to them can lead you astray, convincing you that unimportant things have larger consequences than they actually do. This means that noise is not just useless but actively harmful if it leads you to make bad decisions, such as abruptly selling a stock because of minor movements when it would have been wiser to hang onto it. Only the passage of time reveals what is noise and what matters: it filters out the inconsequential changes and shows which ones actually changed the direction of events.

This is especially true for stock prices. Looking at the minutiae of constant price changes means you are focusing on the variance of the portfolio rather than its returns. It's unhelpful, and the negative moments may even convince you to act prematurely. Imagine an investor sitting on a portfolio that has a 90 percent chance of increasing over the course of a year. If she checks the stock prices every minute, she might experience 250 happy minutes each day as prices rise and 240 unhappy minutes as prices fall.

Because a person reacts more strongly to negative news than to positive news, she'll end each day exhausted. She'll also have 240 moments each day in which she questions her strategy and considers changing course. However, if she checks her balance only once a year, a 90 percent chance of increase means she'll get good news in roughly 9 out of every 10 years, so she'll experience about 9 happy moments for every unhappy one. Time will have filtered out the unhelpful noise.
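
A small simulation makes this arithmetic visible. The sketch below is not from the book: it assumes a portfolio that drifts upward with enough volatility that roughly 90 percent of simulated years finish positive, then compares how that same portfolio looks minute by minute versus year by year.

```python
# Minimal sketch: the same portfolio looks very different at different
# checking frequencies. The drift and volatility are assumptions chosen so
# that roughly 90% of simulated years finish positive, as in the example.

import random

MINUTES_PER_YEAR = 250 * 480      # ~250 trading days of ~480 minutes (assumed)
ANNUAL_RETURN = 0.15              # assumed upward drift
ANNUAL_VOL = 0.12                 # assumed volatility: ~90% of years end up

def one_year(rng):
    """Simulate one year of minute-by-minute portfolio changes."""
    mu = ANNUAL_RETURN / MINUTES_PER_YEAR
    sigma = ANNUAL_VOL / MINUTES_PER_YEAR ** 0.5
    return [rng.gauss(mu, sigma) for _ in range(MINUTES_PER_YEAR)]

rng = random.Random(42)
years = [one_year(rng) for _ in range(50)]

# Minute-level view: barely better than a coin flip.
minutes = [m for year in years for m in year]
share_up_minutes = sum(m > 0 for m in minutes) / len(minutes)
print(f"Minutes that look like good news: {share_up_minutes:.1%}")

# Yearly view: the drift dominates the noise.
share_up_years = sum(sum(year) > 0 for year in years) / len(years)
print(f"Years that look like good news:   {share_up_years:.0%}")
```

Under these assumed numbers, the chance that any given minute shows a gain hovers just above 50 percent, so the minute-watcher is buffeted by noise, while at the yearly level the drift dominates and good news arrives about 9 years in 10.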

This idea also holds for world events. The daily news covers all events, consequential or not, while historians can see, with the benefit of hindsight, which events turned out to be transformative. 

Reason #2: Structures Change

Often when studying history to apply it to today, we look at one or two narrowly framed past events and believe their lessons apply to the future as a whole, at least for that general type of event, without taking into account all the ways in which the world constantly changes on a fundamental level.

So many details change in the way things work that it calls into question the usefulness of studying history at all—except, of course, to acknowledge its ability to serve up surprises. Lessons from previous eras may not apply to today. For example, the Asian markets of the 1990s bear little resemblance to the ones of today now that the structure of the world economy has changed so dramatically, so market strategies that worked back then might not work now.

Likewise, because the structure of the past can be so different from the structure of the future, we can only see similarities between the two from a distance, after the events have passed. In the moment, we’re too close to judge. This is true for connections between specific past and future events as well as between broader past and future landscapes. 

Reason #3: The Future Is Changeable

The future eludes us in another way, too: It is affected not just by outside influences but also by our own understanding of those influences. Our predictions of the future can themselves change it.

For example, if traders as a whole notice that the market always rises in March, they would all buy stocks in February in anticipation of the rise. Consequently, the market would no longer rise in March; it would rise in February. 

Because of this, even if we could fully and properly understand the past, our predictions would no longer hold true once everyone shared that understanding: the preparations people would make for a future event would prevent the event from happening as expected. The tendency of the future to repeat the past depends on our being driven by the same invisible forces that drove past events. Thus, rare events like stock sell-offs exist because they are unexpected; if they were expected, people would prepare for them, and consequently they wouldn't happen.

3. We Don’t Insure Against Rare Events Properly

When people misunderstand past rare events, they misunderstand the likelihood of future rare events and consequently, they don’t plan for risk appropriately. This mistake is apparent in how people feel about buying insurance. 

Often, people resist buying insurance against things that are highly unlikely. And if they do buy the insurance and the unlikely event never happens, they often feel upset that they shelled out "useless" money. When people do this, they conflate a forecast with a prophecy: they chastise the forecaster for failing to prophesy correctly instead of crediting her for accurately assessing the risk.

This is frequently seen in reactions to warnings about risk in the stock market. For example, journalist George Will once interviewed Robert Shiller, author of Irrational Exuberance, a book about the mathematical randomness of the stock market. Will pointed out that investors who had heeded Shiller's earlier warning that the market was overpriced would have lost money, since the market had actually risen during that time. What Will did not understand is that being wrong on one particular call did not mean Shiller's caution was unwarranted overall; it's easy to pick out wrong calls when looking back on them.

It would be like chastising someone for refusing to play Russian roulette just because, in hindsight, the people who did play got lucky and walked away with the $50 million. Over time, the people who keep an eye on risk end up better off because they're better prepared for rare events.
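
A rough simulation (my own setup, not from the book) shows why the survivors' luck proves nothing: repeat the one-in-six gamble a handful of times and ruin becomes the most likely outcome, whatever the prize for surviving.

```python
# Rough sketch of why "the players who did play got lucky" is not an argument.
# The 1-in-6 chamber and the number of repetitions are assumptions for
# illustration; the point is that repeated exposure to ruin catches up.

import random

CHAMBERS = 6            # one bullet in six chambers
ROUNDS = 10             # assumed number of times the gamble is repeated
PLAYERS = 100_000       # simulated repeat players

rng = random.Random(0)
ruined = sum(
    any(rng.randrange(CHAMBERS) == 0 for _ in range(ROUNDS))
    for _ in range(PLAYERS)
)

print(f"Repeat players ruined within {ROUNDS} rounds: {ruined / PLAYERS:.0%}")
# Roughly 1 - (5/6)**10, i.e. about 84%. The few survivors look brilliant
# in hindsight; the person who refused to play looks "wrong" only because
# the ruined players are no longer around to be counted.
```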

The Dilemma of the Risk Manager

The difficulty of predicting the future, combined with the tendency to dismiss warnings about risks that never materialize, makes for an uneasy existence for the people investment funds employ as risk managers. Their job is to identify potential catastrophic risks to investors' portfolios. They must walk a line between advising investors to avoid certain risks and the inevitable blowback they'll receive if the risk doesn't materialize and the investor ends up losing out on profit.

As a result of this rock-and-a-hard-place situation, most risk managers end up merely pointing out potentially risky moves without going so far as to warn against them. Consequently, risk managers exist more to give an impression of risk reduction than to actually reduce it. This is called an "epiphenomenon": an ambiguous link between cause and effect, in which people feel that merely watching risks is the same as reducing them.

———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Nassim Nicholas Taleb's "Fooled By Randomness" at Shortform.

Here's what you'll find in our full Fooled By Randomness summary :

  • The outsized role luck plays in success
  • How we’re fooled by randomness in many aspects of our lives
  • How we can accommodate randomness in our lives once we’re aware of it

Darya Sinusoid

Darya's love for reading started with fantasy novels (The LOTR trilogy is still her all-time favorite). Growing up, however, she found herself transitioning to non-fiction, psychological, and self-help books. She has a degree in Psychology and a deep passion for the subject. She likes reading research-informed books that distill the workings of the human brain/mind/consciousness and thinking of ways to apply the insights to her own life. Some of her favorites include Thinking, Fast and Slow, How We Decide, and The Wisdom of the Enneagram.
