Nassim Taleb: Don’t Underestimate Randomness

This article is an excerpt from the Shortform book guide to "Fooled By Randomness" by Nassim Nicholas Taleb. Shortform has the world's best summaries and analyses of books you should be reading.


How much weight do you give to randomness or uncertainty when making decisions? Do you think success is pure luck?

According to Nassim Taleb, randomness is a hugely underestimated factor when it comes to decision-making, especially in fields that are greatly affected by randomness (e.g. trading, economics, and politics). He cites three reasons for our difficulty in appreciating the power of randomness: 1) we’re guided by our primitive brain, 2) we don’t understand probability, and 3) we see meaning when there’s none.

Keep reading to learn why we have such trouble comprehending randomness, and how our brain’s wiring makes it difficult for us to understand probability. 

Why Do We Underestimate the Power of Randomness?

In his book Fooled by Randomness, Nassim Taleb explains why randomness has such an outsized effect on success, why people generally misunderstand its effects, and how that misunderstanding leads them to take many unjustified risks:

  1. We are guided by our primitive brain, which likes simplicity and dislikes abstraction, and makes decisions emotionally.
  2. We don’t understand how probabilities work.
  3. We see meaning where there is none.

1. We Are Guided by Our Primitive Brain 

The first reason we fail to properly anticipate randomness is that we are guided by our primitive brain. To aid our survival, our brains evolved thinking shortcuts that allow us to react quickly and decisively to threats and that save us time and mental energy; if we stopped to think thoroughly about every interaction throughout the day, we would either miss opportunities or succumb to threats.

For most of our history, this system worked: Our lives were localized and simple, and we could optimize our survival without accounting for rare events. Unfortunately, in today’s complex world, we need to calculate probabilities, and our brains’ shortcuts lead us to believe many things without fully thinking them through. Consequently, we find ourselves equipped with primitive tools to face our contemporary challenges, and our views of the world are often based on misunderstandings and biases we unwittingly hold. A few of these are outlined below.

We Make Decisions Emotionally

Neurologists observe that the human brain has developed into three general parts: the primitive brain, the emotional brain, and the rational brain. The rational brain acts as an advisor, but it’s the other two parts—primitive and emotional—that are responsible for decision-making. 

This is not inherently a bad thing. Our thoughts can advise us, but without a feeling to direct us toward one option or another, we get caught in endless rational deliberations about the best course of action. This can be seen in patients whose brain trauma destroyed their ability to feel emotions but left their intelligence intact, making them completely rational beings. People with this sort of brain damage cannot make even decisions as simple as whether or not to get out of bed in the morning.

The negative side of this, of course, is that emotions can steer us wrong and cause us to make mistakes. Emotions can cloud our judgment by blocking out rational thinking and causing us to wrongly assess risk, thereby leading us to make poor decisions. For example, we might buy a particular stock because we love the company and get emotionally invested in its future, though it may not be financially wise to do so.  

Feelings also steer us wrong because people are more emotionally affected by negative events than by positive ones. This means they also view volatility much more starkly when it involves falling prices than when it involves rising ones. Likewise, volatility during negative world events is seen as worse than volatility in peaceful times. For example, in the eighteen months leading up to September 11, 2001, the market was more volatile than in the same period after, but the later volatility received much more media attention. As a result, people are more likely to make moves during times of stress, even if those moves are not strategically wise.

We Like Simplicity 

To better identify risk, the primitive and emotional parts of our psyche have evolved to prioritize speed when scanning the environment for threats. Because of this, we don’t like complexity. We respond best to simple concepts that are easily understood and quickly summed up. Often we regard complex ideas with suspicion, assuming ill intent or falsehood. 

Because of this, we tend to avoid concepts that feel difficult to explain, even when those concepts are more enlightening than simpler ones. We therefore tend to gloss over the finer points of probabilities, which are not only difficult to understand but are often also counter-intuitive.

For example, a study of how medical professionals interpret probabilities shed light on how often people who are supposed to know better don’t. Doctors were asked this question: A disease affects one in 1,000 people in a given population. People are tested for it randomly with a test that has a 5 percent false positive rate and no false negatives. If someone tests positive, what is the likelihood that she has the disease?

Most doctors responded that she’d be 95 percent likely to have it (reasoning from the test’s 95 percent accuracy). In fact, a person testing positive under these conditions is only about 2 percent likely to be sick. (If 1,000 people are tested, on average only one will be sick, but about 50 healthy people will test falsely positive, for a total of 51 positive tests and only 1 actual illness. One divided by 51 is about 2 percent.) Fewer than one in five respondents answered correctly, because the right answer feels counterintuitive.
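To make the arithmetic concrete, here is a minimal sketch in Python (not from the book; it simply plugs in the numbers from the question above) that computes the chance of actually being sick given a positive test:

  # Numbers from the question above.
  prevalence = 1 / 1000        # 1 in 1,000 people has the disease
  false_positive_rate = 0.05   # 5 percent of healthy people test positive
  false_negative_rate = 0.0    # the test never misses a real case

  # Overall chance of a positive test, sick or not.
  p_positive = (prevalence * (1 - false_negative_rate)
                + (1 - prevalence) * false_positive_rate)

  # Chance of being sick given a positive test (Bayes' rule).
  p_sick_given_positive = prevalence * (1 - false_negative_rate) / p_positive

  print(f"{p_sick_given_positive:.1%}")  # prints about 2.0%, not 95%

This matches the rough count above of 1 true case among roughly 51 positive tests.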

(Shortform note: This does not mean that people are getting regularly treated for diseases they don’t have. The scenario doesn’t account for the human element of testing: Most people only get tested for a disease when they have symptoms of something, which increases the likelihood that a positive result does indicate sickness. 

But the math holds true in real life for diseases that are uncommon but for which asymptomatic people get regularly tested—for example, breast cancer. There is a fairly high rate of false positives for mammograms, and the vast majority of those who test positive do not turn out to be sick. These false alarms are weeded out through further testing.) 

We Notice Surprises

The primitive and emotional sections of our brain also pay much closer attention to surprises than to run-of-the-mill news. We attach greater significance to shocking events even if they are not ultimately important, and tend to believe events that are more easily recalled are more likely to occur. We therefore overestimate the risk of unlikely events while ignoring the risk of more likely ones. 

We can see this in how the media covers bizarre but relatively unthreatening news while ignoring much more common—and more likely—threats. For example, in the 1990s, mad cow disease got fevered treatment from the media but only killed several hundred people over the course of a decade. You were far more likely to be killed in a car accident on the way to a restaurant than from the tainted meat you might eat there. But due to the skewed media focus, people became more frightened of the (unlikely) threat of mad cow disease than of threats they were far more likely to face. 

We Dislike Abstraction 

Because for most of human history people faced tangible threats rather than theoretical probabilities, our brains evolved to grasp concrete ideas better than abstract ones; consequently, we have trouble assessing the risks of abstract circumstances. Studies have shown that when people are presented with two descriptions of a risk, they are more concerned about the one that names a specific threat, even though the more general description also covers that specific threat.

For example, travelers are more likely to insure against death from a terrorist attack on their trip than against death from any cause (which includes, but doesn’t specify, terrorism). In another example, a study found that people judged an earthquake in California to be more likely than an earthquake in North America (again, a category that includes but doesn’t specify California).

2. We Don’t Understand How Probabilities Work

The second reason we fail to anticipate randomness is that we don’t inherently understand how probabilities work, particularly the likelihood of rare events. Researchers have found that people have great difficulty comprehending concepts that feel counterintuitive, and mathematical probabilities and random outcomes frequently fall into this category.

Our inability to correctly judge probabilities shows up in many ways, including the following:

  • We underestimate the likelihood of a rare event.
  • We don’t understand how probabilities compound.
  • We misunderstand sample size.
  • We don’t understand skewness: the unevenness of randomness. 
  • We misunderstand how probabilities change over time.

We Underestimate the Likelihood of a Rare Event

Rare events happen infrequently enough that we are sometimes lulled into believing they’re rarer than they actually are, so that when they do happen, we are more surprised than we should be.

This happens because when we think about rare events, we evaluate their probability based on the likelihood of them happening in one particular way—for example, we think about the risk of a market correction of a certain kind within a certain time period. However, this mindset causes us to miss the likelihood of any random event happening in any way—a market event of any kind over the next decade, for instance. 

To illustrate: You have a one in 365 chance of sharing your birthday with someone you meet randomly. In a room of twenty-two other people, you’d have a 22 in 365 chance that you’d share a birthday with any of them—still a relatively small chance. But the chance that any two people in that room will share any birthday is about fifty percent. This is because you are not limiting the rare event to one person and one date. However, should such a matching date be discovered, it feels like a highly unlikely occurrence and will most likely result in exclamations along the lines of “What a small world!”

In another analogy, your odds of winning the next lottery may be one in 25 million. But the odds of someone winning are one in one: one hundred percent. 
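The birthday figures above can be checked directly. Below is a minimal sketch in Python (illustrative only; it ignores leap years) comparing the chance that someone in a room of 23 people shares your particular birthday with the chance that any two people in the room share a birthday:

  # Chance that at least one of 22 other people shares YOUR birthday.
  p_yours = 1 - (364 / 365) ** 22

  # Chance that ANY two of the 23 people in the room share a birthday:
  # 1 minus the probability that all 23 birthdays are different.
  p_all_different = 1.0
  for i in range(23):
      p_all_different *= (365 - i) / 365
  p_any = 1 - p_all_different

  print(f"shares your birthday: {p_yours:.1%}")  # about 6%
  print(f"any shared birthday:  {p_any:.1%}")    # about 51%

The first number stays small because the event is pinned to one person and one date; the second jumps to roughly even odds because any pair of people and any date will do.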

We Don’t Understand How Probabilities Compound

Conversely, we sometimes overestimate the likelihood of a rare event if it appears in conjunction with another probability, and therefore might prepare for risks that have an exceedingly small chance of happening. When two or more independent probabilities combine, the individual probabilities are multiplied. For example, the probability of your being diagnosed with a particular rare disease in any given year might be one in 1,000,000. The probability of your being in a plane crash in any given year might also be one in 1,000,000. The probability of both happening in the same year would be the two individual probabilities multiplied together: one in 1,000,000,000,000.
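As a quick illustration of that multiplication (a sketch, not from the book; it assumes the two events are independent and uses the one-in-a-million figures above):

  p_disease = 1 / 1_000_000   # rare diagnosis in a given year
  p_crash = 1 / 1_000_000     # plane crash in a given year

  # Independent probabilities multiply.
  p_both = p_disease * p_crash
  # Chance that at least one of the two happens.
  p_either = p_disease + p_crash - p_both

  print(p_both)    # about 1e-12, i.e. one in a trillion
  print(p_either)  # about 2e-06, i.e. roughly one in 500,000

The chance of both rare events striking in the same year is vastly smaller than the chance of either one alone.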

Though it’s easy to accept that in theory, in practice we often focus only on the individual probabilities and not on their compounded likelihood. People’s inability to properly understand compounding probabilities was demonstrated during the O.J. Simpson trial of 1995.

Simpson’s lawyers argued that the DNA evidence was irrelevant because there might be four other people in Los Angeles with the same DNA characteristics. Though this might be technically true, the likelihood that the blood evidence would match Simpson by coincidence (about 1 in 500,000), combined with the fact that he was the victim’s ex-husband and with the various additional evidence, puts the compounded likelihood of his innocence at roughly one in several trillion. Despite this almost infinitesimal chance that he was innocent, the jury focused on the individual probabilities alone, like the blood evidence, and acquitted him.

We Misunderstand Sample Size 

We often misunderstand opportunities, risks, and probabilities because we evaluate them from too small a sample. We tend to extrapolate lessons from just a few examples and apply them to wider situations, which usually results in misguided strategies.

Authors of self-help books purporting to reveal secrets of millionaires are often guilty of this. They’ll study a small sample of millionaires, come up with some traits they share, and declare that these are the characteristics you need to also get rich. Their sampling problem is twofold: First, they only look at a small number of millionaires, and second, they don’t look at the wider set of people who also have these traits but are not millionaires. 

For example, the authors of one bestselling book advise their readers to accumulate investments because the millionaires in their study do. However, their sample is too small to show that all millionaires collect investments. Further, many people who never become millionaires also collect investments, but in the wrong things: stocks of companies that soon go belly-up or foreign currencies that devalue.

Such books also might look at a very narrow time period, such as the 1980s and 1990s, when the average stock grew almost twentyfold. People who invested during those years were far more likely to get wealthy than at other times, for no reason other than timing. Lessons derived from this sample will be virtually useless; the best advice based on it would be, “buy a time machine and invest in the late twentieth century.”

Similarly, if a trader makes one right call, people may believe she’ll make more right calls in the future. If she makes one wrong call, they may question her reputation as a skilled trader. This can be seen when journalists press successful traders to predict the market on any given afternoon. If the traders make a wrong call, even once, the journalists will often cast doubt on their entire career, even though they’re considering only a very small sample of the trader’s calls.
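One way to see how misleading a handful of calls can be is to simulate traders who are guessing at random. The sketch below (illustrative only, not from the book) shows that in a large enough pool of coin-flippers, a few will compile a perfect record purely by chance:

  import random

  random.seed(0)
  traders, calls = 10_000, 10  # 10,000 traders, each making 10 market calls

  # Each call is a coin flip: 1 = correct, 0 = wrong.
  records = [sum(random.randint(0, 1) for _ in range(calls))
             for _ in range(traders)]

  perfect = sum(1 for record in records if record == calls)
  print(f"traders with a perfect 10-for-10 record by pure luck: {perfect}")
  # Expect roughly 10,000 / 2**10, i.e. about 10 "geniuses" with no skill at all.

Judging any one of those lucky streaks in isolation, an observer would conclude the trader is brilliant; judged against the whole pool, the streak means nothing.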

Sometimes, we make this sampling mistake because we see only the data that’s nearest to us and therefore most visible. For example, a lawyer making $500,000 a year might be out-earning 99 percent of Americans, but if she lives in a swanky neighborhood in an expensive city, surrounded by nothing but people who are multimillionaires several times over, she might feel that she’s not actually very successful at all. This is due not only to the small sample size of the comparisons but also to the fact that the sample is self-selecting: Anyone who is not fabulously wealthy cannot afford to live in her building, so she sees only the super-winners in her sample set.

We Don’t Understand Skewness

In judging whether a strategy is smart, people tend to focus on whether there is a high probability of winning, but they ignore the more important question of how much they might win or lose. Such uneven results are described by the term “skewness”: for example, a large chance of a small win paired with a small chance of a large loss.

People often value a strategy that has frequent small wins, even if a rare large loss might wipe out all those gains. Their emotional response to winning makes them focus on the frequency of the wins and the infrequency of the losses instead of striving to optimize the overall result. However, frequent, typical events do not matter as much as infrequent, rare events do because the consequences of rare events can be much more substantial. 

For example, a trader might be happy if her portfolio gains 1,000 dollars for eleven months in a row but then loses 15,000 dollars in the last month. She might focus not on the end-of-year result (a loss of 4,000 dollars) but on the eleven months when she gained. 
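Laid out explicitly, the arithmetic in that example looks like this (a sketch for illustration, using the numbers above):

  monthly_results = [1_000] * 11 + [-15_000]  # eleven small wins, one large loss

  winning_months = sum(1 for r in monthly_results if r > 0)
  net_result = sum(monthly_results)

  print(f"months with a gain: {winning_months} of {len(monthly_results)}")  # 11 of 12
  print(f"net result for the year: {net_result}")                           # -4,000

The trader wins 11 months out of 12, yet the year as a whole is a loss; the frequency of winning says nothing about the size of the eventual loss.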

Someone who makes her money in large but infrequent bursts, by anticipating rare events (usually negative ones), often ends up wealthier than someone who makes her money slowly and steadily but ignores rare events. These types of traders are called “crisis hunters.” They may lose money frequently, but only in small amounts. When they make money, it’s infrequent but in large doses. This strategy can be seen outside of trading as well: Television production companies and book publishers produce a high volume of work that’s only expected to be slightly profitable or even slightly unprofitable, while holding out for the occasional blockbuster. 

In our everyday lives, outside of business, we can often judge highly skewed risks better because they are less abstract. For example, imagine you are packing for a week-long trip to the mountains, where you’re told the weather will be about 65 degrees but might swing 30 degrees in either direction. Here, you’d pack for the variance as much as for the expected temperature: You’d pack both light and heavy clothing, anticipating the risk in either direction.

You should approach investing with the same mindset: Anticipate the most likely outcomes, but also plan for deviations.

3. We See Meaning Where There Is None

The third reason we fail to anticipate randomness is that we tend to automatically ignore it and search for patterns and meaning instead, even where there is none. Our brains are wired to look for meaning; it’s an evolutionary adaptation to aid in our survival, but it misleads us when we see meaning in randomness and then use that perceived meaning to guide our decisions. In particular, we tend to read meaning into:

  • Empty but intelligent-sounding verbiage
  • Random events
  • Random noise

All three mistakes can lead us to make poor decisions and are further explored below.

We See Meaning in Empty but Intelligent-Sounding Verbiage

Fancy phrasing can make people think a piece of communication is significant when it is in fact nonsense. Feed a sentence-generating computer program phrases like “shareholder value,” “position in the market,” and “committed to customer satisfaction,” and you’ll get a paragraph that sounds like it has meaning but doesn’t. This kind of language can frequently be found in corporate and investment fund communications.
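A toy version of such a program is easy to write. The sketch below (a hypothetical illustration, not the generator Taleb describes) strings buzz-phrases together at random into sentences that sound purposeful but say nothing:

  import random

  subjects = ["Our team", "The firm", "Management", "This initiative"]
  verbs = ["leverages", "is committed to", "maximizes", "drives"]
  objects = ["shareholder value", "our position in the market",
             "customer satisfaction", "best-in-class synergies"]

  def buzz_sentence():
      return f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}."

  print(" ".join(buzz_sentence() for _ in range(4)))

Run it a few times and every paragraph sounds vaguely authoritative, which is exactly the problem.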

When you have experts in a field using industry-specific buzzwords and convoluted sentences, they project an aura of expertise that is often unwarranted. All too often, people buy into this aura, don’t properly question the advisors’ policies, and end up losing money on poor investments.

The experiences of some investors in the late 1990s illustrate this. Economists at the International Monetary Fund (IMF) at that time misunderstood the true risk of default by the Russian government but sounded like they knew what they were talking about. Many emerging-market traders invested deeply in Russian Principal Bonds as a result of advice from these IMF experts and lost hundreds of millions of dollars each.

We See Meaning in Random Events

Our tendency to look for meaning leads us to see meaning in random events, and to find rational explanations for circumstances of luck. For instance, when a trader makes a lot of money, people look for reasons for her success: They’ll often credit intelligence or market savvy. If she then loses money and is forced out of the game, people will again look for reasons: They might point to relaxed work ethics, for example. 

(Interestingly, we usually view our own misfortune through a different lens than we use for others’ misfortune. We credit other people’s failures to a lack of skill; we blame our own failures on bad luck. This is called attribution bias: the faulty thinking we use to explain our own and others’ behaviors.)

Chances are if you observe enough data, you’ll find some correlations that seem relevant even if they aren’t. When we do this, we confuse correlation with causation. Correlation is when two events happen at the same time either through coincidence or because they are both the results of another, unseen cause; causation is when two events happen at the same time because one makes the other happen. 

“Data mining” illustrates this in action. Data mining is the process of looking for patterns in large quantities of data. Although it’s an important tool in industries from insurance to health care, it can be misused to create a sense of meaning in otherwise meaningless data. You can always find some detectable pattern in a random series of events if you look hard enough. 

For example, bestselling books have been published that examine irregularities in the Bible and use them to show how the Bible “predicted” events; these powers of prediction are, of course, helped by the fact that the events have already happened and can be matched to their “predictions” with the benefit of hindsight.

The same can happen with rules of investing: You can take a database of historical stock prices and sift through it, applying various rules until you find one that works for that dataset. Investors often do this hoping to find a “magic” rule that will allow them to predict future price fluctuations. However, the same laws of randomness apply to rules as they do to events: If enough rules are tried against a large enough data set, some correlations will emerge. But a rule that describes what happened in the past will not necessarily predict what will happen in the future. 

For example, a trader might examine what would have happened had she bought stocks closing 2 percent higher than their price the previous week. If that rule doesn’t produce a winning formula, she might re-run the experiment using 1.8 percent as a benchmark. Continuing in this way, she may hit upon a specific number under specific conditions that would have produced a specific result. But applying those parameters to future trades rarely produces the same result. 
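A small simulation shows why this fails. In the sketch below (illustrative only; the rule, thresholds, and data are made up rather than taken from the book), many candidate rules are tested against one stretch of purely random price changes, and the best-looking rule is then applied to a fresh stretch:

  import random

  random.seed(1)

  def random_returns(n):
      # Daily price changes with no pattern at all.
      return [random.gauss(0, 1) for _ in range(n)]

  def rule_profit(returns, threshold):
      # Buy after any day that rises more than `threshold`; hold for one day.
      return sum(returns[i + 1] for i in range(len(returns) - 1)
                 if returns[i] > threshold)

  past, future = random_returns(500), random_returns(500)

  # Try many thresholds and keep the one that "worked" best on past data.
  best = max((t / 10 for t in range(1, 30)),
             key=lambda t: rule_profit(past, t))

  print(f"best threshold on past data: {best:.1f}")
  print(f"profit on past data:   {rule_profit(past, best):.1f}")
  print(f"profit on future data: {rule_profit(future, best):.1f}")
  # The in-sample "edge" rarely carries over to the fresh data.

Because the past data contains no real signal, whichever rule looks best there is simply the luckiest one, and its apparent edge evaporates out of sample.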

In fact, any truly random set of data will, by its very nature, not look random. There will inevitably be some apparent patterns in it; if there were none, the data would actually look manufactured. Consider a painting of a night sky: To look real, the stars must be clustered here and there in ways that might suggest constellations. A sky full of evenly spaced stars, though more definably “meaningless,” will also look recognizably unnatural.

We See Meaning in Random Noise

Likewise, we also see meaning in random noise. Small changes do not warrant explanation; they are likely random, with no discernable causes. However, people get caught up trying to parse them for meaning. For instance, if the stock market moves, there might be any number of reasons—or a combination of reasons. Listening to a pundit try to explain shifts in either direction, especially small ones, is usually a waste of time. 

It would be as if you watched a marathon in which one runner crossed the finish line one second before another. Such a small difference does not warrant examination. It’s unlikely to be due to a meaningful difference in diet or training; more likely, it was a random shift of wind at some point during the previous 26.2 miles.

Further, it can be difficult to determine a single cause for a small event because there may be many possibilities. The dollar can react against the euro, the euro against the yen, the market against interest rates, interest rates against inflation, inflation against OPEC, and on and on. Isolating one cause among all the possible influences, particularly to explain small, frequent shifts in the market, is impossible.


———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Nassim Nicholas Taleb's "Fooled By Randomness" at Shortform.

Here's what you'll find in our full Fooled By Randomness summary :

  • The outsized role luck plays in success
  • How we’re fooled by randomness in many aspects of our lives
  • How we can accommodate randomness in our lives once we’re aware of it

