The Signal and the Noise by Nate Silver: Book Overview

This article is an excerpt from the Shortform book guide to "The Signal and the Noise" by Nate Silver. Shortform has the world's best summaries and analyses of books you should be reading.


Why do most predictions fail? How can we make better predictions that save people from catastrophic events?

In The Signal and the Noise, Nate Silver argues that our predictions falter because of mental mistakes such as incorrect assumptions, overconfidence, and warped incentives. However, he also suggests that we can mitigate these thinking errors with the help of a method called Bayesian inference.
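To give a flavor of that method (the scenario and numbers below are our own illustration, not an example from the book), Bayesian inference means starting from a prior belief and revising it as new evidence arrives:

```python
# A minimal sketch of Bayesian updating (our illustration with made-up
# numbers, not an example from the book). Bayes' rule:
#   P(H|E) = P(E|H) * P(H) / P(E)

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior probability of hypothesis H after evidence E."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothesis: it will rain today. Prior belief: 30%.
# New evidence: the sky is overcast. Assume overcast skies precede 80% of
# rainy days but only 40% of dry days.
posterior = bayes_update(prior=0.30,
                         p_evidence_given_h=0.80,
                         p_evidence_given_not_h=0.40)
print(f"Updated probability of rain: {posterior:.0%}")  # ~46%
```

Repeating this update as each new piece of evidence comes in is, in essence, the discipline Silver recommends.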

Read below for a brief overview of The Signal and the Noise.

The Signal and the Noise by Nate Silver

The early 21st century has already seen numerous catastrophic failures of prediction: From terrorist attacks to financial crises to natural disasters to political upheaval, we routinely seem unable to predict the events that change the world. In The Signal and the Noise, Nate Silver sets out to explain why our predictions typically fail and how we can do better. 

Silver is the creator of FiveThirtyEight, a political analysis website known for its track record of accurate forecasts of US elections. He’s also worked as a professional poker player and as a baseball analyst; in the latter role, he developed PECOTA, a well-respected system for predicting players’ future performances.

In The Signal and the Noise, Silver explores a wide range of fields that depend on predictions, from politics, poker, and baseball to economics, meteorology, and military intelligence. Though the book focuses mostly on these sorts of high-stakes examples, Silver suggests that its lessons are valuable for all readers; after all, decisions as mundane as what to wear on a given day are, in essence, predictions (for example, about the weather).

The book was originally published in 2012, though our guide is based on the updated 2020 version, which includes a new preface discussing the 2016 US presidential election and the early stages of the Covid-19 pandemic.

Part 1: Why Prediction Is Hard

Before we discuss our prediction mistakes and Silver’s advice for how to avoid them, it’s worth acknowledging that even under the best circumstances, prediction is an inherently challenging endeavor. In this section, we’ll explore Silver’s analysis of how insufficient information, system complexity, and the tendency of small errors to compound all limit the accuracy of our predictions.

We Often Lack Sufficient Information

To make good predictions, Silver says, you need information about the phenomenon you’re trying to predict as well as a good understanding of how that phenomenon works. For example, today’s meteorologists have abundant information about atmospheric conditions as well as a good understanding of the physical laws by which those conditions develop. Accordingly, Silver says, they can make reasonably accurate predictions about the weather. 

The problem is that in many situations, we don’t have much useful data to go on. Silver illustrates this point by comparing meteorology to seismology, explaining that seismologists have no way to predict the timing, location, or strength of specific earthquakes. That’s because earthquakes happen rarely and on a geological timescale, which makes it hard to discern any patterns that might be at play. And whereas meteorologists can directly observe atmospheric conditions, there’s no way to measure factors such as the pressures currently at work on a given fault line, or to foresee future tectonic activity.

Many Systems Are Too Complex to Fully Understand

Silver also argues that even when we have rich data and a good understanding of the underlying principles, many systems still defy accurate prediction because of their inherent complexity. He discusses economic forecasts, which he says are notoriously unreliable, to lay out several ways complex systems hinder forecasting:

  • They blur cause and effect, making it hard to determine root causes. For example, unemployment reduces consumer demand, which causes companies to scale down production and cut their workforces, which in turn raises unemployment.
  • They contain feedback loops that complicate their behavior. For instance, employers, consumers, and politicians all make decisions based on economic forecasts, which means that the forecasts themselves can change the outcomes they’re meant to predict (a toy simulation of this loop follows this list).
  • Their rules and current status are unclear. Silver points out that economists don’t agree on which indicators might predict recessions; moreover, they often don’t realize when the economy is already in a recession, recognizing it only after the fact.
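To make the feedback-loop point concrete, here’s a toy simulation (our own construction with assumed numbers, not a model from the book) in which a pessimistic published forecast depresses spending and thereby drags down the very growth it was trying to predict:

```python
# A toy model (our own construction, not from the book) of a forecast
# feedback loop: the published forecast changes behavior, which changes
# the very outcome being forecast.

TRUE_GROWTH = 2.0  # growth (%) the economy would post absent any forecast effect
REACTION = 0.5     # assumed strength of the public's reaction to forecasts

forecast = -1.0    # a pessimistic initial forecast: a mild contraction
for quarter in range(1, 7):
    # Gloomy forecasts cut spending, dragging actual growth below its baseline.
    actual = TRUE_GROWTH + REACTION * (forecast - TRUE_GROWTH)
    print(f"Q{quarter}: forecast {forecast:+.2f}% -> actual {actual:+.2f}%")
    forecast = actual  # the forecaster simply projects this quarter forward
```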

Small Errors Compound

Finally, Silver explains that even when you have good data and you’re not dealing with an inherently complex system like the economy, your predictions are still limited by the tendency of minuscule mistakes in the predictive process to compound into large errors over time. Silver argues that this principle of compounding error is at work even in fields with relatively accurate predictions, such as meteorology. Because each day’s weather depends on the previous day’s conditions, the smallest inaccuracy in the initial data you plug into a simulation will add up to big discrepancies over time. That’s why, Silver says, weather forecasts are generally accurate a few days out but become increasingly unreliable the further into the future they try to predict.
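A standard way to see compounding error in action (our illustration, not Silver’s) is the logistic map, a simple equation often used as a stand-in for chaotic systems like the weather. Two starting values that differ by one part in a million end up bearing no resemblance to each other within a few dozen steps:

```python
# An illustration (ours, not Silver's) of compounding error. The logistic map
# is a textbook chaotic system: two starting values differing by one part in
# a million end up nowhere near each other after a few dozen steps.

r = 3.9                    # parameter value in the chaotic regime
x, y = 0.400000, 0.400001  # initial conditions differing by only 1e-6

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: |x - y| = {abs(x - y):.6f}")
```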

Part 2: Why Our Predictions Are Worse Than They Could Be

So far, we’ve discussed challenges that are inherent to prediction—but according to Silver, these challenges don’t tell the whole story. Instead, he argues, we exacerbate these challenges through a series of mental errors that make our predictions even less accurate. In this section, we’ll examine these mental errors, which include making faulty assumptions, being overconfident, trusting data and technology too readily, seeing what we want to see, and following the wrong incentives.

We Make Faulty Assumptions

As we’ve seen, our predictions tend to go awry when we don’t have enough information or a clear enough understanding of how to interpret our information. This problem gets even worse, Silver says, when we assume that we know more than we actually do. He argues that we seldom recognize when we’re dealing with the unknown because our brains tend to make faulty assumptions based on analogies to what we do know.

To illustrate the danger of assumptions, Silver says that the 2008 financial crisis resulted in part from two flawed assumptions made by ratings agencies, which gave their highest ratings to collateralized debt obligations (CDOs), financial products whose value depended on mortgages not defaulting:

  • The first assumption was that these complicated new products were analogous to previous ones the agencies had rated. In fact, Silver suggests, the new products bore little resemblance to previous ones in terms of risk.
  • The second assumption was that CDOs carried low risk because there was little chance of widespread mortgage defaults—even though these products were backed by poor-quality mortgages issued during a housing bubble.

Silver argues that these assumptions exacerbated a bad situation because they created the illusion that these products were safe investments when, in fact, no one really understood their risk.
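A little arithmetic shows why the second assumption mattered so much. In this sketch (our simplified, made-up numbers, not the agencies’ actual models), a security fails only if all five of its underlying mortgages default:

```python
# A minimal sketch (made-up numbers, not the agencies' actual models) of why
# the correlation assumption mattered. Suppose a security fails only if all
# five of its underlying mortgages default, each with a 5% default chance.

p_default = 0.05

# Rosy assumption: defaults are independent, so joint failure is vanishingly rare.
p_independent = p_default ** 5
print(f"Independent defaults: ~1 in {1 / p_independent:,.0f}")

# Housing-crash scenario: defaults are perfectly correlated, so the security
# fails whenever any one borrower would have defaulted.
p_correlated = p_default
print(f"Correlated defaults:   1 in {1 / p_correlated:.0f}")
```

Under independence the security looks essentially risk-free (about one in 3.2 million); if a housing crash makes defaults move together, it fails one time in 20. The same product, under two different assumptions, differs in apparent risk by five orders of magnitude.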

We’re Overconfident

According to Silver, the same faulty assumptions that make our predictions less accurate also make us too confident in how good these predictions are. One dangerous consequence of this overconfidence is that by overestimating our certainty, we underestimate our risk. Silver argues that we can easily make grievous mistakes when we think we know the odds but actually don’t. You probably wouldn’t bet anything on a hand of cards if you didn’t know the rules of the game—you’d realize that your complete lack of certainty would make any bet too risky. But if you misunderstood the rules of the game (say you mistakenly believed that three of a kind is the strongest hand in poker), you might make an extremely risky bet while thinking it’s safe.
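A quick expected-value calculation (our numbers, extending the poker example above) shows how misjudged odds can make a reckless bet look safe:

```python
# A small worked example (ours, not from the book) of how misjudged odds turn
# a reckless bet into one that merely looks safe. Expected value is
# p(win) * amount won - p(lose) * amount lost.

def expected_value(p_win, win_amount, lose_amount):
    """Average profit per bet, given a win probability and the stakes."""
    return p_win * win_amount - (1 - p_win) * lose_amount

BET = 100  # wager $100 to win $100

# Believing three of a kind is the strongest hand, you put your win
# probability at 95%, so the bet looks comfortably profitable.
print(f"EV under believed odds: ${expected_value(0.95, BET, BET):+.0f}")

# In reality, straights, flushes, and better all beat you; suppose you
# actually win only 30% of the time.
print(f"EV under actual odds:   ${expected_value(0.30, BET, BET):+.0f}")
```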

Silver further argues that it’s easy to become overconfident if you assume a prediction is accurate just because it’s precise. With information more readily available than ever before and with computers to help us run detailed calculations and simulations, we can produce extremely detailed estimates that don’t necessarily bear any relation to reality, but whose numerical sophistication might fool forecasters and their audiences into thinking they’re valid. Silver argues that this happened in the 2008 financial crisis, when financial agencies presented calculations whose multiple decimal places obscured the fundamental unsoundness of their predictive methodologies. 
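The precision-versus-accuracy distinction is easy to demonstrate with a quick sketch of our own (all numbers made up). A flawed model can emit tightly clustered, many-decimal-place estimates that are all far from the truth, while a cruder model scatters around the right answer:

```python
# A quick demonstration (ours, not the book's) that precision is not accuracy.
# A biased model yields tightly clustered estimates that are all wrong; an
# unbiased but noisy model scatters around the truth.

import random

random.seed(0)
TRUTH = 0.28  # the real probability we're trying to estimate

precise_but_wrong = [random.gauss(0.02, 0.001) for _ in range(5)]
noisy_but_accurate = [random.gauss(TRUTH, 0.05) for _ in range(5)]

print("Precise, inaccurate:", [f"{v:.4f}" for v in precise_but_wrong])
print("Noisy, accurate:    ", [f"{v:.4f}" for v in noisy_but_accurate])
```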

We Trust Too Much in Data and Technology

As noted earlier, one of the challenges of making good predictions is a lack of information. By extension, the more information we have, the better our predictions should be, at least in theory. By that reasoning, today’s technology should be a boon to predictors—we have more data than ever before, and thanks to computers, we can process that data in ways that would have been impractical or impossible in the past. However, Silver argues that data and computers both present their own unique problems that can hinder predictions as much as they help them.

For one thing, Silver says, having more data doesn’t inherently improve our predictions. He argues that as the total amount of available data has increased, the amount of signal (the useful data) hasn’t. In other words, the proliferation of data means there’s more noise (irrelevant or misleading data) to wade through to find the signal. At worst, this rising ratio of noise to signal can lead to convincingly precise yet faulty predictions when forecasters inadvertently build theories that fit the noise rather than the signal.
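This failure mode, often called overfitting, is easy to reproduce with a minimal sketch of our own (not an example from the book). A flexible model fits the noise in its training data and then predicts worse than a simple one on fresh data:

```python
# A minimal sketch (ours, not from the book) of fitting noise instead of
# signal. A degree-9 polynomial matches its 10 training points almost exactly,
# noise included, and then predicts worse than a straight line on fresh data.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.2, size=x.size)  # true signal: a straight line

line = np.polynomial.Polynomial.fit(x, y, deg=1)    # simple model
wiggly = np.polynomial.Polynomial.fit(x, y, deg=9)  # flexible model

x_new = np.linspace(0, 1, 100)                      # fresh data, same process
y_new = 2 * x_new + rng.normal(0, 0.2, size=x_new.size)

for name, model in [("degree 1", line), ("degree 9", wiggly)]:
    mse = np.mean((model(x_new) - y_new) ** 2)
    print(f"{name}: out-of-sample mean squared error = {mse:.3f}")
```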

Likewise, Silver warns that computers can lead us to baseless conclusions when we overestimate their capabilities or forget their limitations. He gives the example of the 1997 chess match between grandmaster Garry Kasparov and IBM’s supercomputer Deep Blue. Late in one of the games, Deep Blue made a move that seemed to have little purpose, and afterward, Kasparov became convinced that the computer must have had far more processing and creative power than he’d thought; after all, he assumed, it must have had a good reason for the move. But in fact, Silver says, IBM eventually revealed that the move resulted from a bug: Deep Blue got stuck, so it picked a move at random.

———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Nate Silver's "The Signal and the Noise" at Shortform.

Here's what you'll find in our full The Signal and the Noise summary:

  • Why humans are bad at making predictions
  • How to overcome the mental mistakes that lead to incorrect assumptions
  • How to use the Bayesian inference method to improve forecasts
