PDF Summary: Rationality, by Steven Pinker
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of Rationality by Steven Pinker. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Rationality
What does it mean to be rational? Why is being rational important? How can you be a more rational person? In Rationality, Steven Pinker argues that rationality and reason are essential for improving our world and society, but that people often misunderstand them, acting irrationally even when they think they’re not. He examines how you can be more rational and make better decisions by improving your critical thinking skills and by understanding—and thus avoiding—the logical fallacies and cognitive blunders that people often fall victim to.
In our guide, we’ll explore Pinker’s definition of rationality, his argument for why it matters, and his advice on how you can think more rationally about your choices. We’ll also go over the key reasons why humans, despite our ingenuity, often think and behave so irrationally. Along the way, we’ll look at how other experts and thinkers have explored the topic and how they’ve added to the conversation.
(continued)...
The availability heuristic not only favors memorable ideas such as plane crashes but also favors ideas that are easily visualized as they capture our attention more readily.
The availability heuristic also drives us to believe that things we can easily visualize are more probable, because they capture our attention more readily. In one study demonstrating this, some participants were asked to guess the chances of a massive flood happening anywhere in North America, while others were asked to guess the chances of a massive flood happening in California due to an earthquake.
By definition, the chances of a flood occurring in California cannot be greater than the chances of one occurring somewhere in North America, since California is just one part of North America. Yet participants rated the likelihood of a California flood far higher. Researchers posited that they did so because they could more readily picture an earthquake-caused flood in California, being more familiar with that type of event, whereas a vague notion of a flood “anywhere” in North America didn’t leave them with a concrete mental image, and so they underestimated its likelihood.
Post Hoc Probability Fallacies
Another common probability blunder Pinker discusses is the post hoc probability fallacy. This is when, after something statistically unlikely occurs, people believe that because it happened, it was likely to happen. They fail to account for the number of times the event could have occurred but didn’t, and for the fact that, given an enormous number of events, some coincidences are bound to happen.
Post hoc probability fallacies are driven by the human tendency to seek patterns and ascribe meaning to otherwise meaningless or random events. This tendency fuels superstitious beliefs such as astrology, psychic powers, and other irrational beliefs about the world. It’s why, for example, if a tragedy occurs on a Friday the 13th, some will believe it’s because that day is cursed, ignoring the many Friday-the-13th dates that have passed with no tragedy, or the many tragedies that have occurred on dates other than Friday the 13th.
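To see why sheer volume makes coincidences inevitable, here is a quick back-of-the-envelope calculation with invented numbers (not from Pinker's book):

```python
# Chance that a "1-in-10,000" coincidence happens to at least one person
# when a million people each get a single shot at it (illustrative numbers).
p_single = 1 / 10_000
people = 1_000_000

p_at_least_once = 1 - (1 - p_single) ** people
print(f"{p_at_least_once:.6f}")  # effectively 1.0; someone will experience it
```

Whoever it happens to will be tempted to see meaning in it, even though it was statistically guaranteed to happen to somebody.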
(Shortform note: In The Demon-Haunted World, Carl Sagan argues that supernatural and superstitious beliefs—often brought about by post hoc probability fallacies—can cause considerable societal harm because a society that holds minor irrational beliefs is more likely to hold major irrational beliefs. So, even seemingly innocuous beliefs like astrology or believing in guardian angels are potentially harmful because they lead to a more credulous and less critical society.)
Using Bayesian Reasoning to Counter Probability Fallacies
Pinker says that to prevent falling for fallacies like these, we can use Bayesian reasoning, an approach based on a mathematical theorem named after the eighteenth-century thinker Thomas Bayes that describes how to base our judgments of probability on evidence (information showing how often an event actually occurs). We won’t detail the full equations of Bayes’ theorem here, but essentially, it helps you weigh all the relevant probabilities associated with a possible outcome to determine its true likelihood, which can sometimes differ greatly from the likelihood that intuitively seems right.
One common use of this theorem is in determining the probability of a medical diagnosis being correct, which Pinker says is an archetypal example where Bayesian reasoning can aid in accurate assessments of probability. Let’s say you test positive for cancer. Most people (including many medical professionals) might believe that because the test came back positive, there’s an 80-90% chance you have the disease.
However, once other relevant probabilities are taken into account, the true risk turns out to be much lower than that. In this case, if the cancer in question occurs in 1% of the population (this is the base rate: the evidence of how often the event actually occurs), and the test’s false positive rate is 9%, then the true likelihood that you have cancer, after receiving a positive test, is just about 9%.
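Here is a minimal sketch of that calculation. The 1% base rate and 9% false positive rate come from the example above; the 90% sensitivity (how often the test catches a real case) is an assumption added to complete the arithmetic:

```python
# Bayes' rule: P(cancer | positive test)
prevalence = 0.01        # P(cancer): the 1% base rate
false_positive = 0.09    # P(positive | no cancer): the 9% false positive rate
sensitivity = 0.90       # P(positive | cancer): assumed, not stated above

# Overall chance of a positive result, counting true and false positives
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior probability of cancer given a positive result
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(f"{p_cancer_given_positive:.0%}")  # ~9%, not 80-90%
```

The false positives swamp the true positives because healthy people vastly outnumber sick ones, which is exactly the base-rate consideration the intuitive answer ignores.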
In everyday life, we can use Bayesian reasoning without resorting to plugging in numbers by following three general rules:
- Give more credence to things that are more likely to be true. If a child has blue lips on a summer day just after eating a blue popsicle, it’s more likely that the popsicle stained her lips than that she has a rare disease causing the discoloration.
- Give more credence to things if the evidence is rare and associated closely with a particular event. If that child with blue lips has not been eating popsicles but does present with a rash and fever, the probability that the discoloration is caused by a disease increases.
- Give less credence to things when evidence is common and not closely associated with a particular event. If a child does not have discolored lips or any other symptoms of illness, there’s no rational reason to think she has the rare disease, even if some people with that disease have no symptoms.
We Use Bayesian Reasoning Naturally
In Smarter Faster Better, Charles Duhigg also recommends using Bayesian reasoning to assess the likelihood of events. He argues that we often do it naturally by subconsciously detecting patterns in the world, which helps us form assumptions about how likely events are—we unconsciously register, for example, whether the color of a child’s lips is more or less consistent with a pattern of rare disease. These assumptions then guide us when we make predictions.
Duhigg points out, however, that our predictions will only be good if our assumptions are accurate, which is why it’s important to regularly update our assumptions based on new information. In other words, it’s important to continuously apply Bayesian reasoning; otherwise, we may make decisions based on faulty, outdated assumptions.
Correlation Versus Causation
Pinker next turns his attention to problems that people run into when considering causation and correlation, which drive them to make irrational decisions.
A common mistake people make is thinking that events that are correlated (they often happen at the same time) are causing each other, when in fact they might be linked simply by coincidence or by a third factor. This can lead people to make poor decisions—when they think the wrong event causes another, they incorrectly predict the future.
For example, if the stock price of a company always rises in November, a person might think the arrival of November causes the price to rise, and they might then buy stock in October in anticipation of that rise. However, if the true reason behind the price increase is that the stock rises when the company offers a huge sale on their goods, which they happen to always offer in November, then the person buying stock in October might lose money if the company decides not to offer that sale this particular November. If the person had correctly identified the causal link (between the sale and the price rise, instead of between the month and the price rise), they might have purchased their stock at a better time.
Pinker notes that it can be difficult to determine causation, especially when there are multiple events or characteristics to account for. Complicating matters is that, very often, correlation does reflect some sort of causal connection: If two events are commonly linked, they likely share a common source (as in the stock price example above).
How Our Search for Meaning Misleads Us
In Fooled by Randomness, Nassim Nicholas Taleb argues that one reason people confuse correlation with causation is that humans are wired to seek meaning, which drives us to invent explanations when we see patterns of events. This can lead us to mistake for causation correlations that are actually due to random chance or to a third factor that isn’t as easy to detect.
This can be a particularly easy trap to fall into if the events we’re studying are closely related in theme as well as in timing, as this often indicates a common source. For example, eating ice cream and getting sunburned are both related to hot afternoons. However, if you get a sunburn every time you eat ice cream, it’s not the ice cream that’s causing your burn—it’s the common source of both events: the hot sun.
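A short simulation makes the “common source” point concrete. All of the numbers below are invented: hot days drive both ice cream eating and sunburn, and the two end up correlated even though neither causes the other:

```python
import random

random.seed(0)
records = []
for _ in range(10_000):
    hot_day = random.random() < 0.5                       # the hidden common cause
    ate_ice_cream = random.random() < (0.8 if hot_day else 0.2)
    got_sunburned = random.random() < (0.6 if hot_day else 0.1)
    records.append((ate_ice_cream, got_sunburned))

burn_given_ice_cream = [s for ic, s in records if ic]
burn_given_no_ice_cream = [s for ic, s in records if not ic]

# Sunburn looks far more likely on ice cream days, despite no causal link
print(sum(burn_given_ice_cream) / len(burn_given_ice_cream))       # ~0.5
print(sum(burn_given_no_ice_cream) / len(burn_given_no_ice_cream)) # ~0.2
```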
When determining what factors cause other factors and which are merely correlated, Pinker notes that you can do one of two things:
- Run experiments.
- Analyze data.
Experiment to Determine Causation
Pinker recommends running a “natural experiment” to identify which events cause other events. To do so, you’d divide a sample population into two groups, change some characteristic in one group, and see how (or whether) that change affects outcomes for that group. Such experiments are excellent ways to measure precisely how much a factor drives change and to determine which factors are merely correlated with others.
There are limits to these experiments, though. You might fail to account for variables that affect your results (if you’re studying mostly young adults, for example, you might miss how a change would affect a broader population), and there are ethical limits to how much you can change real-world elements. You can’t, for example, force two countries to go to war just so you can examine the effects on food pricing.
Natural, Field, and Lab Experiments
Pinker calls these “natural experiments,” but some psychologists call this type of experiment a “field experiment,” and they reserve the term “natural experiment” for something different.
A field experiment is conducted by changing some aspect of a natural setting, in the way Pinker describes. A “natural” experiment, as defined by other psychologists, doesn’t deliberately manipulate a variable but instead tracks the effects of a change that’s already happening in a natural setting. For example, you might track how an increase to the minimum wage influences retail prices in one state versus another state where the wage didn’t increase. This differs from Pinker’s description of purposefully changing a variable and observing its effects.
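One common way to read that kind of natural experiment is a “difference-in-differences” comparison (our framing, not the summary's), using the state that didn't raise its wage as a baseline. A minimal sketch with hypothetical numbers:

```python
# Hypothetical retail price indices before and after a minimum-wage increase.
# State A raised its wage; State B, which didn't, serves as the comparison.
state_a_before, state_a_after = 100.0, 104.0
state_b_before, state_b_after = 100.0, 101.5

change_a = state_a_after - state_a_before   # 4.0 points
change_b = state_b_after - state_b_before   # 1.5 points (background trend)

# The gap between the two changes estimates the wage increase's effect,
# assuming both states would otherwise have moved in parallel.
print(change_a - change_b)  # 2.5 points attributable to the wage change
```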
Psychologists also name a third type of experiment that Pinker doesn’t discuss—a lab experiment, which is conducted in a controlled environment rather than in a real-world setting like a field experiment or natural experiment.
Each of these three types of experiments gives a researcher different levels of control as well as different levels of accuracy—in a lab experiment, a researcher has more control over the variables but might end up with a less accurate reflection of how a change would affect a group of people in the real world. Natural and field experiments might produce more accurate results, but they give the researcher less control over variables and changes.
Analyze Data to Determine Causation
When you can’t run an experiment, you can look for patterns in existing sets of data that might shed light on how one factor affects another. Pinker mentions two things in particular to examine when analyzing data for causation: chronology and nuisance variables.
Chronology: You can often judge which factors affect others, rather than the reverse, by noting which factors occurred first. For example, in economic data, if prices across multiple countries rise before wages rise, but wages never rise before prices rise, that indicates that price increases drive wage increases and not the reverse.
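A crude way to check that kind of chronology in data is a lead-lag count: does a rise in one series tend to be followed by a rise in the other the next period, and not vice versa? The series below are made up purely for illustration:

```python
# Toy yearly index series (invented): do price rises precede wage rises?
prices = [100, 104, 108, 108, 108, 113, 113, 113]
wages  = [100, 100, 103, 106, 106, 106, 110, 110]

def followed_by_rise(leader, follower):
    """Count years in which a rise in `leader` is followed by a rise in `follower` the next year."""
    hits = 0
    for t in range(1, len(leader) - 1):
        if leader[t] > leader[t - 1] and follower[t + 1] > follower[t]:
            hits += 1
    return hits

print(followed_by_rise(prices, wages))  # 3: every price rise is followed by a wage rise
print(followed_by_rise(wages, prices))  # 0: wage rises are never followed by price rises
```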
(Shortform note: Chronology can help identify causation but isn’t foolproof. In Fooled by Randomness, Taleb notes the danger of analyzing data for patterns like chronology, arguing that given a large data set, you can always find some patterns if you look hard enough. He calls this “data mining,” and writes that we can ascribe meaning to such coincidences even though their true cause is chance. He cites, as an example, books that claim to show that the Bible predicted events—if you search among all events that have happened since the publication of the Bible, you’re sure to find ones that have some discernible connection to Bible verses, but the chronology of the Bible preceding these events doesn’t prove it predicted them.)
Nuisance variables: You can control for factors that are associated with events but don’t cause them (nuisance variables) by matching such factors in different contexts and looking at how other data changes within those matched sets. For example, if you’re examining how alcohol consumption affects longevity, you might want to consider how exercise might skew your results. You could examine two groups of people, one that drinks and one that doesn’t, and match up individuals from both groups who also exercise. Any differences in longevity would then not be due to differences in exercise but would be more closely related to alcohol consumption.
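Here is a minimal sketch of that matching idea, with invented records. Each drinker is paired with a non-drinker who has the same exercise habits, so exercise can't explain whatever longevity gap remains:

```python
# Invented records: (drinks_alcohol, exercises, lifespan_in_years)
people = [
    (True,  True,  81), (False, True,  84),
    (True,  True,  79), (False, True,  83),
    (True,  False, 74), (False, False, 77),
    (True,  False, 72), (False, False, 75),
]

drinkers = [p for p in people if p[0]]
non_drinkers = [p for p in people if not p[0]]

# Pair each drinker with a non-drinker who matches on the nuisance variable
# (exercise), then compare lifespans only within those matched pairs.
gaps = []
for drinker in drinkers:
    match = next(n for n in non_drinkers if n[1] == drinker[1])
    non_drinkers.remove(match)            # use each match only once
    gaps.append(match[2] - drinker[2])

print(sum(gaps) / len(gaps))  # average longevity gap with exercise held constant
```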
(Shortform note: Nuisance variables can create what Daniel Kahneman, Olivier Sibony, and Cass Sunstein call “noise.” In Noise, they define the phenomenon as unwanted variations in events that can impair our judgments—for example, stock prices that jump higher and lower in small amounts and encourage people to buy and sell repeatedly, and that obscure the longer-term trajectory of the price. Nuisance variables, as tangential factors that introduce variation into events, can create such noise and cause people to miss important data they’d be better off paying attention to.)
Rationality and Game Theory
Thus far, we’ve examined how people make rational decisions at an individual level. We’ll now look at how people make decisions as part of a group. This brings us to game theory, which examines how rationality is affected when the needs of an individual are pitted against the needs of others.
We’ll first look at how game theory shows that sometimes, acting irrationally can be the most rational choice, and we’ll then examine how people can be convinced to make rational choices when they’ll only see benefits from those choices if everyone else chooses rationally as well.
(Shortform note: In The Undercover Economist, Tim Harford defines a game as an activity in which predicting another’s actions affects your decisions. With this definition, many situations in our lives can be considered games, like driving, where how you drive depends on how others are driving.)
The Zero-Sum Game
Pinker first examines how game theory shows that sometimes, a rational person must make choices that are, on their face, irrational, such as when opposing another person in a competition. This happens in a zero-sum game—a match-up that produces one winner and one loser (so that the “positive” win and “negative” loss add up to a sum of “zero”). In such a contest, unpredictability has an advantage, as it prevents the other person from preparing a response.
This is why, for example, a tennis player will try to serve the ball unpredictably, even if, say, her strongest serve is to the right side of the court. If she acts “rationally” and always serves her strongest serve (to the right), her opponent will predict it and prepare to meet it. Thus, her most rational move is to act randomly and irrationally.
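A toy simulation (with made-up win rates, not anything from Pinker) shows what predictability costs the server. Here the returner simply camps on the server's favorite right side; serving randomly denies the returner that edge:

```python
import random

random.seed(1)

WIN_IF_ANTICIPATED = 0.40    # server's chance when the returner guessed correctly
WIN_IF_SURPRISED   = 0.75    # server's chance when the returner guessed wrong

def server_win_rate(pick_serve, points=100_000):
    wins = 0
    for _ in range(points):
        serve = pick_serve()
        returner_guess = "right"          # returner camps on the favorite side
        p_win = WIN_IF_ANTICIPATED if serve == returner_guess else WIN_IF_SURPRISED
        wins += random.random() < p_win
    return wins / points

print(server_win_rate(lambda: "right"))                           # predictable: ~0.40
print(server_win_rate(lambda: random.choice(["right", "left"])))  # mixed: ~0.57
```

In a fuller model the returner would adapt too, and the server's stronger right serve would earn more than half the mix; the point here is only that a fully predictable strategy locks the server into her worst-case payoff.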
(Shortform note: Some posit that the advantages of unpredictability are so important to human interactions that they’ve even influenced our evolution—and it might explain why some people are left-handed. In the same way that an irrational choice brings an advantage of surprise in a zero-sum game, left-handedness brings an advantage of surprise in hand-to-hand combat, favoring the person throwing the unexpected punch. Scientists theorize that the reason left-handedness is not more common, given this advantage, is that were it more common, it would no longer have the advantage of surprise. Thus it might be a somewhat self-limiting evolutionary adaptation.)
The Volunteer’s Dilemma
A volunteer’s dilemma is another situation in which a person’s best choice might be an irrational one. In this dilemma, one person must do something dangerous to help the group as a whole. If they’re successful, they’ll save everyone (including themselves), but if they fail, everyone will suffer—and they’ll suffer most of all.
For example, let’s say you’re marooned with a group of friends on an island and to get off the island, one of you must swim across shark-infested waters to get help. If you succeed, everyone will be saved, but if you fail, the rescue boats won’t know where the group is—and you’ll be eaten by sharks. The question becomes, who will volunteer for such a task?
A volunteer’s dilemma is similar to a zero-sum game in that the incentives of the individuals are in conflict—no one wants to be the one entering the water. However, the end result of this dilemma is not zero-sum: If the volunteer succeeds, everyone wins. If the volunteer fails, everyone loses.
In such a situation, everyone’s individual rational choice is to let someone else volunteer and put themselves in danger. However, if no one volunteers, everyone loses. Thus, in order to ultimately choose rationally so that everyone has a chance of survival, someone will have to irrationally put themselves in danger.
(Shortform note: Psychologists point to the similarities between the volunteer’s dilemma and the bystander effect, in which people are less likely to help someone when other people are present. In both the volunteer’s dilemma and the bystander effect, people are less likely to act because of a “diffusion of responsibility”: When there are more people, each individual assumes someone else will take on the responsibility to carry out the necessary action. In large groups, the volunteer’s dilemma can be especially dangerous because it can lead to no one stepping up. If you were on the hypothetical island with a hundred other people, you might ask, “Why should I be the one to put my life on the line?”)
The Tragedy of the Commons
The tragedy of the commons is a dynamic that applies to situations involving shared resources, where everyone in a group has an individual incentive to take as much of that resource for themselves as possible and contribute as little as possible, which ultimately harms everyone. For example, each fisher in a village will be incentivized to catch as many fish as they can, so that others don’t take them first. Unfortunately, if everyone is fishing aggressively, the stock is soon depleted and then no one has enough.
The same dynamic shows up in any situation where a public good is shared, be it roads, schools, or a military force—everyone benefits from using these things, but each individual benefits more if others pay for them. This dynamic also affects how the world’s environmental crisis plays out, as each individual or country is incentivized to consume energy and resources as they wish, hoping that others will curtail their own use. However, those others have the same incentives to use as much as they want to, too.
Pinker writes that the most effective way to manage this dilemma is to remove the choice from individuals and instead have an outsider regulate people’s decisions—specifically, a government or organization that oversees how much each individual can take from the shared resource and establishes rules or contracts that individuals must abide by. When “free riders” are punished for taking too much or not contributing enough (through fines, for example, for failing to pay taxes), everyone is more likely to refrain from the self-benefiting behavior that can drain a public resource because they can trust that others are also refraining.
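A toy simulation of the fishing-village example (all numbers invented) makes both halves of Pinker's point concrete: unregulated grabbing collapses the stock, while an enforced per-fisher quota keeps it yielding indefinitely:

```python
def simulate(seasons=20, fishers=10, stock=1000.0, capacity=1200.0,
             growth=0.5, quota=None):
    """Each season every fisher takes a catch, then the remaining stock regrows."""
    for _ in range(seasons):
        desired = 0.08 * stock                       # grab-what-you-can behavior
        catch = desired if quota is None else min(desired, quota)
        stock = max(stock - catch * fishers, 0.0)
        stock += growth * stock * (1 - stock / capacity)   # regrowth, capped by habitat
    return round(stock)

print(simulate())           # unregulated: stock collapses toward zero
print(simulate(quota=10))   # enforced quota: stock stays near its starting level
```

The quota only works if every fisher can trust that the others are bound by it too, which is why Pinker locates the solution in an outside regulator rather than in individual restraint.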
Scarcity and the Tragedy of the Commons
Ironically, the tragedy of the commons can come about either because people are hoping for the best from others or because they assume the worst of others. In many of the examples Pinker mentions, like unwillingness to pay for schools, the military, and so on, or in the way both individuals and countries fail to curb their environmental waste, people are hoping for the best from others: They’re hoping that others will step up and do what needs to be done to prevent future problems. But there’s another instinct that can drive the tragedy of the commons, one that comes from assuming the worst of others: a fear of scarcity, which can motivate people to grab what they can, while they can. This is the instinct underpinning the situation of fishers taking more than they should.
An example of how this can play out was demonstrated by the efforts of park management at the Petrified Forest in Arizona to discourage people from taking pieces of petrified wood as souvenirs. In the mid-20th century, the park posted signs warning that if theft of rock samples continued at current rates, the forest would disappear within decades. Unfortunately, the signs had the opposite of their intended effect: They led people to believe that the rocks were scarce, making them want to grab some before they were all gone—especially since it seemed lots of other people were doing the same. Theft rates increased rather than decreased, as people who fear a resource will be depleted by others are more likely to deplete it themselves.
In either case, whether people are hoping for the best or fearing the worst in other people, the most effective solution often is, as Pinker suggests, regulation by an outside party, as that’s the only way to establish the trust that will ameliorate either extreme.
Why Humans Are Irrational
So, Pinker asks, given that most people agree on the importance of rationality and its basic characteristics, why do people often act irrationally? Why do people hold irrational beliefs—such as belief in paranormal phenomena or conspiracy theories?
Pinker notes that social media has allowed people to express their irrational beliefs loudly, which makes it seem that irrational thought is a recent and growing phenomenon; he argues, however, that people have held these kinds of beliefs for millennia. Aside from all the fallacies and biases we just covered, Pinker discusses two additional causes of human irrationality in the modern world: motivated reasoning and myside thinking.
(Shortform note: The examples of irrational beliefs that Pinker mentions are, indeed, only recent examples of a phenomenon dating back to ancient times. For thousands of years, for example, people have believed in shadowy groups controlling the world, right back to the ancient Egyptians complaining of Grecian priests who held secret knowledge and the 18th-century belief in the Order of the Illuminati. Still, despite scientific studies showing that conspiracy beliefs have not become more common in recent years, many people feel, as Pinker points out, that they have increased in frequency—an example, ironically, of more irrational thinking, in this case, based on the availability heuristic: What people notice today, they assume is more likely.)
Motivated Reasoning
Pinker writes that rationality, by itself, is unmotivated. That is, a rational line of thought doesn’t desire to end in a certain place, but instead follows its logic to wherever its premises and conclusions lead it. However, sometimes a rational line of thought points to an end that the thinker doesn’t desire, like when it’s clear that the fair thing to do in a situation requires the reasoner to do something unpleasant. When this happens, a person might fall back on motivated reasoning—the use of faulty logic to arrive at a desired conclusion.
We see people engaging in motivated reasoning when, for example, they justify purchasing an extravagant car by saying they like its fuel efficiency. In such a case, their true motivation is that they simply want the car, and they find a reason to justify that desire. We also see motivated reasoning when people choose to ignore certain facts that don’t support their worldview—like when a favored politician does something wrong. And, it’s behind many conspiracy theories, such as when someone who doesn’t want to believe in climate change dismisses scientific data as manipulated despite a lack of evidence to that effect.
Pinker says that people engage in motivated reasoning so frequently, it suggests that our instinct to win arguments developed in tandem with our ability to reason. We’ve evolved not only to think logically but equally to convince others of our logic, even when it’s flawed. According to this theory, the evolutionary advantage of this instinct is that it leads to stronger collective conclusions—people are eager to pass off weak arguments of their own but are quick to point out flaws in the arguments of others, and in doing so, the group as a whole ends up at the right answer. He points to studies showing that small groups are better able to arrive at a correct answer than individuals are—as long as one group member can spot the right argument, the others are quickly convinced.
(Shortform note: Some psychologists take Pinker’s theory even further, suggesting that arguments—attempts to convince others of your point of view, even when flawed—aren’t just a byproduct of reason, but in fact, are the purpose of reason itself. In this view, reason didn’t evolve as a way to make better decisions but instead, to allow people to evaluate the arguments of others. This allowed for communication to be reliable, which in turn, made possible the development of human societies. This theory suggests, then, that when people bandy around some of the irrational ideas Pinker refers to, such as support for flawed politicians and conspiracy theories, eventually, more rational takes will win out.)
Myside Thinking
Myside thinking is the tendency to irrationally favor information or conclusions that support your group. It’s largely driven by our desire to be part of a collective, and in this way, is rational. If your goal is to be respected and valued by your peers, it makes sense to express opinions and even see things in a way that earns this respect. But, the result is that you’ll likely think and behave irrationally, overlooking logical flaws of arguments that support your own side while fixating on flaws of the other side.
Pinker writes that virtually everyone is susceptible to myside thinking no matter their political affiliation, race, class, gender, education level, or awareness of cognitive biases and fallacies, and that myside thinking is driving the heated political climate of recent years. He points to studies showing that liberals and conservatives will accept or reject a conclusion or piece of scientific evidence based on whether it supports their preexisting notions, not on whether it’s well argued or supported. Additionally, if an invalid logical statement supports a liberal idea, a conservative will be more likely to spot the fallacy, and vice versa.
(Shortform note: Though it’s important to be aware of your myside thinking and to try to minimize it to avoid irrational thinking, scientists suggest that tribal bias is human nature and is thus impossible to eradicate entirely. Humans are tribal because we evolved as members of small groups in intense competition with each other—to survive, group loyalty was a must. Because this group dynamic has been so important to human survival, we’ve evolved to favor people who support our group’s thinking and behaviors, and thus people who align with a group’s values often have a higher social status within that group. This underpins the instinct to—sometimes blindly—support or parrot a group’s thinking.)