What are the principles of Nudge economics? What is the connection between Nudge and economics?
Humans do not always behave in their own best interest. Nudge economics principles explain what affects their choices and how to improve decisions.
Read on for the principles of Nudge economics, examples of nudges in action, and why nudges are necessary.
Nudges: Economic Rationality vs. Actual Human Behavior
One of the most important insights of Nudge economics is that self-interested human beings are actually terrible at doing what’s best for themselves. That is, despite having the intellectual capacity to write symphonies or design aircraft or edit genes, we humans consistently and predictably make bad choices in our everyday lives.
A classic example of our poor decision-making is diet. More than 60% of Americans are either overweight or obese, despite the well-known health risks of excess weight. If human beings reliably acted in their self-interest, they would always—or at least most of the time—choose healthy foods over unhealthy ones.
The fact that human beings don’t always make the right choice for themselves contradicts a central assumption underlying traditional economics: that human beings are always rational and always choose what’s in their self-interest.
Traditional economics gets us wrong because it conceptualizes us as “Econs”—fantastical beings with incredible powers of perception and self-awareness. Behavioral economics, by contrast, conceptualizes people as “Humans”—often irrational beings prone to mistakes.
The reasons for humans’ fallibility are legion, but the dominant factor is simply the way we think. We favor gut feeling over reflection. We rely on rules of thumb rather than research. We allow the way a question is phrased to sway us. Unfortunately, these deeply ingrained ways of dealing with the world tend to lead us astray. Nudge economics recognizes this and offers solutions.
Automatic vs. Reflective Thinking
Many psychologists and neuroscientists have begun to theorize about how the brain works by distinguishing between intuitive, instinctual thinking and deliberate, rational thinking: the Automatic System and the Reflective System, respectively. The interplay between these two systems also helps explain the need for Nudge economics.
The Automatic System comprises the almost instantaneous actions or reactions that, in normal conversation, we might call “unthinking.” This system is engaged when we flinch at motion near our face or adopt a silly voice to address a pet or child; it also includes what we mean when we say we have a “gut feeling” about something. (It also happens to be the part of our brain we share with our pets.)
The Reflective System, oppositely, is what we engage when we apply our brains to something consciously, for example, a nonobvious math problem or a career decision. When we say we’re “mulling it over” or “considering all the options,” we’re using the Reflective System.
| Automatic System | Reflective System |
|---|---|
| Uncontrolled | Controlled |
| Effortless | Effortful |
| Associative | Deductive |
| Fast | Slow |
| Unconscious | Self-aware |
| Skilled | Rule-following |
Although the Automatic System is essential in certain situations—for example, when we grasp the railing if we trip on the stairs—it can get us into trouble when a situation calls for slow, conscious thought (for example, when deciding which health insurance plan to choose). A key finding of the behavioral economics literature is that humans far too often think automatically when they should be thinking reflectively.
Heuristics and Nudge Economics Examples
Heuristics—more commonly known as “rules of thumb”—allow us to make judgments when we aren’t sure of the right answer. In many cases, they’re helpful—for example, when we estimate distances based on landmarks or seek a rental apartment that costs no more than a third of our monthly earnings. But, as discovered by psychologists Amos Tversky and Daniel Kahneman, our tendency to rely on rules of thumb also leads to systematic biases that skew our judgments. (Shortform note: Read our summary of Kahneman’s Thinking, Fast and Slow here.) Tversky and Kahneman identified three common heuristics and the misapprehensions each entails. The Nudge economics examples below show how heuristics lead to poor decisions in the absence of nudges.
Anchoring and Adjustment
When we “anchor and adjust” in making a judgment, we take a fact we know (or think we know) as a starting point and adjust it to account for what we don’t know.
Let’s say, for example, that someone asks you to guess the population of Boston. You don’t know the population of Boston, but you do know the population of Worcester, and you know that Boston is quite a bit bigger than Worcester. So, using Worcester’s population as your “anchor,” you adjust upwards to make an “educated guess” at the population of Boston. All good, right?
Unfortunately, studies have shown that people who “anchor and adjust” guess incorrectly in entirely predictable ways. For example, people who use a lower number as an anchor—e.g., using the population of Worcester to guess the population of Boston—tend to guess too low, whereas people using a higher number as an anchor—e.g., using the population of New York to guess Boston’s—tend to guess too high.
This type of bias occurs in nonquantitative guesses as well. For example, one study asked college students two questions: (1) How happy are you? and (2) How often are you dating? When the questions were ordered 1-2, there was little correlation between the two answers. When the order was reversed, however, so that the dating question came first, the correlation jumped nearly sixfold—the students took their dating number and used it to determine whether they were happy.
Availability

The availability heuristic leads us to answer questions and make judgments based on how readily comparable examples come to mind. When we rely on availability, we are biased against statistical probabilities (Reflective Thinking) and toward the most vivid examples (Automatic Thinking).
The availability heuristic is especially active in assessments of risk. For example, people who have experienced a natural disaster themselves are more likely to overestimate disasters’ frequency; and research shows that flood and earthquake insurance purchases jump in the immediate aftermath of these events and taper off as people’s memories fade. Similarly, because nuclear meltdowns receive sensational coverage in the news, people tend to be more concerned about nuclear power plants than about heart health, even though heart disease kills over 10,000 times more people than nuclear accidents.
Representativeness

The third heuristic Tversky and Kahneman identified is the “representativeness” heuristic, though it might be better termed the “similarity” heuristic. We use this heuristic when we categorize a phenomenon based on how closely it resembles our stereotype of some category.
A simple example concerns our categorization of people based on their appearance. If we come across a diminutive white-haired man wearing glasses and a corduroy blazer and carrying a briefcase, we’re more likely to think “professor” than we are “professional basketball player.”
However, our bias toward “representativeness” becomes dangerous when we confront random processes. We tend to expect random processes to conform to our idea of randomness—to produce unpatterned, impossible-to-predict outcomes. Unfortunately, random processes, especially in the short run, can produce results that look patterned or predictable. Because of the “representativeness” heuristic, we then ascribe those results to some particular cause rather than to chance.
A classic example of humans misinterpreting randomness is sports fans’ notion of the “hot hand.” When, say, a basketball player makes a shot—or even better, a number of shots in a row—fans believe he or she is more likely to make the next shot than if he or she had missed. The idea is that shooters get “hot,” and thus should be passed the ball more frequently.
However, upon careful statistical analysis, the “hot hand” proves not to exist—a shooter’s probability of making his or her next shot is the same regardless of the result of the previous shot. We simply believe the “hot” shooter is more likely to make the next shot because of the representativeness heuristic.
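The claim that streaks carry no predictive information can be checked with a quick simulation. The sketch below uses made-up numbers (a hypothetical shooter who makes 50% of shots, each shot independent), not real shooting data:

```python
import random

random.seed(42)

# Simulate a shooter who makes each shot independently with probability 0.5.
shots = [random.random() < 0.5 for _ in range(100_000)]

# Conditional frequencies: P(make | made previous) vs. P(make | missed previous).
after_make = [curr for prev, curr in zip(shots, shots[1:]) if prev]
after_miss = [curr for prev, curr in zip(shots, shots[1:]) if not prev]

p_after_make = sum(after_make) / len(after_make)
p_after_miss = sum(after_miss) / len(after_miss)

print(f"P(make | made previous):   {p_after_make:.3f}")
print(f"P(make | missed previous): {p_after_miss:.3f}")

# Long streaks still occur purely by chance: find the longest run of makes.
longest = run = 0
for made in shots:
    run = run + 1 if made else 0
    longest = max(longest, run)
print(f"Longest streak of makes: {longest}")
```

Both conditional frequencies come out near 0.5—the previous shot tells us nothing—yet the sequence still contains double-digit streaks of makes, which is exactly what fans misread as a “hot hand.”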
Whereas exhibiting a cognitive bias in sports is relatively harmless, exhibiting one in the world of disease control can cause panic and wasted resources. For example, American public health officials receive more than 1,000 reports of so-called “cancer clusters” each year. (A “cancer cluster” is an unusually high number of cancer diagnoses occurring in a short period of time and in a limited area.) The fear is that environmental (or other) factors are causing the uptick in cancer. However, in the vast majority of cases, the sudden increase is completely random—a chance fluctuation bound to occur in a population of 300 million.
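A small simulation illustrates how chance alone produces apparent clusters. The numbers here (1,000 cases, 100 counties) are purely hypothetical, chosen only to make the effect visible:

```python
import random

random.seed(0)

# Toy model: scatter 1,000 cancer cases uniformly at random across 100
# equal-sized counties (hypothetical numbers, for illustration only).
n_cases, n_counties = 1000, 100
counts = [0] * n_counties
for _ in range(n_cases):
    counts[random.randrange(n_counties)] += 1

expected = n_cases / n_counties  # 10 cases per county on average
worst = max(counts)
print(f"Expected cases per county: {expected:.0f}")
print(f"Most cases in one county:  {worst}")
# The unluckiest county looks like a "cluster" even though every case was
# placed at random -- chance fluctuation alone creates hot spots.
```

Even with perfectly uniform risk, some county ends up well above the average of 10, and that outlier is precisely what gets reported as a “cancer cluster.”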
———End of Preview———
Like what you just read? Read the rest of the world's best summary of Richard H. Thaler and Cass R. Sunstein's "Nudge" at Shortform.
Here's what you'll find in our full Nudge summary:
- Why subtle changes, like switching the order of two choices, can dramatically change your response
- How to increase the organ donation rate by over 50% through one simple change
- The best way for society to balance individual freedom with social welfare