PDF Summary: Think Like a Freak, by Steven Levitt and Stephen Dubner
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of Think Like a Freak by Steven Levitt and Stephen Dubner. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Think Like a Freak
In Think Like a Freak, economist Steven Levitt and journalist Stephen Dubner set out to help readers solve their problems by looking at data in unconventional—yet data-driven and rational—ways. They argue that when you approach problems by properly analyzing facts instead of by following emotions or morals, you’ll come up with far more effective (if unexpected) solutions. This is true whether your problems are lighthearted ones like improving your competitive-eating skills or weighty ones like reducing famine.
In this guide, we explore their advice on how to admit what you don’t know, identify the root problem, think like a kid, use incentives smartly, and know when to quit. Along the way, we examine some of the psychological roots of their ideas, connecting their thoughts to advice from other psychologists who also look at problems in unconventional ways, such as Angela Duckworth, Nassim Nicholas Taleb, and Rolf Dobelli.
(continued)...
(Shortform note: In The Politics of Famine in European History and Memory, Alex de Waal agrees that humanitarian aid is only a temporary solution and that the true causes of famines are political (incorporating social and legal frameworks as well). He argues further that politics affects not only the creation of famines but also their treatment—humanitarian aid often becomes a political tool to pick winners and losers, foster international goodwill, and pressure governments to comply with requests from foreign governments. This points to yet another reason for aid organizations to address the underlying causes of these crises: Doing so prevents additional problems from arising later.)
Think Like a Kid
Levitt and Dubner advise that to effectively see problems from a new viewpoint, or to generate ideas and questions, it helps to have the mentality of a child. They argue that children inherently see the world from a new angle—not just metaphorically but also literally, since they’re generally smaller than adults—and this allows them to spot things adults overlook.
Kids are also naturally curious, and they haven’t yet developed preconceptions and biases that stop them from seeking to learn—such as overconfidence in their own expertise. Additionally, they don’t know the “common knowledge” related to any particular problem, and they don’t know what others have tried before. This allows them to approach problems with a fresh mindset, without the self-imposed restrictions adults often work under.
(Shortform note: In Zen Buddhism, the concept of thinking like a child is known as shoshin, meaning beginner’s mind. It describes a mentality of openness and eagerness that counteracts what psychologists call the earned dogmatism effect: the closed-mindedness that comes when you feel like an expert. Studies have shown that when people are made to feel like experts—for example, when they’re given inflated scores on tests of knowledge—they become less willing to consider other viewpoints. Shoshin also counters the Einstellung effect, which describes people’s tendency to solve problems with the same methods they’ve used before even if simpler solutions exist—a problem that arises when common knowledge isn’t questioned.)
Levitt and Dubner offer a few tips on how to approach problems with the mentality of a child:
Ask About the Obvious
As adults, we typically don’t question things we learned long ago, such as set-up steps in a process or background pieces of information. We take them for granted and assume they’re true because we’re used to them. But someone learning something new will often question those things and in doing so, may reveal insights we’ve overlooked. Levitt and Dubner encourage you to emulate this—question things you haven’t thought about critically in a while.
(Shortform note: Innovators, educators, and other experts largely agree with advice to question the obvious, but the approach does have some limitations. Questioning the obvious can waste time if the basic premises of a problem have already been settled through rigorous investigation. In these cases, not knowing common knowledge can be a liability. For example, if you’re building a bridge, questioning whether or not gravity works will force you to spend time and energy on an unhelpful investigation.)
Generate Lots of Ideas and Edit Them Later
As you brainstorm, don’t worry about which ideas are good or bad. Sit on your ideas for a day and then see how you feel about them—this will usually reveal which ideas are better than others.
(Shortform note: Studies confirm that people generate more original, high-quality ideas when brainstorming is separated from evaluation or critique. This is one reason why group brainstorming tends to be less innovative than individual brainstorming—when conferring with others, it’s impossible not to feel judged and evaluated as you go. Experts therefore recommend that when brainstorming with others, everyone in the group should first come up with ideas individually, then meet to review them together.)
Have Fun
There are several ways that having fun can lead to better solutions. First, when you’re having fun, you’re more likely to examine things more closely—people who ask unusual questions are often the ones who are enjoying themselves. Second, you’re more likely to spend time on something if you’re enjoying it, and research shows that the key to success is not raw talent but practice—how many hours you put into a pursuit. Third, by using fun, you can encourage other people to behave in certain ways. For example, a teacher might encourage their students to exercise by offering prizes to the class that does the most jumping jacks.
(Shortform note: Whether fun inspires you to ask more questions, spend more time on something, or behave in a certain way, the underlying reason it does so is that it sparks intrinsic motivation—the drive to do something for internal rewards like happiness rather than external rewards like money. In Drive, Daniel Pink argues that intrinsic motivation is more powerful than extrinsic motivation, as it can inspire people to invest time, energy, and money into a pursuit that has no tangible rewards. He compares Wikipedia, a massively successful crowd-volunteering effort, to Microsoft Encarta, a digital encyclopedia from the 1990s that failed despite enormous financial investment—Wikipedia thrives because people simply enjoy contributing to it.)
Use Incentives
Levitt and Dubner write that to solve a problem that involves other people—be it a personal, professional, or societal problem—you must understand how and why people respond to incentives. Incentives are at the root of all human behavior and drive everyone’s decision-making.
Levitt and Dubner note that it can be difficult to identify a person’s true incentives. People don’t always clearly say—or admit—what really drives them. Instead they often say what they think the other person wants to hear, but later, in private, they behave as they truly want to. Economists call these declared preferences and revealed preferences. The key to successfully crafting solutions to problems is figuring out how to bridge the gap between these two types of preferences so that your incentives appeal to what people will do rather than what they think they should do. If you don’t, the solutions you come up with may not inspire people to act as you hope they will, and thus may be ineffective.
(Shortform note: In their earlier book, Freakonomics, Levitt and Dubner define incentives as things that encourage people to do more good things and fewer bad things. This definition narrows incentives down to purposeful inducements deliberately created by one person to nudge another person to do something positive. However, Levitt and Dubner’s discussions—in both Freakonomics and Think Like a Freak—go beyond this definition, speaking of incentives in a broader sense as inducements for people to do anything—good or bad. Their focus on how incentives can prompt people toward negative behavior because of unstated, revealed preferences lies at the heart of their insights.)
Next, we’ll review Levitt and Dubner’s discussions of the power of herd mentality, the fact that people generally make selfish decisions, how incentives encourage people to reveal their true selves, and how using incentives can backfire.
The Herd Mentality
Levitt and Dubner note that one of the most powerful incentives is the desire to fit into the group, often referred to as the herd mentality. They write that this desire can rival the desire for money, and they cite a study that demonstrates just how strong the herd mentality can be. The study also highlights the discrepancy between declared and revealed preferences.
Through a phone survey, researchers asked California residents what would motivate them to cut back on their energy use. Most people cited environmental concerns, followed by societal benefits and then financial concerns. They ranked “other people also conserving energy” last.
However, the next phase of the experiment showed that what people actually cared about was very different from what they claimed. The researchers hung information tags on people’s doorknobs encouraging them to save energy. Each tag cited a reason and a related statistic, such as how much pollution residents could prevent each month, how much money they could save, or the fact that 77% of their neighbors used fans instead of air conditioning when possible.
Then, the researchers measured the energy use of each home. They discovered that the homes receiving tags that said their neighbors were also conserving energy lowered their energy usage significantly more than any other homes. Levitt and Dubner write that when you successfully identify people’s true incentives like this, you can spur them to do something right even if it’s for the wrong reasons—like getting them to save energy just to fit in with the neighbors.
The Pros and Cons of Social Proof
The herd mentality demonstrated by the energy-saving study Levitt and Dubner reference can be explained by the psychological phenomenon called social proof. Coined by psychologist Robert Cialdini in his book Influence, the term refers to people’s tendency to check the opinions of others before forming their own—essentially, we decide how to behave in any given situation based on how others behave.
Cialdini cites the use of canned laughter (pre-recorded laugh tracks as opposed to real, spontaneous laughter) in television sitcoms as an example: When questioned, people consistently say they don’t like canned laughter, but data shows that audiences laugh longer and more frequently when canned laughter is used. The idea is that when we hear others laughing, we’re instinctively encouraged to laugh. This is true even though we know the laughter is fabricated—the fact that we consciously know we’re being manipulated by television executives doesn’t stop our unconscious selves from responding.
(This example also illustrates Levitt and Dubner’s distinction between declared and revealed preferences—the preferences audiences claim to hold aren’t borne out by their actual behavior.)
Cialdini argues that social proof isn’t necessarily a bad thing. It helps us deduce the correct way to behave in various situations, and therefore helps us make fewer mistakes. But, it does make us prone to manipulation because it’s a cognitive shortcut that profiteers can use to direct our actions.
In the case of the energy-saving study, this was used for a good purpose: Researchers appealed to social proof (the herd mentality) to prompt the public to help the environment. However, marketers and salespeople of all stripes regularly appeal to social proof to encourage people to part with their money. Cialdini cites examples of bartenders seeding their tip jars with dollars to indicate that other customers have contributed, and of advertisers assuring us that a product is quickly selling out to make us feel others have deemed it worth buying.
People Make Selfish Choices
Levitt and Dubner write that when you evaluate incentives and base solutions on them, you must keep in mind that people usually act selfishly, putting their personal interests ahead of the concerns of a larger group. This human tendency poses challenges for policymakers—organizations tasked with improving society have to figure out how to counter the human instinct to work against the greater good in favor of personal benefit.
To illustrate this instinct, Levitt and Dubner examine the thought process that soccer players go through during a penalty kick—when a kicker has a chance to shoot the ball into the goal with only the goalie to block it. The kicker can choose to kick left, right, or center, and will do so as randomly as possible to prevent the goalie from detecting a pattern. The goalie must choose to jump left, jump right, or stay in the center before the kick happens, because it takes just a fraction of a second for the ball to travel from the kicker to the goal.
Rationally, you’d expect the kicker to choose left, right, or center approximately one-third of the time each, and the same for the goalie. However, data shows that kickers kick to the center only 17 percent of the time—and goalies stay in the center only 2 percent of the time. Shots aimed at the center of the goal are significantly more likely to succeed, and both kickers and goalies know this. So why do kickers so seldom shoot there, and why do goalies so infrequently defend it?
Levitt and Dubner attribute this tendency to the desire on both sides to avoid embarrassment. Kickers don’t want to look like they kicked directly to a waiting goalie, should the goalie stay in that spot. And if the goalie stays in the center while the ball is kicked to either side, it will look like they didn’t bother to try. Either way, the crowd will react negatively.
This example illustrates how incentives can drive selfish decisions—in this case, the players are motivated by the personal incentive not to look silly rather than the team-wide incentive to win the game. Levitt and Dubner argue that acting selfishly in this way doesn’t make anyone a bad person—it simply means they’re human. When you’re constructing incentive systems, you must assume that people are selfish and design your incentives accordingly.
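To make the gap between the selfish choice and the win-maximizing choice concrete, here is a minimal sketch in Python. The scoring probabilities are purely hypothetical (the summary says only that center shots succeed more often, so the specific numbers below are illustrative assumptions, as is the even left/right split); only the 17 percent center-aiming frequency comes from the text.

```python
# Illustrative sketch only: the scoring probabilities below are hypothetical,
# chosen to mirror the claim that center shots succeed more often. They are
# NOT the figures Levitt and Dubner report.

# Hypothetical probability that a shot scores, by where the kicker aims.
score_prob = {"left": 0.63, "right": 0.63, "center": 0.81}

# Observed aiming mix: center only 17% of the time (from the summary);
# the even left/right split is an assumption for illustration.
observed_mix = {"left": 0.415, "right": 0.415, "center": 0.17}

# A kicker optimizing only for the team's success would aim wherever the
# scoring probability is highest.
best_aim = max(score_prob, key=score_prob.get)

# Expected scoring rate under the observed (embarrassment-avoiding) mix.
expected_rate = sum(observed_mix[d] * score_prob[d] for d in observed_mix)

print(f"Expected scoring rate with the observed mix: {expected_rate:.1%}")
print(f"Best single aim under these hypothetical numbers: {best_aim} "
      f"({score_prob[best_aim]:.0%})")
```

The specific numbers don't matter; the point is that if center shots really do score more often, a kicker maximizing only the team's chances would aim there far more than 17 percent of the time, so the observed behavior suggests some other incentive is at work (here, avoiding embarrassment).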
Selfish Versus Selfless Motivations
In The Art of Strategy, game theorists Avinash Dixit and Barry Nalebuff also contend that people typically make selfish decisions, and they examine the difficulties this poses for policymakers.
One problem they discuss is what’s commonly known as the tragedy of the commons. In this scenario, a group of people uses a common resource that’s freely available, such as fishers trawling the same sea. Individually, each is incentivized to take as much as possible so they don’t lose it to the other members of the group, who are taking as much as possible for the same reason. Ultimately, these selfish actions deplete the resource, leaving the group as a whole worse off.
This demonstrates how hard it is to discourage people from acting selfishly—and how incentives can harm the greater good. While a common resource can be protected by rules and regulations (such as an outside body mandating how much fish each person can take in a season, or what technology may be used), not all behavior can be guided so easily. For example, in the penalty-kick dilemma, there’s no way to mandate whether a player kicks to or defends the center of the goal, as that would eliminate the unpredictability the play depends on.
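As a rough illustration of the dynamic Dixit and Nalebuff describe, here is a minimal toy simulation of a shared fishery. Every parameter (stock size, regrowth rate, harvest levels, quota) is a hypothetical value chosen only to show how unrestrained harvesting can collapse a common resource that a quota would sustain; none of the numbers come from The Art of Strategy or from Levitt and Dubner.

```python
# Hypothetical toy model of the tragedy of the commons. All numbers are
# illustrative assumptions, not data from the books discussed above.

def simulate(num_fishers: int, catch_per_fisher: float, seasons: int,
             stock: float = 1000.0, regrowth: float = 0.2) -> float:
    """Return the remaining stock after the given number of seasons."""
    for _ in range(seasons):
        harvest = min(stock, num_fishers * catch_per_fisher)
        stock -= harvest
        stock += stock * regrowth  # the resource partially replenishes itself
    return stock

# Selfish scenario: each fisher takes as much as they can (say, 60 units).
selfish = simulate(num_fishers=10, catch_per_fisher=60, seasons=10)

# Regulated scenario: an outside body caps each fisher at 15 units per season.
regulated = simulate(num_fishers=10, catch_per_fisher=15, seasons=10)

print(f"Stock after 10 seasons, unrestrained harvesting: {selfish:.0f}")
print(f"Stock after 10 seasons, with a quota:            {regulated:.0f}")
```

Under these toy assumptions, unrestrained harvesting wipes out the stock within a couple of seasons, while the quota keeps it replenishing, which is the intuition behind the rules and regulations mentioned above.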
Self-Selecting Incentives
Levitt and Dubner write that one way you can leverage incentives to benefit your project or organization is by designing incentive schemes that encourage people to reveal their true inner motivations, even if those motivations are something they’d rather keep hidden. If you set up incentives to attract certain people in certain ways, you can get them to self-select into categories, allowing you to know what kind of people they are and how to interact with them.
For example, if you’re hiring an employee, you can design your job application process to discourage the wrong people from applying. This is the reason some companies make their application process difficult even for entry-level jobs: It prevents people from applying who might quickly quit if they do get the job. A difficult application process is self-selecting, since it only attracts determined, diligent applicants.
(Shortform note: In Sludge, Cass Sunstein argues that organizations routinely design incentive systems to get people (customers or workers) to self-select. For example, they often add friction (aka sludge) to their customer-service processes to weed out undesirable customers. Warranty companies, for instance, may require people filing claims to submit duplicate forms and receipts at several different steps. Sunstein warns that the frequency of purposeful self-selecting sludge has negative effects on society—when faced with friction, people often give up and may lose out on jobs, permits, education, medical help, and so on. The end result is an economy with reduced growth and exacerbated poverty, and an unhealthier, unhappier populace.)
Be Careful of Incentives Backfiring
Levitt and Dubner warn that incentives sometimes have the opposite effect of what you intend: They can encourage more of the bad behavior you’re trying to prevent. One way this manifests is when you warn people that a problem is common—instead of feeling moral outrage, they’ll often think, if everyone else is doing this, I can too. This is an undesirable consequence of the herd mentality, and it was illustrated when park rangers at Arizona’s Petrified Forest National Park posted a sign stating that 14 tons of petrified wood were stolen every year. Their intention was to deter visitors from stealing more, but instead, visitors developed a fear of missing out, prompting many to grab a piece of the forest before it all disappeared.
(Shortform note: This example tapped into another psychological trait in addition to herd mentality—loss aversion. This describes the fact that people tend to fear loss more than they value gain. In Thinking, Fast and Slow, Daniel Kahneman writes that this tendency stems from our evolutionary need to respond more urgently to threats, which can bring immediate death, than to opportunities, which can bring pleasant (but not always necessary) improvements. As a result, we’ve evolved to react more strongly to bad news—such as that we may miss out on owning a piece of natural history if we don’t take one now.)
Levitt and Dubner say incentives can also backfire if you offer to pay people to destroy something undesirable—they’ll often create more of it to destroy so they can get paid. This tendency is sometimes called the “cobra effect,” named after an incident in colonial India when a British overlord, concerned about the burgeoning cobra population, offered a cash reward for cobra skins. Instead of leading to fewer cobras, it gave rise to a cobra-farming industry designed around raising and slaughtering the snakes.
A more recent example of this phenomenon occurred when the United Nations offered manufacturers payments to destroy their pollutants. Manufacturers began producing more pollutants just so they could destroy them and collect the payments, leading to a sharp increase in pollution.
(Shortform note: In The Art of Thinking Clearly, Dobelli calls this the incentive super-response tendency—the tendency for people to maximize their self-interest when responding to incentives, which often makes people ignore the intention of the incentive in the process. Dobelli writes that effective incentive systems must address both intent and reward, or they’ll risk creating cobra-effect situations where people profit from perpetuating bad behavior (such as creating more, not less, pollution in our earlier example). To avoid this, the United Nations could’ve created incentives for producing fewer pollutants to begin with, rather than for destroying pollutants.)
It can be difficult to predict unintended consequences, write Levitt and Dubner, because people are often smarter than the organizations behind incentive schemes and can be highly motivated to come up with ways to game them. In addition, people can often tell when organizations are trying to manipulate them, and they tend to rebel against that.
(Shortform note: This tendency to rebel against perceived manipulation is explained by reactance theory, which holds that people respond to pressure by adopting a conflicting belief or behavior. It stems from our desire for autonomy—we dislike feeling that our freedom to choose is being taken away. It’s one reason people can be so highly motivated to game incentive systems—and why they often outsmart the organizations implementing those systems.)
Know When to Quit
Levitt and Dubner wrap up their arguments by suggesting another unconventional mindset that can keep you from chasing ineffective solutions: Be willing to quit. They write that unrelenting perseverance is often promoted as a positive trait, but it can sometimes be harmful—although persistence and tenacity are key elements of success, you can waste time, energy, and money if you get caught up in pursuing a goal that’s ultimately unattainable. Thus, it’s important to recognize when it’s best to cut your losses and to be willing to change your plans. Studies show that people who give up on unattainable goals feel psychologically better afterward, and their physical health improves too.
(Shortform note: In her best-selling book, Grit, Angela Duckworth helped popularize the notion that effort and perseverance count more toward success than raw talent. However, she, too, maintains that sometimes quitting is the best option—you shouldn’t stick to something if you’ve lost passion for it or if you’re making no headway. These two conditions are related: When you’re making no headway on a pursuit, you may lose joy for it, which can lead to the psychological struggles and health problems Levitt and Dubner refer to. If that happens, it’s time to reassess whether the pursuit is the right one for you.)
Levitt and Dubner cite three reasons people resist quitting even when they should:
- They believe quitting means failing.
- They fall for the sunk cost fallacy.
- They ignore opportunity costs.
We’ll examine each of these reasons in more detail, and then look at Levitt and Dubner’s advice on how to lessen the likelihood that you’ll quit a project.
Quitting Is Not Failing
Quitting does technically mean failure in that you fail to reach the goal you set out to reach. However, Levitt and Dubner argue that failing to meet an immediate goal isn’t always a bad thing and shouldn’t be viewed as true failure. Sometimes failures along the way are necessary so that you or others can reach the ultimate goal, even if that means reaching it in a different way than you originally intended. For example, when scientists or medical researchers pursue a path that doesn’t produce results, they still make a contribution, because they let others know not to follow that path later. Quitting under these circumstances can be an important part of the process.
(Shortform note: Duckworth agrees that immediate failure does not necessarily mean ultimate failure. To clarify this, she distinguishes between top-level, mid-level, and low-level goals, with low-level goals existing only to support your high-level ones. It’s not important, she writes, to meet every single low-level goal, as these may change as you find different paths to achieving your ultimate, high-level goal. Levitt and Dubner’s example of researchers failing to produce results takes this idea one step further, looking at an ultimate goal with a broader, multi-person perspective: Your immediate failure may not ever lead to you meeting your ultimate goal, but it could help someone else do so.)
The Sunk-Cost Fallacy
The sunk-cost fallacy is the belief that once you’ve invested a significant amount of time, money, effort, brainpower, or other resources into something, you should keep pursuing it. People don’t like to abandon projects they’ve put many resources into because they feel that doing so would render those investments wasted. But, Levitt and Dubner argue, it’s better to cut your losses on projects that won’t pan out than to continue sinking resources into them.
(Shortform note: Duckworth agrees that getting caught by the sunk cost fallacy can be a problem, but she argues the more likely problem is quitting too soon. Further, as long as you’re pursuing a high-level goal, she argues that any effort you invest is unlikely to be wasted. It’s when you stubbornly pursue a low-level goal that you might waste time and be prone to the sunk cost fallacy. For example, no effort is wasted toward a high-level goal of using psychology and science to help kids learn better, but you might waste effort if you spend too much time trying to get an article published in one particular magazine. This would be a low-level goal supporting your high-level goal, but not an end goal in itself.)
Opportunity Costs
Levitt and Dubner argue that people often overlook the fact that pursuing one goal means they can’t pursue others. These missed opportunities can be hard to quantify, but when you’re considering moving forward with a plan, it’s important to think about what else you could be doing with your time instead. For example, if you’ve taken on the renovation of a dilapidated house, think about what else you could do with the money and labor it will require. You might get a better return from, say, going back to school for an advanced degree, making it more beneficial to quit the renovation project and redirect your resources elsewhere.
Levitt and Dubner don’t advise that you quit just to watch TV all day, but instead that you quit to take up a new goal. They advise that if you’re unhappy in your current path, don’t ignore that feeling and push through it; consider what else you might do that could lead to the same ultimate end. For example, if you’re unhappy with your chosen line of work, examine what it is about your work that you like and what you like doing outside your work. Then, try to find a direction that marries those two factors. This may bring you to your ultimate goal of having fulfilling employment, even if it means quitting your current method of achieving that goal.
How to Prioritize and Choose Between Goals
Levitt and Dubner’s advice mirrors Duckworth's advice in Grit: Prioritize high-level goals, think of low-level goals as a means to an end, and be willing to pivot as you work toward your high-level goals by quitting your low-level ones. Though Duckworth doesn’t specifically mention opportunity costs, her advice rests on the same underlying reasoning—if you’re unwilling to let go of small, unfulfillable goals, you’ll lose out on opportunities that pivoting toward manageable goals would bring you.
Duckworth writes that it can be difficult to distinguish low-level goals from high-level goals, as you may feel that a low-level goal is important enough that it’s the ultimate end you’re aiming for. For example, imagine that your goal is to publish a novel. This may feel like an irreplaceable end-goal, but in truth, it may be a low-level goal supporting a larger, broader goal of “make a living through creative work.”
To identify your true end goal, ask yourself “why?” about each of your goals. Each answer will reveal a progressively higher-level goal. When you reach a point where your answer is simply “Just because!” you’ve identified your ultimate high-level goal. Focus on that, and consider all the goals leading up to it replaceable. If you do this, you can feel confident that you’re not missing out on alternative opportunities by sticking to that pursuit.
Lessen Your Chances of Quitting
Levitt and Dubner offer guidance on what you can do when approaching a project to reduce the chances that you’ll have to abandon it: Conduct a premortem. This is the opposite of a postmortem, which organizations conduct to analyze what went wrong after a project has failed. With a premortem, you think through all the possible points of failure in advance so you can prepare for them.
To conduct this analysis, ask everyone involved with the project to imagine that it fails, then ask them to come up with reasons for its failure. This thought experiment is effective in flushing out potential problems that otherwise may go unmentioned—issues that people may have quietly been thinking about but didn’t see a reason or an opportunity to raise. Levitt and Dubner note that if you make the premortem anonymous, people will be more honest, and will more readily bring attention to problems they may otherwise hesitate to mention.
(Shortform note: In The Culture Code, Daniel Coyle writes that an organization’s work culture must foster three themes: safety (you belong here), vulnerability (you can take risks), and purpose (you’re here for a reason). Levitt and Dubner’s advice to ask team members for their thoughts on potential failure points encourages each of these. It makes people feel that they’re valued, their thoughts are appreciated, and they can take risks by pointing out problems. It also makes it clear that management feels they’re aligned toward the same higher purpose. This creates a sense of connection, making team members feel supported by the group while being valued as individuals—an essential characteristic of high-functioning workplaces.)