PDF Summary: The Art of Strategy, by Avinash K. Dixit and Barry J. Nalebuff
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of The Art of Strategy by Avinash K. Dixit and Barry J. Nalebuff. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of The Art of Strategy
In The Art of Strategy, Avinash Dixit and Barry Nalebuff explore how you can apply game theory to your business and everyday life. They show how you can be successful in any competitive situation by anticipating the other player’s moves, reasoning backward from your ultimate goal, and knowing when to act against your own interests. They examine real-world case studies to explain foundational game theory concepts like the Nash equilibrium, dominant strategies, and how to encourage people to make selfless choices for the good of the group when selfish ones will benefit them individually.
In this guide, we’ll explore general game strategies as well as specific strategies for the various types of games you may encounter in your everyday life. Along the way, we’ll examine how Dixit and Nalebuff’s ideas align or contrast with other thinkers in the field, and how recent research may shed new light on some of their arguments.
(continued)...
Repeated studies have shown that when players go through a prisoner’s dilemma game repeatedly, they start to cooperate, while playing the game only once generally results in both players choosing selfishly. This points to a difference in how this paradoxical situation plays out in the real world—in a one-off event like the classic confession scene, the players might be expected to choose selfishly. But in an ongoing pricing war, two companies might start to coordinate, even if they never formally discuss plans. Their understanding of how the other may retaliate in the future to any selfish moves they make can encourage them to act in a less individualistic way.
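To see how repetition changes the incentives, here is a minimal Python sketch of an iterated prisoner's dilemma. The prison terms below are illustrative assumptions rather than figures from the book; the point is that a retaliatory strategy such as tit-for-tat (cooperate first, then mirror the opponent's last move) sustains cooperation over many rounds, while two purely selfish players punish each other every round.

```python
# Illustrative payoffs (years in prison; lower is better). These numbers are assumptions.
# Each entry: (my_years, their_years) for a given (my_move, their_move).
PAYOFFS = {
    ("silent", "silent"):   (1, 1),
    ("silent", "confess"):  (10, 0),
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),
}

def play_repeated(strategy_a, strategy_b, rounds=20):
    """Play the dilemma repeatedly; return each player's total years in prison."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the other's past moves
        move_b = strategy_b(history_a)
        years_a, years_b = PAYOFFS[(move_a, move_b)]
        total_a, total_b = total_a + years_a, total_b + years_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def always_confess(opponent_history):
    return "confess"  # the selfish one-shot choice, every time

def tit_for_tat(opponent_history):
    # Cooperate on the first round, then mirror whatever the opponent did last round.
    return "silent" if not opponent_history else opponent_history[-1]

print(play_repeated(always_confess, always_confess))  # (100, 100): mutual punishment every round
print(play_repeated(tit_for_tat, tit_for_tat))        # (20, 20): cooperation emerges and holds
```

The threat of retaliation in later rounds is what makes cooperating in the current round worthwhile, mirroring the informal coordination that can emerge between companies in an ongoing pricing war.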
Dominant Strategy
The best approach to navigating simultaneous games is to use what game theorists call a dominant strategy—a strategy that is your best choice whether the other person chooses selfishly or selflessly.
In the prisoner's dilemma, this means confessing early, because confessing spares you the harshest punishment whether or not the other prisoner stays silent. This points to a paradox that contradicts typical economic theories espoused by free-market thinkers such as Adam Smith: When every player pursues their own self-interest, it leads to an outcome that is worse for everyone.
In the price war discussed above, the dominant strategy would be lowering your prices, even though, again, this leads to lower profits for everyone. It at least guarantees your company some profit instead of losing all its customers to your competitor.
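A dominant strategy can be checked mechanically: compare your payoff for each of your moves against every possible move by the other player. The sketch below uses assumed prison terms for the classic dilemma; "confess" wins that comparison in every case, which is what makes it dominant.

```python
# Assumed years in prison for *you*, indexed by (your_move, their_move).
MY_YEARS = {
    ("silent", "silent"): 1,  ("silent", "confess"): 10,
    ("confess", "silent"): 0, ("confess", "confess"): 5,
}
MOVES = ("silent", "confess")

def is_dominant(move):
    """A move is dominant if it's at least as good as every alternative,
    no matter which move the other player picks (fewer years is better)."""
    return all(
        MY_YEARS[(move, theirs)] <= MY_YEARS[(other, theirs)]
        for theirs in MOVES
        for other in MOVES if other != move
    )

print(is_dominant("confess"))  # True: better whether they stay silent or confess
print(is_dominant("silent"))   # False: staying silent risks the worst sentence
```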
(Shortform note: As Dixit and Nalebuff note, the potential for individualistic decision-making to lead to worse outcomes for individuals contradicts the writings of Adam Smith, who argued that the “invisible hand” of the market, driven by individual self-interest, benefits society without the need for a visible hand, such as government. In large part, this discrepancy is because Smith’s theory doesn’t account for the conflict between short-term and long-term benefits, in which a dominant strategy leads individuals to choose immediate gains at the expense of sustainable, collective well-being. It also doesn’t account for information asymmetry, where individuals must make decisions without knowing how others will act—an inherent problem in games like pricing wars.)
The Nash Equilibrium
When playing a simultaneous game, you can usually arrive at your best choice by figuring out the equilibrium—and specifically, the Nash equilibrium. Named after mathematician John Nash, this is the combination of choices in which each player's decision is the best response to what they believe the other is most likely to choose, knowing the other player is making the same judgment. The result is stable: Neither player has an incentive to change their decision on their own, because doing so wouldn't improve their outcome. In the prisoner's dilemma, the Nash equilibrium is where both prisoners confess, each knowing the other will likely confess.
(Shortform note: Eric Rasmusen (Games and Information) describes an equilibrium slightly more broadly, using the term to mean the combination of strategies chosen by each player. This differs from Dixit and Nalebuff's explanation, in which an equilibrium denotes a stable set of choices—the resting place where strategies settle once all players' decisions have been weighed against each other. With Rasmusen's definition, an equilibrium is essentially the outcome of the game, whether or not players have weighed all possible options of the other players to arrive at a mutually stable decision.)
An example of how this strategy plays out in real life is in the pricing war mentioned above. Each company wants to avoid two outcomes: pricing their product too high so they lose customers to the other company, or pricing their product too low so they lose profits even if they gain customers. While they’re weighing these risks, they know that the other company is weighing the same risks, and also wants to avoid the same fates.
Each company will thus likely set their price at a level that’s high enough to cover their costs but low enough that they know the other company is unlikely to underprice them because they need to cover their own costs. In this way, the two companies can settle on a stable level for their prices that removes the need for them to constantly adjust their pricing in response to the other.
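As a rough illustration of how such an equilibrium can be found, here is a sketch of a two-company pricing game with invented profit numbers (they are not from the book). A pair of prices is a Nash equilibrium when each company's price is its best response to the other's, so neither would gain by changing its price alone.

```python
from itertools import product

# Assumed profits (company_a, company_b) for each pair of prices.
# Undercutting a high-priced rival steals its customers; matching prices splits the market.
PRICES = ("high", "low")
PROFITS = {
    ("high", "high"): (6, 6),
    ("high", "low"):  (1, 8),
    ("low",  "high"): (8, 1),
    ("low",  "low"):  (4, 4),
}

def best_price_for_a(b_price):
    return max(PRICES, key=lambda a: PROFITS[(a, b_price)][0])

def best_price_for_b(a_price):
    return max(PRICES, key=lambda b: PROFITS[(a_price, b)][1])

# A pure-strategy Nash equilibrium: each price is the best response to the other.
equilibria = [
    (a, b) for a, b in product(PRICES, PRICES)
    if a == best_price_for_a(b) and b == best_price_for_b(a)
]
print(equilibria)  # [('low', 'low')]: stable, even though both would earn more at ('high', 'high')
```

With these particular numbers the game has the prisoner's-dilemma structure described earlier: the stable point is not the most profitable one, which is why cooperation tends to emerge only when the game is repeated.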
Dixit and Nalebuff write that the vast majority of simultaneous games in real life are ones where both parties benefit by cooperating, not competing, and thus, most games have a Nash equilibrium—a stable solution that both parties individually decide is most likely to benefit both of them based on what they think the others will likely do.
Dixit and Nalebuff argue that the trick to finding an equilibrium is figuring out what the focal point of the situation is: the aspect each party guesses the other will pick, and that each thinks the other guesses they’ll pick, in a circular loop of reasoning. Thus, it’s often the most prominent characteristic. For example, if two people are told to meet in New York City but not told where or at what time, they’ll likely choose to meet at noon, which seems like a “starting point” time, and at a famous spot such as the Empire State Building. Studies show that when strangers are told to do this experiment, they’re surprisingly successful at meeting up.
The Nash Equilibrium and Recursive Thinking
The reasoning used by players that leads them to a Nash equilibrium is an example of recursive thinking, mentioned earlier. To arrive at a Nash equilibrium, all players must successfully navigate the looping logic of thinking about what the other person is thinking about what you’re thinking about. The example of companies deciding their pricing is an example of this process—Company A considers what Company B will do, knowing B is considering what A will likely do, knowing B factors in what A will think B will think A will do, and so on.
The reason focal points are so important to this approach is that humans have an innate psychological tendency to converge on mutually recognizable solutions when faced with ambiguous coordination problems. This ability stems from our evolutionary history of social cooperation, where quickly finding common ground without explicit communication was crucial for survival. Because of this, shared cultural knowledge creates natural "landmarks," which is why the Empire State Building stands out in the mental landscape of New York City, making it an intuitively obvious meeting point for strangers who share a common cultural context.
The Tragedy of the Commons
The game-theory scenario known as the tragedy of the commons is a prisoner's dilemma involving more than two people. It occurs when a group of people uses a common resource that's freely available—for example, when hunters all hunt in the same area. Individually, each is incentivized to take as much as possible for themselves so they don't lose it to the other players, who are also taking as much as possible. This ultimately causes the resource to be depleted and all players to lose.
(Shortform note: The term tragedy of the commons was coined by ecologist Garrett Hardin in 1968, building on an economic theory proposed by British writer William Forster Lloyd in 1833. In Hardin’s view, this conundrum was unsolvable by any technical means—he felt no policy or government could effectively ward off the human instinct to pursue individual rewards over collective good. Instead, he felt the problem could only be solved when human morality collectively improves so that a majority of people no longer prioritize their own interests over those of a group.)
Common resources can be conserved if all players limit their hauls, but, like with the prisoner’s dilemma, only if all of them do so. Incentives to cooperate are often destroyed by the free rider problem, where one person “cheats” and instead of limiting their haul, they continue to take as much as they can, knowing that because others are limiting their own hauls, they can take even more. Because all players are aware of this potential, all players are then incentivized to not limit themselves—because why should they limit their own take if doing so will unfairly enrich someone else? This again leads to a race where everyone takes what they can and the resource is depleted.
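A small simulation sketch (all numbers are invented for illustration) captures the dynamic: a shared fish stock regrows each season, survives indefinitely when every player restrains their catch, and collapses once even one free rider takes as much as they can.

```python
def simulate(stock, catches, growth_rate=0.25, capacity=1000, seasons=40):
    """Each season every player takes their catch (while fish remain),
    then the surviving stock regrows, capped at the habitat's capacity."""
    for _ in range(seasons):
        for catch in catches:
            stock = max(0.0, stock - catch)
        stock = min(capacity, stock * (1 + growth_rate))
        if stock <= 0:
            return 0.0  # the commons has collapsed
    return stock

# Four players sharing a stock of 1,000 fish (assumed numbers).
print(simulate(1000, catches=[50, 50, 50, 50]))   # everyone restrained: stock stays at 1000
print(simulate(1000, catches=[50, 50, 50, 200]))  # one free rider: stock collapses to 0.0
```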
The tragedy of the commons underpins many of society's problems, leading to overfishing, overgrazing, overmining, and climate change—each individual country has no incentive to stop its own polluting activities if the rest of the world continues theirs, because limiting itself would put it at a competitive economic disadvantage.
(Shortform note: The free rider problem not only describes situations where individuals take more resources than others, but also situations where an individual doesn’t contribute to a collective effort, thereby enjoying the fruits of others’ labor without doing any work—a not uncommon situation in group projects in school and work. The problem also describes situations where people benefit from a public good or service without contributing to its costs, such as when someone doesn’t donate to a charity but gets benefits from the charity’s services.)
How to Encourage Cooperation and Coordination
So how can games be played to encourage players to make selfless choices so that common resources are not overexploited? Dixit and Nalebuff note that the free market, where everyone is free to act how they want, isn’t good at encouraging things that benefit everyone but require sacrifices from everyone—such as clean water and air. Thus, a system of governance or oversight must be established that watches for cheaters and punishes those who violate the rules. This is the only way users will feel confident others aren’t cheating, and thus the only way they’ll resist cheating as well.
Dixit and Nalebuff write that to protect common resources while allowing their use:
- It must be established that a resource is only available to a certain group, and there must be clear rules as to who belongs to that group. There are many ways membership can be determined, such as by geography (residents of a town, for example), skill set, ethnicity, or subscription fee.
- There must be clear rules of permitted and forbidden behavior, such as hunting seasons, technology (size of fishing boats, for example), or size of the haul.
- There have to be clear penalties for violating the rules. These can range from fines to loss of rights to incarceration. They may also be as slight as social ostracism—whatever will deter cheaters. Punishments should get more intense after multiple violations.
- There has to be a system to detect cheaters. This might mean employing regulators or guards. But it can also be users watching for others who are violating the rules.
- There should be easily accessible information. When users know facts like current game stocks and locations, harvesting technologies, and the like, they can better watch to see if others are abiding by the rules.
The Challenges of Regulating Personal Freedom
Most social scientists agree that though personal freedom is an essential element of a modern, healthy society, it must be balanced against the freedoms of others in a group. If the freedom of any one person impinges on the freedom of others, the community as a whole can be harmed, which ultimately also harms the individual claiming more freedom. This understanding underpins the theories of rules and regulations that Dixit and Nalebuff outline as well as many theories of governance. It can be difficult, though, to know where to draw the line between personal freedom and personal responsibility, and much political debate centers on this conflict.
Some political parties argue for ultimate personal freedom, such as the US’s Libertarian Party. Advocates of these theories contend that governments and other outside bodies aren’t needed to regulate selfish behavior, because selfish behavior ultimately regulates itself—bad behavior leads to negative consequences, which encourages people to behave better. Other groups prioritize the welfare of the community as a whole, such as the Chinese Communist Party, which emphasizes collectivism and discourages individual dissent or opinions that counter approved messages.
In practice, neither extreme has proven successful in improving quality of life—experience shows that while unregulated individualism can indeed bring economic benefits in a free-market system, individuals do not reliably regulate themselves before depleting, for example, fishing stocks. At the other end of the spectrum, extreme regulation of dissent and opinion has led to violent repression.
Most political parties fall somewhere in the middle of these two extremes, but that still leaves a lot of room for debate on the specifics of the rules Dixit and Nalebuff outline. For example, the US has engaged in heated deliberation for decades about who can be considered a member of their group—a citizen of the US. And across the globe, countries have long debated how to effectively punish criminals.
Regardless of which system of governance is chosen, Dixit and Nalebuff’s rules apply: Users must be aware of the rules and there must be a reliable system to detect cheaters. Transparency, as Dixit and Nalebuff note, can help with this: Resource management in multiple countries has shown that public knowledge of resource consumption can not only allow for easier detection of cheaters, but can also create social pressure, as individuals are less likely to overexploit resources if their actions are visible to their peers and the broader community.
Miscellaneous Game Strategies
After explaining the basics of sequential and simultaneous games and exploring the difficulty of getting players to coordinate, Dixit and Nalebuff discuss techniques that can help you get ahead in various types of games. These include introducing randomness to the game, detecting lies, and limiting your options.
Be Random
Dixit and Nalebuff write that because other players are trying to predict your moves just as you’re trying to predict theirs, you can make their job harder by acting randomly. If you can keep them from detecting patterns in your behavior, they’ll be less likely to guess your next move.
The authors illustrate this principle with the example of a soccer player shooting a penalty kick. This is a zero-sum game with no stable pair of fixed choices (no Nash equilibrium in pure strategies): any predictable choice by the kicker can be exploited by the goalie, and vice versa.
A ball takes only a fraction of a second to travel from the kicker’s foot to the goal, so a goalie doesn’t have the luxury of waiting to see which way it’s headed before they have to choose which side to jump to. (Because a kicker rarely kicks to the center of the goal, Dixit and Nalebuff ignore this option and focus only on the option to kick either left or right.)
If a certain player always kicks to the left, the goalie will naturally guess that their next kick will also go left, and will jump left to block it. But if a kicker kicks left and right with no detectable pattern, the goalie will have a harder time choosing which direction to defend.
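Game theorists formalize "no detectable pattern" as a mixed strategy: the kicker picks each side with a fixed probability, tuned so the goalie gains nothing by guessing either way. The sketch below works this out for assumed scoring percentages (the numbers are illustrative, not the book's data).

```python
# Assumed chance the kick scores, indexed by (kick_side, goalie_dive).
# A correct dive cuts the chance sharply; this kicker is slightly better to the left.
SCORE = {
    ("left", "left"): 0.55, ("left", "right"): 0.95,
    ("right", "left"): 0.90, ("right", "right"): 0.60,
}

# Kick left with probability p, chosen so the goalie is indifferent between dives:
#   p*SCORE[L,L] + (1-p)*SCORE[R,L] == p*SCORE[L,R] + (1-p)*SCORE[R,R]
p = (SCORE[("right", "left")] - SCORE[("right", "right")]) / (
    SCORE[("right", "left")] - SCORE[("left", "left")]
    + SCORE[("left", "right")] - SCORE[("right", "right")]
)
chance = p * SCORE[("left", "left")] + (1 - p) * SCORE[("right", "left")]
print(f"kick left {p:.0%} of the time; scoring chance is {chance:.0%} whichever way the goalie dives")
# kick left 43% of the time; scoring chance is 75% whichever way the goalie dives
```

This mix is the kicker's half of the game's mixed-strategy equilibrium; the goalie has a corresponding mix of dives, which is why neither side can do better by locking into one direction.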
Randomness
In Think Like a Freak, Steven Levitt and Stephen Dubner also discuss the importance of choosing randomly when taking penalty kicks. But while Dixit and Nalebuff ignore the option of kicking to the middle of the goal because of its rarity, Levitt and Dubner examine it closely for the same reason—its rarity.
They note that balls aimed at the center of a goal are significantly more likely to succeed, and yet, kickers aim their kicks at the center only 17% of the time. Furthermore, goalies rarely predict that kickers will aim at the center—a goalie will remain in the center, rather than diving to either side, only 2% of the time.
One reason Levitt and Dubner point to for these patterns aligns with Dixit and Nalebuff’s explanation: If kickers start aiming for the center more often because they know it’s statistically more successful, that pattern will become predictable—goalies will adapt and choose to remain in the center more frequently.
Still, this doesn’t explain the far lower rates of center kicks, or of goalies’ center stances. Levitt and Dubner note another reason, one outweighs players’ desires to act randomly no matter how effective it might be: the potential for embarrassment. In a high-stakes soccer match with thousands of spectators watching, no kicker wants to be remembered as the one who shot the ball directly to the goalie standing in the middle. And, no goalie wants to be thought of as the one who stayed standing in place without even trying to block a ball sailing past them to the left or right.
In this way, Levitt and Dubner note, a penalty kick shows how people will often pick individual well-being over collective well-being—a player usually chooses to risk losing the goal (affecting their team, collectively) rather than to appear foolish (affecting themselves individually).
Dixit and Nalebuff note that it’s difficult to be truly random. Most people unwittingly fall into patterns even when consciously trying not to. But there are a few tactics you can use to increase your randomness:
- Use recognized patterns but change them up at unpredictable times, forcing the opponent to guess when those patterns will change.
- Don’t be afraid to repeat options—people start to think that a certain option is “due,” like if you haven’t kicked “left” in a while, you’re sure to kick that way soon.
- Follow a fixed rule you know but the other person doesn’t. For example, just before kicking, glance at your watch. If the second hand is on an even number, kick left. If it’s on an odd number, kick right.
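A minimal sketch of that last tactic: the rule is deterministic for you, but to an opponent who can't see your watch, the resulting choices look random. (The code is only an illustration of the authors' watch example.)

```python
import time

def pick_direction():
    """Kick left if the current second is even, right if it's odd.
    Predictable only to someone who knows both the rule and the exact moment you glance."""
    return "left" if time.localtime().tm_sec % 2 == 0 else "right"

print(pick_direction())  # e.g. 'right'
```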
(Shortform note: These methods have long been used successfully by players of all kinds of games, but even when actively trying to be unpredictable with techniques like these, people still struggle to avoid falling into patterns. The propensity of soccer players to avoid the center-goal option, discussed in the box above, illustrates this: Their tendency to avoid the center, despite its better success rate, is itself a predictable pattern, no matter how random their choices of left and right are.)
Randomness in Everyday Life
Dixit and Nalebuff caution that while randomness works well in games like sports, it’s less effective in games like business. Negotiators don’t usually value unpredictability.
(Shortform note: Negotiation expert Chris Voss, author of Never Split the Difference, might qualify Dixit and Nalebuff’s argument that randomness has no place in business negotiations—Voss writes that negotiations are inherently random to some degree because people are driven by unpredictable whims and motives. He encourages readers to look for opportunities to introduce surprise elements (he calls these Black Swans) to business discussions if doing so can give you an upper hand—if the other party doesn’t predict your approach, they won’t mount an effective counter, and will be more likely to follow your lead, leading to a better deal for you.)
Dixit and Nalebuff do note some real-life, non-sports situations where randomness is effective, though. Regulators, for example, can’t check every single activity of every business or person they oversee, but random inspections can encourage people to comply with rules—if a person thinks they might get caught at any random time, they’re more likely to follow the rules at all times. This is the principle underpinning, for example, random tax audits, speed traps, and health inspections of restaurants.
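The logic of random inspections comes down to expected value: cheating stops paying once the probability of being caught times the penalty outweighs the gain, even though most individual violations are never inspected. The audit rates, gain, and fine below are invented numbers used only for illustration.

```python
def cheating_pays(gain, fine, audit_probability):
    """Expected value of cheating, measured against complying (worth 0 here)."""
    return gain - audit_probability * fine > 0

# Assumed figures: evading $5,000 in tax, with a $60,000 fine if caught.
print(cheating_pays(5_000, 60_000, audit_probability=0.02))  # True: a 2% audit rate doesn't deter
print(cheating_pays(5_000, 60_000, audit_probability=0.10))  # False: a 10% audit rate does
```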
(Shortform note: Random compliance checks are similar to quality control sampling in manufacturing—both systems use randomness to manage costs and to encourage widescale compliance with rules, be they regulatory rules or quality standards. The difference is that compliance checks like tax audits and speed traps aim to change future behavior through psychological persuasion, while quality control sampling aims to verify past behavior (manufacturing processes that have already happened).)
Watch for Lies
Dixit and Nalebuff argue that you can’t rely on others to tell the truth if the truth could harm their interests. For example, if a salesperson recommends you buy the most expensive toaster available, you can’t fully trust that their recommendation reflects their true opinion of the toaster if they work on commission and stand to make money on that sale.
In any strategic interaction, there’s often an imbalance of information, with one player knowing more than the other. Lies happen when the player who knows more tries to conceal or manipulate their information so as to gain an advantage over the other player.
Dixit and Nalebuff advise that if you’re trying to determine if someone is telling the truth, you should look for their attempts at either signaling (with actions that broadcast good intentions) or signal jamming (with actions that limit the amount of information available to you). For example, if the salesperson above offers a no-questions-asked money-back guarantee, that’s a signal that the toaster is a good one. If they discourage you from looking at online reviews, they may be signal jamming to conceal negative information.
Either way, Dixit and Nalebuff basically advise that you watch what the other person does rather than what they say.
Detecting Liars
In Becoming Bulletproof, former FBI agent Evy Poumpouras explores how to tell if someone is lying, echoing Dixit and Nalebuff’s advice to watch for how a person acts rather than what they say. In particular, Poumpouras says to watch for changes in a person’s typical behavior, which can signal their true intentions. This might include microexpressions (subtle facial clues that last less than half a second but can reveal underlying emotions) or changes in how a person typically uses their eyes, mouth, or hands. For example, if they normally make solid eye contact but are suddenly looking to the side frequently, this might indicate deception—aligning with Dixit and Nalebuff’s analysis, a person looking away might be trying to signal jam by avoiding your gaze to prevent you from reading them.
Poumpouras’s analysis of deception, like Dixit and Nalebuff’s, hinges on an imbalance of information, but she adds another angle: The information imbalance that gives liars an advantage also disadvantages them because it means they have additional cognitive challenges. Not only are they thinking about the extra information they know, they’re also thinking about how to conceal it, which entails controlling their bodies and their messages, as well as remembering fabricated details.
Poumpouras writes that our cognitive skills are finite resources, and when they’re stressed in this way, our ability to successfully accomplish certain kinds of tasks decreases. This is why people start making mistakes and acting atypically when they’re trying to mislead you, and why watching what they do can reveal a lot more than listening to what they say.
Limit Your Options
Sometimes the most effective way to win a game is to signal to the other player your total commitment to winning. You can do this by publicly removing your options to do anything other than continue forward. This can encourage—or even force—the other player to give up.
Dixit and Nalebuff use the example of two drivers playing “chicken”—a game where they drive straight toward each other, and the person who swerves out of the way first loses (because it shows they’re more scared). Of course, if neither swerves, they both crash, and thus they both lose by getting injured or worse. But if you throw your steering wheel out the window, you change the parameters of the game, making it clear that you can’t swerve and therefore the other driver must.
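Here is a simplified sketch of why the discarded steering wheel works. The payoff numbers are assumptions, and the model deliberately assumes the other driver expects you to play your best remaining reply to whatever they commit to; once "swerve" is removed from your options, their best response flips to swerving.

```python
# Assumed payoffs (you, them): swerving costs a little pride; a head-on crash costs far more.
PAYOFFS = {
    ("straight", "straight"): (-100, -100),
    ("straight", "swerve"):   (10, -5),
    ("swerve", "straight"):   (-5, 10),
    ("swerve", "swerve"):     (0, 0),
}

def their_best_response(my_options):
    """What the other driver does, assuming you'll make your best remaining reply to their move."""
    def my_best_reply(their_move):
        return max(my_options, key=lambda mine: PAYOFFS[(mine, their_move)][0])
    return max(("straight", "swerve"),
               key=lambda theirs: PAYOFFS[(my_best_reply(theirs), theirs)][1])

print(their_best_response(["straight", "swerve"]))  # 'straight': they expect you to chicken out
print(their_best_response(["straight"]))            # 'swerve': with your wheel gone, they back down
```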
This strategy has familiar roots in the idea of “burning your bridges” behind you, referring to armies that destroyed their ability to retreat from battle. This not only convinced the enemy of their unshakable intent, but it also convinced warriors in the advancing army to keep pushing forward, since they had no other option.
Putting this strategy into an everyday context, you can use this principle if you’re playing a game against your future self—for example, if you’re trying to eat healthier but you know your future self won’t have the willpower to resist snacks. If you limit the options of your future self by, for example, making sure you don’t have sweets or snacks in your kitchen, you’ll limit your future self’s freedom to sabotage your efforts.
Pros and Cons of the Madman Theory
Dixit and Nalebuff’s example of two drivers playing chicken is a radical one, but illustrates the possible upsides—and downsides—of the extremes of the limiting-your-options approach. In particular, it shows how acting in a psychologically unstable way can be beneficial but can also backfire.
In the 1970s, US president Richard Nixon embraced the philosophy that the best way to win a game (in his case, the Vietnam War), was to intimidate the opposing side by acting recklessly, to make them believe that he’d do anything to win, no matter how potentially detrimental to his own side. He called this the madman theory, and it hinged on making the other players unsure of his intentions.
The madman theory is only successful if the other party believes your bluff, allowing you to de-escalate a situation after making an extreme threat without having to follow through on it. Publicly destroying your alternative options—like throwing your steering wheel out the window or burning bridges behind your army—can accomplish this, but in many real-life situations, it’s not possible to use such dead-end measures. Instead, the other players must believe that you’re not bluffing when you issue threats of extreme consequences.
Thus, when you make extreme threats or limit your options to force the other side to a specific resolution, you have to be willing to follow through, even if the consequence harms you. If you don’t, then the next time you’re negotiating, the other parties won’t be swayed by your extreme measures. You thus have to make sure the threats you issue are ones you’ll be able to manage if you must put them into action. In Dixit and Nalebuff’s example, this means destroying your own car by getting rid of your steering wheel—you’ll definitely win the game of chicken, but you’ll be out a car.