
Why do smart people sometimes make decisions that seem to work against their own interests? What if you could predict your competitors’ moves and always stay one step ahead?
Game theory offers insights for navigating everything from business negotiations to everyday conflicts. The Art of Strategy by Avinash K. Dixit and Barry J. Nalebuff explains how understanding strategic thinking can transform your approach to competition and cooperation.
Keep reading for our overview of The Art of Strategy: A Game Theorist’s Guide to Success in Business and Life.
Overview of The Art of Strategy by Avinash K. Dixit and Barry J. Nalebuff
The Art of Strategy by Avinash K. Dixit and Barry J. Nalebuff explores how you can apply strategic principles of game theory to your business and everyday life. The authors argue that success in any competitive situation depends on understanding your choices as well as your opponent’s: anticipating moves, reasoning backward from your ultimate goal, and recognizing when to act against your own interests. They examine real-world case studies to explain foundational game theory concepts such as the Nash equilibrium, dominant strategies, and how to encourage people to make selfless choices for the good of the group when selfish ones will benefit them individually.
In this overview of The Art of Strategy: A Game Theorist’s Guide to Success in Business and Life (published in 2010), we’ll explore general game strategies as well as specific strategies for the various types of games you may encounter in your everyday life.
Overview of Games
Dixit and Nalebuff define a game as a set of interactions between people or organizations in which each player’s options, decisions, and outcomes depend on the decisions of the other players. They call this “strategic interdependence.”
As Dixit and Nalebuff explain it, your overall strategy when playing a game is to try to predict what the other players will do so you can make choices that counter, nullify, or otherwise work off their moves so that your interests are served no matter what they do. To make these predictions, you’ll consider the goals, motives, and points of view of the other players. You’ll also assume they’re trying to predict your next move as much as you’re trying to predict theirs.
In general, when playing games, you should assume other players are motivated by self-interest. However, Dixit and Nalebuff note that human motivations aren’t always straightforward. People are driven by a combination of selfishness, benevolence, justice, fairness, and short-term and long-term considerations. They’re influenced by emotions like shame, fear, and happiness. They often act irrationally, can be well aware that their own interests can align with yours, and are likely to engage in “reciprocal altruism,” whereby they act selflessly in the interest of a larger group. These differing motives are born of evolutionary drives that sometimes prioritize individual survival and sometimes prioritize group survival. Therefore, though people will usually try to further their interests at the expense of yours, this isn’t always the case.
Dixit and Nalebuff categorize games by two overall characteristics—games are either:
- Zero-sum or non-zero-sum
- Sequential or simultaneous
We’ll explore each of these distinctions below.
Zero-Sum and Non-Zero-Sum Games
Dixit and Nalebuff explain that if the interests of each player directly conflict with the interests of the other players so that one person’s win is another person’s loss, the game is a zero-sum game. Examples are sports championships and job applications—for one team or applicant to win, all others have to lose.
Non-zero-sum games are those in which multiple players can either benefit or lose at the same time. Examples include business transactions where buyer and seller both benefit from a deal, or where they both lose out by not agreeing to a price.
Dixit and Nalebuff write that most games of life, business, and politics are a combination of these two types of games—players can either both win, both lose, or end up somewhere in the middle, where one wins because the other loses to at least some degree. Continuing the example above, this might mean that the resulting business deal benefits both parties, but benefits one more than the other.
Sequential and Simultaneous Games
Dixit and Nalebuff note another characteristic of games, which is that they are either played sequentially or simultaneously:
- In sequential games, players take turns making moves. Each can then observe the other’s actions and respond accordingly. An example is a board game.
- In simultaneous games, players act at the same time without knowing the other players’ choices. An example is American football, where the offensive team decides which player to pass to and the defensive team decides which player to cover before the ball is snapped into play, and then both teams play out the consequences of their choices simultaneously.
Dixit and Nalebuff devote the majority of their book to discussing simultaneous games, which are more complex than sequential games because you can’t see the other players’ moves before choosing your own. Our guide will do the same—we’ll explore sequential games briefly, then spend the rest of the guide on simultaneous games.
Sequential Games
To successfully navigate sequential games, Dixit and Nalebuff recommend using a game tree to look forward to your ultimate goal and then reason backward. A game tree is a type of decision tree, which is a common analysis tool where you write down a starting choice, then draw a line from it to each possible next choice. From those choices, you draw lines to subsequent choices, and so on. The resulting chart will resemble a tree branching out from a central trunk.
For example, if your end goal is to become a lawyer, your decision tree’s trunk—your starting option—might be “go to college.” From there, your branches could point to different colleges that offer law programs. From there, your branches might point to the specialties each college program offers, as well as potential internships each college might lead to, and so on. You could then judge which path is most likely to lead you to your desired job.
A game tree differs from a simple decision tree because it also accounts for the decisions of others. For example, if you’re playing checkers, your choices at any point could be to move one piece here or another piece there. In either case, your opponent would then have specific choices in response, leading to subsequent options for you, and so on. To use a game tree, keep in mind your goal destination (capturing all your opponent’s pieces), and then choose the path that’s most likely to lead you there, anticipating along the way your opponent’s likely responses to your moves. In this way, think ahead to where you want to end up, then work backward to figure out how to get there.
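To make “look forward, reason backward” concrete, here’s a minimal sketch in Python (our illustration, not the authors’). The move names and payoffs are invented; the mechanism is what matters: Value each branch by assuming whoever moves there will pick their best option, then choose the opening move whose branch holds up best.

```python
# A minimal sketch of backward induction on a tiny, invented game tree.
# Inner dicts map moves to subtrees; integer leaves are payoffs from
# the first player's point of view (higher is better for them).

def branch_value(node, my_turn):
    """Reason backward: a branch is worth whatever its best child is
    worth to the player whose turn it is."""
    if isinstance(node, int):  # Leaf: a final payoff.
        return node
    values = [branch_value(child, not my_turn) for child in node.values()]
    return max(values) if my_turn else min(values)

def best_opening_move(tree):
    """Pick the first move whose branch holds up best once the
    opponent responds as well as they can."""
    return max(tree, key=lambda move: branch_value(tree[move], my_turn=False))

# Hypothetical two-move game: I move, my opponent responds, payoffs follow.
tree = {
    "aggressive": {"block": 1, "retreat": 5},
    "cautious":   {"block": 3, "retreat": 2},
}

print(best_opening_move(tree))  # "cautious" -- it guarantees 2; "aggressive" risks 1
```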
Dixit and Nalebuff note that game trees lose their usefulness in extremely complex cases—for example, if you’re playing chess rather than checkers, where each move reveals so many choices for both players that even supercomputers struggle to analyze them. But in games you encounter in everyday life, such as business or political deals, this approach generally works—pick your end goal, figure out the steps to get there taking into account your opponent’s likely reactions, and choose the path most likely to succeed.
Simultaneous Games
Dixit and Nalebuff devote the majority of their book to exploring simultaneous games, where players make their moves at the same time. Because players can’t see the other players’ choices before making their own, everyone has to predict what the others will do while planning their own moves.
Dixit and Nalebuff write that most games of everyday life are simultaneous games. For example, two companies bringing similar products to market will plan their marketing campaign without knowing how the other company plans to market their product.
Cooperation Versus Competition in Simultaneous Games
Dixit and Nalebuff write that in many games, there’s a tension between selfish choices that benefit an individual and selfless choices that benefit a group as a whole. Individual actions can lead to collectively worse outcomes, but cooperative success is often possible only if everyone in a group acts against their own individual interests—a difficult thing to pull off. When faced with a choice to either ensure their own survival or to risk their personal gain for the good of the group, players often act selfishly, even if they’d ultimately come out better by acting in the group’s interest.
A famous illustration of the way games inherently encourage selfishness, even when it ultimately doesn’t benefit the players, is the prisoner’s dilemma. Dixit and Nalebuff use this game theory concept to explain the related concepts of dominant strategies, the Nash equilibrium, and the tragedy of the commons. We’ll explore each of these below.
The Paradox of the Prisoner’s Dilemma
The prisoner’s dilemma is a classic game theory concept that demonstrates how individuals acting in their own self-interest can end up worse off. In a prisoner’s dilemma, two criminals are interrogated separately about a crime. Both committed the crime, and both naturally want to get away with it. The police have enough evidence to convict each of them of a minor offense, but to convict them of the main crime, they need a confession. While they’re interrogated, a prisoner won’t know if the other is confessing or staying silent, so their dilemma is as follows:
- If they both stay silent, they’re both convicted only of the minor offense (say, one year in prison each).
- If one confesses while the other stays silent, the confessor goes free while the silent one gets a harsh punishment (10 years in prison).
- If they both confess, they both get a medium-harsh punishment (for example, three years in prison).
Thus, they both benefit most if they both stay silent—but only if they both stay silent. By staying silent, each prisoner risks the worst outcome: The other confesses, goes free, and leaves them with the harsh 10-year sentence. And whatever the other does, confessing shortens a prisoner’s own sentence. Ultimately, then, they both have an incentive to confess, even though that’s the choice that leads to a somewhat bad result for everyone.
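To see the incentive at work, here’s a short sketch (our own encoding of the payoffs above, not the authors’ code). Whichever choice the other prisoner makes, confessing yields fewer years:

```python
# The prisoner's dilemma payoffs described above, in years of prison
# time (lower is better). Keys are (my choice, their choice).
YEARS = {
    ("silent",  "silent"):  1,   # both convicted of the minor offense
    ("silent",  "confess"): 10,  # I stay silent while they confess
    ("confess", "silent"):  0,   # I confess while they stay silent
    ("confess", "confess"): 3,   # we both confess
}

# Whatever the other prisoner does, confessing gives me fewer years.
for theirs in ("silent", "confess"):
    assert YEARS[("confess", theirs)] < YEARS[("silent", theirs)]
    print(f"If they choose {theirs!r}: confess = "
          f"{YEARS[('confess', theirs)]} years, "
          f"silent = {YEARS[('silent', theirs)]} years")
```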
Dixit and Nalebuff note that this game dynamic shows up in real-life situations when, for example, companies engage in price wars: Companies A and B will both benefit if they both keep their prices high. If Company A lowers prices, it’ll gain customers (and profits) as Company B loses customers (and profits). If both lower prices, they’ll both end up with lower profits, but they’re both incentivized to lower prices so that they don’t lose customers to the other. Thus, they each have an incentive to make the choice that leads to a somewhat bad result for both of them.
Dominant Strategy
The best approach to navigating a simultaneous game is to use what game theorists call a dominant strategy, if one exists—a strategy that serves you at least as well as any alternative, no matter what the other player chooses.
In the prisoner’s dilemma, this means confessing, because confessing shortens your sentence whether the other prisoner stays silent or confesses. This points to a paradox that contradicts typical economic theories espoused by free-market thinkers such as Adam Smith: When every player pursues their own self-interest, it leads to an outcome that is worse for everyone.
In the price war discussed above, the dominant strategy is lowering your prices, even though, again, this leads to lower profits for everyone. It at least guarantees your company retains some customers and some profits, instead of losing them all to your competitor.
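Using the same assumed sentences, a short sketch (not from the book) can verify the dominance mechanically: A choice is dominant if it’s never worse than any alternative, against anything the opponent might do.

```python
# A sketch of checking for a dominant strategy. YEARS maps
# (my choice, their choice) to my sentence; lower is better.
YEARS = {
    ("silent",  "silent"):  1, ("silent",  "confess"): 10,
    ("confess", "silent"):  0, ("confess", "confess"): 3,
}

def dominant_strategy(years, choices):
    """Return a choice that's at least as good as every alternative
    against every opponent choice, or None if no such choice exists."""
    for mine in choices:
        if all(years[(mine, theirs)] <= years[(other, theirs)]
               for theirs in choices for other in choices):
            return mine
    return None

print(dominant_strategy(YEARS, ("silent", "confess")))  # confess
```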
The Nash Equilibrium
When playing a simultaneous game, you can usually arrive at your best choice by figuring out the equilibrium—specifically, the Nash equilibrium. Named after mathematician John Nash, this is a combination of choices in which each player’s choice is the best response to what they believe the other is most likely to choose, knowing the other player is making the same judgment. Such an outcome is stable—neither player has an incentive to change their decision, because each is already doing as well as they can given the other’s choice. In the prisoner’s dilemma, the Nash equilibrium is where they both confess, knowing the other will likely also confess.
An example of how this strategy plays out in real life is the price war mentioned above. Each company wants to avoid two outcomes: pricing their product too high so they lose customers to the other company, or pricing their product too low so they lose profits even if they gain customers. While they’re weighing these risks, they know that the other company is weighing the same risks and wants to avoid the same fates.
Each company will thus likely set its price at a level that’s high enough to cover its costs but low enough that the other company is unlikely to undercut it, since the other also needs to cover its own costs. In this way, the two companies can settle on a stable price level that removes the need to constantly adjust their pricing in response to each other.
Dixit and Nalebuff write that the vast majority of simultaneous games in real life are ones where both parties benefit by cooperating, not competing, and thus, most games have a Nash equilibrium—a stable outcome that each party individually settles on, based on what they expect the other to do.
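To see what “no incentive to change” means mechanically, here’s a sketch (our illustration, not the book’s) that scans every pair of choices in a two-player game and keeps the pairs where neither player can do better by switching alone. Run on the prisoner’s dilemma sentences (negated so that higher is better), it finds mutual confession as the only stable outcome:

```python
# A sketch of finding pure-strategy Nash equilibria in a two-player game.
# payoffs maps (row choice, column choice) to (row payoff, column payoff),
# where higher is better for each player.
from itertools import product

def pure_nash_equilibria(payoffs, rows, cols):
    """Keep the pairs where neither player gains by changing
    only their own choice."""
    stable = []
    for r, c in product(rows, cols):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_ok and col_ok:
            stable.append((r, c))
    return stable

choices = ("silent", "confess")
payoffs = {  # Prison years from above, negated so higher is better.
    ("silent",  "silent"):  (-1, -1),
    ("silent",  "confess"): (-10, 0),
    ("confess", "silent"):  (0, -10),
    ("confess", "confess"): (-3, -3),
}

print(pure_nash_equilibria(payoffs, choices, choices))
# [('confess', 'confess')] -- stable, even though mutual silence is better for both
```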
Dixit and Nalebuff argue that the trick to finding an equilibrium is figuring out what the focal point of the situation is: the aspect each party guesses the other will pick, and that each thinks the other guesses they’ll pick, in a circular loop of reasoning. Thus, it’s often the most prominent characteristic. For example, if two people are told to meet in New York City but not told where or at what time, they’ll likely choose to meet at noon, which seems like a “starting point” time, and at a famous spot such as the Empire State Building. Studies show that when strangers are told to do this experiment, they’re surprisingly successful at meeting up.
The Tragedy of the Commons
The game theory concept known as the tragedy of the commons is a prisoner’s dilemma involving more than two people. It occurs when a group of people uses a common resource that’s freely available—for example, when hunters all hunt in the same area. Individually, each is incentivized to take as much as possible so they don’t lose it to the other players, who are also taking as much as possible. This ultimately causes the resource to be depleted and all players to lose.
Common resources can be conserved if all players limit their hauls, but, as with the prisoner’s dilemma, only if all of them do so. Incentives to cooperate are often destroyed by the free rider problem, where one person “cheats”: Instead of limiting their haul, they keep taking as much as they can, knowing that because everyone else is limiting theirs, there’s even more left for them. Because all players are aware of this potential, all players are then incentivized not to limit themselves—why should they restrain their own take if doing so will unfairly enrich someone else? This again leads to a race where everyone takes what they can and the resource is depleted.
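A toy simulation (every number below is invented) shows the dynamic: A shared stock that regrows each season survives modest harvests indefinitely, but collapses quickly once every player takes all they can.

```python
# A toy model of a common resource: each season every player takes a
# harvest, then whatever stock remains regrows by a fixed rate.

def simulate(stock, harvest_per_player, players=5, growth=1.2, seasons=10):
    for season in range(1, seasons + 1):
        stock -= harvest_per_player * players
        if stock <= 0:
            return f"depleted in season {season}"
        stock *= growth  # The surviving stock reproduces.
    return f"{stock:.0f} units remain after {seasons} seasons"

print("Restrained players:", simulate(stock=100, harvest_per_player=3))
print("Greedy players:   ", simulate(stock=100, harvest_per_player=10))
```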
The tragedy of the commons underpins many of society’s problems, leading to over-fishing, over-grazing, over-mining, and climate change—each individual country has no incentive to stop its own polluting activities if the rest of the world continues theirs, as limiting itself would put it at an economic disadvantage.
How to Encourage Cooperation and Coordination
So how can games be played to encourage players to make selfless choices so that common resources are not overexploited? Dixit and Nalebuff note that the free market, where everyone is free to act however they want, isn’t good at providing things that benefit everyone but require sacrifices from everyone—such as clean water and air. Thus, a system of governance or oversight must be established that watches for cheaters and punishes those who violate the rules. This is the only way users will feel confident others aren’t cheating, and thus the only way they’ll resist cheating as well.
Dixit and Nalebuff write that, to protect common resources while allowing their use:
- It must be established that a resource is only available to a certain group, and there must be clear rules as to who belongs to that group. There are many ways members can be determined, such as geography (residents of a town, for example), skill set, ethnicity, or subscription fee.
- There must be clear rules about permitted and forbidden behavior, such as hunting seasons, limits on technology (the size of fishing boats, for example), or limits on the size of the haul.
- There have to be clear penalties for violating the rules. These can range from fines to loss of rights to incarceration. They may also be as slight as social ostracism—whatever will deter cheaters. Punishments should get more intense after multiple violations.
- There has to be a system to detect cheaters. This might mean employing regulators or guards. But it can also be users watching for others who are violating the rules.
- There should be easily accessible information. When users know facts like current game stocks and locations, harvesting technologies, and the like, they can better watch to see if others are abiding by the rules.
Miscellaneous Game Strategies
After explaining the basics of sequential and simultaneous games and exploring the difficulty of getting players to coordinate, Dixit and Nalebuff discuss techniques that can help you get ahead in various types of games. These include introducing randomness to the game, detecting lies, and limiting your options.
Be Random
Dixit and Nalebuff write that, because other players are trying to predict your moves just as you’re trying to predict theirs, you can make their job harder by acting randomly. If you can keep them from detecting patterns in your behavior, they’ll be less likely to guess your next move.
The authors illustrate this principle with the example of a soccer player taking a penalty kick. This is a zero-sum game with no pure-strategy Nash equilibrium—there’s no fixed pair of choices both players would stick with, because any choice that benefits the kicker hurts the goalie, and vice versa.
A ball takes only a fraction of a second to travel from the kicker’s foot to the goal, so a goalie doesn’t have the luxury of waiting to see which way it’s headed before they have to choose which side to jump to. (Because a kicker rarely kicks to the center of the goal, Dixit and Nalebuff ignore this option and focus only on the option to kick either left or right.)
If a certain player always kicks to the left, the goalie will naturally guess that their next kick will also go left, and will jump left to block it. But if a kicker kicks left and right with no detectable pattern, the goalie will have a harder time choosing which direction to defend.
Dixit and Nalebuff note that it’s difficult to be truly random. Most people unwittingly fall into patterns even when consciously trying not to. But there are a few tactics you can use to increase your randomness:
- Use recognizable patterns but change them at unpredictable times, forcing the opponent to guess when those patterns will change.
- Don’t be afraid to repeat options—people start to think that a certain option is “due”: If you haven’t kicked left in a while, they’ll assume you’re sure to kick that way soon.
- Follow a fixed rule that you know but the other person doesn’t (see the sketch after this list). For example, just before kicking, glance at your watch. If the second hand is on an even number, kick left. If it’s on an odd number, kick right.
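Here’s a minimal sketch of that last tactic. The current second is perfectly deterministic to you, but an opponent who can’t see your watch sees a left/right sequence with no pattern to exploit:

```python
# A sketch of the "glance at your watch" rule: kick left on an even
# second, right on an odd one. Deterministic for you, unreadable for
# an opponent who can't observe your watch.
import time

def pick_direction():
    return "left" if int(time.time()) % 2 == 0 else "right"

print(pick_direction())
```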
Randomness in Everyday Life
Dixit and Nalebuff caution that while randomness works well in games like sports, it’s less effective in games like business. Negotiators don’t usually value unpredictability.
Dixit and Nalebuff do note some real-life, non-sports situations where randomness is effective, though. Regulators, for example, can’t check every single activity of every business or person they oversee, but random inspections can encourage people to comply with rules—if a person thinks they might get caught at any random time, they’re more likely to follow the rules at all times. This is the principle underpinning, for example, random tax audits, speed traps, and health inspections of restaurants.
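The deterrence logic here is simple expected value: Cheating pays only if the gain exceeds the chance of getting caught multiplied by the penalty. A toy calculation (all numbers invented, not from the book) shows how even a small audit probability can deter when the fine is large:

```python
# A toy model of random enforcement: a rational cheater compares the
# gain from cheating to the expected penalty (probability * fine).

def cheating_pays(gain, audit_probability, fine):
    return gain > audit_probability * fine

# With a 2% audit rate, a $20,000 fine doesn't deter a $1,000 cheat...
print(cheating_pays(gain=1_000, audit_probability=0.02, fine=20_000))   # True
# ...but a $100,000 fine does: the expected penalty is $2,000.
print(cheating_pays(gain=1_000, audit_probability=0.02, fine=100_000))  # False
```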
Watch for Lies
Dixit and Nalebuff argue that you can’t rely on others to tell the truth if the truth could harm their interests. For example, if a salesperson recommends you buy the most expensive toaster available, you can’t fully trust that their recommendation reflects their true opinion of the toaster if they work on commission and stand to make money on that sale.
In any strategic interaction, there’s often an imbalance of information so that one player knows more than another player. Lies happen when the player who knows more tries to conceal or manipulate their information so as to gain an advantage over the other player.
Dixit and Nalebuff advise that if you’re trying to determine if someone is telling the truth, you should look for their attempts at either signaling (with actions that broadcast good intentions) or signal jamming (with actions that limit the amount of information available to you). For example, if the salesperson above offers a no-questions-asked money-back guarantee, that’s a signal that the toaster is a good one. If they discourage you from looking at online reviews, they may be signal jamming to conceal negative information.
Either way, Dixit and Nalebuff basically advise that you watch what the other person does rather than what they say.
Limit Your Options
Sometimes the most effective way to win a game is to signal to the other player your total commitment to winning. You can do this by publicly removing your options to do anything other than continue forward. This can encourage—or even force—the other player to give up.
Dixit and Nalebuff use the example of two drivers playing “chicken”—a game where they drive straight toward each other, and the person who swerves out of the way first loses (because it shows they’re more scared). Of course, if neither swerves, they both crash, and thus they both lose by getting injured or worse. But if you throw your steering wheel out the window, you’ll change the parameters of the game, making it clear that you can’t swerve and therefore the other driver must.
This strategy has familiar roots in the idea of “burning your bridges” behind you, referring to armies that destroyed their ability to retreat from battle. This not only convinced the enemy of their unshakable intent, but it also convinced warriors in the advancing army to keep pushing forward, since they had no other option.
Putting this strategy into an everyday context, you can use this principle if you’re playing a game against your future self—for example, if you’re trying to eat healthier but you know your future self won’t have the willpower to resist snacks. If you limit the options of your future self by, for example, making sure you don’t have sweets or snacks in your kitchen, you’ll limit your future self’s freedom to sabotage your efforts.