Book Summary: The Black Swan, by Nassim Nicholas Taleb

Learn the key points in minutes.

Book Rating by Shortform Readers: 4.8 (55 reviews)

This is a preview of the Shortform book summary of The Black Swan by Nassim Nicholas Taleb. Read the full comprehensive summary at Shortform.

1-Page Book Summary of The Black Swan

The Black Swan is the second book in former options trader Nassim Nicholas Taleb’s five-volume series on uncertainty. This book analyzes so-called “Black Swans”—extremely unpredictable events that have massive impacts on human society.

The Black Swan is named after a classic error of induction wherein an observer assumes that because all the swans he’s seen are white, all swans must be white. Black Swans have three salient features:

  • They are rare (statistical outliers);
  • They are disproportionately impactful; and, because of that outsize impact,
  • They compel human beings to explain why they happened—to show, after the fact, that they were indeed predictable.

Taleb’s thesis, however, is that Black Swans, by their very nature, are always unpredictable—they are the “unknown unknowns” for which even our most comprehensive models can’t account. The fall of the Berlin Wall, the 1987 stock market crash, the creation of the Internet, 9/11, the 2008 financial crisis—all are Black Swans.

Once Taleb introduces the concept of the Black Swan, he delves into human society and psychology, analyzing why modern civilization invites wild randomness and why humans can neither accept nor control that randomness.

Extremistan vs. Mediocristan

To explain how and why Black Swans occur, Taleb coins two categories to describe the measurable facets of existence: Extremistan and Mediocristan.

In Mediocristan, randomness is highly constrained, and deviations from the average are minor. Physical characteristics such as height and weight are from Mediocristan: They have upper and lower bounds, their distribution is a bell curve, and even the tallest or heaviest human being isn’t much taller or heavier than the average. In Mediocristan, prediction is possible.

In Extremistan, however, randomness is wild, and deviations from the average can be, well, extreme. Most social, man-made aspects of human society—the economy, the stock market, politics—hail from Extremistan: They have no known upper or lower bounds, their behavior can’t be graphed on a bell curve, and individual events or phenomena—i.e., Black Swans—can have exponential impacts on averages.

Imagine you put ten people in a room. Even if one of those people is Shaquille O’Neal, the average height in the room is likely to be pretty close to the human average (Mediocristan). If one of those people is Jeff Bezos, however, suddenly the wealth average changes drastically (Extremistan).
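The thought experiment above can be made concrete with a few lines of arithmetic. The figures below are invented for illustration; only Shaquille O'Neal's height (roughly 216 cm) and the rough scale of Jeff Bezos's net worth are real-world anchors.

```python
# A toy version of the ten-people thought experiment. All figures are
# invented for illustration, not real data.

heights_cm = [175, 168, 180, 172, 165, 178, 170, 174, 169]
wealths_usd = [40_000, 55_000, 32_000, 70_000, 48_000, 61_000, 52_000, 45_000, 58_000]

def mean(xs):
    return sum(xs) / len(xs)

with_shaq = heights_cm + [216]                  # one extreme height
with_bezos = wealths_usd + [200_000_000_000]    # one extreme fortune

# Height (Mediocristan): the outlier nudges the average by a few centimeters.
print(round(mean(heights_cm), 1), round(mean(with_shaq), 1))

# Wealth (Extremistan): the single outlier dominates the average entirely.
print(round(mean(wealths_usd)), round(mean(with_bezos)))
```

Run it and the height average barely moves, while the wealth average jumps by several orders of magnitude; that gap is the whole difference between the two "countries."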

The Unreliability of “Experts”

Taleb has very little patience for “experts”—academics, thought leaders, corporate executives, politicians, and the like. Throughout the book, Taleb illustrates how and why “experts” are almost always wrong and have little more ability to predict the future than the average person.

There are two reasons “experts” make bad predictions:

1) Human Nature

Because of various habits innate to our species—our penchant for telling stories, our belief in cause and effect, our tendency to “cluster” around specific ideas (confirmation bias) and “tunnel” into specific disciplines or methods (specialization)—we tend to miss or minimize randomness’s effect on our lives. Experts are no less guilty of this blind spot than your average person.

2) Flawed Methods

Because experts both (1) “tunnel” into the norms of their particular discipline and (2) base their predictive models exclusively on past events, their predictions are inevitably susceptible to the extremely random and unforeseen.

Consider, for example, a financial analyst predicting the price of a barrel of oil in ten years. This analyst may build a model using the gold standards of her field: past and current oil prices, car manufacturers’ projections, projected oil-field yields, and a host of other factors, computed using the techniques of regression analysis. The problem is that this model is innately narrow. It can’t account for the truly random—a natural disaster that disrupts a key producer, or a war that increases demand exponentially.

Taleb draws a key distinction between experts in Extremistan disciplines (economics, finance, politics, history) and Mediocristan disciplines (medicine, physical sciences). Experts like biologists and astrophysicists are able to predict events with fair accuracy; experts like economists and financial planners are not.

Difficulties of Prediction

The central problem with experts is their uncritical belief in the possibility of prediction, despite the mountain of evidence that indicates prediction is a fool’s errand. Some key illustrations of the futility of prediction include:

Discoveries

Most groundbreaking discoveries occur by happenstance—luck—rather than careful and painstaking work. The quintessential example is the discovery of penicillin. Discoverer Alexander Fleming wasn’t researching antibiotics; rather, he was studying the properties of a particular bacterium. He left a stack of cultures lying out in his laboratory while he went on vacation, and when he returned he found that a bacteria-killing mold had formed on one of the cultures. Voilà—the world’s first antibiotic.

Dynamical Systems

A dynamical system is one in which many inputs affect one another. Whereas prediction in a system that contains, say, two inputs is a simple affair—one need only account for the qualities and behavior of those two inputs—prediction in a system that contains, say, five hundred billion inputs is effectively impossible.

The most famous illustration of a dynamical system’s properties is the “butterfly effect.” This idea was proposed by MIT meteorologist Edward Lorenz, who discovered that an infinitesimal change in input parameters can drastically change weather models. The “butterfly effect” describes the possibility that the flutter of a butterfly’s wings can, a few weeks later and many miles distant, cause a tornado.
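A sense of why such systems defeat prediction can be had from a few lines of code. The sketch below uses the logistic map, a standard textbook example of chaos; it is a minimal stand-in for illustration, not the actual weather model the meteorologist ran.

```python
# Sensitive dependence on initial conditions, demonstrated on the logistic
# map x -> r * x * (1 - x) with r = 4 (a standard chaotic toy system).

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000000)
b = trajectory(0.400000001)   # perturbed in the ninth decimal place

print(abs(a[1] - b[1]))                       # still microscopic after one step
print(max(abs(x - y) for x, y in zip(a, b)))  # grows to the system's full range
```

The two runs start indistinguishably close, yet within a few dozen iterations they bear no relation to each other, which is exactly why tiny measurement errors wreck long-range forecasts.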

Predicting the Past

The past itself is as unknowable as the future. Because of how complex the world is and how a single event could be influenced by any number of tiny causes, we cannot reverse engineer causes for events.

An example should help illustrate. Think of an ice cube sitting on a table. Imagine the shape of the puddle that ice cube will make as it melts.

Now think of a puddle on the table and try to imagine how that puddle got there.

When historians propose causes for certain historical events, they’re looking at puddles and imagining ice cubes (or a spilled glass of water, or some other cause). The problem is that the sheer number of possible causes for a puddle—or a historical event—render any ascription of cause suspect.

If You Can’t Predict, How Do You Deal with Uncertainty?

Although Taleb is far more concerned with explaining why prediction is impossible than he is with proposing alternatives or solutions, he does offer some strategies for dealing with radical uncertainty.

1) Don’t Sweat the Small Predictions

When it comes to low-stakes, everyday predictions—about the weather, say, or the outcome of a baseball game—there’s no harm in indulging our natural penchant for prediction: If we’re wrong, the repercussions are minimal. It’s when we make large-scale predictions and incur real risk on their basis that we get into trouble.

2) Maximize Possibilities for Positive Black Swans

Although the most memorable Black Swans are typically the negatively disruptive ones, Black Swans can also be serendipitous. (Shortform note: Love at first sight is an example of a serendipitous Black Swan.)

Two strategies for opening ourselves up to positive Black Swans are (1) sociability and (2) proactiveness when presented with an opportunity. Sociability puts us in the company of others who may be in a position to help us—we never know where a casual conversation might lead. And proactiveness—for example, taking up a successful acquaintance on an invitation to have coffee—ensures we’ll never miss our lucky break.

3) Adopt the “Barbell Strategy”

When Taleb was a trader, he pursued an idiosyncratic investment strategy to inoculate himself against a financial Black Swan. He devoted 85%–90% of his portfolio to extremely safe...

Here's a preview of the rest of Shortform's The Black Swan summary:

The Black Swan Summary Shortform Introduction

In his April 2007 review of The Black Swan in the New York Times, Gregg Easterbrook, riffing on the author’s skepticism about forecasts of any kind, noted, “At the beginning of 2006, the Wall Street Journal forecast a bad year for stocks; the Dow Jones Industrial Average rose 16% that year. (Disturbingly, the Journal has forecast a good year for 2007.)” Mere months later, the world economy would be in a tailspin—and Nassim Nicholas Taleb, who in The Black Swan warned that the global financial system was vulnerable to collapse, would be treated as a seer.

The Black Swan covers a broad range of topics and is organized in a somewhat unbalanced way. The first two parts of the book contain its core ideas, moving from (1) the ways we domesticate randomness to (2) the reasons why prediction is impossible to (3) humans’ best options when it comes to uncertainty. (Chapters 1–7 of the summary correspond to these topics.) The third part of the book, meanwhile, is framed as an add-on to the first two parts for the more technically minded reader—it grounds many of the claims Taleb makes in the second part with examples from history and the social sciences. (This material can be found in our summary’s “Appendix.”) The fourth and final part, entitled “The End,” is very brief and functions...

The Black Swan Summary Chapter 1: What Is a Black Swan?

For millennia, it was universally accepted that all swans were white. In fact, this truth was so incontrovertible that logicians would often use it to illustrate the process of deductive reasoning. That classic deduction went like this:

  1. All swans are white
  2. The bird is a swan
  3. The bird is white

But in 1697, Willem de Vlamingh, a Dutch explorer, discovered black swans while on a rescue mission in Australia—and, in an instant, a universal, incontrovertible truth was shown to be anything but.

After Vlamingh’s discovery, philosophers used the term “black swan” to describe a seeming logical impossibility that could very well end up being possible.

Taleb, however, offers a new spin on the term. He uses it to describe specific historical events with specific impacts. These events have three salient features:

  • They are “outliers” (that is, they lie far outside the realm of regular expectations);
  • They have profound real-world impacts; and
  • Despite (or perhaps because of) their extreme unpredictability, they compel human beings to account for them—to explain after the fact that they were in fact predictable.

Some examples of Black Swan events include World Wars I and II, the fall of the Berlin Wall, 9/11, the rise of the Internet, the stock-market crash of 1987, and the 2008 financial crisis.

Taleb’s thesis is that Black Swans, far from being insignificant or unworthy of systematic study, comprise the most significant phenomena in human history. We should study them, even if we can’t predict them. Thus, counter-intuitively, we would be better served by concentrating our intellectual energies on what we don’t—nay, can’t—know, rather than on what we do and can know.

Taleb also claims, also counter-intuitively, that the more our knowledge advances, the more likely we are to be blindsided by a Black Swan. This is because our knowledge is forever becoming more precise and specific and less capable of recognizing generality—for example, the general tendency for earth-shattering events to be completely unforeseen (which, of course, is why...

The Black Swan Summary Chapter 2: Scalability | Mediocristan and Extremistan

One reason that Black Swans are so profoundly disruptive is that they occur in the “scalable” parts of our lives—where physical limits don’t apply and effects tend toward incredible extremes. When a particular thing—an income, an audience for a particular product—is “scalable,” it can grow exponentially without any additional expenditure of effort.

“Massage therapist,” for example, is a “nonscalable” profession. There is an upper limit on how many clients you can see—there’s only so much time in a day, and therapists’ bodies fatigue—and thus there’s only so much income you can expect from that profession.

“Quantitative trader,” however, is a “scalable” profession. It takes no more energy or time to purchase 5,000 shares of a stock than it does to purchase 50, and your income isn’t limited by physical constraints.

Artists, too, are in a scalable profession (at least in the age of digital reproduction). For instance, a singer doesn’t need to perform her hit song each time someone wants to hear it. She performs it once for the record, and that performance can be disseminated widely.

The problem with scalability is that it creates vast inequalities. Let’s look at the singer example again:

  • Before the advent of recording technology, a singer’s audience was limited to those for whom she could physically perform. That is, a singer in one town didn’t threaten the livelihood of a singer in another town; they might have differently sized audiences—based on the populations of their respective towns—and thus different incomes, but those differences would be comparatively mild.

  • After the advent of recording technology, however, a small number of singers come to dominate the listening public. Now that we can pay pennies to stream Beyoncé any time we want, why spend the $10 or $20 to see a local singer we’ve never heard of? Suddenly, differences in audience and income become vast. With scalability comes extremes.

The Contrary Worlds of Mediocristan and Extremistan

“Mediocristan” is Taleb’s term for the facets of our experience that are...

The Black Swan Summary Chapter 3: Don’t Be a Turkey | It Pays to Be a Skeptic

Picture a turkey cared for by humans. It has been fed every day for its entire life by the same humans, and so it has come to believe the world works in a certain, predictable, and advantageous way. And it does...until the day before Thanksgiving.

Made famous by British philosopher Bertrand Russell (though, in his telling, the unlucky bird was a chicken), this story illustrates the problem with inductive reasoning (the derivation of general rules from specific instances). With certain phenomena—marketing strategy, stock prices, record sales—a pattern in the past is no guarantee of a pattern in the future.

In Taleb’s words, the turkey was a sucker—it had full faith that the events of the past accurately indicated the future. Instead, it was hit with a Black Swan, an event that completely upends the pattern of the past. (It’s worth noting that the problem of inductive reasoning is the problem of Black Swans: Black Swans are possible because we lend too much weight to past experience.)

Another example of faulty inductive reasoning, this time from the world of finance, concerns the hedge fund Amaranth (ironically named after a flower that’s “immortal”), which incurred one of the steepest losses in trading history: $7 billion in less than a week. Just days before the company went into a tailspin, Amaranth had reminded its investors that the firm employed twelve risk managers to keep losses to a minimum. The problem was that these risk managers—or suckers—based their models on the market’s past performance.

In order not to be suckers, we must (1) cultivate an “empirical skepticism”—that is, a skepticism steeped in fact and observation—and (2) remain vigilant against the innately human tendencies that leave us vulnerable to Black Swans.

Traits of the Empirical (a-Platonic) Skeptic vs. the Platonifier:

  • The empirical skeptic respects those who say “I don’t know”; the Platonifier views those who say “I don’t know” as ignorant.
  • ...

Shortform Exercise: Responding to Randomness

Explore what it means to be an “empirical skeptic.”


Write down something that happened to you recently, good or bad, that was out of the ordinary.

The Black Swan Summary Chapter 4: The Scandal of Prediction

With the rapid advance of technology—computer chips, cellular networks, the Internet—it stands to reason that our predictive capabilities too are advancing. But consider how few of these groundbreaking advances in technology were themselves predicted. For example, no one predicted the Internet, and it was more or less ignored when it was created.

(Shortform note: It’s unclear how Taleb defines “predicted.” Plenty of science-fiction writers and cultural commentators anticipated recent technologies like the Internet and augmented and virtual reality.)

It is an inconvenient truth that humans’ predictive capabilities are extremely limited; we are continuously faced with catastrophic or revolutionary events that arrive completely unexpectedly and for which we have no plan. Nevertheless, we maintain that the future is knowable and that we can adequately prepare for it. Taleb calls this tendency the scandal of prediction.

Epistemic Arrogance

The reason we overestimate our ability to predict is that we’re overconfident in our knowledge.

A classic illustration of this fact comes from a study conducted by a pair of Harvard researchers. In the study, the researchers asked subjects to answer specific questions with numerical ranges. (A sample question might be, “How many redwoods are there in Redwood Park in California?” To which the subject would respond, “I’m 98% sure there are between x and y number of redwoods.”) The researchers found that the subjects, though they were 98% sure of their answers, ended up being wrong 45% of the time! (Fun fact: The subjects of the study were Harvard MBAs.) In other words, the subjects picked overly narrow ranges because they overestimated their own ability to estimate. If they had picked wider ranges—and, in so doing, acknowledged their own lack of knowledge—they would have scored much better.
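The study's takeaway—that narrow intervals miss far more often than humbly wide ones—can be sketched with a toy simulation. The distributions and interval widths below are invented for illustration; this is not the researchers' actual protocol.

```python
# Toy calibration experiment: how often does the truth land inside a
# guessed interval of a given width? All numbers are invented.
import random

random.seed(0)

def coverage(half_width, trials=10_000):
    """Fraction of trials where the true value falls within guess +/- half_width."""
    hits = 0
    for _ in range(trials):
        truth = random.gauss(100, 30)   # the unknown quantity
        guess = random.gauss(100, 30)   # a noisy point estimate of it
        if abs(truth - guess) <= half_width:
            hits += 1
    return hits / trials

print(coverage(10))    # narrow, "98% sure"-style interval: misses constantly
print(coverage(100))   # wide, humble interval: nearly always contains the truth
```

The simulation mirrors the MBA result directionally: the overconfident narrow interval fails most of the time, while the wide one rarely does.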

Taleb calls our overconfidence in our knowledge “epistemic arrogance.” On the one hand, we overestimate what we know; on the other, we underestimate what we don’t—uncertainty.

It’s important to recognize that Taleb isn’t...

Shortform Exercise: Learning the Limits of Prediction

Think like Taleb about prediction and the limits of your ability to predict things.


Write down a prediction you recently made, whether it was about a baseball game, the economy, an election, or other event.

The Black Swan Summary Chapter 5: Why We Can’t Know What We’ll Know

Epistemic arrogance, the pretensions of “experts,” our ever-increasing access to information—all belie an incontrovertible fact: In many, perhaps even most, areas of our lives, prediction is simply impossible.

Take discoveries, for example. At any given moment, there are scores of scientists, scholars, researchers, and inventors around the world working diligently to better our lives and increase our knowledge. But what often goes unremarked is that the discoveries with the profoundest impact on our lives are inadvertent—random—rather than the reward for careful and painstaking work.

The discovery of penicillin is a case in point. Biologist Alexander Fleming left a stack of cultures sitting out in his laboratory while he went on vacation, and when he returned, a bacteria-killing mold had formed on one of the cultures. Voilà!—the world’s first antibiotic.

The same goes for the discovery of the cosmic microwave background, the omnipresent radiation in space that provides a key piece of evidence for the Big Bang. No researcher had any idea it existed until two radio astronomers noticed a hiss in their listening devices. How unexpected was their discovery? At first, the astronomers thought the hiss was caused by pigeon droppings on their antenna!

Because earth-shattering discoveries are unexpected or inadvertent, their importance, at least in the beginning, often goes unrecognized. When Darwin presented his findings at the Linnean Society, the 19th-century’s preeminent institution of natural history, its president dismissed the theory as “no striking discovery.”

The Scientific Grounds for Uncertainty

The innovator of the scientific method of “falsification,” Karl Popper, also proposed an influential theory of history. The theory held that, because technological advance was “fundamentally unpredictable,” so too was the course of history.

Popper’s theory is echoed by a key law in statistics called “the law of iterated expectations.” The law describes a situation in which to predict something means to already know that...

The Black Swan Summary Chapter 6: Predicting the Past

Through the limitations of inductive reasoning as illustrated by the turkey anecdote, as well as the distortions of the narrative fallacy and silent evidence, we’ve seen how problematic the past is vis-à-vis prediction. Indeed, because of these phenomena and others, the past itself is as unknowable as the future.

One of the major obstacles that prevents us from knowing the past with certainty is the impossibility of reverse engineering causes for events. That is, there’s no way to determine the precise cause of an event when we work backward in time from the event itself.

An example should help illustrate.

Think of an ice cube sitting on a table. Imagine the shape of the puddle that ice cube will make as it melts.

Now think of a puddle on the table and try to imagine how that puddle got there.

The second thought experiment is much harder than the first. With the right physics know-how and ample time, one could model exactly what kind of puddle will result from the melting ice cube (based on the cube’s shape, the environmental conditions, etc.). In contrast, it’s nearly impossible to reverse engineer a cause from a random puddle (because the puddle could have been caused by any number of things).

When historians propose causes for certain historical events, they’re looking at puddles and imagining ice cubes (or a spilled glass of water, or some other cause). The problem is that the sheer number of possible causes for a puddle—or a historical event—render any ascription of cause suspect.

Poincaré’s nonlinearities also help illustrate this problem. Again, with the right tools and time, one might be able to observe how the flutter of a butterfly’s wings in India causes a hurricane in Florida, but it would be impossible to work backward from the hurricane to that cause—there are just too many other tiny events that may have played a part.

Our Information Is Always Incomplete

Mathematicians and philosophers draw a distinction between “true randomness” and “deterministic chaos.” A “random” system is one whose operation is always and forever...

The Black Swan Summary Chapter 7: What to Do When You Can’t Predict

If we are surrounded by randomness and unpredictability, if our well-being is radically uncertain, what—besides despair—are our options?

1) Don’t Sweat the Small Predictions

It bears repeating that humans’ ability to predict in the short-term is unique among animal species and quite possibly the reason we’ve survived and thrived as long as we have. To predict is human.

So, when it comes to low-stakes, everyday predictions—about the weather, say, or the outcome of a baseball game—there’s no harm in indulging our natural penchant for prediction: If we’re wrong, the repercussions are minimal. It’s when we make large-scale predictions and incur real risk on their basis that we get into trouble.

2) Maximize Possibilities for Positive Black Swans

Although the most memorable Black Swans are typically the negatively disruptive ones, Black Swans can also be serendipitous. (Shortform note: Love at first sight is undoubtedly a Black Swan.)

Taleb advocates (1) sociability and (2) proactiveness when presented with an opportunity as strategies for opening ourselves up to positive Black Swans. Sociability puts us in the company of others who may be in a position to help us—we never know where a casual conversation might lead. And proactiveness—taking up a successful acquaintance on an invitation to have coffee, for example—ensures we’ll never miss our lucky break.

3) Adopt the “Barbell Strategy”

When Taleb was a trader, he pursued an idiosyncratic investment strategy to inoculate himself against a financial Black Swan. He devoted 85%–90% of his portfolio to extremely safe instruments (Treasury bills, for example) and made extremely risky bets—in venture-capital portfolios, for example—with the remaining 10%–15%. (Another variation on the strategy is to have a highly speculative portfolio but to insure yourself against losses greater than 15%.) The high-risk portion of Taleb’s portfolio was highly diversified: He wanted to place as many small bets as possible to increase the odds of a Black Swan paying off in his favor.

The...
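The barbell allocation described above—roughly 85%–90% in very safe instruments, with the remainder spread across many small speculative bets—can be sketched with toy arithmetic. The scenario returns below are invented for illustration, not market data.

```python
# Toy arithmetic for a barbell allocation: ~90% in very safe instruments,
# ~10% split evenly across many small speculative bets. All returns here
# are invented scenario numbers.

def barbell_outcome(total, safe_fraction, safe_return, risky_returns):
    """Portfolio value after one period, with the risky sleeve split evenly."""
    safe = total * safe_fraction * (1 + safe_return)
    per_bet = total * (1 - safe_fraction) / len(risky_returns)
    risky = sum(per_bet * (1 + r) for r in risky_returns)
    return safe + risky

wipeout = [-1.0] * 10              # every speculative bet goes to zero
one_swan = [-1.0] * 9 + [49.0]     # nine busts, one 50x positive Black Swan

print(barbell_outcome(100_000, 0.90, 0.03, wipeout))   # worst case: loss is capped
print(barbell_outcome(100_000, 0.90, 0.03, one_swan))  # one swan dominates the result
```

The point of the structure is visible in the two scenarios: the downside is bounded by the safe sleeve no matter how badly the bets go, while a single positive Black Swan among the many small bets is enough to produce a large overall gain.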

Shortform Exercise: The Barbell Strategy

A barbell strategy devotes the majority of resources to safe options, and a minority to highly risky options that can pay off big. How can you integrate this into your life?


Write down a goal you’ve recently set for yourself, whether in your personal or professional life.

Shortform Exercise: Maximizing Positive Black Swans

Think about how you can use randomness to your advantage.


Write down an area of your life where you think you could use improvement and why.

The Black Swan Summary Appendix: The Contours of Extremistan

Although one needs only an intuitive sense of phenomena like wealth and market returns to understand that they don’t adhere to the same rules as phenomena like height and weight, throughout the book Taleb provides a robust theoretical and statistical scaffolding for his claims about the differences between Mediocristan and Extremistan.

Because these discussions tend toward the technical and aren’t essential for understanding Black Swans and their role in our lives, we at Shortform have decided to summarize them as an appendix.

Unfairness in Extremistan

As exemplified by figures like Beyoncé and Jeff Bezos, social and economic advantages accrue highly unequally in Extremistan.

One reason for this disparity is the “superstar effect.” Coined by economist Sherwin Rosen to describe the unequal distributions of income and prestige in Extremistan sectors like stand-up comedy, classical music, and research scholarship, the “superstar effect” operates when marginal differences in talent yield massive rewards.

The superstar effect, it’s vital to note, is meritocratic—that is, those with the most talent, even if they’re only slightly more talented than their competitors, get the spoils. What a theory like Rosen’s fails to take into account, however, is that all-important aspect of life in Extremistan: dumb luck.

Consider research scholarship for instance. Sociologist Robert K. Merton has observed that academics rarely read all the papers they cite; rather, they look at the bibliography of a paper they have read and pick sources to cite more or less at random.

Now imagine a scholar cites three authors at random. Then a second scholar cites those same three authors, because the first author cited them. Then a third does the same thing. Suddenly the three authors that have been cited are considered leaders in their field, all by dint of dumb luck.

The phenomenon identified by Merton’s study has been called both “cumulative advantage” and “preferential attachment.” These concepts describe the innate human tendency to flock to past successes,...
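The citation dynamic Merton describes can be sketched as a simple "rich get richer" simulation. All parameters below are invented for illustration; the mechanism, not the numbers, is the point.

```python
# A minimal cumulative-advantage simulation: each new citation picks an
# author with probability proportional to how many citations that author
# already has. All parameters are invented.
import random

random.seed(42)

def simulate(n_authors=100, n_citations=6_000):
    counts = [1] * n_authors   # seed weight so everyone starts citable
    for _ in range(n_citations):
        # random.choices picks proportionally to current citation counts
        winner = random.choices(range(n_authors), weights=counts)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

counts = simulate()
top_share = sum(counts[:10]) / sum(counts)
print(top_share)   # the luckiest 10% of authors hold well over 10% of citations
```

Every author starts identical, yet early random luck compounds: a few authors end up cited hundreds of times while most languish, purely by dint of the feedback loop.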

Shortform Exercise: Reflecting on The Black Swan

Reflect on your takeaways from Taleb’s book.


Which of Taleb’s concepts or examples did you find most surprising and why?