This is a preview of the Shortform book summary of
The Black Swan by Nassim Nicholas Taleb.

1-Page Summary

The Black Swan is the second book in former options trader Nassim Nicholas Taleb’s five-volume series on uncertainty. This book analyzes so-called “Black Swans”—extremely unpredictable events that have massive impacts on human society.

The Black Swan is named after a classic error of induction wherein an observer assumes that because all the swans he’s seen are white, all swans must be white. Black Swans have three salient features:

  • They are rare (statistical outliers);
  • They are disproportionately impactful; and, because of that outsize impact,
  • They compel human beings to explain why they happened—to show, after the fact, that they were indeed predictable.

Taleb’s thesis, however, is that Black Swans, by their very nature, are always unpredictable—they are the “unknown unknowns” for which even our most comprehensive models can’t account. The fall of the Berlin Wall, the 1987 stock market crash, the creation of the Internet, 9/11, the 2008 financial crisis—all are Black Swans.

Once Taleb introduces the concept of the Black Swan, he delves into human society and psychology, analyzing why modern civilization invites wild randomness and why humans can neither accept nor control that randomness.

Extremistan vs. Mediocristan

To explain how and why Black Swans occur, Taleb coins two categories to describe the measurable facets of existence: Extremistan and Mediocristan.

In Mediocristan, randomness is highly constrained, and deviations from the average are minor. Physical characteristics such as height and weight are from Mediocristan: They have upper and lower bounds, their distribution is a bell curve, and even the tallest or heaviest human being is not many times taller or heavier than the average. In Mediocristan, prediction is possible.

In Extremistan, however, randomness is wild, and deviations from the average can be, well, extreme. Most social, man-made aspects of human society—the economy, the stock market, politics—hail from Extremistan: They have no known upper or lower bounds, their behavior can’t be graphed on a bell curve, and individual events or phenomena—i.e., Black Swans—can have outsize impacts on averages.

Imagine you put ten people in a room. Even if one of those people is Shaquille O’Neal, the average height in the room is likely to be pretty close to the human average (Mediocristan). If one of those people is Jeff Bezos, however, suddenly the wealth average changes drastically (Extremistan).
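
To make the contrast concrete, here is a minimal Python sketch of the thought experiment (the height and net-worth figures are rough public estimates chosen for illustration, not numbers from the book):

```python
# One extreme member barely moves a Mediocristan average
# but completely dominates an Extremistan average.

heights_m = [1.75] * 9 + [2.16]        # nine ordinary people plus Shaquille O'Neal
wealth_usd = [100_000] * 9 + [150e9]   # nine ordinary people plus Jeff Bezos

avg_height = sum(heights_m) / len(heights_m)
avg_wealth = sum(wealth_usd) / len(wealth_usd)

print(f"Average height: {avg_height:.2f} m")  # ~1.79 m -- barely above normal
print(f"Average wealth: ${avg_wealth:,.0f}")  # ~$15 billion -- wildly distorted
```

One tall man nudges the height average by a few centimeters; one billionaire multiplies the wealth average by five orders of magnitude.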

The Unreliability of “Experts”

Taleb has very little patience for “experts”—academics, thought leaders, corporate executives, politicians, and the like. Throughout the book, Taleb illustrates how and why “experts” are almost always wrong and have little more ability to predict the future than the average person.

There are two reasons “experts” make bad predictions:

1) Human Nature

Because of various habits innate to our species—our penchant for telling stories, our belief in cause and effect, our tendency to “cluster” around specific ideas (confirmation bias) and to “tunnel” into specific disciplines or methods (specialization)—we tend to miss or minimize randomness’s effect on our lives. Experts are no less guilty of this blind spot than the average person.

2) Flawed Methods

Because experts both (1) “tunnel” into the norms of their particular discipline and (2) base their predictive models exclusively on past events, their predictions are inevitably susceptible to the extremely random and unforeseen.

Consider, for example, a financial analyst predicting the price of a barrel of oil in ten years. This analyst may build a model using the gold standards of her field: past and current oil prices, car manufacturers’ projections, projected oil-field yields, and a host of other factors, computed using the techniques of regression analysis. The problem is that this model is innately narrow. It can’t account for the truly random—a natural disaster that disrupts a key producer, or a war that increases demand exponentially.
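
As a purely hypothetical illustration of why such a model is blind to shocks (the numbers, the trend, and the 2021 shock below are invented; this is neither the analyst’s actual model nor Taleb’s), consider a simple trend regression:

```python
import numpy as np

# Invented oil-price history: a gentle upward trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(2000, 2020)
prices = 40 + 1.5 * (years - 2000) + rng.normal(0, 3, len(years))

# The standard move: fit a trend to the past and extrapolate.
slope, intercept = np.polyfit(years, prices, 1)
print(f"Trend forecast for 2030: ${slope * 2030 + intercept:.0f}/bbl")

# The model has no variable for a war or a natural disaster. If a
# supply shock doubles prices in 2021, the forecast is instantly
# obsolete -- and nothing in the fitted history could have flagged it.
print(f"Post-shock 2021 price:   ${2 * (slope * 2021 + intercept):.0f}/bbl")
```

The regression is faithful to the past by construction; that is exactly why it cannot see anything the past does not contain.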

Taleb draws a key distinction between experts in Extremistan disciplines (economics, finance, politics, history) and Mediocristan disciplines (medicine, physical sciences). Experts like biologists and astrophysicists are able to predict events with fair accuracy; experts like economists and financial planners are not.

Difficulties of Prediction

The central problem with experts is their uncritical belief in the possibility of prediction, despite the mountain of evidence that indicates prediction is a fool’s errand. Some key illustrations of the futility of prediction include:

Discoveries

Most groundbreaking discoveries occur by happenstance—luck—rather than careful and painstaking work. The quintessential example is the discovery of penicillin. Discoverer Alexander Fleming wasn’t researching antibiotics; rather, he was studying the properties of a particular bacterium. He left a stack of cultures lying out in his laboratory while he went on vacation, and when he returned he found that a bacteria-killing mold had formed on one of the cultures. Voilà—the world’s first antibiotic.

Dynamical Systems

A dynamical system is one in which many inputs affect one another. Whereas prediction in a system that contains, say, two inputs is a simple affair—one need only account for the qualities and behavior of those two inputs—prediction in a system that contains, say, five hundred billion interacting inputs is effectively impossible.

The most famous illustration of a dynamical system’s properties is the “butterfly effect.” The idea was proposed by MIT meteorologist Edward Lorenz, who discovered that an infinitesimal change in input parameters can drastically change weather models. The “butterfly effect” describes the possibility that the flutter of a butterfly’s wings can, a few weeks later and many miles distant, cause a tornado.

Predicting the Past

The past itself is as unknowable as the future. Because the world is so complex, and because any single event can be influenced by countless tiny causes, we cannot reverse engineer the causes of events.

An example should help illustrate. Think of an ice cube sitting on a table. Imagine the shape of the puddle that ice cube will make as it melts.

Now think of a puddle on the table and try to imagine how that puddle got there.

When historians propose causes for certain historical events, they’re looking at puddles and imagining ice cubes (or a spilled glass of water, or some other cause). The problem is that the sheer number of possible causes for a puddle—or a historical event—renders any ascription of cause suspect.

If You Can’t Predict, How Do You Deal with Uncertainty?

Although Taleb is far more concerned with explaining why prediction is impossible than he is with proposing alternatives or solutions, he does offer some strategies for dealing with radical uncertainty.

1) Don’t Sweat the Small Predictions

When it comes to low-stakes, everyday predictions—about the weather, say, or the outcome of a baseball game—there’s no harm in indulging our natural penchant for prediction: If we’re wrong, the repercussions are minimal. It’s when we make large-scale predictions and incur real risk on their basis that we get into trouble.

2) Maximize Possibilities for Positive Black Swans

Although the most memorable Black Swans are typically the negatively disruptive ones, Black Swans can also be serendipitous. (Shortform note: Love at first sight is an example of a serendipitous Black Swan.)

Two strategies for opening ourselves up to positive Black Swans are (1) sociability and (2) proactiveness when presented with an opportunity. Sociability puts us in the company of others who may be in a position to help us—we never know where a casual conversation might lead. And proactiveness—for example, taking up a successful acquaintance on an invitation to have coffee—ensures we’ll never miss our lucky break.

3) Adopt the “Barbell Strategy”

When Taleb was a trader, he pursued an idiosyncratic investment strategy to inoculate himself against a financial Black Swan. He devoted 85%–90% of his portfolio to extremely safe instruments (Treasury bills, for example) and made extremely risky bets—in venture-capital portfolios, for example—with the remaining 10%–15%. (Another variation on the strategy is to have a highly speculative portfolio but to insure yourself against losses greater than 15%.) The high-risk portion of Taleb’s portfolio was highly diversified: He wanted to place as many small bets as possible to increase the odds of a Black Swan paying off in his favor.

The “barbell strategy” is designed to minimize the pain of a negative Black Swan while, potentially, reaping a positive Black Swan’s benefits. If the market collapses, a person pursuing this strategy isn’t hurt beneath the “floor” of the safe investments (say, 85%), but if the market explodes, he has a chance to capitalize by virtue of the speculative bets.

4) Distinguish Between Positive Contingencies and Negative Ones

Different areas of society have different exposure to Black Swans, both positive and negative. For example, scientific research and moviemaking are “positive Black Swan areas,” where catastrophes are rare and smashing success is always possible; the stock market and catastrophe insurance are “negative Black Swan areas,” where upsides are modest and ruin is always possible. (Chapter 7 develops this distinction.)


Here's a preview of the rest of Shortform's The Black Swan summary:

Shortform Introduction

In his April 2007 review of The Black Swan in the New York Times, Gregg Easterbrook, riffing on the author’s skepticism about forecasts of any kind, noted, “At the beginning of 2006, the Wall Street Journal forecast a bad year for stocks; the Dow Jones Industrial Average rose 16% that year. (Disturbingly, the Journal has forecast a good year for 2007.)” Mere months later, the world economy would be in a tailspin—and Nassim Nicholas Taleb, who in The Black Swan warned that the global financial system was vulnerable to collapse, would be treated as a seer.

The Black Swan covers a broad range of topics and is organized in a somewhat unbalanced way. The first two parts of the book contain its core ideas, moving from (1) the ways we domesticate randomness to (2) the reasons why prediction is impossible to (3) humans’ best options when it comes to uncertainty. (Chapters 1–7 of the summary correspond to these topics.) The third part of the book, meanwhile, is framed as an add-on to the first two parts for the more technically minded reader—it grounds many of the claims Taleb makes in the second part with examples from history and the social sciences. (This material can be found in our summary’s “Appendix.”) The fourth and final part, entitled “The End,” is very brief and functions more or less as a farewell to the reader.

Taleb’s knowledge is encyclopedic. Over the course of the text’s four parts, he discusses mathematics, economics, philosophy, statistics, psychology, political science, history, physics, and literature, among other topics, all with an eye to exposing our assumptions about the nature of randomness and our abilities to account for it.

This summary covers all of Taleb’s major claims as well as many of his real-world examples, but...


Chapter 1: What Is a Black Swan?

For millennia, it was universally accepted that all swans were white. In fact, this truth was so incontrovertible that logicians would often use it to illustrate the process of deductive reasoning. That classic deduction went like this:

  1. All swans are white.
  2. The bird is a swan.
  3. Therefore, the bird is white.

But in 1697, Willem de Vlamingh, a Dutch explorer, discovered black swans while on a rescue mission in Australia—and, in an instant, a universal, incontrovertible truth was shown to be anything but.

After Vlamingh’s discovery, philosophers used the term “black swan” to describe a seeming logical impossibility that could very well end up being possible.

Taleb, however, offers a new spin on the term. He uses it to describe specific historical events with specific impacts. These events have three salient features:

  • They are “outliers”—they lie far outside the realm of regular statistical expectation;
  • They have profound real-world impacts; and
  • Despite (or perhaps because of) their extreme unpredictability, they compel human beings to account for them—to explain, after the fact, that they were indeed predictable.

Some examples of Black Swan events include World Wars I and II, the fall of the Berlin Wall, 9/11, the rise of the Internet, the stock-market crash of 1987, and the 2008 financial crisis.

Taleb’s thesis is that Black Swans, far from being insignificant or unworthy of systematic study, constitute the most significant phenomena in human history. We should study them, even if we can’t predict them. Thus, counter-intuitively, we would be better served by concentrating our intellectual energies on what we don’t—nay, can’t—know, rather than on what we do and can know.

Taleb further claims, again counter-intuitively, that the more our knowledge advances, the more likely we are to be blindsided by a Black Swan. This is because our knowledge is forever becoming more precise and specific, and thus less capable of recognizing generality—for example, the general tendency for earth-shattering events to be completely unforeseen (which, of course, is why they’re earth-shattering).

Platonicity and the Platonic Fold

Like Plato, with his abstract and ideal “forms,” human beings in general tend to favor neat, “pure” concepts that are universally consistent. These concepts—mathematical rules, notions of historical progress, economic laws—allow us to form models of the world so that predictions are much easier to make.

The problem with these models is that they lead us to “mistake the map for the territory”—that is, we are fooled into thinking the models are reality, rather than a very particular representation of reality that excludes outliers (i.e., Black Swans).

Taleb calls this natural human tendency to box in reality Platonicity, and he holds it responsible for our dangerous confidence in our own knowledge. We become so enamored of our elegant, self-consistent models that we are unable to see beyond them.

It’s where our models cease to be useful that Black Swans occur—in the Platonic fold between our predictive models and unpredictable reality.

The Origins of Taleb’s Black Swan Obsession

Taleb’s first encounter with a Black Swan took place in his home country of Lebanon.

For centuries, the region around Mount Lebanon was known for its cosmopolitanism—located on the eastern shore of the Mediterranean, at the crossroads of Europe and the Near East, it was home to a vibrant mercantile population that included Christians and Muslims of various sects, Jews, and Druze.

This tolerant and multicultural paradise was still in existence in Taleb’s youth. He recalls the motley crew of international playboys, spies, writers, and merchants that frequented the country.

Taleb was part of the country’s aristocracy. Both of his grandfathers were educated in France, and one of them was serving as minister of the interior when a teenaged Taleb was jailed for participating in a political rally that turned violent. The government was scared enough of the unrest to grant all the arrested protesters amnesty.

Shortly after the rally came the Black Swan that launched Taleb’s obsession: a civil war between Lebanon’s Muslims and Christians that shattered the country’s centuries-long ethnic peace and reduced Beirut, Lebanon’s capital and the “Paris of the Middle East,” to rubble.

What this Black Swan brought home for Taleb was the blunt unknowability of history. The war wasn’t—couldn’t have been—predicted; it could only be explained after the fact. He realized that human beings suffer from a “triplet of opacity” when it comes to our encounters with history:

1) The Illusion of Understanding

We all tend to think we have a grasp of what’s going on in the world when, in fact, the world is far more complex than we know.

For example, all the adults around Taleb predicted the civil war would last a matter of days (it ended up lasting around 17 years). Despite the fact that events kept contradicting their forecasts, people acted each day as though nothing exceptional had occurred.

2) The Retrospective Distortion

History always appears predictable (or, at least, explainable) in our retrospective accounts of events.

In the case of the Lebanese Civil War, the adults whose forecasts were continually proved incorrect were always able to explain the surprising events after the fact. In other words, the events seemed perfectly predictable—but only after they’d already happened.

3) The Overvaluation of Facts and the Flaw of Expertise

We accumulate information and listen to expert analysis of that information, but these elements never measure up to real events.

Taleb cites the example of his grandfather, who eventually rose to deputy prime minister of Lebanon. Although his grandfather was an educated man with years of experience in politics, his forecasts were proven wrong as routinely as those of his...


Chapter 2: Scalability | Mediocristan and Extremistan

One reason that Black Swans are so profoundly disruptive is that they occur in the “scalable” parts of our lives—where physical limits don’t apply and effects tend toward incredible extremes. When a particular thing—an income, an audience for a particular product—is “scalable,” it can grow exponentially without any additional expenditure of effort.

“Massage therapist,” for example, is a “nonscalable” profession. There is an upper limit on how many clients you can see—there’s only so much time in a day, and therapists’ bodies fatigue—and thus there’s only so much income you can expect from that profession.

“Quantitative trader,” however, is a “scalable” profession. It takes no more energy or time to purchase 5,000 shares of a stock than 50, and your income isn’t limited by physical constraints.

Artists, too, are in a scalable profession (at least in the age of digital reproduction). For instance, a singer doesn’t need to perform her hit song each time someone wants to hear it. She performs it once for the record, and that performance can be disseminated widely.

The problem with scalability is that it creates vast inequalities. Let’s look at the singer example again:

  • Before the advent of recording technology, a singer’s audience was limited to those for whom she could physically perform. That is, a singer in one town wasn’t likely to prevent the survival of a singer in another town; they might have differently sized audiences—based on the populations of their respective towns—and thus different incomes, but those differences would be comparatively mild.

  • After the advent of recording technology, however, a small number of singers come to dominate the listening public. Now that we can pay pennies to stream Beyoncé any time we want, why spend the $10 or $20 to see a local singer we’ve never heard of? Suddenly, differences in audience and income become vast. With scalability comes extremes.

The Contrary Worlds of Mediocristan and Extremistan

“Mediocristan” is Taleb’s term for the facets of our experience that are nonscalable. For example, like the income of a massage therapist, human physical traits such as height and weight hail from Mediocristan—they have upper and lower bounds, and if you were to graph every human’s height and weight, you would produce a bell curve.

Mediocristan’s overriding law can be stated thus: Given a large-enough sample size, no individual event will have a significant effect on the total. That is, there will be outliers—extremely heavy or tall people—but those outliers (1) will not be exponentially larger or smaller than the average, and (2) will be rendered insignificant by the sheer number of average cases. Most physical phenomena—human footspeed, trees’ rate of growth—come from Mediocristan. (Shortform note: Taleb sometimes treats Mediocristan as a distinct place, other times as an adjective to describe certain kinds of phenomena.)

“Extremistan,” oppositely, describes those facets of our experience that are eminently scalable. In Extremistan, inequalities are vast enough that one instance can profoundly affect the total.

Most social (man-made) phenomena come from Extremistan. For example, wealth: It has no readily detectable upper limit, and if you were to include, say, Jeff Bezos in any average of human wealth, you would produce a grossly distorted picture of how much money most people have.

Extremistan—Where Black Swans Fly

In the realm of Mediocristan, randomness is highly constrained (mild): There’s only so much variation in the physical aspects of our world. Thus, in Mediocristan, Black Swans are (effectively) impossible.

In Extremistan, however, randomness is highly variable (wild): No matter how large your sample size for a given phenomenon, you can’t produce a trustworthy average or aggregate picture because of the variation in that phenomenon. In Extremistan, Black Swans are frequent.
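
A minimal simulation shows the point (the Gaussian parameters and the Pareto tail index of 1.1, a common textbook choice for wealth-like quantities, are illustrative assumptions, not figures from the book): the running average of bell-curve draws settles almost immediately, while the running average of heavy-tailed draws keeps lurching whenever a rare giant lands in the sample.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Mediocristan: Gaussian draws (think heights, in cm). Mean stabilizes fast.
gauss = rng.normal(loc=170, scale=10, size=n)

# Extremistan: Pareto draws with tail index ~1.1 (wealth-like).
# Sample means converge agonizingly slowly, dragged around by outliers.
pareto = rng.pareto(1.1, size=n) + 1

for k in (1_000, 10_000, 100_000):
    print(f"n={k:>7,}:  Gaussian mean={gauss[:k].mean():7.2f}   "
          f"Pareto mean={pareto[:k].mean():8.2f}")
```

The Gaussian column is essentially identical at every sample size; the Pareto column is not, and no feasible sample size fixes that.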

(Thanks to the work of French mathematician Benoît Mandelbrot, some Black Swans can become “Gray” Swans—rare and surprising events that can be imaginable, if not precisely predicted. See the Appendix for the theory behind Gray Swans.)

Key Qualities of Mediocristan vs. Extremistan

  • Scalability: Mediocristan is nonscalable; Extremistan is scalable.
  • Typical member: In Mediocristan, the typical member is mediocre (“average,” in the statistical sense); in Extremistan, most members are dwarfs and a few are giants.
  • Inequality: In Mediocristan, the best-off are only marginally better off than the worst-off; in Extremistan, the best-off are considerably better off than the worst-off.
  • Predictability: Mediocristan events are predictable from available information; Extremistan events are highly unpredictable.
  • Distribution: Mediocristan phenomena follow a bell curve; Extremistan phenomena follow either Mandelbrotian “Gray” Swan distributions or are dominated by Black Swans.


Chapter 3: Don’t Be a Turkey | It Pays to Be a Skeptic

Picture a turkey cared for by humans. It has been fed every day for its entire life by the same humans, and so it has come to believe the world works in a certain, predictable, and advantageous way. And it does...until the day before Thanksgiving.

Made famous by British philosopher Bertrand Russell (though, in his telling, the unlucky bird was a chicken), this story illustrates the problem with inductive reasoning (the derivation of general rules from specific instances). With certain phenomena—marketing strategy, stock prices, record sales—a pattern in the past is no guarantee of a pattern in the future.

In Taleb’s words, the turkey was a sucker—it had full faith that the events of the past accurately indicated the future. Instead, it was hit with a Black Swan, an event that completely upends the pattern of the past. (It’s worth noting that the problem of inductive reasoning is the problem of Black Swans: Black Swans are possible because we lend too much weight to past experience.)

Another example of faulty inductive reasoning, this time from the world of finance, concerns the hedge fund Amaranth (ironically named after a flower that is “immortal”), which incurred one of the steepest losses in trading history: $7 billion in less than a week. Just days before the company went into a tailspin, Amaranth had reminded its investors that the firm employed twelve risk managers to keep losses to a minimum. The problem was that these risk managers—or suckers—based their models on the market’s past performance.

In order not to be suckers, we must (1) cultivate an “empirical skepticism”—that is, a skepticism steeped in fact and observation—and (2) remain vigilant against the innately human tendencies that leave us vulnerable to Black Swans.

Traits of the Empirical (a-Platonic) Skeptic vs. the Platonifier

  • The empirical skeptic respects those who say “I don’t know”; the Platonifier views them as ignorant.
  • The empirical skeptic treats Black Swans as the primary incidence of randomness; the Platonifier treats minor deviations as the primary incidence of randomness.
  • The empirical skeptic minimizes theory; the Platonifier praises theory.
  • The empirical skeptic assumes the world functions like Extremistan rather than Mediocristan; the Platonifier assumes the reverse.
  • The empirical skeptic prefers to be broadly right across a wide range of disciplines and situations; the Platonifier prefers to be perfectly right in a narrow range.

The Long History of Empirical Skepticism

The problem of induction illustrated by the turkey story has been noted by many well-known philosophers, including the great skeptic David Hume. But induction’s shortcomings were noted even in antiquity.

Empirical Skeptic #1: Sextus Empiricus

Either a philosopher himself or simply a copyist of other thinkers, Sextus Empiricus resided in 2nd-century AD Alexandria. In addition to his philosophical pursuits, he practiced medicine, doing so according to empirical observation but without dogmatism (that is, without blind loyalty to a particular approach or method). In fact, Sextus Empiricus was devoutly antidogmatic: He eschewed theory of any kind—the title of his most famous book translates as “Against the Professors”—and proceeded according to persistent trial and error, much like Taleb.

Empirical Skeptic #2: Al-Ghazali

An 11th-century Persian philosopher, Al-Ghazali likewise doubted the wisdom of the intellectual authorities of his time. (The title of his most famous text is The Incoherence of the Philosophers.) He expressed skepticism of “scientific” knowledge of the kind later championed by Averroës, who was himself influenced by Aristotle and wrote a rebuttal of Al-Ghazali. Unfortunately, Al-Ghazali’s ideas were co-opted and exaggerated by later Sufi scholars, who argued that humans were better served by communing with God and leaving earthly matters behind.

Empirical Skeptic #3: Pierre Bayle

A French skeptic of the 17th century, Bayle is best known for his Historical and Critical Dictionary, which critiques much of what passed for “truth” in his historical moment.

Empirical Skeptic #4: Pierre-Daniel Huet

A contemporary of Bayle, Huet, long before David Hume, proposed that for any event there could be an infinity of causes.

What Empirical Skeptics Resist

Empirical skeptics tend to resist five cognitive flaws that filter the truth and prevent the recognition of Black Swans.

Flaw #1: The Error of Confirmation

All too often we draw universal conclusions from a particular set of facts. For example, if we were presented with evidence that showed a turkey had been fed and housed for 1,000 straight days, we would likely predict the same for day 1,001 and for day 1,100.

Taleb calls this prediction the “round-trip fallacy.” When we commit the round-trip fallacy, we assume that “no evidence of x”—where x is any...


Shortform Exercise: Responding to Randomness

Explore what it means to be an “empirical skeptic.”


Write down something that happened to you recently, good or bad, that was out of the ordinary.


Chapter 4: The Scandal of Prediction

With the rapid advance of technology—computer chips, cellular networks, the Internet—it stands to reason that our predictive capabilities too are advancing. But consider how few of these groundbreaking advances in technology were themselves predicted. For example, no one predicted the Internet, and it was more or less ignored when it was created.

(Shortform note: It’s unclear how Taleb defines “predicted.” Plenty of science-fiction writers and cultural commentators anticipated recent technologies like the Internet and augmented and virtual reality.)

It is an inconvenient truth that humans’ predictive capabilities are extremely limited; we are continuously faced with catastrophic or revolutionary events that arrive completely unexpectedly and for which we have no plan. Yet we maintain that the future is knowable and that we can adequately prepare for it. Taleb calls this tendency the scandal of prediction.

Epistemic Arrogance

The reason we overestimate our ability to predict is that we’re overconfident in our knowledge.

A classic illustration of this fact comes from a study conducted by a pair of Harvard researchers. In the study, the researchers asked subjects to answer specific questions with numerical ranges. (A sample question might be, “How many redwoods are there in Redwood Park in California?”—to which a subject would respond, “I’m 98% sure there are between x and y redwoods.”) The researchers found that the subjects, though they were 98% sure of their answers, ended up being wrong 45% of the time! (Fun fact: The subjects of the study were Harvard MBAs.) In other words, the subjects picked overly narrow ranges because they overestimated their own ability to estimate. If they had picked wider ranges—and, in so doing, acknowledged their own lack of knowledge—they would have scored much better.

Taleb calls our overconfidence in our knowledge “epistemic arrogance.” On the one hand, we overestimate what we know; on the other, we underestimate what we don’t—uncertainty.

It’s important to recognize that Taleb isn’t talking about how much or how little we actually know, but rather the disparity between what we know and what we think we know. We’re arrogant because we think we know more than we actually do.

This arrogance leads us to draw a distinction between “guessing” and “predicting.” Guessing is when we attempt to fill in a nonrandom variable based on incomplete information, whereas predicting is attempting to fill in a random variable based on incomplete information.

Say, for example, someone asks you to estimate how many natural lakes there are in Georgia. There’s a right answer to the question—it’s 0—but you don’t know it, so your answer is a “guess.”

But say that same someone asks you what the U.S. unemployment rate will be in a year. You might look at past figures, GDP growth, and other metrics to try and make a “prediction.” But the fact is, your answer will still be a “guess”—there are just too many factors (unknown unknowns) to venture anything better than a guess.

The Curse of Information

It stands to reason that the more information we have about a particular problem, the more likely we are to come upon a solution. And the same would seem to go for predictions: The more information we have to make a prediction, the more accurate our prediction will be.

But an array of studies shows that an increase in information actually has negligible—and even negative—effects on our predictions.

For example, the psychologist Paul Slovic conducted a study on oddsmakers at horse tracks. He had the oddsmakers pick the ten most important variables for making odds, then asked the oddsmakers to create odds for a series of races using only those variables.

In the second part of the experiment, Slovic gave the oddsmakers ten more variables and asked them to predict again. The accuracy of their predictions was the same (though their confidence in their predictions increased significantly).

The negative outcome of an increase in information is that we become increasingly sure of our predictions even as their accuracy remains constant.

Experts—The Worst Offenders

The most arrogant group in terms of their predictions—and least aware of their own ignorance—are so-called “experts.” These are the credentialed and/or laureled people whose opinions are granted weight by society.

Taleb divides this group in two. There are those who are arrogant but also display some degree of competence, and then there are those who are arrogant and incompetent.

1) Competent Arrogants

“Competent Arrogants” are those experts with actual predictive abilities and discernible skills. This group includes astronomers, physicists, surgeons, and mathematicians (when dealing exclusively with pure, rather than applied, mathematics).

2) Incompetent Arrogants

“Incompetent Arrogants” are “experts” whose predictive abilities and skills aren’t significantly greater than the average person’s. This group includes stockbrokers, intelligence analysts, clinical psychologists, psychiatrists, economists, finance professors, and personal financial advisors.

In numerous empirical studies, the forecasting ability of the “incompetent arrogants” has been shown to be almost nonexistent.

For example, over the course of several years, psychologist Philip Tetlock asked almost 300 specialists—political scientists, economists, journalists, and politicians—to offer predictions of world events (the timeframe was usually “within the next five years”). He discovered that the experts’ predictions were barely more accurate than random selection and often worse than simple computer simulations.

He also found that the more prominent a person was in his or her field, the worse were his or her predictions. The reason for this finding was that prominent people tend to become prominent based on their having one big...


Shortform Exercise: Learning the Limits of Prediction

Think like Taleb about prediction and the limits of your ability to predict things.


Write down a prediction you recently made, whether it was about a baseball game, the economy, an election, or other event.


Chapter 5: Why We Can’t Know What We’ll Know

Epistemic arrogance, the pretensions of “experts,” our ever-increasing access to information—all belie an incontrovertible fact: In many, perhaps even most, areas of our lives, prediction is simply impossible.

Take discoveries, for example. At any given moment, there are scores of scientists, scholars, researchers, and inventors around the world working diligently to better our lives and increase our knowledge. But what often goes unremarked is that the discoveries with the profoundest impact on our lives are inadvertent—random—rather than the reward for careful and painstaking work.

The discovery of penicillin is a case in point. Biologist Alexander Fleming left a stack of cultures sitting out in his laboratory while he went on vacation, and when he returned, a bacteria-killing mold had formed on one of the cultures. Voilà!—the world’s first antibiotic.

The same goes for the discovery of the cosmic microwave background, the omnipresent radiation in space that provides a key piece of evidence for the Big Bang. No researcher had any idea it existed until two radio astronomers, Arno Penzias and Robert Wilson, noticed a hiss in their listening devices. How unexpected was their discovery? At first, the astronomers thought the hiss was caused by pigeon droppings on their antenna!

Because earth-shattering discoveries are unexpected or inadvertent, their importance, at least in the beginning, often goes unrecognized. When Darwin presented his findings at the Linnean Society, the 19th-century’s preeminent institution of natural history, its president dismissed the theory as “no striking discovery.”

The Scientific Grounds for Uncertainty

The innovator of the scientific method of “falsification,” Karl Popper, also proposed an influential theory of history. The theory held that, because technological advance was “fundamentally unpredictable,” so too was the course of history.

Popper’s theory is echoed by a key law in statistics called “the law of iterated expectations.” Applied to knowledge, the law implies that if you can fully predict something today, then you effectively already know it today.
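
In symbols (this is the standard statement of the law, not Taleb’s own notation): writing E_t for an expectation formed with the information available at time t,

```latex
\mathbb{E}_t\bigl[\,\mathbb{E}_{t+1}[X]\,\bigr] = \mathbb{E}_t[X]
```

If I expect today to expect something tomorrow, then I already expect it today; a discovery whose content I can fully anticipate has, in effect, already been made.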

Think, for example, of the invention of the wheel. If a primitive human were to predict the invention of the wheel, that human would already know enough about the wheel to invent it him- or herself.

The same condition applies to contemporary predictions of discoveries. For example, some have predicted that carbon-capture technology will solve global warming. But to make that prediction, one has to know the specifications of that technology—that is, one has to know already how to create it (and thus how to stop global warming now). In other words, if we don’t know exactly how a certain technology will be created, we can’t make the prediction that the technology will be created.

Poincaré’s Nonlinearities

Henri Poincaré, arguably the most highly regarded mathematician of late-19th-century France, contributed to the theory of unpredictability by proposing the existence of nonlinearities—small phenomena that, as time goes on, can have profound consequences.

A seminal illustration of the concept is the “three-body problem.” In a planetary system consisting of two planets (bodies), predictions about those planets’ motion are easy to make: One need only account for the relation between the planets. But if a third body enters the equation—perhaps a comet, passing between the two planets—the relation changes. Although the comet’s effects may not be visible at first, the tiny forces it has exerted on the planets will compound with each cycle of the system, potentially resulting in massive changes.

Another example of nonlinearities is the so-called “butterfly effect.” In the 1960s, MIT meteorologist Edward Lorenz, modeling weather patterns, discovered that a minuscule rounding change in his input parameters resulted in tremendously different weather events. Thus the concept that a butterfly flapping its wings in one part of the world can cause a hurricane in another.
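
The sensitivity is easy to reproduce. Here is a minimal sketch of Lorenz’s system (the parameters σ = 10, ρ = 28, β = 8/3 are the canonical ones; the crude Euler integration, the step size, and the size of the perturbation are arbitrary illustrative choices):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a "butterfly-sized" perturbation

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t={step * 0.01:5.1f}   separation={np.linalg.norm(a - b):.6f}")
```

A difference of one part in a billion in the starting point grows until the two simulated “weathers” have nothing to do with each other, which is why the forecast horizon for such systems is measured in days, not months.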

Each of these examples illustrates the difficulty of making accurate predictions in a “dynamical system,” wherein there are multiple variables and even slight changes can drastically alter results. When it comes to dynamical systems, Poincaré noted, the properties of the system can be observed and discussed, but there’s no way to compute—to use numbers—to describe the system’s future course.

Suffice it to say, no system is more dynamical than human society—Extremistan.

Hayek’s Libertarianism

One of the very few esteemed economists (he won the Nobel Prize in economics in 1974) to understand the futility of prediction, Friedrich Hayek spent much of his career railing against the main feature of socialism: central planning.

In a classical socialist society, all economic decisions—allocation of resources, setting of prices, etc.—are made by a single entity. Hayek believed that a dynamical system like the economy was simply too complex for a single entity to master.

Hayek attributed notions of central planning to misguided “experts” (nerds, in Taleb’s terminology) who applied the methods of the physical sciences to social matters. Hayek recognized a stark distinction between soft sciences like economics and hard sciences like physics.

Hayek’s skepticism regarding any entity’s ability to predict the functioning of the economy led him to advocate an “a-Platonic” approach, one that is open-minded and proceeds from the bottom up rather than the top down. He argued that a libertarian system, wherein individuals are able to pursue their self-interest with a minimum of direction from above, is the best way to manage uncertainty. In such a system, he believed, the economy itself would adapt to any unexpected changes in, say, the food or credit supply.

Hayek’s anti-Platonifying approach to economics also puts his thought at odds with that of contemporary mainstream economists.

For example, most economists today subscribe to some version of...


Chapter 6: Predicting the Past

Through the limitations of inductive reasoning as illustrated by the turkey anecdote, as well as the distortions of the narrative fallacy and silent evidence, we’ve seen how problematic the past is vis-à-vis prediction. But because of these phenomena and others, the past itself is as unknowable as the future.

One of the major obstacles that prevents us from knowing the past with certainty is the impossibility of reverse engineering causes for events. That is, there’s no way to determine the precise cause of an event when we work backward in time from the event itself.

An example should help illustrate.

Think of an ice cube sitting on a table. Imagine the shape of the puddle that ice cube will make as it melts.

Now think of a puddle on the table and try to imagine how that puddle got there.

The second thought experiment is much harder than the first. With the right physics know-how and ample time, one could model exactly what kind of puddle will result from the melting ice cube (based on the cube’s shape, the environmental conditions, etc.). In contrast, it’s nearly impossible to reverse engineer a cause from a random puddle (because the puddle could have been caused by any number of things).

When historians propose causes for certain historical events, they’re looking at puddles and imagining ice cubes (or a spilled glass of water, or some other cause). The problem is that the sheer number of possible causes for a puddle—or a historical event—renders any ascription of cause suspect.

Poincaré’s nonlinearities too help illustrate this problem. Again, with the right tools and time, one might be able to observe how the flutter of a butterfly’s wings in India causes a hurricane in Florida, but it would be impossible to work backwards from the hurricane to that cause—there are just too many other tiny events that may have played a part.

Our Information Is Always Incomplete

Mathematicians and philosophers draw a distinction between “true randomness” and “deterministic chaos.” A “random” system is one whose operation is always and forever impossible to predict. A “chaotic” system is one whose operation could be predicted, but whose complexity makes prediction so difficult that it’s effectively impossible. That is, if we had the right information, we would be able to make sense of “chaos.”
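
The distinction can be made concrete with a one-line deterministic rule (the logistic map at r = 4 is a textbook example of deterministic chaos, not an example from the book):

```python
# Deterministic chaos in one line: x -> 4x(1 - x).
# The rule is fully known, yet the output looks random, and two nearly
# identical starting points soon disagree completely.

x, y = 0.4, 0.4 + 1e-12
for i in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if i % 10 == 0:
        print(f"step {i:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```

Seen only as output, the sequence is indistinguishable from noise; even knowing the rule exactly, prediction fails in practice because the starting point is never known to infinite precision.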

Taleb notes that, for normal people trying to make predictions about their stock portfolio, for example, or the appreciation of the value of their house, there’s no difference between “true randomness” and “deterministic chaos.” That’s because when we’re faced with a dynamical system, we always lack the necessary information to decide whether it’s truly random or simply chaotic.

Such is the case with history as well: Perhaps each major historical event conforms to some incredibly complex plan—that is, perhaps history is just chaotic, not random—but we’ll never have enough information to discern that plan.

The Proper Use of History

Historical narratives are harmless if understood properly: as windows on to...


Chapter 7: What to Do When You Can’t Predict

If we are surrounded by randomness and unpredictability, if our well-being is radically uncertain, what—besides despair—are our options?

1) Don’t Sweat the Small Predictions

It bears repeating that humans’ ability to predict in the short term is unique among animal species and quite possibly the reason we’ve survived and thrived as long as we have. To predict is human.

So, when it comes to low-stakes, everyday predictions—about the weather, say, or the outcome of a baseball game—there’s no harm in indulging our natural penchant for prediction: If we’re wrong, the repercussions are minimal. It’s when we make large-scale predictions and incur real risk on their basis that we get into trouble.

2) Maximize Possibilities for Positive Black Swans

Although the most memorable Black Swans are typically the negatively disruptive ones, Black Swans can also be serendipitous. (Shortform note: Love at first sight is undoubtedly a Black Swan.)

Taleb advocates (1) sociability and (2) proactiveness when presented with an opportunity as strategies for opening ourselves up to positive Black Swans. Sociability puts us in the company of others who may be in a position to help us—we never know where a casual conversation might lead. And proactiveness—taking up a successful acquaintance on an invitation to have coffee, for example—ensures we’ll never miss our lucky break.

3) Adopt the “Barbell Strategy”

When Taleb was a trader, he pursued an idiosyncratic investment strategy to inoculate himself against a financial Black Swan. He devoted 85%–90% of his portfolio to extremely safe instruments (Treasury bills, for example) and made extremely risky bets—in venture-capital portfolios, for example—with the remaining 10%–15%. (Another variation on the strategy is to have a highly speculative portfolio but to insure yourself against losses greater than 15%.) The high-risk portion of Taleb’s portfolio was highly diversified: He wanted to place as many small bets as possible to increase the odds of a Black Swan paying off in his favor.

The “barbell strategy” is designed to minimize the pain of a negative Black Swan while, potentially, reaping a positive Black Swan’s benefits. If the market collapses, a person pursuing this strategy isn’t hurt beneath the “floor” of the safe investments (say, 85%), but if the market explodes, he has a chance to capitalize by virtue of the speculative bets.
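
A minimal sketch of the payoff arithmetic (the 85/15 split follows the range Taleb describes; the 2% safe return and the 30x payoff are invented for illustration):

```python
# Barbell strategy: cap the downside, leave the upside open.

portfolio = 100_000
safe = 0.85 * portfolio        # Treasury bills (assume they return ~2%)
risky = 0.15 * portfolio       # spread across many small speculative bets

# Worst case: every speculative bet goes to zero.
floor = safe * 1.02
# Lucky case: one small bet catches a positive Black Swan (say, 30x).
upside = safe * 1.02 + risky * 30

print(f"Floor  (all bets fail): ${floor:,.0f}")    # ~$86,700 -- loss capped
print(f"Upside (one bet pays):  ${upside:,.0f}")   # ~$536,700 -- open-ended
```

The asymmetry is the whole point: the maximum loss is known in advance, while the maximum gain is not bounded by the model.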

(Shortform note: There is, of course, the possibility that a Black Swan will affect even the safest investment—in fact, if we take Taleb at his word, there is no such thing as a safe investment.)

4) Distinguish Between Positive Contingencies and Negative Ones

Different areas of society have different exposure to Black Swans, both positive and negative.

For example, scientific research and moviemaking are “positive Black Swan areas”—catastrophes are rare, and there is always the possibility of smashing success. The stock market or catastrophe insurance, meanwhile, are “negative Black Swan areas”—upsides are relatively modest compared to the possibility of financial ruin.

It’s important to note that the history of a negative–Black Swan area will underrepresent the possibility of catastrophe. An obvious example is financial markets, which always appear stable until they crash. Positive–Black Swan areas, oppositely, will underrepresent the possibility of serendipity: Despite years of research, a cure for cancer hasn’t been found; but one could be found any day.

Suffice it to say, we should take more risks in a positive Black Swan area than in a negative Black Swan one.

5) Prepare, Don’t Predict

Because Black Swans are, by definition, unpredictable, we’re better off preparing for the widest range of contingencies than predicting specific events.

That’s because, though Black Swans themselves can never be predicted, their effects can be. For example, no one can predict when an earthquake will strike, but one can know what its effects will be and prepare adequately to handle them.

The same goes for an economic recession. No one can predict precisely when one will occur, but, using the “barbell strategy” or some other means of mitigating risk, we can at least be prepared for one.

The Side-Effects of a Black Swan Obsession

Taleb’s preoccupation with Black Swans—and his fellow humans’ persistence in discounting them—has caused him to develop a contrary personality: where others see safety and stability, he sees risk and the potential for catastrophe; where others see volatility, he sees opportunity.

Taleb’s guiding principle is simple:...


Shortform Exercise: The Barbell Strategy

A barbell strategy devotes the majority of resources to safe options, and a minority to highly risky options that can pay off big. How can you integrate this into your life?


Write down a goal you’ve recently set for yourself, whether in your personal or professional life.


Shortform Exercise: Maximizing Positive Black Swans

Think about how you can use randomness to your advantage.


Write down an area of your life where you think you could use improvement and why.


Appendix: The Contours of Extremistan

Although one needs only an intuitive sense of phenomena like wealth and market returns to understand that they don’t adhere to the same rules as phenomena like height and weight, throughout the book Taleb provides a robust theoretical and statistical scaffolding for his claims about the differences between Mediocristan and Extremistan.

Because these discussions tend toward the technical and aren’t essential for understanding Black Swans and their role in our lives, we at Shortform have decided to summarize them as an appendix.

Unfairness in Extremistan

As exemplified by figures like Beyoncé and Jeff Bezos, social and economic advantages accrue highly unequally in Extremistan.

One reason for this disparity is the “superstar effect.” Coined by economist Sherwin Rosen to describe the unequal distributions of income and prestige in Extremistan sectors like stand-up comedy, classical music, and research scholarship, the term names what happens when marginal differences in talent yield massive rewards.

The superstar effect, it’s vital to note, is meritocratic—that is, those with the most talent, even if they’re only slightly more talented than their competitors, get the spoils. What a theory like Rosen’s fails to take into account, however, is that all-important aspect of life in Extremistan: dumb luck.

Consider research scholarship, for instance. Sociologist Robert K. Merton observed that academics rarely read all the papers they cite; rather, they look at the bibliography of a paper they have read and pick sources to cite more or less at random.

Now imagine a scholar cites three authors at random. Then a second scholar cites those same three authors, because the first author cited them. Then a third does the same thing. Suddenly the three authors that have been cited are considered leaders in their field, all by dint of dumb luck.

The phenomenon identified by Merton’s study has been called both “cumulative advantage” and “preferential attachment.” These concepts describe the innate human tendency to flock to past successes, regardless of whether those successes are the product of merit or chance.
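
A small simulation captures the citation-copying story (the paper counts and copying rule below are invented; the mechanism is the standard preferential-attachment model rather than code from the book):

```python
import random

random.seed(7)

# Start with 10 papers, one citation each. Each of 5,000 new papers then
# cites 3 earlier papers, chosen in proportion to existing citation counts.
citations = [1] * 10
for _ in range(5000):
    for cited in random.choices(range(len(citations)), weights=citations, k=3):
        citations[cited] += 1
    citations.append(1)   # new paper starts with weight 1 so it can be picked later

ranked = sorted(citations, reverse=True)
print("Top 5 papers' citation counts:", ranked[:5])
print(f"Share of all citations held by the top 1% of papers: "
      f"{sum(ranked[:len(ranked) // 100]) / sum(ranked):.0%}")
```

No paper in this toy world is more deserving than any other; the runaway winners won purely because they were picked early, and being picked made them more likely to be picked again.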

Although cumulative advantage/preferential attachment provides a better account of unfairness than the superstar effect—because it accounts for randomness in the distribution of advantage/preference—it still isn’t a perfect theory of Extremistan unfairness. This is because, according to cumulative advantage/preferential attachment, winners stay on top. Once an entity—a company, an academic, a professional sports coach—reaches a certain level of success, cumulative advantage/preferential attachment holds that that entity will continue to be successful, because humans naturally favor past success. But this doesn’t reflect reality.

For example, according to cumulative advantage/preferential attachment, Apple will forever be the king of consumer electronics, and Google will forever own the Internet. But even a cursory knowledge of business history shows a belief like this to be misguided. Consider this: If you tracked the 500 largest U.S. companies from 1957 to 1997, you’d discover that only 74 lasted the full 40 years. Some ceased to exist by virtue of mergers, but most simply failed. In other words, even in Extremistan, giants can occasionally be toppled by dwarfs.

Nevertheless, despite the possibility of creative destruction, Extremistan is defined by extreme concentration—of prestige, income, capital, influence, size, or what have you. These concentrations create the appearance of stability—larger, more powerful entities seem more resilient—but they actually create the potential for catastrophic Black Swans.

The example par excellence of the risks of concentration is the banking industry. The explosive growth and interrelation of the major global financial institutions were the catalysts for the Great Recession of 2008.

(Shortform note: The popularity of The Black Swan is due, in large part, to Taleb’s clairvoyance about the vulnerabilities of the financial industry. The book was published in April 2007, before the worst of the crisis, yet Taleb describes the contours of that crisis with eerie precision.)

The most far-reaching unfairness of Extremistan, however, more so than economic inequality, is inequality in intellectual influence. As illustrated by Merton’s citation example just above, intellectual influence can be a matter of dumb luck rather than ability or insightfulness. This problem becomes acute when we trust thought leaders—“experts”—with our lives and livelihoods (see Chapter 4: The Scandal of Prediction).

The Limits of the Bell Curve

The classic bell curve—which is also called the “Gaussian distribution” after German mathematician Carl Friedrich Gauss—is an accurate description of Mediocristan phenomena, but it is dangerously misleading when it comes to Extremistan.

Consider human height, an eminently Mediocristan phenomenon. With every increase or decrease in height relative to the average, the odds of a person being that tall or short decline. For example, the odds that a person (man or woman) is at least three inches taller than the average are 1 in 6.3; at least 7 inches taller, 1 in 44; at least 11 inches taller, 1 in 740; at least 14 inches taller, 1 in 32,000.

It’s important to note that the odds not only decline as the height gets further and further from the average, but they decline at an accelerating rate. For example, the odds of someone being 7’1” are 1 in 3.5 million, but the odds of someone being just four inches taller are 1 in 1 billion, and the odds of someone being four inches taller than that are 1 in 780 billion!
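
You can reproduce the accelerating decline directly (a minimal scipy sketch; reading the inch figures above as 1, 2, 3, and 4 standard deviations is our interpretation, since one standard deviation of adult height is roughly 3.5 inches):

```python
from scipy.stats import norm

# Odds of exceeding the average by 1, 2, 3, and 4 standard deviations.
prev = None
for z in (1, 2, 3, 4):
    odds = 1 / norm.sf(z)    # 1 / P(Z > z), the Gaussian tail probability
    note = "" if prev is None else f"   ({odds / prev:.0f}x rarer than the previous step)"
    print(f"{z} sd above average: 1 in {odds:,.0f}{note}")
    prev = odds

# Prints roughly: 1 in 6, 1 in 44, 1 in 741, 1 in 31,574 -- each step is
# rarer by a growing factor (7x, 17x, 43x). A scalable (Pareto-type) tail,
# like the wealth figures below, instead thins by a constant factor each
# time the threshold doubles.
```

This accelerating collapse of the tail is what makes the bell curve trustworthy in Mediocristan and misleading in Extremistan.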

Now consider an Extremistan phenomenon like wealth. In Europe, the probability that someone has a net worth higher than 1 million euros is 1 in 62.5; higher than 2 million euros, 1 in 250; higher than 4 million, 1 in 1,000; higher than 8 million, 1 in 4,000; and higher than 16 million, 1 in 16,000. The...


Shortform Exercise: Reflecting on The Black Swan

Reflect on your takeaways from Taleb’s book.


Which of Taleb’s concepts or examples did you find most surprising and why?


Table of Contents

  • 1-Page Summary
  • Shortform Introduction
  • Chapter 1: What Is a Black Swan?
  • Chapter 2: Scalability | Mediocristan and Extremistan
  • Chapter 3: Don’t Be a Turkey | It Pays to Be a Skeptic
  • Exercise: Responding to Randomness
  • Chapter 4: The Scandal of Prediction
  • Exercise: Learning the Limits of Prediction
  • Chapter 5: Why We Can’t Know What We’ll Know
  • Chapter 6: Predicting the Past
  • Chapter 7: What to Do When You Can’t Predict
  • Exercise: The Barbell Strategy
  • Exercise: Maximizing Positive Black Swans
  • Appendix: The Contours of Extremistan
  • Exercise: Reflecting on The Black Swan