

This article is an excerpt from the Shortform summary of "The Black Swan" by Nassim Taleb. Shortform has the world's best summaries of books you should be reading.
Like this article? Sign up for a free trial here.
What is empirical skepticism philosophy? In what ways does it resist faulty reasoning and cognitive bias?
Empirical skepticism philosophy is a skeptical approach steeped in fact and observation. It was practiced by philosophers such as Sextus Empiricus and David Hume.
We’ll cover the history of empirical skepticism philosophy and how it resists five common cognitive errors.
Empirical Skepticism Philosophy
Before we dive into empirical skepticism philosophy, let’s explore inductive reasoning to see why empirical skepticism is necessary.
An example of faulty inductive reasoning, this time from the world of finance, concerns the hedge fund Amaranth (ironically named after a flower that’s “immortal”), which incurred one of the steepest losses in trading history: $7 billion in less than a week. Just days before the company went into a tailspin, Amaranth had reminded its investors that the firm employed twelve risk managers to keep losses to a minimum. The problem was that these risk managers—or suckers—based their models on the market’s past performance.
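To make the inductive trap concrete, here is a minimal simulation sketch. It uses invented numbers, not Amaranth’s actual data or models: daily returns come from a fat-tailed distribution, but a risk estimate fit only to the first 1,000 “observed” days treats them as Gaussian, and that estimate will usually look tame next to the worst day the process later produces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fat-tailed daily returns (Student's t, df=2.5, scaled to ~1%).
# These are made-up numbers for illustration, not real market data.
returns = rng.standard_t(df=2.5, size=10_000) * 0.01
observed = returns[:1_000]      # the history the "risk managers" get to see

# Naive model fit to the observed window: assume returns are Gaussian.
mu, sigma = observed.mean(), observed.std()
var_99 = mu - 2.33 * sigma      # Gaussian 99% worst-case daily return

# Compare the model's worst case with the worst day that actually arrives later.
worst_future_day = returns[1_000:].min()

print(f"Model's 99% worst-case daily return: {var_99:.2%}")
print(f"Worst day the future actually holds: {worst_future_day:.2%}")
```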
In order not to be suckers, we must (1) cultivate an “empirical skepticism”—that is, a skepticism steeped in fact and observation—and (2) remain vigilant against the innately human tendencies that leave us vulnerable to Black Swans.
| Traits of the Empirical Skeptic | Traits of the Platonifier (induction) |
| --- | --- |
| Respects those who say “I don’t know” | Views those who say “I don’t know” as ignorant |
| Thinks of Black Swans as a primary incidence of randomness | Thinks of minor deviations as the primary incidence of randomness |
| Minimizes theory | Praises theory |
| Assumes the world functions like Extremistan rather than Mediocristan | Assumes the world functions like Mediocristan rather than Extremistan |
| Prefers to be broadly right across a wide range of disciplines and situations | Prefers to be perfectly right in a narrow range of disciplines and situations |
The History of Empirical Skepticism Philosophy
The problem of induction illustrated by the turkey story has been noted by many well-known philosophers, including the great empirical skeptic David Hume. But induction’s shortcomings were noted even in antiquity.
Empirical Skeptic #1: Sextus Empiricus
Either a philosopher himself or simply a copyist of other thinkers, Sextus Empiricus resided in 2nd-century AD Alexandria. In addition to his philosophical pursuits, he practiced medicine, doing so according to empirical observation but without dogmatism (that is, without blind loyalty to a particular approach or method). In fact, Sextus Empiricus was devoutly antidogmatic: He eschewed theory of any kind—the title of his most famous book translates as “Against the Professors”—and proceeded according to persistent trial and error, much like Taleb.
Empirical Skeptic #2: Al-Ghazali
An 11th-century Persian philosopher, Al-Ghazali, too, doubted the wisdom of the intellectual authorities of his time. (The title of his most famous text is The Incoherence of the Philosophers.) He expressed empirical skepticism of “scientific” knowledge (as espoused by his rival, Averroës, who was himself influenced by Aristotle). Unfortunately, Al-Ghazali’s ideas were co-opted and exaggerated by later Sufi scholars, who argued that humans were better served by communing with God and leaving behind all earthly matters.
Empirical Skeptic #3: Pierre Bayle
A French empirical skeptic of the 17th century, Bayle is best known for his Historical and Critical Dictionary, which critiques much of what passed for “truth” in his historical moment.
Empirical Skeptic #4: Pierre-Daniel Huet
A contemporary of Bayle, Huet proposed, long before David Hume, that for any event there could be an infinity of causes.
What Empirical Skepticism Philosophy Resists
Empirical skeptics tend to resist five cognitive flaws that filter the truth and prevent the recognition of Black Swans.
Flaw #1: The Error of Confirmation
All too often we draw universal conclusions from a particular set of facts. For example, if we were presented with evidence that showed a turkey had been fed and housed for 1,000 straight days, we would likely predict the same for day 1,001 and for day 1,100.
Taleb calls this prediction the “round-trip fallacy.” When we commit the round-trip fallacy, we assume that “no evidence of x”—where x is any event or phenomenon—is the same as “evidence of no x.”
For example, in the turkey illustration, we might assume that “no evidence of the possibility of slaughter” equals “evidence of the impossibility of slaughter.” To take a medical example, if a cancer screening comes back negative, there is “no evidence of cancer,” not “evidence of no cancer” (because the scan isn’t perfect and could have missed something).
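The asymmetry becomes clearer with a quick Bayes’ rule calculation. The numbers below are hypothetical (a real test’s sensitivity and specificity, and the disease’s prevalence, will differ), but the shape of the result holds: a negative screening makes cancer much less likely without making it impossible.

```python
# Hypothetical figures for illustration only.
prevalence = 0.01     # P(cancer) before any test
sensitivity = 0.90    # P(test positive | cancer)
specificity = 0.95    # P(test negative | no cancer)

# Bayes' rule for P(cancer | negative test).
p_negative = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
p_cancer_given_negative = (1 - sensitivity) * prevalence / p_negative

print(f"P(cancer | negative screening) = {p_cancer_given_negative:.4f}")
# Roughly 0.001: small, but not zero. "No evidence of cancer" is not
# "evidence of no cancer."
```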
In addition to drawing broad conclusions from narrow observations, we also have a tendency to select evidence on the basis of preconceived frameworks, biases, or hypotheses. For example, a scientist conducting an experiment may, often unconsciously, discount evidence that disconfirms her hypothesis in favor of the evidence that confirms it. Taleb calls this habit “naive empiricism,” but it’s more commonly known as “confirmation bias.” This flaw can be conquered through empirical skepticism philosophy.
Taleb’s solution to naive empiricism/confirmation bias is negative empiricism—the rigorous search for disconfirming, rather than corroborating, evidence. This technique was pioneered by a philosopher of science named Karl Popper, who called it “falsification.” The reason negative empiricism/falsification is so effective is that we can be far more sure of wrong answers than right ones.
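As a toy sketch of that asymmetry (the hypothesis and “observations” below are invented, not Popper’s or Taleb’s own example): ten thousand confirming cases leave a claim merely unrefuted, while one disconfirming case settles it for good.

```python
def hypothesis_holds(swan):
    """The claim under test: all swans are white."""
    return swan["color"] == "white"

# 10,000 confirming observations, then a single disconfirming one.
observations = [{"id": i, "color": "white"} for i in range(10_000)]
observations.append({"id": 10_000, "color": "black"})

# Negative empiricism: look for a counterexample instead of counting confirmations.
counterexample = next((s for s in observations if not hypothesis_holds(s)), None)

if counterexample is None:
    print("Unrefuted so far, but 10,000 confirmations still prove nothing.")
else:
    print(f"Falsified by observation {counterexample['id']}: "
          "one black swan outweighs every white one before it.")
```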
Flaw #2: The Narrative Fallacy
Because humans are naturally inclined to stories, with distinct causes and effects, we are perennially in danger of committing the “narrative fallacy”—the ascription of meaning or cause to random events.
Our tendency to narrativize is part and parcel of our compulsion to interpret. Humans are evolutionarily conditioned—by the development of the left hemisphere of our brains—to reduce the complexity of the world’s information (we’ll discuss why in a moment); and the most efficient way of simplifying that complexity is through interpretation.
For example, read the following poem:
All work and no
no play makes Jack
a dull boy
Notice anything strange? (There’s an extra “no” in the second line.) When experimental subjects reading a similar poem had the left hemispheres of their brains impaired, they were better able to notice the extra “no.”
Neurotransmitters in the brain, too, encourage interpretation. When patients are administered dopamine supplements, they become more likely to see patterns where there are none.
Why are humans predisposed to interpretation? For a very practical reason: It makes information easier for our brains to store. Whereas retaining 100 randomly ordered numbers would be near impossible, retaining 100 numbers that were ordered according to a specific rule would be much easier. When we interpret—or narrativize—we’re attempting to impose our own organizing rule on the random facts of the world.
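One rough way to see this is to treat compression as a stand-in for memory (an illustrative analogy, not something from Taleb’s text): a rule-governed sequence of 100 numbers can be stored in far fewer bytes than a patternless one, because the rule does the remembering.

```python
import random
import zlib

random.seed(0)

ruled = [i % 10 for i in range(100)]                      # 0,1,...,9 repeating: one simple rule
patternless = [random.randint(0, 9) for _ in range(100)]  # no rule at all

def stored_size(numbers):
    """Bytes needed once zlib has squeezed out whatever pattern it can find."""
    return len(zlib.compress(",".join(map(str, numbers)).encode()))

print("Rule-governed sequence:", stored_size(ruled), "bytes")
print("Patternless sequence:  ", stored_size(patternless), "bytes")
```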
So if we’re biologically and psychologically conditioned to narrativize—and, thus, to remain blind to the totally random (i.e., a Black Swan)—what choice do we have? Empirical skepticism philosophy. Taleb offers two suggestions:
Using Empirical Skepticism Philosophy
Solution #1: Favor System 2 over System 1
Empirical psychologists like Nobel Laureate Daniel Kahneman and Amos Tversky have distinguished two types of thinking in human beings: “System 1” thinking, which is experiential, kneejerk, intuitive, and effortless; and “System 2” thinking, which is cogitative, slow, and effortful.
System 1 thinking is what kicks in when we immediately situate an event in a particular framework or pattern. For example, when we hear “no evidence of cancer” and think “evidence of no cancer,” we’re receiving the information using our System 1 resources.
When faced with new information, we have to make a concerted effort to engage System 2 thinking—to pause and reason about the real significance of the information.
———End of Preview———

Like what you just read? Read the rest of the world's best summary of "Black Swan" at Shortform. Learn the book's critical concepts in 20 minutes or less.
Here's what you'll find in our full Black Swan summary :
- Why world-changing events are unpredictable, and how to deal with them
- Why you can't trust experts, especially the confident ones
- The best investment strategy to take advantage of Black Swans