Empirical Skepticism: 5 Ways to Fight Bad Logic

This article is an excerpt from the Shortform summary of "The Black Swan" by Nassim Taleb. Shortform has the world's best summaries of books you should be reading.

Like this article? Sign up for a free trial here.

What is empirical skepticism philosophy? In what ways does it resist faulty reasoning and cognitive bias?

Empirical skepticism philosophy is a skeptical approach steeped in fact and observation. It was practiced by philosophers such as Sextus Empiricus and David Hume.

We’ll cover the history of empirical skepticism philosophy and how it resists five common cognitive errors.

Empirical Skepticism Philosophy

Before we dive into empirical skepticism philosophy, let’s explore inductive reasoning to see why empirical skepticism is necessary.

An example of faulty inductive reasoning from the world of finance concerns the hedge fund Amaranth (ironically named after a flower that’s “immortal”), which incurred one of the steepest losses in trading history: $7 billion in less than a week. Just days before the company went into a tailspin, Amaranth had reminded its investors that the firm employed twelve risk managers to keep losses to a minimum. The problem was that these risk managers (or suckers) based their models on the market’s past performance.

In order not to be suckers, we must (1) cultivate an “empirical skepticism”—that is, a skepticism steeped in fact and observation—and (2) remain vigilant against the innately human tendencies that leave us vulnerable to Black Swans.

Traits of the Empirical Skeptic vs. the Platonifier (Inductive Thinker)

  • The empirical skeptic respects those who say “I don’t know”; the Platonifier views those who say “I don’t know” as ignorant.
  • The empirical skeptic thinks of Black Swans as the primary incidence of randomness; the Platonifier thinks of minor deviations as the primary incidence of randomness.
  • The empirical skeptic minimizes theory; the Platonifier praises theory.
  • The empirical skeptic assumes the world functions like Extremistan rather than Mediocristan; the Platonifier assumes the world functions like Mediocristan rather than Extremistan.
  • The empirical skeptic prefers to be broadly right across a wide range of disciplines and situations; the Platonifier prefers to be perfectly right in a narrow range of disciplines and situations.

The History of Empirical Skepticism Philosophy

The problem of induction, illustrated by Taleb’s story of the turkey that is fed for 1,000 straight days and then slaughtered (we’ll return to it under Flaw #1 below), has been noted by many well-known philosophers, including the great empirical skeptic David Hume. But induction’s shortcomings were noted even in antiquity.

Empirical Skeptic #1: Sextus Empiricus

Either a philosopher himself or simply a copyist of other thinkers, Sextus Empiricus resided in 2nd-century AD Alexandria. In addition to his philosophical pursuits, he practiced medicine, doing so according to empirical observation but without dogmatism (that is, without blind loyalty to a particular approach or method). In fact, Sextus Empiricus was devoutly antidogmatic: He eschewed theory of any kind (the title of his most famous book translates as Against the Professors) and proceeded according to persistent trial and error, much like Taleb.

Empirical Skeptic #2: Al-Ghazali

An 11th-century Persian philosopher, Al-Ghazali too doubted the wisdom of the intellectual authorities of his time. (The title of his most famous text is The Incoherence of the Philosophers.) He expressed empirical skepticism of “scientific” knowledge (as espoused by his rival Averroës, who was himself influenced by Aristotle). Unfortunately, Al-Ghazali’s ideas were co-opted and exaggerated by later Sufi scholars, who argued that humans were better served by communing with God and leaving earthly matters behind.

Empirical Skeptic #3: Pierre Bayle

A French empirical skeptic of the 17th century, Bayle is best known for his Historical and Critical Dictionary, which critiques much of what passed for “truth” in his historical moment.

Empirical Skeptic #4: Pierre-Daniel Huet

A contemporary of Bayle, Huet proposed, long before David Hume, that for any event there could be an infinity of causes.

What Empirical Skepticism Philosophy Resists

Empirical skeptics tend to resist five cognitive flaws that filter the truth and prevent the recognition of Black Swans.

Flaw #1: The Error of Confirmation

All too often we draw universal conclusions from a particular set of facts. For example, if we were presented with evidence that showed a turkey had been fed and housed for 1,000 straight days, we would likely predict the same for day 1,001 and for day 1,100.

Taleb calls this prediction the “round-trip fallacy.” When we commit the round-trip fallacy, we assume that “no evidence of x”—where x is any event or phenomenon—is the same as “evidence of no x.”

For example, in the turkey illustration, we might assume that “no evidence of the possibility of slaughter” equals “evidence of the impossibility of slaughter.” To take a medical example, if a cancer screening comes back negative, there is “no evidence of cancer,” not “evidence of no cancer” (because the scan isn’t perfect and could have missed something).
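(Shortform example: here’s a minimal Bayesian sketch of why a negative screening is “no evidence of cancer” rather than “evidence of no cancer.” The base rate, sensitivity, and specificity figures below are hypothetical, chosen only for illustration.)

```python
# All figures are hypothetical, chosen only to illustrate the logic.
base_rate = 0.01      # prior probability a patient has cancer
sensitivity = 0.90    # P(positive test | cancer): the scan misses 10% of cases
specificity = 0.95    # P(negative test | no cancer)

# Law of total probability: overall chance of a negative result
p_negative = (1 - sensitivity) * base_rate + specificity * (1 - base_rate)

# Bayes' rule: chance of cancer despite the negative result
p_cancer_given_negative = (1 - sensitivity) * base_rate / p_negative

print(f"P(cancer | negative test) = {p_cancer_given_negative:.4f}")  # ~0.0011
```

The probability is small but not zero: the negative result is an absence of evidence, not evidence of absence.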

In addition to drawing broad conclusions from narrow observations, we also have a tendency to select evidence on the basis of preconceived frameworks, biases, or hypotheses. For example, a scientist conducting an experiment may, often unconsciously, discount evidence that disconfirms her hypothesis in favor of the evidence that confirms it. Taleb calls this habit “naive empiricism,” but it’s more commonly known as “confirmation bias.” Empirical skepticism is the antidote to this flaw.

Taleb’s solution to naive empiricism/confirmation bias is negative empiricism—the rigorous search for disconfirming, rather than corroborating, evidence. This technique was pioneered by a philosopher of science named Karl Popper, who called it “falsification.” The reason negative empiricism/falsification is so effective is that we can be far more sure of wrong answers than right ones.
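(Shortform example: a toy sketch of that asymmetry, with invented swan data. Confirming observations can pile up forever without proving a universal claim, while a single counterexample settles the matter.)

```python
observed_swans = ["white"] * 999 + ["black"]  # invented data

# 999 confirming observations can never prove "all swans are white"...
confirmations = observed_swans.count("white")

# ...but one disconfirming observation refutes the claim conclusively.
falsified = any(color != "white" for color in observed_swans)

print(f"{confirmations} confirmations, yet falsified: {falsified}")
```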

Flaw #2: The Narrative Fallacy

Because humans are naturally inclined to stories, with distinct causes and effects, we are perennially in danger of committing the “narrative fallacy”—the ascription of meaning or cause to random events.

Our tendency to narrativize is part and parcel of our compulsion to interpret. Humans are evolutionarily conditioned—by the development of the left hemisphere of our brains—to reduce the complexity of the world’s information (we’ll discuss why in a moment); and the most efficient way of simplifying that complexity is through interpretation.

For example, read the following poem:

All work and no

no play makes Jack

a dull boy

Notice anything strange? (There’s an extra “no” at the start of the second line.) When experimental subjects reading a similar poem had the left hemispheres of their brains impaired, they were better able to notice the extra “no.”

Neurotransmitters in the brain, too, encourage interpretation. When patients are administered dopamine supplements, they become more likely to see patterns where there are none.

Why are humans predisposed to interpretation? For a very practical reason: It makes information easier for our brains to store. Whereas retaining 100 randomly ordered numbers would be near impossible, retaining 100 numbers that were ordered according to a specific rule would be much easier. When we interpret—or narrativize—we’re attempting to impose our own organizing rule on the random facts of the world.
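(Shortform example: one way to make this concrete is compression. A rule-governed sequence can be stored as its rule, while patternless numbers resist squeezing. This sketch, which uses Python’s standard zlib compressor, is our illustration rather than Taleb’s.)

```python
import random
import zlib

# 100 numbers generated by a simple rule: the multiples of 7
ruled = str([7 * i for i in range(100)]).encode("ascii")

# 100 numbers with no organizing rule
random.seed(42)
unruled = str([random.randint(0, 700) for _ in range(100)]).encode("ascii")

print("rule-based:", len(ruled), "->", len(zlib.compress(ruled)), "bytes")
print("random:    ", len(unruled), "->", len(zlib.compress(unruled)), "bytes")
# The ruled sequence compresses better, and could be stored as the
# one-line rule itself. Interpretation is a kind of compression.
```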

So if we’re biologically and psychologically conditioned to narrativize—and, thus, to remain blind to the totally random (i.e., a Black Swan)—what choice do we have? Empirical skepticism philosophy. Taleb offers two suggestions:

Using Empirical Skepticism Philosophy

Solution #1: Favor System 2 over System 1

Empirical psychologists like Nobel Laureate Daniel Kahneman and Amos Tversky have distinguished two types of thinking in human beings: “System 1” thinking, which is experiential, kneejerk, intuitive, and effortless; and “System 2” thinking, which is cogitative, slow, and effortful. 

System 1 thinking is what kicks in when we immediately situate an event in a particular framework or pattern. For example, when we hear “no evidence of cancer” and think “evidence of no cancer,” we’re receiving the information using our System 1 resources.

When faced with new information, we have to make a concerted effort to engage System 2 thinking—to pause and reason about the real significance of the information.

Solution #2: Favor Experiments Over Stories

Rather than trust in “expert” analyses or attempt to extrapolate cause-and-effect narratives from newspapers—materials that are typically based on inductive reasoning—we should seek out empirical experimental evidence (for example, studies conducted by empirical psychologists like Kahneman and Tversky).

Flaw #3: The Disapproval of Society

We invite the disdain of our peers when we structure our lives around the pursuit of “good” Black Swans (for example, writing a novel in the hope it’ll be a bestseller) or the avoidance of bad ones (for example, adhering to a stock-trading strategy that shields us from a catastrophic event). 

This is because humans are naturally inclined toward regular, smaller rewards rather than large windfalls. From the outside, the inventor who toils in his garage in pursuit of a world-changing innovation, or the stock trader who endures persistent losses to inoculate himself against a massive crash (which is what Taleb does), seems foolish. (In fact, it is foolish, at least in terms of expected financial rewards: Research shows that independent inventors earn less on average than venture capitalists do.)

Nevertheless, inventors, artists, and iconoclastic stock traders continue to exist. They are able to endure the condescension of their peers and the perception of failure. They persist on hope.

These individuals, the hopeful, are what Taleb calls “reverse turkeys”—those who are prepared for Black Swan events, good or bad.

In his life as a trader, Taleb was a reverse turkey. He used a strategy called “bleed,” taking positions that produced small losses on a daily basis but would pay handsomely in a catastrophe. He was, essentially, betting on lightning to strike. In 1987, it did. Empirical skepticism is what let him hold a position everyone else considered foolish.
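(Shortform example: the payoff profile of such a strategy is easy to simulate. The numbers below are invented for illustration; this is a stylized sketch, not Taleb’s actual positions.)

```python
import random

random.seed(1987)
daily_bleed = -1.0      # small premium lost on an ordinary day (made-up units)
crash_payoff = 2500.0   # payout if the rare event hits (made-up)
p_crash = 1 / 2500      # roughly one crash per decade of trading days

pnl = 0.0
days = 10 * 250         # ten years of trading days
for _ in range(days):
    pnl += daily_bleed
    if random.random() < p_crash:
        pnl += crash_payoff

print(f"cumulative P&L after {days} days: {pnl:+.0f}")
# Almost every day loses money; the strategy's whole return hinges on
# whether lightning strikes inside the window -- a reverse-turkey bet.
```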

Flaw #4: The Distortion of Silent Evidence

History—which Taleb defines as “any succession of events seen with the effect of posterity”—is inevitably, necessarily distorted. That is, no matter how “factual” or “objective,” the historical record is always a product of our tendency to narrate and thus always biased.

What the narratives are biased against is randomness—the inexplicability and importance of Black Swans.

Take most CEOs’ and entrepreneurs’ (auto)biographies. These books attempt to draw a causal link between the CEO’s or entrepreneur’s (a) character traits, education, and business acumen and (b) later success. The “silent evidence” (which Taleb also calls “the cemetery”) in these narratives is that many more people with the very same attributes as the triumphant CEOs and entrepreneurs failed. The fact is, in business, as in so many other fields, the deciding factor is nothing other than luck (i.e., randomness).
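(Shortform example: a quick simulation shows how powerful silent evidence is. In this toy model, with made-up parameters, success is pure luck, yet a handful of “visionaries” still emerges.)

```python
import random

random.seed(0)
founders = 10_000
years = 10

# Success is a pure coin flip each year; survive all ten years or fail.
survivors = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(founders)
)

print(f"{survivors} of {founders} founders survive {years} straight 50/50 years")
# Roughly 10 survive on luck alone (0.5**10 is about 1 in 1,000). Their
# memoirs will credit vision and grit; the ~9,990 in the cemetery, who
# had the very same traits, write no memoirs.
```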

Once we become attuned to the existence of “silent evidence” (which we can think of as the flipside of, or counterpoint to, any story we’re told), we can see it everywhere. Learning to look for it is part of practicing empirical skepticism.

For example, we base our knowledge of criminals and criminality on those who get caught; the criminals who succeed, by definition, never enter the record. In other words, we have no real sense of how easy or hard it is to get away with a crime, because our notions about crime are formed solely by reports of failed criminals.

In the case of criminality, our ignorance of silent evidence actually serves a socially positive function because it makes getting away with a crime look hard and thus dissuades would-be criminals. But silent evidence can also encourage socially detrimental behaviors, such as risk-taking. This is because the successful risk-takers are the ones whose story is told; the failed risk-takers end up in the cemetery (in Taleb’s sense, but often literally as well).

Take, for example, Giacomo Casanova, the legendary ladies’ man and adventurer. Casanova seemed blessed with unfailing luck: somehow he always managed to escape whatever predicament he found himself in. He believed he had a “lucky star” that watched out for him.

But for every Casanova, there are dozens of wannabe adventurers and seducers whose risks don’t pay off. We get a skewed sense of the benefits of risk-taking because the successful risk-takers are the ones with the means to tell their tales.

(Shortform note: Taleb doesn’t acknowledge that stories of risks that fail spectacularly—from Bernie Madoff to Theranos to Lehman Brothers—do get told quite often.)

A more far-reaching variation on the concept of silent evidence is the “anthropic cosmological argument.” This argument, touted by physicists and philosophers alike, states that human existence cannot be a random occurrence because of the specificity and sheer number of factors that provide for that existence. In other words, the odds are so stacked against human existence that the only possibility is that the world was created precisely to allow for it.

Again, what the anthropic cosmological argument suppresses is the evidence of all the other species that didn’t thrive. We assume our beating the odds is the result of destiny, but it’s really a matter of numbers: If you consider the incredible number of species competing for survival, it stands to reason that one (humans, in this case) was going to win the jackpot.
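(Shortform example: this “matter of numbers” is basic probability. Even if each lineage’s individual odds are tiny, the chance that some lineage wins grows rapidly with the number of competitors. The figures below are made up for illustration.)

```python
# Made-up figures: each of N competing lineages has a tiny chance p of
# hitting the evolutionary jackpot (say, human-level intelligence).
p = 1e-8
N = 1_000_000_000

p_at_least_one_winner = 1 - (1 - p) ** N
print(f"P(some lineage wins) = {p_at_least_one_winner:.5f}")  # ~0.99995
```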

Flaw #5: Our Tendency to “Tunnel”

A repercussion of the distortion of silent evidence, “tunneling” describes the natural human tendency to favor knowns and known unknowns over unknown unknowns. In other words, our understanding of uncertainty is based almost exclusively on what has happened in the past rather than on what could have happened.

The primary practitioners of tunneling are those Taleb calls “nerds”—academics, mathematicians, engineers, statisticians, and the like. Nerds are those who think entirely “inside the box”; they Platonify the world and can’t perceive possibilities that lie outside their scientific models and academic training.

Nerds suffer from the “ludic fallacy.” (“Ludic” comes from the Latin word ludus, which means “game.”) That is, they treat uncertainty in real life like uncertainty in games of chance, such as roulette or blackjack. The problem with this approach is that, unlike games of chance, real life has no rules.

Nerds aren’t the only ones guilty of the ludic fallacy, however; average people indulge it as well. For example, most people think casino games represent the height of risk and uncertainty. In truth, casino games hail from Mediocristan—there are clear and definite rules that govern play, and the odds of winning or losing are calculable. Unlike real life, the amount of uncertainty in a casino game is highly constrained.
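(Shortform example: the house edge in American roulette follows directly from the game’s fixed rules; the arithmetic below is standard, not specific to Taleb.)

```python
# American roulette: 38 pockets (1-36, 0, 00); a straight-up bet pays 35 to 1.
p_win = 1 / 38
payout = 35    # profit per unit staked on a win
stake = 1      # amount lost on a miss

expected_value = p_win * payout - (1 - p_win) * stake
print(f"EV per $1 straight-up bet: {expected_value:.4f}")  # -0.0526
# A fixed, calculable 5.26% house edge: pure Mediocristan.
```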

The incredible unpredictability of real life can be illustrated by a look at casinos themselves, rather than their games. Casinos spend huge sums of money on sophisticated security systems—cameras, dealer training, alarms—in an effort to prevent cheating. These systems are designed to foil cheats who use methods that are either known or expected. But the casino Taleb studied suffered its worst losses not from cheating but from completely unexpected real-life events:

  • An entertainer was attacked by the tiger with which he performed, resulting in a $100 million loss due to cancelled shows.
  • An employee, for years and for no good reason, neglected to file an essential tax form with the IRS, resulting in a huge fine.
  • The casino owner’s daughter was kidnapped, and to pay the ransom, the owner withdrew cash from the casino’s fund.

Taleb’s study of this particular casino yielded the insight that unexpected real-life events caused losses almost 1,000 times greater than cheating did. Suffice it to say, to prepare ourselves for Black Swans, we must resist the ludic fallacy and think outside the rules of the games to which we’re accustomed. That habit of mind is empirical skepticism in practice.


———End of Preview———

Like what you just read? Read the rest of the world's best summary of "The Black Swan" at Shortform. Learn the book's critical concepts in 20 minutes or less.

Here's what you'll find in our full Black Swan summary :

  • Why world-changing events are unpredictable, and how to deal with them
  • Why you can't trust experts, especially the confident ones
  • The best investment strategy to take advantage of Black Swans

Amanda Penn

Amanda Penn is a writer and reading specialist. She’s published dozens of articles and book reviews spanning a wide range of topics, including health, relationships, psychology, science, and much more. Amanda was a Fulbright Scholar and has taught in schools in the US and South Africa. Amanda received her Master's Degree in Education from the University of Pennsylvania.
