Epistemic Arrogance: If You’re an “Expert,” You’re Guilty

This article is an excerpt from the Shortform summary of "The Black Swan" by Nassim Taleb. Shortform has the world's best summaries of books you should be reading.

Like this article? Sign up for a free trial here.

What is epistemic arrogance? Who’s guilty of epistemic arrogance, and how does thinking we know more than we actually do get us into trouble?

Epistemic arrogance is overconfidence in our own knowledge, which leads us to overestimate our ability to predict. We’re all guilty of epistemic arrogance, but it hits “experts” the hardest.

We’ll cover how epistemic arrogance leads to bad predictions and why more information isn’t always better.

The Scandal of Prediction

Before we dive into epistemic arrogance, let’s look at why predictions are really lies.

With the rapid advance of technology—computer chips, cellular networks, the Internet—it stands to reason that our predictive capabilities too are advancing. But consider how few of these groundbreaking advances in technology were themselves predicted. For example, no one predicted the Internet, and it was more or less ignored when it was created.

(Shortform note: It’s unclear how Taleb defines “predicted.” Plenty of science-fiction writers and cultural commentators anticipated recent technologies like the Internet and augmented and virtual reality.)

It is an inconvenient truth that humans’ predictive capabilities are extremely limited; we are continuously faced with catastrophic or revolutionary events that arrive completely unexpectedly and for which we have no plan. Nevertheless, we maintain that the future is knowable and that we can adequately prepare for it. Taleb calls this tendency the scandal of prediction. Our obliviousness to this scandal is what leads to epistemic arrogance.

Epistemic Arrogance

The reason we overestimate our ability to predict is that we’re overconfident in our knowledge. This is epistemic arrogance.

A classic illustration of epistemic arrogance comes from a study conducted by a pair of Harvard researchers. In the study, the researchers asked subjects to answer specific questions with numerical ranges. (A sample question might be, “How many redwoods are there in Redwood Park in California?” The subject would respond with something like, “I’m 98% sure there are between x and y redwoods.”) The researchers found that the subjects, though they were 98% sure of their answers, ended up being wrong 45% of the time! (Fun fact: The subjects of the study were Harvard MBAs.) In other words, the subjects picked overly narrow ranges because they overestimated their own ability to estimate. If they had picked wider ranges, thereby acknowledging their own lack of knowledge, they would have scored much better.

Taleb calls our overconfidence in our knowledge “epistemic arrogance.” On the one hand, we overestimate what we know; on the other, we underestimate what we don’t—uncertainty.

It’s important to recognize that Taleb isn’t talking about how much or how little we actually know, but rather the disparity between what we know and what we think we know. We’re arrogant because we think we know more than we actually do.

This epistemic arrogance leads us to draw a distinction between “guessing” and “predicting.” Guessing is when we attempt to fill in a nonrandom variable based on incomplete information, whereas predicting is attempting to fill in a random variable based on incomplete information.

Say, for example, someone asks you to estimate how many natural lakes there are in Georgia. There’s a right answer to the question—it’s 0—but you don’t know it, so your answer is a “guess.”

But say that same someone asks you what the U.S. unemployment rate will be in a year. You might look at past figures, GDP growth, and other metrics to try to make a “prediction.” But the fact is, your answer will still be a “guess”: there are just too many factors (unknown unknowns) to venture anything better. If you think the figures give you answers, you’re demonstrating epistemic arrogance.

The Curse of Information

It stands to reason that the more information we have about a particular problem, the more likely we are to find a solution. And the same would seem to go for predictions: the more information we have to make a prediction, the more accurate our prediction will be.

But an array of studies shows that an increase in information actually has negligible—and even negative—effects on our predictions.

For example, the psychologist Paul Slovic conducted a study on oddsmakers at horse tracks. He had the oddsmakers pick the ten most important variables for making odds, then asked the oddsmakers to create odds for a series of races using only those variables. 

In the second part of the experiment, Slovic gave the oddsmakers ten more variables and asked them to predict again. The accuracy of their predictions was the same (though their confidence in their predictions increased significantly). The more information you have, the higher your epistemic arrogance.

The negative outcome of an increase in information is that we become increasingly sure of our predictions even as their accuracy remains constant.

Experts—The Worst Offenders of Epistemic Arrogance

The group that is most epistemically arrogant about its predictions, and least aware of its own ignorance, is so-called “experts.” These are the credentialed and/or laureled people whose opinions are granted weight by society.

Taleb divides this group in two. There are those who are arrogant but also display some degree of competence, and then there are those who are arrogant and incompetent.

1) Competent Arrogants

“Competent Arrogants” are those experts with actual predictive abilities and discernible skills. This group includes astronomers, physicists, surgeons, and mathematicians (when dealing exclusively with pure, rather than applied, mathematics).

2) Incompetent Arrogants

“Incompetent Arrogants” are “experts” whose predictive abilities and skills aren’t significantly greater than the average person’s. This group includes stockbrokers, intelligence analysts, clinical psychologists, psychiatrists, economists, finance professors, and personal financial advisors. These people are sufferers of epistemic arrogance.

In numerous empirical studies, the forecasting ability of the “incompetent arrogants” has been shown to be almost nonexistent.

For example, over the course of several years, psychologist Philip Tetlock asked almost 300 specialists—political scientists, economists, journalists, and politicians—to offer predictions of world events (the timeframe was usually “within the next five years”). He discovered that the experts’ predictions were barely more accurate than random selection and often worse than simple computer simulations.

He also found that the more prominent a person was in his or her field, the worse were his or her predictions. The reason for this finding was that prominent people tend to become prominent based on their having one big idea. These experts marry themselves to their singular idea and neglect other possibilities—and thus, when randomness rears its head, they’re shown to have been woefully misguided. Still, they maintain their epistemic arrogance.

Tetlock’s main interest, however, wasn’t the fact of experts’ poor forecasting abilities but rather those experts’ lack of accountability for being wrong. He found that experts (unconsciously) employ a number of excuses to explain away their errors. 

Defenses of the Epistemically Arrogant

The “Different Game” Defense

Experts will claim that the event that disproved their prediction could have been predicted if the right data were available. For example, a specialist in the Soviet Union who failed to predict its collapse might say that, because the Soviet Union was so adept at hiding its economic data, he wasn’t able to make an accurate prediction.

The “Almost Right” Defense

Experts will claim that, if a minor variable or two had been different, they would have been proven correct. For example, a specialist in the Soviet Union who predicted that it would collapse twenty years after it actually did might say something like, “Well, I knew it would collapse, I just didn’t know exactly when.”

The “Outlier” Defense

Experts will claim that the event they failed to predict was a complete anomaly, a thousand-year flood—a Black Swan. For example, a specialist in the Soviet Union will claim its collapse couldn’t have been predicted, and so her inability to predict it doesn’t mean her predictive ability is suspect.

———End of Preview———

Like what you just read? Read the rest of the world's best summary of "Black Swan" at Shortform. Learn the book's critical concepts in 20 minutes or less.

Here's what you'll find in our full Black Swan summary :

  • Why world-changing events are unpredictable, and how to deal with them
  • Why you can't trust experts, especially the confident ones
  • The best investment strategy to take advantage of black swans

Amanda Penn

Amanda Penn is a writer and reading specialist. She’s published dozens of articles and book reviews spanning a wide range of topics, including health, relationships, psychology, science, and much more. Amanda was a Fulbright Scholar and has taught in schools in the US and South Africa. Amanda received her Master's Degree in Education from the University of Pennsylvania.
