Why Modern Academia Fails to Practice Good Science

This article is an excerpt from the Shortform book guide to "Skin in the Game" by Nassim Nicholas Taleb.

What does good science look like? What are the implications of scientific research being done in institutions that aren’t even involved in its implementation?

In his book Skin in the Game, Nassim Taleb argues that modern academics fail to practice good science because they lack what he calls “skin in the game”: they are isolated from the real world in which their ideas are implemented. Areas of science and academia without skin in the game, he contends, yield faulty, harmful ideas and theories that would cause major damage if implemented at a large scale.

In this article, we’ll examine Taleb’s criticism of science and academia, look at his proposed solution to these problems, and explain how he would reshape modern research.

What Good Science Looks Like

The Lindy effect (Taleb’s principle that the longer an idea has survived, the longer it can be expected to keep surviving) is at the core of good science. This is because science is about disproving ideas until those that can’t be disproved are all that remain. For example, Ptolemy’s model displaced earlier Greek astronomy, and Copernicus in turn disproved Ptolemy’s geocentric model by showing that the earth moves around the sun. Progress is made in the areas in which theories fail.

The point of scientific experimentation is to speed up time’s elimination process. Scientists intentionally create observable stakes that determine whether or not a hypothesis actually produces the specific results it predicts. If it doesn’t, it’s discarded.

(Shortform note: This is an example of another one of the major ideas that appears throughout Taleb’s Incerto: “via negativa,” the principle stating that it’s easier to prove that something doesn’t work than that it does, and consequently, taking things away is a more reliable course of action than adding something. Taleb argues that via negativa is a reliable guiding principle in a wide range of situations: politics, markets, medicine. He devotes several chapters in Antifragile to this topic.)

Ideally, scientific fields would be dominated by skeptical experimenters who are rewarded for disproving theories and devising a more accurate understanding of how the world works. Unfortunately, modern science has strayed from this ideal because researchers lack skin in the game—they are too removed from the contexts in which their theories are implemented.

Because academics lack skin in the game, systemic flaws grow unchecked: researchers are not punished for inefficiencies or mistakes.

Taleb argues that these flaws yield inaccurate conclusions and misguided theories that can cause major harm if applied at a large scale in the real world. For example, he particularly scorns economist Richard Thaler’s argument that policymakers and private companies should actively “nudge” people away from making “irrational” choices; in Taleb’s view, “irrational” choices are sometimes safer, for reasons distant third parties haven’t considered. Many of these errors, he adds, are then taught in universities, cheapening the value of higher education.

Taleb notes that flaws in science are especially severe in social sciences such as economics, psychology, and history (as opposed to hard sciences like chemistry and physics), where definitive evidence is relatively difficult to gather and theories are difficult to disprove.

Why Do Researchers Lack Skin in the Game?

Taleb argues from the assumption that academia lacks skin in the game, but he doesn’t explain how or why this came to be. One explanation: only a small fraction of scientific research ends up providing practical benefits to society, and it’s difficult to predict how valuable any given line of research will be until after its discoveries are made. If valuable discoveries were predictable, they would have already been discovered.

For example, Alexander Fleming, the scientist who discovered penicillin, initially failed to recognize the practical value of his discovery. It was more than ten years before it was used as a revolutionary antibiotic. Since it’s difficult to predict the value of scientific exploration, the majority of researchers get paid salaries for research that in the end generates little value. Researchers don’t bear the financial risks of their research, and this imbalance between risk and reward is a lack of skin in the game. As a result, research ends up being evaluated by fallible human judges rather than by the Lindy effect, which, as we’ve discussed, causes problems.

The Flaws of Peer Review

Research is no longer judged by how well it stands up to skeptical experimentation; instead, peer review has become the ultimate judge of quality science.

According to Taleb, in peer-review-driven academia, reputation is everything. If your peers don’t see value in your theories, it doesn’t matter whether they’re correct: your research won’t be accepted. Since peer approval is a reciprocal process, as long as a group of academics reaches consensus, the quality of their ideas doesn’t matter. They can create a feedback loop of approving each other’s research, earning themselves funding and tenure with no penalty for being wrong.

In short, Taleb is arguing that peer review rewards science that supports the research of others instead of science that tries to disprove it. But the intent to disprove existing theories is what makes science effective.

Reforming Peer Review

Taleb isn’t alone here: peer review has long been a common target of criticism within academia as well. Institutionalized science existed long before peer review, which only became the industry standard in the years following World War II. Critics argue that peer review is biased against the research that could prove most valuable: work that challenges existing knowledge or comes from less prestigious institutions. A significant number of heavily cited papers were initially rejected by peer review before later earning recognition, including several whose authors eventually won the Nobel Prize for that work.

Some academics are experimenting with new forms of peer review. One model opens papers up to online public discussion prior to publication, where experts and amateurs alike can critique the work and engage in conversation with the authors; editors then take that discussion into account before deciding whether to publish. The online journal PLOS ONE is still peer-reviewed, but it makes a point of accepting a wide range of research (even research that lacks obvious significance to a given field) and allows the paper’s audience to play a part in determining which research has value.

Insufficient Real-World Experimentation

In Taleb’s eyes, academia tends to overlook the role skin in the game plays in the discovery of knowledge. One clear illustration is the way academics divide research into two categories: theoretical and empirical. Theoretical research is abstract, a series of assumptions based on logical reasoning. Empirical research is concrete, based on data gathered from experiments, surveys, or other means.

Empirical research is typically seen as reliable, but Taleb makes the argument that it’s still barely more than theoretical. Just because inferences are based on objective data does not mean those inferences are objectively true. If good data is misinterpreted under false assumptions, the data is useless. Humans are poor judges of data. All it takes is one unnoticed variable to render a conclusion invalid. 
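(Shortform note: To make the “one unnoticed variable” problem concrete, here is a minimal Python simulation of our own; the scenario and numbers are invented for illustration and don’t come from Taleb. A hidden factor drives both the variable a researcher records and the outcome the researcher is trying to explain, so the two look strongly related even though neither causes the other.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000  # hypothetical number of observations

# A hidden factor the researcher never measures...
hidden = rng.normal(size=n)

# ...quietly drives both the recorded "exposure" and the outcome.
exposure = hidden + rng.normal(size=n)
outcome = hidden + rng.normal(size=n)

# Naive analysis: exposure and outcome appear meaningfully related.
print("naive correlation:", round(float(np.corrcoef(exposure, outcome)[0, 1]), 2))

# Holding the hidden factor fixed, the apparent relationship disappears.
print("correlation with the hidden factor removed:",
      round(float(np.corrcoef(exposure - hidden, outcome - hidden)[0, 1]), 2))
```

The naive figure comes out around 0.5; once the hidden factor is subtracted out, it drops to roughly zero. The data here is perfectly “good,” yet a single unmeasured variable is enough to turn it into a false conclusion.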

Additionally, experiments conducted in isolation are a far cry from the real world and could yield false conclusions. Borrowing from the field of medicine, Taleb asserts that clinical research is also necessary—implementing and studying theories in the real world. However, the idea of clinical research is unheard of in most scientific disciplines.

More Data to Misinterpret

In an article for WIRED, Taleb cautions against the dangers of “Big Data.” He argues that while technology lets us measure greater quantities of data from more sources than ever before, it also leads us to draw more false conclusions than ever before. As the number of measured variables grows, so does the number of correlations among them, and the more correlations there are to test, the more researchers end up purporting to show evidence where there is none. In this way, counterintuitively, more data can mean less accurate knowledge. Taleb offers an example in the WIRED article: a geneticist’s observational study claimed to have discovered ten genes that successfully predict the incidence of ALS. When he studied the data, however, Taleb found that the “statistically significant” conclusions were the result of random chance: the geneticist was studying so many hundreds of thousands of genes that some correlation with ALS incidence was inevitable.
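(Shortform note: Taleb’s multiple-comparisons point is easy to demonstrate. The short Python sketch below is our own illustration, not an analysis from Taleb’s article or the book, and the sample size, gene count, and correlation cutoff are made-up numbers. It generates purely random “gene” data and a purely random outcome, then checks how strongly each gene appears to correlate with the outcome.)

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 200     # hypothetical sample size
n_genes = 100_000    # hypothetical number of measured genes

# Purely random "gene" measurements and a purely random outcome:
# by construction, no gene has any real relationship to the outcome.
genes = rng.normal(size=(n_subjects, n_genes))
outcome = rng.normal(size=n_subjects)

# Correlation of every gene with the outcome, computed in one pass.
g = genes - genes.mean(axis=0)
o = outcome - outcome.mean()
corr = (g.T @ o) / (np.sqrt((g ** 2).sum(axis=0)) * np.sqrt((o ** 2).sum()))

# With this many variables, some correlations look "significant" by chance alone.
print("strongest correlation found:", round(float(np.abs(corr).max()), 3))
print("genes with |correlation| > 0.2:", int((np.abs(corr) > 0.2).sum()))
```

Even though every number here is noise, the run reports a “strongest” correlation on the order of 0.3 and hundreds of genes clearing the naive cutoff, which is exactly the trap Taleb describes: with enough variables, some of them will always appear predictive.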

Overcomplicating for Profit

The last problem Taleb has with academia that we’ll discuss is that academics are rewarded for overcomplicating their own work, to the point of diminishing its value.

Here’s why: Academics are praised and rewarded for complex, uncommon insights that have not yet been covered by other members of the field. But without skin in the game to verify which insights are true or useful, complexity becomes the primary way to measure success. This is especially true in fields where it’s difficult or impossible to disprove theories, such as economics or psychology.

Even in the more problem-solving fields of applied science, complexity is often prioritized over effectiveness. When scientists are getting paid regardless of how well they solve the problem—when they lack skin in the game—they are incentivized to appear more valuable by devising the most complex solution possible. Complex solutions increase dependence on those who invented them and often entail financial rewards—for instance, if a scientist is the patent owner of an unnecessarily complex piece of technology.

Taleb is adamant that science that aims to needlessly complicate rather than solve problems isn’t science at all; it’s “scientism.”

Taleb illustrates what dangerous scientism looks like with an anecdote about genetically engineered rice. Faced with the problem of malnutrition in third-world countries, where many people are dangerously deficient in Vitamin A, scientists were quick to develop a solution: genetically engineered, vitamin-enriched rice.

Taleb and his colleagues composed a detailed counterargument based on the idea that genetic engineering is a relatively new science and could have disastrous, unforeseen side effects if implemented on a large scale. Additionally, they saw no reason not to simply give malnourished people rice and Vitamin A separately.

However, when Taleb and his colleagues argued against the overly complicated solution, they were branded as “anti-science” by those who stood to profit from the creation of genetically modified rice.

Obviously, not all academics and intellectuals intentionally seek to profit by deceiving the public, but a system without skin in the game incentivizes them to appear valuable instead of actually being valuable.

How to Fix Academia

Taleb asserts that the only way to ensure effective scholarship is to stop paying scientists to conduct research. Instead, we should require working professionals to conduct research on their own time.

This puts researchers’ skin back in the game, as they would have to sacrifice time, money, and effort in the name of science if they want to discover something valuable. Their rewards become contingent on their success; the only way they can make money off a discovery is if it’s valuable enough that someone is willing to pay for it.

(Shortform note: Taleb’s call to abolish research institutions entirely seems far too extreme to be realistic. It also seems to contradict his opinion that incremental change is the best way forward. That said, in the book, Taleb frames this shift away from salaried research as an inevitability rather than his own advice: “The deprostitutionalization of research will eventually be done as follows…” He likely believes that the current academic system will collapse without the need for critics to intervene.)

In the meantime, Taleb asserts that a good way to distinguish good research from bad is to give more credence to academics with more skin in the game. Specifically, you should listen more closely to those who have more to lose if their arguments are faulty: people who contradict the opinions of their peers and risk being ostracized by their community.

(Shortform note: Conversely, researchers who confirm what we assume to be true are less likely to be held accountable for mistakes. Diederik Stapel, a social psychologist, was caught in 2011 having made up experimental data for at least 58 published studies over the course of several years. He remained undetected for so long because he tweaked his data to fit reasonable hypotheses that the scientific community would support, such as the findings that children primed with the idea of someone crying are more likely to share their candy, and that subjects primed with capitalist ideas will consume more M&Ms.)
