Cherry-Picking Data in Healthcare: The Unethical Practice

This article is an excerpt from the Shortform book guide to "Bad Science" by Ben Goldacre.


What does cherry-picking data mean? Why do many companies in healthcare do this?

Some drug and healthcare companies cherry-pick results, citing the positive findings and ignoring the negative ones. In Bad Science, Ben Goldacre explains why this is an unethical practice.

Let’s look at how cherry-picking data works and why it’s so harmful.

Cherry-Picking Positive Results

Goldacre notes that cherry-picking data is especially common in alternative medicine. For example, while the vast majority of medical literature states that homeopathic treatments are no more effective than placebos, homeopaths tend to cherry-pick from the few studies that support their practices. When confronted with studies that contradict their claims, cherry-pickers will dismiss them, claiming that those studies used flawed methods. By contrast, they’ll often choose to ignore the methodological flaws in the studies that support their treatments.

(Shortform note: Responding to these criticisms, some homeopaths point out that conventional medicine cherry-picks, too. While that’s true, it doesn’t make alternative medicine any more credible; it simply means that both the conventional and alternative medical industries are plagued by similar problems.)

As an example, Goldacre describes how companies and government officials used cherry-picking to promote ineffective treatments during the height of the AIDS epidemic in South Africa. According to Goldacre, a vitamin company led by Matthias Rath claimed throughout South Africa that vitamins treated AIDS more effectively than antiretrovirals (the type of drug typically prescribed to treat AIDS). These claims were based on unscientific, unpublished trials conducted by Rath himself, and they contradicted the overwhelming scientific consensus that antiretroviral drugs are an effective treatment for AIDS patients.

(Shortform note: In addition to making claims based on his own, unpublished trials, Rath also referenced medical studies that didn’t actually exist. The few legitimate studies he referenced showed that vitamins could improve outcomes when taken alongside antiretrovirals, not that vitamins should be taken instead of antiretrovirals.)

Due to their own biases, South African government officials cherry-picked Rath’s results and used them to justify the decision not to roll out antiretrovirals across South Africa. Instead, Rath’s ineffective vitamin treatments were promoted nationwide. 

(Shortform note: The South African government’s choice to accept Rath’s findings is an example of confirmation bias: the cognitive bias that leads you to accept new information only when it reinforces your prior beliefs. Left unchecked, confirmation bias can lead directly to cherry-picking.)

What was profitable for Rath was tragic for the people of South Africa—Goldacre estimates that hundreds of thousands of people died due to lack of access to antiretrovirals. This tragedy illustrates why it is so important to be able to call out bad science. When we allow companies to use misleading techniques to promote their treatments, real people suffer for it.

(Shortform note: During the Covid-19 pandemic, medical treatments became a similarly divisive issue in the United States. Some politicians and doctors widely promoted ivermectin as a treatment for Covid, even though few studies supported it. While some believe ivermectin is an effective treatment, its unauthorized use has also led to several accidental poisonings.)

Accounting for Cherry-Picking

To account for cherry-picking, Goldacre says you should never take a single study’s results as the final word. Instead, look up multiple studies on the topic in question and evaluate each one’s methods. Once you’ve determined which studies you find most trustworthy, compare their results. While they’ll probably differ slightly from each other, they’ll give you a range of reasonable estimates. New information that falls well outside this range may be cherry-picked.
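
To make this advice concrete, here’s a minimal Python sketch of the idea, using entirely invented numbers (nothing here comes from Goldacre): gather the effect estimates from the studies you’ve judged trustworthy, establish the range they span, and treat a new claim that falls far outside that range with suspicion.

```python
# A minimal sketch of the "compare against the trusted range" idea,
# with made-up numbers purely for illustration.

# Hypothetical effect estimates (e.g., % symptom reduction) from several
# independent studies of the same treatment that you've already vetted.
trusted_estimates = [4.2, 3.8, 4.9, 5.1, 4.5]

def outside_reasonable_range(new_estimate, estimates, slack=0.5):
    """Flag a result that falls well outside the trusted studies' range.

    `slack` widens the range a bit, since even honest studies differ
    from one another; 0.5 (half the range's width) is an arbitrary
    choice for illustration, not a scientific threshold.
    """
    low, high = min(estimates), max(estimates)
    margin = slack * (high - low)
    return not (low - margin <= new_estimate <= high + margin)

print(outside_reasonable_range(4.7, trusted_estimates))   # False: plausible
print(outside_reasonable_range(12.0, trusted_estimates))  # True: suspicious
```

A flagged result isn’t automatically fraudulent, of course; it’s simply a prompt to scrutinize that study’s methods extra carefully before trusting it.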

(Shortform note: To protect yourself from cherry-picking, experts recommend that you remain skeptical about new evidence. This doesn’t mean ignoring new findings, but it does mean you should thoroughly investigate them. In particular, you should be suspicious of studies that don’t use diverse sample groups—for example, if all participants are the same age—as they may not be giving you the complete picture.)

Meta-Analysis Counteracts Cherry-Picking

When evaluating treatments, scientists also compare results across studies using a tool called meta-analysis. A meta-analysis collects, compares, and summarizes all the published studies on a particular topic. By looking at all the data at once, a meta-analysis can often reveal patterns that aren’t apparent in any individual study.

For example, suppose that multiple small studies show that a new cold medicine offers mild benefits. Within each individual study, the benefit appears so slight that it doesn’t reach statistical significance. However, if you combined the data from all of these studies, you’d get a clearer, more reliable picture of the medication’s effects, based on a much larger body of evidence.
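
To see how pooling sharpens the picture, here’s a minimal Python sketch of a fixed-effect (inverse-variance) meta-analysis run on simulated trials. All the numbers are invented for illustration, and real meta-analyses involve far more care, but the sketch captures the core idea: each small trial yields a wide confidence interval, while the pooled estimate is much more precise.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)

# Simulate small trials of a hypothetical cold medicine. We assume its
# true benefit is a 0.8-day faster recovery, with day-to-day variation
# of about 2 days in both arms. All numbers are invented.
def simulate_trial(n=20, benefit=0.8, sd=2.0):
    treated = [random.gauss(7.0 - benefit, sd) for _ in range(n)]
    control = [random.gauss(7.0, sd) for _ in range(n)]
    effect = mean(control) - mean(treated)  # days of recovery saved
    se = sqrt(stdev(treated) ** 2 / n + stdev(control) ** 2 / n)
    return effect, se

trials = [simulate_trial() for _ in range(10)]

# Each small trial comes with a wide 95% confidence interval, so the
# mild benefit is easy to dismiss in any single study.
for i, (effect, se) in enumerate(trials, 1):
    print(f"trial {i:2d}: effect = {effect:+.2f} days, 95% CI ±{1.96 * se:.2f}")

# Fixed-effect meta-analysis: weight each trial by 1 / SE^2 (inverse
# variance), so more precise trials count more, then pool the estimates.
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * e for w, (e, _) in zip(weights, trials)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

# The pooled interval is roughly sqrt(10) times narrower than a single
# trial's, which is how meta-analysis reveals effects that individual
# studies miss.
print(f"pooled:   effect = {pooled:+.2f} days, 95% CI ±{1.96 * pooled_se:.2f}")
```

Inverse-variance weighting is the standard fixed-effect approach: the most precise studies carry the most weight, but every study contributes, so a handful of outliers can’t dominate the result.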

If you’re worried about cherry-picking, try to find a meta-analysis of the treatment you’re investigating. The many sources included in a meta-analysis ensure that a few strategically chosen studies don’t overpower the rest.

Problems With Meta-Analysis

While meta-analysis can be a powerful tool for counteracting cherry-picking, meta-analyses can also fall victim to many of the same problems as regular clinical trials.

One problem is that evaluating a meta-analysis’s methods takes a lot of labor. To produce a meta-analysis in the first place, researchers must seek out and painstakingly assess many studies to determine whether they’re solid enough to include. Journal editors deciding whether to publish the meta-analysis must then repeat the same time-consuming process. Because this evaluation is so arduous, editors may make mistakes or cut corners, which can in turn lead them to publish subpar meta-analyses.

