PDF Summary: Bad Science, by Ben Goldacre

Below is a preview of the Shortform book summary of Bad Science by Ben Goldacre.

1-Page PDF Summary of Bad Science

From vaccine panics to fad diets, dubious medical information seems to be everywhere you look. Bad Science, by doctor, professor, and science journalist Ben Goldacre, details the strategies researchers, corporations, and journalists use to mislead the public, all while lining their own pockets. Goldacre gives you the tools you need to identify and call out shady science when you see it.

In our guide, we’ll shine a light on manipulative practices at every level, from poorly designed individual studies, to bogus claims from drug companies, to mass misinformation from media outlets. We’ll also include commentary that offers additional strategies for understanding the scientific process and avoiding being misled, and that highlights even more misleading practices in medical science.

(continued)...

Marketing Drugs, Marketing Diseases

While Goldacre focuses on the techniques drug companies use to deceptively market their treatments, there’s another part of the process that isn’t addressed here. In addition to marketing drugs, pharmaceutical companies also produce and market diseases.

In order to open up new market niches, companies create new conditions by scaremongering about normal body functions. Generalized symptoms such as tiredness, back and body aches, and stress can be used to sell the public on the need for new supplements and other treatments.

Researchers have described recent disease awareness campaigns about low testosterone levels in older men as examples of this phenomenon. While it’s perfectly natural for testosterone levels to decrease with age, pharmaceutical companies were still able to increase their sales by drumming up a scare.

Publish the Good, Bury the Bad

Goldacre argues that in order to make their treatments seem more effective, companies only publish results when their treatments perform well, and they actively conceal negative results. This is commonly referred to as publication bias. For instance, if a company conducts three trials for a new drug, and the drug only performs well in one of them, that company could choose not to publish the other two studies. By hiding these crucial pieces of data, the company makes its treatment seem more effective than it really is.
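
To see how this plays out, here’s a minimal, hypothetical simulation of publication bias (our own illustration, not an example from the book). The drug, its effect size, and the publication cutoff are all invented; the point is simply that averaging only the flattering trials inflates the apparent benefit.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # the drug's real, modest benefit (arbitrary units)
NOISE = 1.0         # trial-to-trial variability
N_TRIALS = 300      # total trials the company runs

# Each trial produces a noisy estimate of the true effect.
trial_results = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_TRIALS)]

# Publication bias: only clearly positive trials make it into print.
published = [r for r in trial_results if r > 0.5]

print(f"True effect:              {TRUE_EFFECT:.2f}")
print(f"Mean of all trials:       {statistics.mean(trial_results):.2f}")
print(f"Mean of published trials: {statistics.mean(published):.2f}")
# The published average far exceeds the true effect, purely because the
# unflattering trials stayed in the file drawer.
```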

(Shortform note: As Goldacre describes, publication bias often occurs because drug companies want their drugs to sell. However, some researchers believe that journal editors may also be responsible for publication bias, even in credible, peer-reviewed journals. In order to sell subscriptions and stay relevant, editors may seek out studies in which new treatments were shown to have strong effects, as these kinds of advancements are seen as more exciting for readers.)

As an example, Goldacre notes that when selective serotonin reuptake inhibitors (SSRIs) were being researched as a treatment for depression, pharmaceutical companies buried multiple studies in which SSRIs were shown not to work better than placebos. By hiding these lukewarm studies, the pharmaceutical industry was able to make sure SSRIs made it to market as quickly as possible.

(Shortform note: While these early studies of SSRIs reported that they vastly outperformed placebos in treating depression, more recent studies have shown that SSRIs only outperform placebos by a small margin. One possible explanation for this change is that publication bias in earlier studies made SSRIs seem more effective than they really are. However, researchers have noted that depression is now more widely diagnosed than it used to be, and that this may have also impacted results. According to this argument, because milder cases of depression are now treated with SSRIs, the relatively small impact of treatment in those cases may help explain the small clinical impact of SSRIs in general.)

In addition to omitting mediocre results, drug companies sometimes hide results that show that their products cause harm. In particular, Goldacre references Vioxx, a painkiller that was marketed from 1999 to 2004. The company that manufactured Vioxx hid evidence that Vioxx caused an increased risk of heart attack and pushed the drug to market anyway. Goldacre estimates that Vioxx caused tens of thousands of heart attacks in its brief time on the market. It’s important to understand and identify bad science, because, as illustrated here, when drugs are pushed to market irresponsibly it can cause tremendous harm.

(Shortform note: In addition to drug companies, irresponsible regulators may also play a part in allowing dangerous drugs to reach consumers. For example, the chairman of the safety board tasked with evaluating Vioxx owned tens of thousands of dollars of the company’s stock. This clear conflict of interest created a financial incentive for irresponsible behavior.)

Spotting Publication Bias

To spot publication bias, Goldacre recommends analyzing as many studies as possible when evaluating any given treatment. Studies with larger sample sizes and better funding should generally agree with one another: their size and superior methods tend to produce consistent results. Smaller studies, by contrast, should naturally produce a wider range of results because of their smaller sample sizes. If the smaller studies all agree with each other, it may be a sign that some studies have been omitted.
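
One rough way to apply this advice is a funnel-plot-style check: compare each study’s result against its size and ask whether the small studies scatter as much as chance alone would predict. The sketch below uses made-up numbers purely to illustrate the logic, assuming the outcome has a standard deviation of about 1.

```python
import math
import statistics

# Hypothetical published studies: (sample size, reported effect size).
# In practice these numbers would come from your own literature search.
studies = [
    (2000, 0.11), (1500, 0.09), (1200, 0.10),        # large, well-funded trials
    (40, 0.31), (35, 0.29), (50, 0.30), (45, 0.32),  # small trials
]

ASSUMED_SD = 1.0  # assumed standard deviation of the outcome (illustration only)

small = [(n, eff) for n, eff in studies if n < 100]
observed_spread = statistics.stdev([eff for _, eff in small])
expected_spread = ASSUMED_SD / math.sqrt(40)  # rough sampling spread for ~40 patients

print(f"Observed spread among small studies: {observed_spread:.3f}")
print(f"Spread chance alone would produce:   {expected_spread:.3f}")
# Small studies that cluster far more tightly than chance allows, and all on
# the flattering side of the large trials, hint that negative small studies
# may be missing from the published record.
```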

(Shortform note: Some statisticians argue that comparing results may not be a useful tool for spotting publication bias. According to this argument, small studies might agree with each other for many reasons that are unrelated to publication bias. Furthermore, some researchers don’t feel we need to test for publication bias at all—they argue that because it’s such a widespread problem, you should always assume publication bias is present.)

Cherry-Picking Positive Results

In addition to trying to control which research gets published, companies cherry-pick results, referencing only positive results and ignoring or dismissing negative ones.

(Shortform note: In addition to cherry-picking which studies they reference, drug companies and researchers also cherry-pick within individual studies, reporting only the data that supports their treatments.)

Goldacre notes that cherry-picking is especially common in alternative medicine. For example, while the vast majority of medical literature states that homeopathic treatments are no more effective than placebos, homeopaths tend to cherry-pick from the few studies that support their practices. When confronted with studies that contradict their claims, cherry-pickers will dismiss them, claiming that those studies used flawed methods. By contrast, they’ll often choose to ignore the methodological flaws in the studies that support their treatments.

(Shortform note: Responding to these criticisms, some homeopaths point out that similar cherry-picking exists in conventional medicine. While cherry-picking is also a problem in conventional medicine, this doesn’t mean that alternative medicine is any better or worse. Instead, it implies that both the conventional and alternative medical industries are plagued by similar problems.)

As an example, Goldacre describes how companies and government officials used cherry-picking to promote ineffective treatments during the height of the AIDS epidemic in South Africa. According to Goldacre, a vitamin company led by Matthias Rath made widespread claims throughout South Africa that vitamins were more effective for the treatment of AIDS than antiretrovirals (the type of drug typically prescribed to treat AIDS). These claims were based on unscientific, unpublished trials conducted by Rath himself, and they contradicted the overwhelming scientific consensus that antiretroviral drugs are an effective treatment for AIDS patients.

(Shortform note: In addition to making claims based on his own, unpublished trials, Rath also referenced medical studies that didn’t actually exist. The few legitimate studies he referenced showed that vitamins could improve outcomes when taken alongside antiretrovirals, not that vitamins should be taken instead of antiretrovirals.)

Due to their own biases, South African government officials cherry-picked Rath’s results and used them to justify the decision not to roll out antiretrovirals across South Africa. Instead, Rath’s ineffective vitamin treatments were promoted nationwide.

(Shortform note: The South African government choosing to accept Rath’s findings is an example of confirmation bias, a cognitive bias that leads you to accept new information only when it reinforces your prior beliefs. Left unchecked, confirmation bias can lead directly to cherry-picking.)

What was profitable for Rath was tragic for the people of South Africa—Goldacre estimates that hundreds of thousands of people died due to lack of access to antiretrovirals. This tragedy illustrates why it is so important to be able to call out bad science. When we allow companies to use misleading techniques to promote their treatments, real people suffer for it.

(Shortform note: During the Covid-19 pandemic, medical treatments became a similarly divisive issue in the United States. Some politicians and doctors widely promoted ivermectin as a treatment for Covid, even though few studies supported it. While some believe ivermectin is an effective treatment, its unauthorized use has also led to several accidental poisonings.)

Accounting for Cherry-Picking

To account for cherry-picking, Goldacre says you should never take a single study’s results as the final word. Instead, look up multiple studies about the topic in question and evaluate each one’s methods. When you’ve determined which studies you think are most trustworthy, compare their results. They’ll probably differ from each other slightly, but they’ll give you a range of reasonable perspectives. New information that falls well outside this range may be cherry-picked.
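
As a rough illustration of that range check, here’s a short sketch with invented effect sizes (ours, not Goldacre’s): it takes the studies you’ve judged trustworthy, works out the range of their results, and flags a new claim that falls well outside it.

```python
# Effect sizes from studies you've vetted and judged trustworthy (made-up values).
trusted_results = [0.12, 0.08, 0.15, 0.10, 0.11]

# A headline-grabbing new claim about the same treatment.
new_claim = 0.60

low, high = min(trusted_results), max(trusted_results)
margin = high - low  # allow some slack beyond the observed range

if new_claim < low - margin or new_claim > high + margin:
    print(f"Claim {new_claim} falls well outside the trusted range [{low}, {high}]: "
          "it may rest on cherry-picked evidence, so check its sources.")
else:
    print(f"Claim {new_claim} is consistent with the trusted range [{low}, {high}].")
```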

(Shortform note: To protect yourself from cherry-picking, experts recommend that you remain skeptical about new evidence. This doesn’t mean ignoring new findings, but it does mean you should thoroughly investigate them. In particular, you should be suspicious of studies that don’t use diverse sample groups—for example, if all participants are the same age—as they may not be giving you the complete picture.)

Meta-Analysis Counteracts Cherry-Picking

When evaluating treatments, scientists also compare research results, using a tool called meta-analysis. A meta-analysis collects, compares, and summarizes all the published studies about a particular topic. By looking at all the data at once, meta-analysis can often reveal patterns that aren’t apparent in individual studies.

For example, suppose that multiple small studies show that a new cold medicine offers mild benefits. In individual studies, the benefit appears so slight that it’s not taken to be significant. However, if you combined the sample sizes of all these studies, you’d be able to get a clearer, more reliable picture of the medication’s effects, based on a larger body of evidence.
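
The arithmetic behind this pooling can be sketched briefly. Below is a minimal fixed-effect (inverse-variance) meta-analysis with invented trial numbers; it shows one common way studies are combined, not the method used in any particular published review.

```python
import math

# Hypothetical small trials of the cold medicine: (effect estimate, standard error).
# None of them is statistically significant on its own (each z is below 1.96).
trials = [
    (0.20, 0.15),
    (0.18, 0.14),
    (0.25, 0.16),
    (0.15, 0.13),
]

# Fixed-effect meta-analysis: weight each trial by the inverse of its variance.
weights = [1 / se ** 2 for _, se in trials]
pooled_effect = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled_effect / pooled_se

print(f"Pooled effect: {pooled_effect:.3f} +/- {pooled_se:.3f} (z = {z:.2f})")
# Individually, each trial's z is only about 1.2 to 1.6, but the pooled estimate
# draws on all of the data, so its uncertainty shrinks and z rises to about 2.6,
# which clears the conventional significance threshold.
```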

If you’re worried about cherry-picking, try to find a meta-analysis of the treatment you’re investigating. The many sources included in a meta-analysis ensure that a few strategically chosen studies don’t overpower the rest.

Problems With Meta-Analysis

While meta-analysis can be a powerful tool for counteracting cherry-picking, meta-analyses can also fall victim to many of the same problems as regular clinical trials.

One problem is that it takes a lot of labor to evaluate a meta-analysis’s methods. To produce a meta-analysis in the first place, researchers must seek out and painstakingly assess many studies to determine whether they’re solid enough to be included. When editors decide whether to publish a meta-analysis, they have to repeat the same time-consuming process. Because evaluating a meta-analysis’s methods is so arduous, editors may make mistakes or cut corners, which can in turn lead them to publish subpar meta-analyses.

Overstating Results Using Surrogate Outcomes

Lastly, drug companies often exaggerate the effectiveness of their treatments based on surrogate outcomes. A surrogate outcome is an experimental result that suggests a treatment may work but doesn’t conclusively show whether it actually does. It’s important to identify surrogate outcomes because they’re often used to push unproven and potentially dangerous drugs to market.

(Shortform note: In some cases, scientists do consider surrogate outcomes to be acceptable stand-ins for real-world outcomes. However, this is only true when there’s a strong, proven link between the surrogate outcome and patient outcomes. For instance, because lowering blood pressure has been shown to reduce the risk of stroke, drugs that lower blood pressure can be said to reduce stroke risk.)

For example, suppose you’re testing a new drug designed to treat infection. You find that in petri dishes, your drug successfully kills bacteria. Based on this finding, it’d be easy to jump to the conclusion that your drug can be used to fight bacterial infections. However, to be sure, you’d also want to measure whether or not real, human patients receiving your treatment actually experienced improved outcomes. In this case, the petri dish test result is the surrogate outcome—it suggests that your drug works as intended, but it isn’t conclusive on its own.

Spotting Surrogate Outcomes

You can better assess a company’s claims when you’re able to identify surrogate outcomes. Specifically, you should look out for claims that reference blood test results, experiments done with cells in petri dishes, and experiments performed on lab animals. These are all surrogate outcomes. When you come across them, take them as a suggestion that a treatment might work, and not proof that the treatment actually does work. By contrast, if a company offers results in terms of real-world outcomes, such as heart attacks or deaths, you can take the study’s findings more seriously.

(Shortform note: When you notice that a study’s findings include surrogate outcomes, it can be helpful to look up whether they’re considered acceptable stand-ins, so that you know how seriously to take them. The FDA has produced a comprehensive table that includes all the surrogate outcomes that it has allowed researchers to use as evidence for their drugs. If a study’s surrogate outcomes are listed in this table, it’s a good sign that you can take them seriously.)

Section Three: Bad Press

In this final section, we’ll cover the various ways mainstream media outlets misrepresent scientific findings to the public. According to Goldacre, errors in science reporting usually happen because publications prioritize profit, which leads them to pressure their reporters to generate the most exciting stories they can as quickly as possible. This pressure leads reporters to choose sensational stories, and leaves them little time to verify the accuracy of their reporting.

(Shortform note: As opposed to the profit motive, journalists argue that one of the main reasons for their errors is that they often have to report on evolving events. Journalists describe trying to balance getting information out quickly with taking enough time to gain a complete understanding of a topic. Naturally, they argue, this process involves some mistakes.)

Goldacre argues that it’s important for publications to accurately report medical findings because of their unique role in spreading ideas. Misinformation can undermine public health campaigns and lead individuals to make dangerous health care decisions.

(Shortform note: Many journalists would agree that they have an ethical responsibility to report accurately, especially when writing about medical issues. Interestingly, journalistic codes of ethics often include the imperative to minimize harm—a sentiment echoed by the Hippocratic Oath, which is the cornerstone of medical ethics.)

There are a few major ways journalists tend to misrepresent scientific studies. Specifically, publications fail to vet their sources, sensationalize results, and neglect to publish the retractions and updates that would correct their mistakes. We’ll describe how all of these practices can mislead the public, and we’ll give you the tools to identify shoddy science reporting.

Bad Sources, Bad Stories

According to Goldacre, journalists often fail to properly vet the scientific sources they reference. When journalists use untrustworthy sources, they risk presenting misleading and potentially harmful information to the general public.

Goldacre argues that one of the main reasons journalists end up using weak sources is because most journalists aren’t scientists, and they lack the training to evaluate medical research. Even though mainstream publications often employ a few science reporters with more specific expertise, major stories tend to be assigned to higher-ranking general journalists instead.

(Shortform note: One reason general journalists may have a hard time reporting on science is that scientific results are often uncertain. As compared to current events, where there is a clear chain of events to report, scientific findings often suggest more tentative links. General journalists who are unfamiliar with these nuances tend to overstate inconclusive findings.)

However, Goldacre asserts that regular people can learn to evaluate scientific sources with a little work. The issue isn’t that journalists aren’t capable of understanding science, but that low industry standards and the pressure of deadlines disincentivize journalists from doing so.

(Shortform note: Journalists themselves corroborate Goldacre’s assertion that high-pressure environments in journalism are a systemic problem. Studies show that the vast majority of journalists experience trauma and burnout on the job. Without reforms in journalism, these stressful environments may continue to lead to mistakes in reporting.)

Spotting Bad Sources

When you read science stories in mainstream media outlets, you should investigate their sources. Once you’ve found the studies an article references, you can evaluate them yourself using the techniques we’ve developed in this guide. And, as Goldacre sees it, if an article doesn’t link to its sources, it probably isn’t trustworthy.

It can also be helpful to research the authors of the articles you read. Look into the author’s background and other work. If they have a science background and focus primarily on science reporting, it’s more likely that they have the right tools for the job. On the other hand, if the author of an article has no science background and reports mostly on other topics, it may be a sign that you should investigate their claims more thoroughly.

(Shortform note: In addition to looking into the backgrounds of science journalists you come across, you can also seek out trustworthy scientists on your own. Since the advent of social media, many scientists have taken to the internet to share their findings. Taking it on yourself to identify and follow reliable sources can help ensure you have access to accurate information.)

Sensational Stories

Furthermore, journalists misrepresent scientific information to the general public by sensationalizing clinical findings. Goldacre notes that this isn’t unique to science journalism, but is a more general problem with journalism. To maximize readership and revenue, publications often encourage their reporters to write shocking, provocative stories, even if it means exaggerating. However, because science journalism impacts public health, sensational science reporting can cause significant harm.

(Shortform note: While Goldacre primarily blames the media for sensationalizing science, some members of the scientific community argue that scientists themselves may be equally to blame. Seeking to bring themselves acclaim and notoriety, researchers and institutions may exaggerate their findings to the media.)

As Goldacre describes it, journalists tend to focus on scary findings because they’re seen as better attention grabbers. For instance, stories about common household chemicals causing cancer are more likely to attract readers than stories about incremental advancements in treating lactose intolerance. It isn’t inherently misleading to publish frightening stories, as long as those stories are accurate, but sadly this isn’t always the case.

(Shortform note: While scary stories may benefit publications in the short term, they may have negative impacts in the long run. Studies have shown that sensational stories cause consumers to become distrustful and apathetic toward news media, leading them to consume less news. At scale, growing distrust in news media likely hurts circulation for many publications.)

Spotting Sensationalism

If you think a certain publication is exaggerating in order to play upon your fears, look into the rest of their health and science reporting. If they tend to publish only negative, frightening stories, they probably aren’t giving you all the information.

Additionally, you should pay close attention to the way journalists report statistics. Often, stats are presented misleadingly to make research results sound more dramatic than they really are.

(Shortform note: Misleading statistics are not unique to science journalism. Business leaders note that issues with stats are rampant in the business world. As in science, these errors are usually due to the financial interests of researchers and journalists. It’s simply more exciting to readers to publish a story about dramatic success or failure, in both science and business.)

Specifically, Goldacre notes that journalists tend to reference relative risk increase out of context. Relative risk increase is the percentage increase in risk of a certain phenomenon. Citing relative risk increase without including hard data, such as case numbers and death counts, can make small increases in risk appear very large.

For example, suppose that a new treatment causes appendicitis risk to go from one patient per thousand to two patients per thousand. Expressed in terms of relative risk, that’s an increase of 100%, which sounds a lot scarier than an additional one case per thousand patients. Because of this effect, if an article references relative risk, you should refer back to the article’s sources and check the numbers out yourself.
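
To make the arithmetic concrete, here’s a tiny sketch using the hypothetical numbers from the example above (not data from any real trial):

```python
# Hypothetical appendicitis rates from the example above.
baseline_risk = 1 / 1000  # risk without the treatment
treated_risk = 2 / 1000   # risk with the treatment

relative_increase = (treated_risk - baseline_risk) / baseline_risk
absolute_increase = treated_risk - baseline_risk

print(f"Relative risk increase: {relative_increase:.0%}")  # 100%, which sounds alarming
print(f"Absolute risk increase: {absolute_increase * 1000:.0f} extra case per 1,000 patients")
# Same data, two very different impressions; always ask for the absolute numbers.
```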

(Shortform note: Because of the misleading nature of relative risk increase, some researchers recommend ignoring risk increase entirely. Instead, they argue that you should focus only on the hard numbers that affect patients, such as clinical outcomes and cost.)

Rare Retractions

Finally, Goldacre laments that when journalists publish findings that are later proven to be false, they rarely publish retractions that include new evidence. Because of the potential impact of journalism on public health, Goldacre argues that media outlets have an ethical obligation to provide accurate, up-to-date information.

(Shortform note: Some studies have found that journalists only publish retractions when researchers make a press release detailing how their findings have changed. These studies suggest that when retractions are necessary, researchers may be able to help the process along by reaching out to journalists directly and making press releases.)

The panic over the MMR vaccine is a perfect example of the real-world consequences that can occur when journalists don’t take it upon themselves to publish retractions. As Goldacre recounts, in the early 2000s, British journalists published numerous articles about the supposed link between the MMR (measles, mumps, and rubella) vaccine and autism. These stories were based on the results of a single study, which used poor experimental controls and relied on surrogate outcomes. While the study in question was later discredited due to its shoddy methods, few publications published retractions. As a result of the ensuing panic over the MMR vaccine, vaccination rates in the UK plummeted, leading to waves of preventable illness and death.

(Shortform note: Even when publications do publish retractions, it often happens years after the damage has been done. One publication involved in the British MMR scare launched an admirable campaign to promote the MMR vaccine. This 2019 campaign likely saved lives by encouraging vaccination, and had it been published sooner, it could have saved many more.)

Accounting for Missing Retractions

When you read an article, you should check to see when it was published, and even more importantly, you should check to see when the studies it references were published. Scientific research is an ongoing process, and results from just a few months ago may have already been overturned. Because of this, you should try to find the most recently published research into any particular subject. That way, you can be sure you’re up to date on the most effective treatments.

(Shortform note: While it may seem daunting to continually keep up to date on new findings, openness to new evidence is part of what makes the scientific method reliable. To account for constant change, remember that science is an ongoing process, and be ready to change your opinion when necessary.)
