PDF Summary: Calling Bullshit, by Carl T. Bergstrom and Jevin D. West

Below is a preview of the Shortform book summary of Calling Bullshit by Carl T. Bergstrom and Jevin D. West. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of Calling Bullshit

With the proliferation of misinformation online, in the news, and even in academia, the modern world is replete with bullshit—a phenomenon that professors Carl T. Bergstrom and Jevin D. West define as the use of misleading evidence to persuade an audience. But we aren’t defenseless against this bullshit, they argue. On the contrary, in their 2020 book, Calling Bullshit, Bergstrom and West contend that anyone can learn how to detect and refute bullshit in its many forms.

In this guide, we’ll discuss Bergstrom and West’s conception of bullshit and the varieties of bullshit most prevalent in contemporary society—namely, bullshit based on data. We’ll also examine their concrete strategies for identifying and calling bullshit. Throughout the guide, we’ll supplement their ideas with additional tips for addressing bullshit and analyze real-world examples of bullshit infecting our information sources.

(continued)...

Strategy #2: Numbers Without Context

In a similar vein, Bergstrom and West contend that taking numbers out of context can lead to bullshit. For example, if a politician proudly announces that his economic plan has increased the average citizen’s income by $10,000, the economic plan could seem beneficial for the average citizen. However, if this plan only increased the income of the wealthiest 1% of citizens, thus inflating the average, then it may be less universally beneficial than it appears. Thus, by presenting the average income increase without the context of how this increase was distributed across the population, the politician can promote a false view of his economic plan.
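
To make the distortion concrete, here’s a minimal sketch in Python with hypothetical figures: one hundred citizens, with the plan’s entire gain going to the single wealthiest one. The mean income rises by exactly $10,000 while the typical (median) citizen gains nothing:

```python
# Hypothetical figures: 99 citizens earn $50,000; one wealthy citizen earns $1M.
from statistics import mean, median

incomes_before = [50_000] * 99 + [1_000_000]
incomes_after = [50_000] * 99 + [2_000_000]  # the plan's entire gain goes to the top earner

print(mean(incomes_after) - mean(incomes_before))      # 10000.0 -- the touted "average" gain
print(median(incomes_after) - median(incomes_before))  # 0.0 -- the typical citizen gains nothing
```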

(Shortform note: Statistics are regularly taken out of context in advertisements, where, without being strictly inaccurate, they can mislead consumers into purchasing a product. For example, in 2007 Colgate toothpaste advertised the claim that “80% of dentists recommend Colgate,” making it seem as if 80% of dentists recommended it over other brands. That implication was false: The 80% figure stemmed from a survey that allowed dentists to “recommend” as many brands as they wanted to.)

Strategy #3: Misleading Percentages

Further, Bergstrom and West explain that using percentages can obscure true values and promote bullshit. For instance, a tobacco company might point out that in 2022, of 330 million Americans, only 0.2% died of cancer. While technically correct, this percentage obscures the fact that cancer accounted for almost 20% of deaths among Americans in that same year, as it caused 600,000 deaths out of 3.2 million. Percentages can therefore be a powerful vehicle for disguising bullshit.
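
Using the passage’s round figures, a few lines of Python show how the same 600,000 deaths can be framed as either negligible or alarming:

```python
# Round figures from the example above (2022, US).
population = 330_000_000
total_deaths = 3_200_000
cancer_deaths = 600_000

print(f"{cancer_deaths / population:.2%} of all Americans")  # ~0.18% -- sounds negligible
print(f"{cancer_deaths / total_deaths:.2%} of all deaths")   # ~18.75% -- nearly one in five
```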

(Shortform note: Misleading percentages were especially common during the Covid-19 pandemic, in which those skeptical about Covid-19’s severity often claimed that 99% of patients with Covid-19 survived. While this claim implied that concerns about Covid-19 were vastly overblown, it obscures the fact that a 1% mortality rate is remarkably high for a virus—nearly 10 times higher than the seasonal flu, which has around a 0.1% mortality rate. Further, it obscures the fact that because Covid-19 was sufficiently widespread, it still caused over one million deaths in the US.)

Misleading Data Visualizations

According to Bergstrom and West, bullshitters can further manipulate valid data by using strategies to create misleading representations. Although Bergstrom and West list an array of strategies for misleading viewers, we’ll focus on three particularly illuminating ones: prioritizing form over function, shoehorning data into inappropriate visualizations, and using disproportionate visualizations.

Misleading Strategy #1: Prioritizing Form Over Function

Bergstrom and West argue that prioritizing form over function can create bullshit that obscures the data that a visualization attempts to convey. For instance, imagine that you wanted to represent the proportion of Americans associated with a specific religious affiliation in graphical form. If, instead of representing this information via a simple bar chart, you chose to represent it via a shaded map of the United States, that shaded map would be neglecting function for the sake of form. After all, although the shaded map of the United States might look cooler, it would also be more difficult to interpret.

(Shortform note: Although graphs that prioritize form to the detriment of function are harmful, that doesn’t mean form should be given no consideration whatsoever. On the contrary, one team of researchers found that, when interpreting certain aesthetically appealing line graphs, viewers could extract the relevant information just as easily as they could on more austere line graphs. Moreover, these viewers rated the aesthetic graphs as more visually pleasing than the austere graphs, meaning they were more attractive without compromising functionality.)

Misleading Strategy #2: Shoehorning Data Into Inappropriate Visualizations

While an excessive focus on form can be innocuous, Bergstrom and West contend that shoehorning data into inappropriate forms is an unequivocal attempt to create bullshit. The clearest example of this is the proliferation of visual representations that parallel the periodic table; for instance, there are periodic tables of marketing techniques, cryptocurrencies, and typefaces. But, while the original periodic table was carefully designed to convey the chemical groupings of different elements, these mock periodic tables are not. Rather, they attempt to convey an appearance of rigor which their underlying data lack.

(Shortform note: Despite Bergstrom and West’s insistence that these faux periodic tables lend fake credence to unrigorous data, some of these tables have even slipped into formal academic research. For example, the periodic table of cryptocurrencies—an attempt to categorize various cryptocurrencies that exist on the blockchain—was first published in an academic paper.)

Misleading Strategy #3: Using Disproportionate Visualizations

Finally, the authors argue that certain data visualizations spread bullshit by violating statistician Edward Tufte’s requirement of proportionate visualizations. In his 1983 book, The Visual Display of Quantitative Information, Tufte argues that the size of regions that represent data should be proportionate to the values they represent. For instance, if you’re creating a bar chart that illustrates the number of Nobel Prize winners by country, then the bar representing Germany’s 111 Nobel laureates should be just over five times as large as the bar representing Austria’s 22 Nobel laureates.

(Shortform note: In his book, Tufte’s principle of proportionate visualization is one in a series of foundational principles for accurately representing data. Counterintuitively, another such principle is that tables are superior to graphs when dealing with smaller data sets since graphs can easily be manipulated whereas tables provide the data in a neutral manner.)

Bergstrom and West point out that, although the principle of proportionate visualization seems commonsensical, violations of it are common—any time a bar chart’s vertical axis doesn’t start at zero, it violates this principle. Such charts can convey bullshit because they misrepresent the scale of the difference between two values, providing ripe ground for false inferences. For example, imagine we created a bar chart measuring Nobel laureates by country whose vertical axis started at 20 rather than zero. Then, Germany’s bar would be 91 units tall (111 minus 20) while Austria’s would be only 2 units tall (22 minus 20), making Germany appear more than 45 times more decorated than Austria rather than roughly five times.
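
A minimal sketch (assuming matplotlib is available) makes the distortion visible by drawing the same two bars twice, once from a zero baseline and once from a baseline of 20:

```python
import matplotlib.pyplot as plt

countries, laureates = ["Germany", "Austria"], [111, 22]

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 4))

honest.bar(countries, laureates)
honest.set_title("Baseline at 0: ~5x difference")

misleading.bar(countries, laureates)
misleading.set_ylim(20, 115)  # truncating the axis leaves visible bars of 91 vs 2 units
misleading.set_title("Baseline at 20: looks like ~45x")

plt.show()
```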

(Shortform note: Although bar charts must start at zero, the same principle doesn’t necessarily apply to line graphs. According to experts, line graphs that start at zero can obscure significant trends when the changes in the data are small relative to the full scale. For example, if we plotted the earth’s average surface temperature between 1900 and 2022—which has risen from about 57°F to 59°F—the increase would hardly be discernible if our line graph started at zero. Consequently, starting the graph at zero would obscure an important trend, since climate scientists consider that two-degree increase evidence of a dangerous long-term warming trend.)

How Experts Give Rise to Bullshit

While there are various ways to mistakenly collect and interpret data, it’s tempting to think that official forms of data analysis are immune from bullshit. But, according to Bergstrom and West, this couldn’t be further from the truth. They argue that bullshit is widespread both in science and in so-called “big data.”

Scientific Bullshit

According to Bergstrom and West, even the institution of science isn’t immune from bullshit. On the contrary, they argue that science’s focus on statistical significance gives rise to bullshit for two reasons: We can easily misinterpret what statistical significance means, and we’re only exposed to statistically significant findings because of publication bias.

For context, they explain that a statistically significant finding is one whose p-value—a statistical measure of how likely it is that a study’s result happened by pure chance—falls below a conventional threshold, typically 0.05.

For example, imagine you wanted to see if there was a relationship between smoking cigarettes daily and getting lung cancer. You could perform a statistical analysis comparing lung cancer rates among people who did and did not smoke cigarettes daily. If you found a positive correlation between smoking and cancer and the resulting p-value was less than 0.05, scientists would normally consider that a statistically significant result—one that is unlikely to occur from chance alone. If your analysis yielded a p-value of 0.01, that would mean there was only a 1% chance of observing a correlation at least that strong if smoking and cancer were in fact unrelated.

(Shortform note: Without going too deep into the underlying mathematics, p-values are normally calculated by integrating over a probability distribution of all possible outcomes, such that the p-value equals the fraction of the distribution’s area at or beyond the value you’re testing. For example, if your study of smoking and cancer established a positive correlation of 0.8, and only (say) 1% of correlations on a random distribution are 0.8 or greater, then your p-value would be 0.01.)

However, Bergstrom and West point out that many people misinterpret the p-value, taking it to be the likelihood that there’s no correlation between the variables tested, which leads to bullshit when they overstate the results of statistical analyses. For example, just by sheer chance, it’s possible that if we flipped two coins 100 times, they would land on the same side 60 times, yielding a p-value of around 0.03 (in other words, there’s about a 3% chance of getting this result by pure luck). But we would be mistaken to conclude that the likelihood the two coins are connected is 0.97 because we know that barring any funny business, two simultaneous coin flips are independent events. So, we would instead be justified in concluding that the low p-value was a statistical anomaly.
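
The roughly 0.03 figure in the coin-flip example is just the upper tail of a binomial distribution. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import binom

# Two fair, independent coins match on any given flip with probability 1/2.
# The p-value is the chance of 60 or more matches in 100 flips.
p_value = binom.sf(59, 100, 0.5)  # survival function: P(matches > 59)
print(f"{p_value:.3f}")  # ~0.028: "significant," yet the coins are unrelated by construction
```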

The Consequences of Misrepresented P-Values

The misinterpretation of p-values can have dire consequences in the courtroom, as seen in the 1998 trial of Sally Clark, a mother accused of double homicide after her first child died with no apparent cause at 11 weeks of age and her second died without cause at eight weeks. The prosecution argued that, statistically, the prior probability of two infants dying an unexplained death was one in 73 million, and thus concluded that Clark had likely murdered her children.

However, they failed to consider that the prior probability of a double homicide of two infants was even lower than one in 73 million, leading statisticians to later point out that it was actually more likely that the infants had suffered unexplained deaths. Although Clark was initially convicted of double homicide, she was exonerated in 2003.
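
The statisticians’ correction amounts to comparing the rarity of the two competing explanations head to head, rather than citing one rarity in isolation. Here’s a minimal sketch: the 1-in-73-million figure is the trial’s (itself disputed) number, while the double-homicide prior is purely hypothetical, chosen only to illustrate the logic:

```python
# The trial's (disputed) figure for two unexplained infant deaths in one family:
p_two_unexplained = 1 / 73_000_000
# Hypothetical prior for a mother committing a double infant homicide -- rarer still:
p_double_homicide = 1 / 2_000_000_000

# Given that two infants in the same family died, compare the explanations directly:
relative_odds = p_two_unexplained / p_double_homicide
print(f"Unexplained deaths are ~{relative_odds:.0f}x more likely than double homicide")
```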

Further, Bergstrom and West contend that publication bias can promote bullshit by creating a distorted view of scientific studies. Publication bias refers to scientific journals’ tendency to publish only statistically significant results, since such results are considered more interesting than non-significant ones. In practice, this means published scientific studies often report statistically significant results even when these results don’t necessarily indicate a meaningful connection.

For example, even though there isn’t a connection between astrological signs and political views, if 100 studies attempted to test this relationship, we should expect about five to have a p-value below 0.05. Because these five studies would likely get published while the other 95 wouldn’t, scientific journals would inadvertently promote the bullshit view that there’s a connection between astrology and politics because of publication bias.
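
This expectation is easy to check by simulation. The sketch below (assuming NumPy and SciPy) runs 100 “studies” of an effect that doesn’t exist and counts how many clear the p < 0.05 bar anyway:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    # Both groups come from the same distribution: there is no real effect to find.
    group_a = rng.normal(size=30)
    group_b = rng.normal(size=30)
    if ttest_ind(group_a, group_b).pvalue < 0.05:
        false_positives += 1

print(false_positives, "of 100 null studies cleared p < 0.05")  # typically ~5
```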

(Shortform note: Experts point out that, although publication bias is partially due to journals preferring to publish statistically significant results, it’s also partially due to authors often refusing to submit statistically insignificant results in the first place. For many authors, this decision is practical: Because experiments with statistically insignificant results are much more common than those with statistically significant results, submitting insignificant results could overwhelm journals with submissions.)

Big Data Bullshit

In a similar vein, Bergstrom and West argue that big data—a technological discipline that deals with exceptionally large and complex data sets using advanced analytics—can foster bullshit because it can incorporate poor training data, and it can find illusory connections by chance.

For context, Bergstrom and West explain how big data generates computer programs. They relate that researchers input an enormous amount of labeled training data into an initial learning algorithm. For instance, if they were using big data to create a program that could accurately guess people’s ages from pictures, they would feed the learning algorithm pictures of people that included their age. Then, by establishing connections between these training data, the learning algorithm generates a new program for predicting people’s ages. If all goes well, this program will be able to correctly assess new test data—in this case, unfamiliar pictures of people whose ages it attempts to predict.
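
Bergstrom and West describe this pipeline only in general terms; a minimal sketch (assuming scikit-learn and NumPy, with synthetic numbers standing in for labeled photos) shows the train-then-test shape they have in mind:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for 1,000 labeled photos: 10 numeric "image features" each.
features = rng.normal(size=(1_000, 10))
ages = 40 + 10 * features[:, 0] + rng.normal(size=1_000)  # labels loosely tied to one feature

X_train, X_test, y_train, y_test = train_test_split(features, ages, random_state=0)

# The "learning algorithm" turns labeled training data into a predictive program...
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
# ...which is then judged on test data it has never seen.
print(f"R^2 on unfamiliar data: {model.score(X_test, y_test):.2f}")
```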

(Shortform note: ChatGPT, a chatbot launched by OpenAI in November 2022, is itself a byproduct of big-data-fueled machine learning, as it processed an immense amount of training text to create coherent sequences of words in response to test data (inquiries from users). The widespread success of ChatGPT—and other related large language models—suggests that although Bergstrom and West may be correct that big data can propagate bullshit, it can also create revolutionary forms of artificial intelligence whose impact is felt worldwide.)

However, Bergstrom and West argue that flawed training data can lead to bullshit programs. For example, imagine that we used big data to develop a program that allegedly can predict someone’s socioeconomic status based on their facial structure, using profile pictures from Facebook as our training data. One reason this training data could be flawed is that people from higher socioeconomic backgrounds typically own better cameras, and thus have higher-resolution profile pictures. Thus, our program might not be identifying socioeconomic status at all, but rather camera resolution. In turn, when exposed to test data not sourced from Facebook, the program would likely fail to predict socioeconomic status accurately.

(Shortform note: These bullshit programs can perpetuate discrimination in the real world, as illustrated by Amazon’s applicant-evaluation tool, which was found in 2018 to consistently discriminate against women. As training data, Amazon had exposed its AI to resumés from overwhelmingly male past candidates, leading the program to favor male applicants in the test data—that is, when reviewing current applicants’ resumés. For instance, it penalized resumés that merely included the term “women,” as in “women’s health group,” and it learned to discredit applicants from certain all-women’s universities.)

In addition, Bergstrom and West point out that, when given enough training data, these big data programs will often find chance connections that don’t carry over to test data. For instance, imagine that we created a big data program that aimed to predict presidential election outcomes based on the frequency of certain keywords in Facebook posts. Given enough posts, chance connections between certain terms and past outcomes may appear predictive. For example, it’s possible that posts mentioning “Tom Brady” have historically coincided with Republican victories, simply because the Patriots happened to win shortly before elections that Republicans won.
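
This failure mode is easy to reproduce in miniature. In the sketch below (assuming NumPy; all data are randomly generated), we search thousands of random “keyword” frequencies for the one that best matches a dozen past outcomes, then watch the match typically dissolve on fresh data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elections, n_keywords = 12, 5_000
outcomes = np.tile([0, 1], 6)  # 12 past outcomes (1 = Republican win), alternating for illustration
frequencies = rng.normal(size=(n_elections, n_keywords))  # random keyword frequencies

# Search for the keyword that best "predicts" the past...
corr = np.array([abs(np.corrcoef(frequencies[:, k], outcomes)[0, 1]) for k in range(n_keywords)])
best = int(corr.argmax())
print(f"Best of {n_keywords} keywords correlates at {corr[best]:.2f} in-sample")  # looks impressive

# ...then test it on elections it never saw: the correlation typically collapses toward zero.
new_outcomes = rng.permutation(outcomes)
new_frequencies = rng.normal(size=(n_elections, n_keywords))
print(f"Same keyword on fresh data: {abs(np.corrcoef(new_frequencies[:, best], new_outcomes)[0, 1]):.2f}")
```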

(Shortform note: One way to distinguish genuine causal connections from spurious ones is to seek out confounding variables—a third factor that drives both variables and thus explains their apparent connection. For example, the number of master’s degrees issued and box office revenues have been tightly correlated since the early 1900s, but this correlation is likely due to a third factor—population growth—that is driving increases in both master’s degrees and box office revenues.)
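
To see how controlling for a confounder works in practice, here’s a minimal sketch with made-up numbers: population drives both variables, producing a tight raw correlation that vanishes once population’s influence is subtracted out:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(120)  # roughly 1900 onward
population = 76 + 2.1 * years + rng.normal(scale=3, size=120)  # hypothetical steady growth

# Neither variable causes the other; both are driven by population.
degrees = 0.5 * population + rng.normal(scale=5, size=120)
box_office = 0.8 * population + rng.normal(scale=5, size=120)

print(f"Raw correlation: {np.corrcoef(degrees, box_office)[0, 1]:.2f}")  # close to 1

def residual(y):
    """The part of y that a linear fit on population can't explain."""
    return y - np.polyval(np.polyfit(population, y, 1), population)

# Once the confounder is controlled for, the apparent connection disappears.
print(f"After controlling for population: {np.corrcoef(residual(degrees), residual(box_office))[0, 1]:.2f}")
```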

How to Deal With Bullshit

Although the modern world is rife with bullshit, we aren’t defenseless against it. On the contrary, Bergstrom and West argue that several strategies can help us recognize and refute bullshit. In this section, we’ll examine these strategies in depth, first focusing on ways to identify bullshit before concluding with ways to call bullshit.

How to Identify Bullshit

We’ll focus on three of Bergstrom and West’s key strategies for identifying bullshit: Evaluate information sources, scrutinize claims that are “too good to be true,” and be wary of confirmation bias.

Strategy #1: Evaluate Information Sources

Bergstrom and West explain that we should assess information sources by asking who the information is coming from and what their possible motivations are. After all, many information sources have ulterior motives, meaning they’re more likely to use bullshit to support their aims. For example, in light of conclusive evidence linking smoking to lung cancer in the 1950s, the tobacco industry conducted a marketing campaign that sought to undermine this scientific consensus (and thus retain their massive profits). This campaign was rife with bullshit, but by asking what the tobacco industry’s underlying motivations were, you could have easily detected this bullshit.

(Shortform note: One important step in evaluating information sources is to scan for bias—that is, a predisposition toward a certain viewpoint that can conflict with neutral reporting. Experts note that there are many indicators of a biased information source. For instance, biased sources typically use extreme language with sweeping generalizations, rather than more circumspect, qualified language. Moreover, biased sources are often emotionally charged rather than dispassionate reporters of facts.)

Strategy #2: Scrutinize Implausible Claims

Next, Bergstrom and West recommend being careful when you encounter claims that seem too good to be true. In other words, if a claim seems wildly implausible, there’s a good chance that it’s bullshit. For instance, in so-called “Nigerian prince scams,” scammers email potential targets claiming to be international royalty in need of financial assistance, promising to quickly repay any loan with sizable interest. These offers are too good to be true—Nigerian royalty certainly wouldn’t solicit loans via anonymous emails. Thus, by treating such claims with a healthy dose of skepticism, you’ll often be able to detect bullshit.

(Shortform note: Bergstrom and West’s recommended dose of skepticism jibes well with astronomer Carl Sagan’s aphorism that “extraordinary claims require extraordinary evidence”—that is, to believe an incredible claim, you should have similarly incredible evidence. In philosophy, this notion follows from evidentialism, the view that we should proportion our beliefs to our relevant evidence. By applying this insight from evidentialism, you’ll be less likely to believe outlandish claims without sufficient evidence in their favor.)

Strategy #3: Remain Cognizant of Confirmation Bias

Finally, Bergstrom and West caution us that confirmation bias—the tendency to hold claims that conform to our preexisting views to lower evidentiary standards—can blind us to bullshit. For example, imagine that allegations broke that a politician you disliked had committed fraud or had an affair. If you already distrusted that politician, you might be more likely to accept these allegations at face value—even if they ultimately proved to be bullshit.

(Shortform note: Experts offer several concrete strategies for mitigating the effects of confirmation bias. For instance, because confirmation bias is most prevalent when you’re only exposed to information sources that reflect your views, you should diversify your information sources to create a more objective foundation of facts. Additionally, the experts suggest seeking out discussions with those who disagree with you, since such individuals can make you more cognizant of your own biases and expose you to new information and ideas.)

How to Call Bullshit

Finally, Bergstrom and West acknowledge that identifying bullshit alone isn’t enough to mitigate its spread. To that end, we’ll discuss three of their techniques for calling bullshit so that others don’t fall for it: Construct a reductio ad absurdum, provide counterexamples, and use clarifying analogies.

Technique #1: Construct a Reductio ad Absurdum

Bergstrom and West explain that a reductio ad absurdum (“reduction to absurdity”) can be a powerful tool for exposing bullshit. Constructing a reductio ad absurdum involves showing that a claim has an obviously false consequence, and thus logically cannot be true. For example, imagine that you read an op-ed arguing that parents should have absolutely no restrictions on how they choose to raise their children. In this case, you could construct a reductio ad absurdum pointing out that this op-ed’s view implies that parents should have the right to abuse or neglect their children. Because this implication is ridiculous, the original view must be false.

How to Respond to an Alleged Reductio ad Absurdum

According to philosophers Walter Sinnott-Armstrong and Robert Fogelin, there are three primary responses available to you when you’re faced with a supposed reductio ad absurdum of one of your views.

First, you can flatly deny that your view has the absurd consequence that your critic alleges. For instance, when utilitarians—those who believe that actions are morally permissible if and only if they cause the greatest net pleasure—are faced with absurd consequences of their view, they often deny that their view has these consequences. Critics object, for example, that utilitarianism entails it would be permissible for a sadist to murder someone if he derived enough pleasure from it, but utilitarians might respond that in practice, allowing a sadist to murder someone would cause widespread panic that doesn’t maximize net pleasure.

Second, you can make slight adjustments to your view so that it no longer has an absurd consequence. Returning to the example of an op-ed that argues parents should have no restrictions in raising their children, the author might instead tweak their view and claim that parents should have minimal restrictions on raising their children so that they aren’t committed to allowing child abuse.

Finally, you can “bite the bullet” and accept the consequence that others allege is absurd. For instance, if our author truly was committed to the radical view that parents should be able to raise their children however they see fit, then they might bite the bullet and admit that parents should be allowed to neglect their children. Because biting the bullet can require accepting unpalatable consequences, it’s generally best as a last resort.

Technique #2: Provide Counterexamples

A similar technique for calling bullshit involves providing counterexamples to bullshit claims. To provide a counterexample, point out a situation in which a bullshit theory or claim makes a false prediction. For instance, if someone made the sweeping claim that you must get a college education to make a good living, you could provide a counterexample by pointing to the many millionaire entrepreneurs who don’t have degrees. Counterexamples thus provide a simple way to refute bullshit generalizations.

(Shortform note: Although Bergstrom and West write as if counterexamples are wholly distinct from reductio ad absurdum, the two are actually close relatives. After all, we can understand counterexamples as exposing an absurd consequence of a given view. For instance, the counterexample of millionaires without college degrees underscores a false consequence of the view that you need a college education to be successful—namely, that all millionaires have college degrees.)

Technique #3: Use Clarifying Analogies

Finally, Bergstrom and West argue that analogies can be a useful tool for shedding light on bullshit that hides in seemingly plausible claims. To provide an analogy that exposes bullshit in an argument, create an argument that parallels the bullshit argument but is clearly invalid. For example, imagine that several parents defended the decision to spank children on the grounds that they had spanked their children without any adverse effects. To underscore why this argument is bullshit, you could offer an analogous defense of drunk driving that notes that many people who drive drunk don’t get into car crashes. This analogy makes it clear that just because a practice sometimes has no adverse effects, that doesn’t mean it’s a good practice.

(Shortform note: Philosophers have identified a wide array of commonsense guidelines for assessing how strongly an analogy supports a given conclusion. For example, analogies between two arguments (or domains) that have deep structural similarities are more powerful than those between two arguments which are only superficially similar. Further, providing multiple apt analogies is more persuasive because, while you might find one analogy by sheer coincidence, you’re less likely to find several analogies by chance alone.)
