This section focuses on the early development of probability and its transition from primarily serving as a tool for analyzing games of chance to attempting to tackle broader questions in science, particularly those related to human behavior and society. Clayton emphasizes the challenges and resistance faced by early thinkers who sought to apply probability methods to understand social phenomena.
This section delves into the various ways probability has been understood throughout history, highlighting the ongoing tension between objective and subjective views. It examines the successive attempts to pin down a precise definition, their strengths and weaknesses, and their implications for practical applications of probability.
As Clayton explains, the classical view of probability was the initial effort to mathematically define the concept. It defines the chance of an event occurring as the proportion of favorable outcomes to the total possible outcomes, assuming all outcomes are equally likely. This approach, as its name suggests, worked well primarily for games of chance where the uniformity of the apparatus (e.g., the identical sides of a die or the indistinguishable cards in a deck) made the assumption of equiprobability seem natural.
However, problems arose when applying this definition to situations where such symmetry was not apparent, or where the number of possible cases was not easily discernible. As Clayton describes, Galileo was consulted to resolve a dispute over whether rolling a 9 or a 10 with three six-sided dice is more likely. At first glance, the two sums seem equally likely, since each can be written as a sum of three numbers in six different ways. However, Galileo pointed out that the correct way to enumerate the possibilities was not by the sets of digits themselves, but by the distribution of numbers across the dice. This makes, say, the combination (3,3,3) less likely than the combination (6,2,1): there is only one way for all three dice to show 3, whereas there are six different permutations in which the three dice can show 6, 2, and 1.
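As a quick illustration (not from the book), Galileo's enumeration can be reproduced in a few lines of Python by counting ordered outcomes of three dice, which are the genuinely equiprobable cases:

```python
from itertools import product
from fractions import Fraction

# All 6^3 = 216 ordered outcomes of three dice are equally likely.
rolls = list(product(range(1, 7), repeat=3))

ways_9 = sum(1 for r in rolls if sum(r) == 9)    # ordered outcomes summing to 9
ways_10 = sum(1 for r in rolls if sum(r) == 10)  # ordered outcomes summing to 10

print(ways_9, ways_10)                  # 25 27
print(Fraction(ways_9, len(rolls)))     # 25/216
print(Fraction(ways_10, len(rolls)))    # 27/216
```

Counting by ordered outcomes rather than by unordered digit sets gives 25 ways to roll a 9 but 27 ways to roll a 10, vindicating Galileo's analysis.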
Furthermore, certain types of probabilities that seemed to have real-world meaning—like the likelihood that a particular person will die in the next 10 years or that it will rain tomorrow—did not fit easily within this classical definition. What hypothetical equally probable scenarios would you count to calculate these probabilities? Attempts to construct a collection of such cases seemed artificial at best. This suggested that probability needed to be understood differently.
Context
- Classical probability is often expressed as a fraction, where the numerator is the number of favorable outcomes and the denominator is the total number of possible outcomes. This requires a clear and finite set of outcomes, which is not always available in real-world situations.
- The classical interpretation laid the groundwork for probability theory, but its limitations highlighted the need for more sophisticated models that could handle uncertainty and variability in more complex systems. This evolution was crucial for the advancement of statistics and probability as scientific disciplines.
- In real-world scenarios, systems often involve numerous variables and interactions, making it difficult to identify all possible outcomes or ensure they are equally likely. For example, predicting weather involves countless atmospheric factors that don't lend themselves to simple enumeration.
- Galileo's analysis highlights the importance of considering all possible arrangements of outcomes. For example, the sum of 9 can be achieved with combinations like (3,3,3), (4,4,1), or (5,2,2), but each has a different number of permutations, affecting their likelihood.
- These types of probabilities often rely on subjective judgment or expert opinion rather than objective calculation. For example, predicting the likelihood of rain involves meteorological models and historical data, which are interpreted by experts.
- The assumption of equiprobability works well in controlled environments, such as dice games, where each outcome is designed to be equally likely. However, this assumption becomes problematic in complex, real-world situations where outcomes are influenced by numerous unpredictable factors.
The frequentist interpretation, which Clayton argues is the dominant view in modern statistics, addresses a key weakness of the classical approach by grounding probability in observable frequency data. It defines an event's probability as the relative frequency with which that event occurs over many trials. This resolved, at least for many, the problem of verifying whether a classical enumeration of equiprobable possibilities had been accurate. If, say, an experiment involving the repeated drawing of pebbles from an urn seemed to stabilize at a particular proportion of white pebbles drawn, then this could be taken as the chance of selecting a white pebble in any single draw. Bernoulli's Law of Large Numbers, which Clayton refers to as his "golden theorem," gave mathematical weight to this interpretation by showing that such a proportion would, in principle, converge to the "true" probability as the number of trials approaches infinity.
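The urn experiment is easy to sketch in Python. The proportion of white pebbles below (0.3) is an assumed value for illustration, not a figure from the book; the point is only that the observed frequency stabilizes near the underlying probability as the number of draws grows:

```python
import random

random.seed(42)
TRUE_P = 0.3  # assumed "true" chance of drawing a white pebble

def observed_frequency(n_draws):
    """Relative frequency of white pebbles over n_draws simulated draws."""
    white = sum(1 for _ in range(n_draws) if random.random() < TRUE_P)
    return white / n_draws

# Frequencies fluctuate for small samples and settle down for large ones.
for n in (10, 1_000, 100_000):
    print(n, observed_frequency(n))
```

This is the Law of Large Numbers in miniature: the long-run frequency converges toward the probability that generated the draws, which is what made frequencies seem like a solid empirical foundation for probability.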
This reliance on measurable frequencies had a great deal of appeal, especially for researchers in disciplines such as astronomy, where the usual technique of calculating an average of measurements of some fixed unknown quantity to correct for error in observation seemed to align with the frequentist idea of averaging...
Having established the historical and political circumstances that led to the development of modern frequentist statistics, Clayton now turns his attention to the core fallacies in these techniques. He shows how significance testing and its reliance on p-values rest on an incomplete understanding of probabilistic reasoning, and how the frequentist insistence that probability must be quantifiable as frequency leads to demonstrably illogical inferences.
This section presents the core logical argument of the book, pointing out the key weakness of any inference procedure, such as significance testing, that tries to use sampling probabilities to make statements about hypotheses. Clayton argues that the symmetry Bernoulli assumed between sampling probability and inferential probability is merely an illusion, and he describes the missing ingredients of any complete probabilistic inference: priors and alternative explanations.
Bernoulli's "golden theorem," which he named the Large...
Having thoroughly dismantled the frequentist school, Clayton concludes with a prescription for how inferences in statistics should be done. He presents the Bayesian approach as a natural extension of logical reasoning to cases of uncertainty and shows its strength in handling even the most pathological examples from among those considered earlier.
Clayton explains that Jaynes's probability theory is based on the concept that probability is simply a generalization of deductive reasoning to cases where we have less-than-perfect information. The sum rule and the product rule—which respectively describe how a proposition's probability relates to its negation, and how a joint proposition's probability relates to its individual component parts—can be seen as analogous to logical axioms that allow us to combine propositions consistently. Bayes' theorem then serves to update our beliefs regarding a hypothesis, based on new evidence and those basic logical operations. In this interpretation, then, probability concerns information and plausibility, and frequencies are simply elements of the inferential mechanism rather...
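The sum rule, product rule, and Bayes' theorem can be sketched in a few lines of Python. The numbers below are hypothetical, chosen only to show the mechanics of updating a belief on new evidence:

```python
# Sum rule:     P(not H)   = 1 - P(H)
# Product rule: P(E and H) = P(E | H) * P(H)
# Bayes:        P(H | E)   = P(E | H) * P(H) / P(E)

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|not H)."""
    p_not_h = 1 - prior                                        # sum rule
    evidence = likelihood * prior + likelihood_alt * p_not_h   # total probability
    return likelihood * prior / evidence                       # Bayes' theorem

# Hypothetical example: a hypothesis with prior 0.01, and evidence that
# occurs 95% of the time when H is true but only 5% when it is false.
posterior = bayes_update(prior=0.01, likelihood=0.95, likelihood_alt=0.05)
print(round(posterior, 3))  # 0.161
```

Note how the posterior depends on both the prior and the probability of the evidence under the alternative: exactly the two "missing ingredients" Clayton says frequentist inference leaves out.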