This article is an excerpt from the Shortform book guide to "Naked Statistics" by Charles Wheelan. Shortform has the world's best summaries and analyses of books you should be reading.


What is significance in statistics? What does “statistically significant” mean?

In statistics, significance refers to the degree of confidence that the results of a statistical analysis reflect a real effect rather than random variation. When results are said to be “statistically significant,” the observed relationship between variables is unlikely to be due to chance alone.

Keep reading to learn about the concept of statistical significance, explained in simple terms.

## Statistical Significance Explained

What does “statistically significant” mean? The results of statistical analyses are often reported as being “statistically significant” or “not statistically significant.” **When results are statistically significant, it means that you can be reasonably certain that your observed results are due to the variable you are measuring and not to random chance.**

Statistical significance is often reported with a statistic called the p-value. The p-value tells you how likely you would be to see results at least as extreme as yours if the null hypothesis were true. In other words, it tells you the likelihood of getting your results if the variable you’re measuring really has no effect and chance alone influenced the data.

**A small p-value adds confidence to rejecting the null hypothesis.** When the p-value is less than your established significance level, you can report your results as *statistically significant*.

For example, say our vitamin company chooses a .05 significance level for their statistical analysis, meaning that they want to be 95% confident when they reject the null hypothesis and move forward with the new formula. Their statistical analysis shows a p-value of .01. This means that if the null hypothesis were true—that is, if the new formula really had no effect—there would be only a 1% chance of getting results like theirs from random chance alone. Since this p-value falls below their .05 threshold, they report their results as “statistically significant.”
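The decision rule in this example can be sketched in a few lines of Python. This is a minimal illustration using the hypothetical numbers above (the .05 threshold and .01 p-value come from the example, not from any real analysis):

```python
# Hypothetical numbers from the vitamin-company example (a sketch, not a real analysis)
alpha = 0.05    # significance threshold chosen before running the analysis
p_value = 0.01  # p-value produced by the statistical analysis

# The decision rule: reject the null hypothesis only when the p-value falls below alpha
statistically_significant = p_value < alpha
print(statistically_significant)  # True
```

The key design point is that the threshold is fixed *before* looking at the data; the p-value is then compared against it, not interpreted on its own.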

### P-Value Misconceptions

The p-value is a tool for determining whether the values in our dataset are “significant” from a statistical perspective. Since the p-value can be tricky to grasp, there are several common misconceptions about it. We’ll highlight two of them.

First, people mistakenly believe the p-value communicates how strong an effect one variable has on another. For example, if your data showed a statistically significant relationship between eating spinach and being able to lift heavier weights an hour later, and your p-value was .001, you might infer that spinach has a large impact on strength. However, a small p-value only means a small likelihood that your results were due to random chance. The reality might be that participants were able to lift just an extra two ounces after eating spinach.
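This point can be demonstrated with a short simulation. The sketch below is hypothetical: it draws a large sample from a population with a deliberately tiny true effect (a made-up `tiny_effect` value standing in for the "two extra ounces"), and shows that the p-value can still come out very small. It uses a simple one-sided z-test with a known standard deviation of 1 for simplicity:

```python
import math
import random

random.seed(1)

# Hypothetical setup: a tiny true effect measured in a very large sample
n = 100_000
tiny_effect = 0.02  # a made-up, very small true effect (think "two extra ounces")
data = [random.gauss(tiny_effect, 1.0) for _ in range(n)]

# Simple one-sided z-test against a null mean of 0, assuming sigma = 1 is known
mean = sum(data) / n
standard_error = 1.0 / math.sqrt(n)
z = mean / standard_error

# Upper-tail normal p-value computed via the complementary error function
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"observed effect = {mean:.3f}, p-value = {p_value:.6f}")
```

Even though the estimated effect is tiny, the huge sample size drives the p-value far below any conventional threshold—small p-values measure confidence that an effect exists, not how big it is.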

The second common misconception is that a small p-value means that you are likely to get the same result if you run the experiment again. In our above example, since your p-value of .001 indicates a .1% chance of getting your results under the null hypothesis, you might infer that you don’t need to do the experiment again because your results are pretty conclusive. However, the p-value is specific to the dataset being analyzed and can vary widely even between different samples testing the same variable. This is especially true with smaller sample sizes.
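A small simulation makes this variability concrete. The sketch below is hypothetical: it repeatedly draws small samples from the *same* slightly biased coin (the made-up `true_p` value) and computes a simple one-sided binomial p-value for each sample. Nothing about the underlying effect changes between runs, yet the p-values typically spread widely:

```python
import random
from math import comb

random.seed(0)

def binom_p_value(k, n, p=0.5):
    """One-sided p-value: probability of k or more successes under a fair-coin null."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# The same slightly biased coin every time (a made-up "true effect")
true_p = 0.7
n = 15  # deliberately small sample size

p_values = []
for _ in range(5):
    successes = sum(random.random() < true_p for _ in range(n))
    p_values.append(round(binom_p_value(successes, n), 3))

print(p_values)  # typically a wide spread of p-values, even though nothing changed
```

With larger `n`, the p-values would cluster more tightly—which is why the variability problem is most acute for small samples.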

### ———End of Preview———

#### Like what you just read? **Read the rest of the world's best book summary and analysis of Charles Wheelan's "Naked Statistics" at Shortform**.

Here's what you'll find in our **full Naked Statistics summary**:

- An explanation and breakdown of statistics into digestible terms
- How statistics can inform collective decision-making
- Why learning statistics is an exercise in self-empowerment