Understanding the Limitations of Descriptive Statistics

What is descriptive analysis? How do descriptive statistics help us make sense of data? What is the main pitfall of descriptive statistics as a research tool? Descriptive statistics take information in a data set and condense it into a meaningful figure like an average or percentile. Descriptive statistics help us summarize and describe data, characterize relationships, and make predictions. While descriptive statistics can help us make sense of data, they should be used with caution: Descriptive statistics tell us what happened, but they don’t necessarily tell us why. Keep reading to learn about the limitations of descriptive statistics.
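
As a minimal sketch of the idea that descriptive statistics condense a data set into meaningful figures like an average or percentile, here is a toy example using Python's standard library (the scores are invented for illustration):

```python
import statistics

# Hypothetical data set: test scores for a small class.
scores = [71, 85, 62, 90, 78, 88, 95, 67]

# Descriptive statistics condense the whole data set into single figures.
mean_score = statistics.mean(scores)       # the average
quartiles = statistics.quantiles(scores, n=4)  # 25th/50th/75th percentiles

print(mean_score)  # 79.5
print(quartiles)
```

Note that these figures tell us what the scores were, not why students scored that way, which is the pitfall the article describes.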

Calculating Investment Risk Using Probability

How do you calculate investment risk? What do you need to know to calculate the expected payoff of a financial investment? Investors often use probability to assess risk when making financial decisions. This is typically done with a statistic called an “expected value.” To calculate the expected value of a financial investment, you need to know the probability of each possible outcome and its respective payoff. Here’s how to use the expected value statistic for calculating investment risk.
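
The expected-value calculation described above (each outcome's probability times its payoff, summed) can be sketched as follows; the outcomes and probabilities are invented for illustration:

```python
# Hypothetical investment: each outcome is (probability, payoff in dollars).
outcomes = [
    (0.60, 1000),   # 60% chance of a $1,000 gain
    (0.30, 0),      # 30% chance of breaking even
    (0.10, -500),   # 10% chance of a $500 loss
]

# Expected value = sum of probability * payoff across all outcomes.
expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)  # 550.0
```

A positive expected value suggests the investment pays off on average, though it says nothing about the spread of outcomes, which is why investors also weigh the size of the possible loss.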

Normal Distribution: Explained With Examples

What is the normal distribution in statistics? How do measures of central tendency relate to the normal distribution? The normal distribution is a foundational concept in statistics. When data is normally distributed, it means that most values cluster around the center. Therefore, the mean, median, and mode are exactly the same in a normally distributed data set. Keep reading for the theory of normal distribution, explained in simple terms.
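
The clustering of values around the center can be seen by drawing a sample from a normal distribution and comparing its mean and median, which come out nearly identical (the parameters 100 and 15 are illustrative, loosely echoing IQ scores):

```python
import random
import statistics

random.seed(0)

# Draw a large sample from a normal distribution (mean 100, std. dev. 15).
sample = [random.gauss(100, 15) for _ in range(100_000)]

# For normally distributed data, the mean and median coincide.
mean = statistics.mean(sample)
median = statistics.median(sample)
print(round(mean, 1), round(median, 1))  # both close to 100
```

(The mode is also 100 in theory; for a continuous sample it isn't meaningful to compute directly, since every drawn value is unique.)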

Mean and Mode: Which Should You Use?

What is the difference between mean and mode? Which of the two is a more accurate measure? Both the mean and the mode are measures of central tendency. For non-skewed distributions, the mean is more accurate because it takes into account every value in the data set. For skewed data, the median is a better choice than either because it isn’t influenced by outliers. Keep reading to learn about the difference between the mean and the mode and when to use each.
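
The effect of skew on the mean versus the median can be shown with a small invented data set, where a single outlier drags the mean far from the typical value:

```python
import statistics

# Hypothetical skewed data: four typical incomes plus one extreme outlier.
incomes = [30_000, 35_000, 40_000, 45_000, 1_000_000]

mean_income = statistics.mean(incomes)
median_income = statistics.median(incomes)

print(mean_income)    # 230000 -- pulled upward by the outlier
print(median_income)  # 40000  -- reflects the typical income
```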

Everybody Lies: Big Data Interesting Facts

What are some interesting facts from Everybody Lies? What do these facts tell us about big data? Everybody Lies draws on data scientist Seth Stephens-Davidowitz’s research using Google search results as well as data from PornHub, Wikipedia, and more. The book contains many surprising and fascinating findings from his research. Read more for Everybody Lies's big data facts that will surprise you.

Charles Wheelan: Naked Statistics (Overview)

What is Charles Wheelan’s Naked Statistics about? What statistics does Wheelan explore in the book? Naked Statistics puts the math behind statistics into digestible terms and explains statistics concepts with relatable, relevant, and even humorous examples. Readers also benefit from additional socio-political insight from the book, as Wheelan uses real-world anecdotes to explore how statistics can inform collective decision-making. Below is a brief overview of the key themes and concepts from Wheelan’s Naked Statistics.

What Does “Statistically Significant” Mean?

What is significance in statistics? What does “statistically significant” mean? In statistics, significance refers to the degree of confidence that the results of a statistical analysis are attributable to a specific cause rather than to random variation. When results are said to be “statistically significant,” the observed relationship between variables is unlikely to be due to chance. Keep reading to learn about the concept of statistical significance, explained in simple terms.
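
A minimal sketch of the idea, using a coin-flip example (the numbers are invented): if a coin lands heads 60 times out of 100, we can compute how likely a result at least that extreme would be if the coin were fair. A small probability (a p-value below the conventional 0.05 threshold) is what gets labeled statistically significant.

```python
from math import comb

# Observed: 60 heads in 100 flips. Null hypothesis: the coin is fair.
n, k = 100, 60

# One-sided p-value: probability of k or more heads under a fair coin,
# summed from the binomial distribution.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(round(p_value, 4))
print("statistically significant" if p_value < 0.05 else "not significant")
```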

The Doppelganger Effect in Big Data, Explained

What is the doppelganger effect in data? How is the method used to study people? The doppelganger method is a big data technique Seth Stephens-Davidowitz identifies in which researchers make predictions about one person by studying another person who’s statistically similar to them. Learn more about the power of doppelgangers, as explained in Everybody Lies.
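
The core of the doppelganger idea can be sketched as a nearest-neighbor lookup: find the person whose data most closely matches the target's, then use the match to make predictions. The names and feature values below are entirely invented:

```python
# Hypothetical feature vectors (e.g., ratings, habits) for each person.
people = {
    "alice":   [5, 1, 9, 2],
    "bob":     [4, 2, 8, 3],
    "charlie": [9, 9, 1, 7],
}

def closest_match(target: str, candidates: dict) -> str:
    """Return the candidate statistically most similar to the target,
    measured by squared Euclidean distance between feature vectors."""
    def distance(name: str) -> float:
        return sum((a - b) ** 2 for a, b in zip(candidates[target], candidates[name]))
    return min((n for n in candidates if n != target), key=distance)

print(closest_match("alice", people))  # bob
```

Real applications use far richer data and more sophisticated similarity measures, but the principle is the same: Alice's doppelganger, Bob, becomes the basis for predictions about Alice.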

The Importance of Reliability in Data Collection

Why is reliability important in data collection? What are the main challenges inherent in collecting reliable data? As we use statistical data to inform our lives and society, we need that data to be both accurate and precise. Collecting quality data is therefore the real challenge, and the real art, of producing reliable, constructive statistics. Keep reading to learn about the importance of reliability in data collection.