
In How to Measure Anything, Douglas W. Hubbard challenges conventional notions about measurement and provides practical insights on making informed decisions based on measurable data. An expert in applied information economics, measurement, and decision analysis, Hubbard focuses on helping organizations make better decisions by quantifying uncertainty and measuring things that many believe are unmeasurable.
Below, we’ll explain that measurement is simply the reduction of uncertainty, not its elimination, and why every measurement should be taken to inform a decision. We’ll also offer some measurement tools and techniques you can use to put these principles into practice. Continue reading for our overview.
Overview of How to Measure Anything
In How to Measure Anything, Douglas W. Hubbard argues that we can measure almost anything, and that by doing so, we gain valuable, quantifiable insights that guide us to better decisions. In this overview, we’ll explore Hubbard’s key concepts and strategies for effective measurement, helping readers break down seemingly complex problems into quantifiable components. By the end, you’ll walk away with a new understanding of measurement as a powerful tool for decision-making and risk management:
- In Part 1, we’ll explain the purpose of measurement—to reduce uncertainty, not eliminate it.
- In Part 2, we’ll describe what to measure and what not to measure—based on the relative importance of different factors in making a decision.
- In Part 3, we’ll take you through some specific measurement tools and techniques to reduce uncertainty and guide decisions.
Part 1: Understand Why You’re Measuring
Hubbard writes that if something can be observed, it can be measured and quantified. By “quantified,” he means that it can be represented numerically—by assigning specific values, ranges, or probabilities to observable phenomena rather than relying on vague descriptors like “high risk” or “good quality.” This is crucial because the more we measure, the better our decisions and predictions become compared with relying on intuition alone.
While Hubbard’s principles of measurement apply across all fields—from scientific research to personal decisions—his book is primarily aimed at helping business professionals improve their measurement and decision-making processes. He notes that even modest reductions in uncertainty can translate into significant competitive advantages and financial gains, whether you’re gauging customer satisfaction, quantifying market risks, or estimating return on investment.
In this section, we’ll explain the purpose of measurement—to reduce uncertainty. We’ll also look at how measurement reduces uncertainty by defining something’s true value in probabilistic terms.
We Measure to Reduce Uncertainty
Hubbard defines measurement as the reduction of uncertainty to help make a decision. It’s important, he notes, to distinguish between the reduction of uncertainty and the elimination of uncertainty: The first is possible, but the second is not. This is because the world is too complex and dynamic for us to achieve absolute certainty about anything. Instead, what we can do is harness the powers of observation and measurement to tell us something about the decision we’re considering.
According to Hubbard, even a modest reduction in uncertainty can prove extraordinarily valuable. Indeed, businesses often leave enormous value on the table by relying on intuition or vague estimates when even simple measurement techniques could provide the clarity needed to make substantially better choices.
Measurement Reduces Uncertainty Through Probabilities
Hubbard writes that measurement reduces uncertainty because it gives us the probability of something’s true value falling within a defined range. This expression is called a confidence interval. For example, instead of saying “The project will cost $50 million,” you would express this probabilistically by saying, “There’s a 90% chance that the project will cost between $40 million and $60 million.” This communicates both your best estimate (the range) and your level of certainty about that estimate.
Hubbard writes that confidence intervals reduce uncertainty by replacing guesses and gut-level intuition with quantified ranges. They give decision-makers specific bounds to work with, and they explicitly state the risk that the true value of a measurement falls outside those bounds (10% in our previous example). Acknowledging this uncertainty provides useful information for decision-making—because pretending we have absolute certainty when we don’t is misleading.
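To make this concrete, here’s a minimal Python sketch (our illustration, not Hubbard’s) that computes a 90% confidence interval from a handful of hypothetical cost figures, using the standard t-distribution approach:

```python
import statistics
from scipy import stats

# Hypothetical cost figures (in $ millions) from comparable past projects
costs = [42, 55, 48, 61, 39, 52, 47, 58, 44, 50]

n = len(costs)
mean = statistics.mean(costs)
sem = statistics.stdev(costs) / n ** 0.5  # standard error of the mean

# 90% confidence interval for the true average cost (t-distribution)
low, high = stats.t.interval(0.90, df=n - 1, loc=mean, scale=sem)
print(f"Best estimate: ${mean:.1f}M")
print(f"90% confidence interval: ${low:.1f}M to ${high:.1f}M")
```

Instead of reporting one number, you report the range, along with the explicit 10% risk that the true cost falls outside it.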
Part 2: Know What to Measure
Now that we understand the purpose of measurement, we can look at what Hubbard says about deciding what to measure. This means defining the scope of your measurement—what will and won’t be included. He emphasizes that defining your scope properly is critical to measurement success. If you define your scope too broadly, you run the risk of collecting excessive data that doesn’t provide meaningful insights. Defining your scope too narrowly means you might miss important factors and fail to address the problem you’re trying to solve.
For example, let’s say a company wants to measure customer satisfaction to reduce customer loss. If they define their scope too broadly, they’ll survey customers about everything—product features, packaging, social media presence, company values, and more. This much data won’t clearly indicate what drives customers to leave. On the other hand, if they define their scope too narrowly, such as by using only product ratings, they might miss that customers are departing due to factors like poor customer service. The right scope focuses on the most relevant customer touchpoints: support interactions, billing processes, and core product functionality. These provide actionable insights without drowning the company in irrelevant data.
There are a few factors to consider when defining the scope of your measurement, writes Hubbard. These are 1) your current state of knowledge, 2) what you plan to do with the information, and 3) the decision impact potential. Let’s look at each in turn.
Scope Factor #1: Your Current State of Knowledge
Hubbard emphasizes that assessing your current state of knowledge is a crucial first step in defining your measurement scope. This helps you identify where uncertainty exists and which gaps in knowledge most significantly impact your decisions. Gauging your current state of knowledge like this allows you to focus your measurement efforts where they’ll provide the greatest value.
For example, a restaurant owner wondering how to improve profits might first identify what she already knows (food costs, staffing expenses) versus what she’s uncertain about (optimal pricing). This assessment shows that her biggest knowledge gap is understanding the right pricing strategy. Accordingly, she collects customer feedback and tracks repeat visits. Rather than trying to measure everything at once, she selects a few specific things to measure that can quickly reveal how to optimize her pricing.
Use Decomposition to Assess Your Current State of Knowledge
Hubbard writes that one way to assess what you already know is to use a tool called decomposition—breaking down complex, difficult-to-measure concepts into smaller components. This helps refine your measurement scope by showing you which components truly require new measurements and which can be adequately addressed with existing knowledge. Decomposition also makes measurement more feasible, since you can typically measure the smaller components of an issue more easily or estimate them more precisely than you can with the complex whole.
For example, a marketing director initially feels overwhelmed trying to predict the success of a new campaign—a complex concept that seems impossible to measure directly. She applies decomposition to break “campaign success” down into its component parts: how many people see the campaign, how many interact with it, how many actually buy the product, and how much those customers spend over time. This process immediately reveals that she knows more than she initially thought—she has reliable historical data on how many people typically see similar campaigns, as well as solid benchmarks for how often people click or engage across different advertising channels.
Decomposition also highlights where her uncertainty is highest: what percentage of people will actually purchase this specific product after seeing the campaign, and how much money these new customers will spend in the long run. Rather than attempting to create a single complex measurement system for “overall campaign success,” she can now refine her measurement scope to focus specifically on these high-uncertainty components. In addition to being more targeted, these measurements are far more feasible because tracking purchase rates and monitoring spending patterns over time are much more concrete and actionable steps than trying to measure “campaign success” as a single entity.
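Here’s what that decomposition might look like as a quick Python sketch. The funnel structure and every number below are our hypothetical illustration, not figures from the book:

```python
# Each funnel component gets a (low, high) estimate instead of one number.
impressions = (500_000, 800_000)    # people who see the campaign (solid history)
interaction_rate = (0.02, 0.04)     # share who click or engage (good benchmarks)
purchase_rate = (0.01, 0.05)        # share of engagers who buy (high uncertainty)
spend_per_customer = (50, 200)      # long-run spend in $ (high uncertainty)

low = (impressions[0] * interaction_rate[0]
       * purchase_rate[0] * spend_per_customer[0])
high = (impressions[1] * interaction_rate[1]
        * purchase_rate[1] * spend_per_customer[1])
print(f"Campaign revenue bounds: ${low:,.0f} to ${high:,.0f}")
```

Multiplying all the lows together and all the highs together yields worst-case and best-case bounds, which are wider than a realistic 90% interval; the Monte Carlo technique covered in Part 3 can turn these same component ranges into a full probability distribution.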
Calibration Training: Reduce Your Uncertainty Before You Measure
Beyond using statistical tools like random sampling and confidence intervals, Hubbard writes that you can improve your ability to estimate uncertainty through a technique called calibration training.
Here’s how it works. When you’re being “calibrated,” the trainer has you answer a series of trivia questions or estimate various quantities in the form of confidence intervals. They then give you immediate feedback about whether your intervals contained the true values; after this, you adjust your confidence levels accordingly. Through repetition and feedback, you learn to recognize the difference between what you actually know versus what you think you know, gradually becoming more realistic about your true state of uncertainty. The training typically involves answering hundreds of practice questions until your stated confidence levels match your actual accuracy rates.
Here’s how calibration training might work in practice: In the initial practice round, you get 20 trivia questions and have to provide 90% confidence intervals for each. For “What year was Microsoft founded?” you might say “I’m 90% confident it was between 1970 and 1980.” For “How many employees does Amazon have?” you might estimate that you’re “90% confident that there are between 500,000 and 800,000.”
You continue through questions about geography, business facts, and historical dates. When results are revealed, you discover that only 12 of your 20 intervals (60%) actually contained the correct answers, even though you claimed 90% confidence. For instance, Microsoft was founded in 1975 (within your stated range), but Amazon has over 1.5 million employees (well outside your range). For the next round of questions, you consciously widen your ranges and find that 16 of your 20 intervals contain the true values (an 80% success rate).
After several rounds of practice with hundreds of questions, you eventually learn to recognize when you’re truly confident versus when you’re just guessing. At the end, your 90% confidence intervals would contain correct answers about 90% of the time.
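Scoring a calibration round is simple enough to sketch in a few lines of Python (a hypothetical round, reusing the Microsoft and Amazon figures from the example plus one invented question):

```python
# Each entry: (question, stated low, stated high, true value)
answers = [
    ("Year Microsoft was founded", 1970, 1980, 1975),
    ("Amazon employees (millions)", 0.5, 0.8, 1.5),
    ("Boiling point of water at sea level (deg F)", 200, 250, 212),
]

hits = sum(low <= truth <= high for _, low, high, truth in answers)
print(f"Hit rate: {hits}/{len(answers)} = {hits / len(answers):.0%}")
# Well-calibrated 90% intervals should contain the truth ~90% of the
# time. If your hit rate runs lower, widen your ranges next round.
```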
Scope Factor #2: What You Plan to Do With Your Measurement
Hubbard writes that identifying what you plan to do with a measurement is crucial to defining your scope. A measurement should inform a decision you face that has multiple options. Hubbard emphasizes that every measurement should also have a clear inflection point—a specific tipping point at which the data you collect would cause you to choose one option over another. He warns that if you don’t ground your measurements in specific decisions, you run the risk of spending lots of time and money collecting data that serves no purpose.
For example, a product team working on a website redesign decides to streamline things by clearly defining their specific decision: “Should we implement a single-page checkout or maintain our multi-page process?” This decision clarity transforms their measurement approach and gives them a clear inflection point: If the single-page checkout shows a 15% or greater improvement in completion rates, they’ll change the existing system. With this decision framework in place, they focus their measurement efforts exclusively on checkout-specific metrics—abandonment rates at each stage of the process, completion times, error rates, and customer satisfaction scores specifically related to the checkout experience.
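Expressed in code, the team’s inflection point is just a decision rule fixed before the data arrives. In this minimal sketch, only the 15% threshold comes from the example; the completion rates are hypothetical measurements:

```python
baseline_rate = 0.62     # multi-page checkout completion (hypothetical)
single_page_rate = 0.74  # single-page test completion (hypothetical)

improvement = (single_page_rate - baseline_rate) / baseline_rate
# The inflection point was defined before any data was collected
if improvement >= 0.15:
    decision = "implement the single-page checkout"
else:
    decision = "keep the multi-page process"
print(f"Improvement: {improvement:.0%} -> {decision}")
```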
Scope Factor #3: The Decision Impact Potential
Hubbard explains that you want to focus your efforts on collecting the data that will most reduce your uncertainty. We’ll call this concept the “decision impact potential”—the degree to which reducing uncertainty about a specific factor would change your decision. High decision impact potential means that getting better data about that factor could significantly shift you toward one choice or another, while low decision impact potential means that even perfect information about that factor wouldn’t meaningfully alter your decision.
Hubbard’s core insight is that not all uncertainties are created equal—some gaps in knowledge have enormous impact on your decisions, while others don’t. Remember, you’re not looking for perfect knowledge. You’re looking for enough information to move you from a state of indecision to a state of well-informed confidence.
For example, a healthcare administrator must decide whether to invest $200,000 in a new patient scheduling system. Rather than collecting data on everything possible, she applies Hubbard’s approach and evaluates her current confidence levels: She’s already 85% confident about patient satisfaction problems and 90% certain about staff preferences for a new system. However, she’s only 40% confident that the new system would actually reduce no-shows and wait times—the core benefits that would justify the investment. Since this uncertainty has the biggest impact on her decision, she focuses her research budget on getting better data about that specific question rather than studying areas where she’s already reasonably confident.
Part 3: Perform Your Measurement
In Part 1, we explored why we measure: to reduce uncertainty. In Part 2, we looked at what to measure. In this section, we’ll conclude by examining some of the ways Hubbard says we can perform measurements. We’ll cover techniques and tools like random sampling, controlled experiments, and Monte Carlo simulations.
Measurement Technique #1: Random Sampling
Hubbard writes that you can use random sampling to significantly reduce uncertainty. This is a method where you examine a small, randomly selected portion of something larger to learn about the whole thing. It’s particularly useful for measuring ongoing behaviors, activities, or characteristics across large groups—like determining what percentage of work time employees spend on different tasks, how often customers experience certain problems, or what proportion of your inventory has quality issues.
For the sample to be truly random—and therefore representative of the whole—every item in the population has to have an equal probability of being selected. This is because if certain items have higher selection probabilities than others, those items will be overrepresented in your sample, meaning their characteristics will have disproportionate influence on your findings. This creates systematic bias: Your sample statistics will consistently deviate from the true population values, making your conclusions unreliable for the broader population you’re trying to understand.
Imagine you’re a retail manager trying to measure customer satisfaction across your 10,000 monthly customers, but surveying everyone would be prohibitively expensive and time-consuming. Instead, you could employ random sampling by assigning each customer a number and using a random number generator to select 200 customers for your survey. This gives every customer an equal chance of being chosen, avoiding biases like only surveying customers who shop on weekends or who frequent certain departments. With this random sample of just 2%, you can estimate overall satisfaction levels with a reasonable degree of confidence—and at a fraction of the cost of a comprehensive survey.
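Here’s a minimal Python sketch of that procedure. The satisfaction scores are simulated stand-ins, since in practice you’d only ever see the 200 survey responses:

```python
import random
import statistics

random.seed(42)
# Stand-in for the real population: 10,000 customers' true satisfaction
# scores, which you could never observe in full in practice
population = [random.randint(1, 10) for _ in range(10_000)]

# random.sample gives every customer an equal chance of being selected
surveyed = random.sample(population, 200)
print(f"Sample mean:     {statistics.mean(surveyed):.2f}")
print(f"Population mean: {statistics.mean(population):.2f}")
```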
Small Sample Sizes and Misunderstanding “Statistical Significance”
Hubbard writes that most people believe that you need to collect huge amounts of data to know anything concrete or useful about a population, but this isn’t actually the case. He writes that a lot of this confusion comes from people misunderstanding the concept of “statistical significance.” Hubbard argues that “statistically significant” doesn’t mean “having a large number of samples.” Rather, it has a precise mathematical meaning that most lay people (and even many scientists) get wrong.
Basically, writes Hubbard, statistical significance is a mathematical test that asks: “What are the chances that a set of results happened purely by random luck?” It’s typically measured using something called a p-value—if your p-value is below a certain threshold (usually 5%), statisticians say your results are “statistically significant,” meaning there’s less than a 5% chance your findings were just a fluke. This standard makes sense in academic research where scientists need to prove their theories with extremely high confidence before publishing.
But when you’re considering a business decision, Hubbard points out that all you need to do is reduce your uncertainty. So even if there’s a 10% or 20% chance your sample results were lucky, that information might still dramatically improve your decision-making relative to what you knew before you collected the sample. For example, if you’re completely unsure whether a task using a new product feature will take users five minutes or two hours to complete, and a small test suggests it takes around 30 minutes, that’s valuable information, regardless of whether it meets statistical significance thresholds.
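A small sketch makes the contrast concrete. Suppose eight users test the feature (the times below, in minutes, are hypothetical); even with no significance test at all, the resulting interval is far narrower than the prior range of five minutes to two hours:

```python
from scipy import stats

times = [28, 35, 31, 24, 40, 27, 33, 29]  # hypothetical task times (minutes)

n = len(times)
mean = sum(times) / n
low, high = stats.t.interval(0.90, df=n - 1, loc=mean, scale=stats.sem(times))
print(f"Estimate: {mean:.0f} min (90% interval: {low:.0f} to {high:.0f} min)")
# Prior knowledge: anywhere from 5 to 120 minutes. After eight
# observations: roughly half an hour. A big uncertainty reduction,
# p-values aside.
```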
Small Sample Sizes Contain Lots of Information
Having clarified statistical significance—and its relative insignificance for most of the kinds of measurement we’re discussing in this guide—Hubbard explains why small samples can be so revealing. He writes that a small, randomly chosen sample can actually tell you a lot about the bigger picture with a high level of accuracy. This is because your knowledge about a population increases greatly when you increase your sample size above zero. A sample of six or seven already tells you vastly more than a sample size of none at all. The initial jump from no information to some information will probably give you your biggest reduction in uncertainty.
Statistical robustness requires computing the sample size needed to convincingly demonstrate a difference of a certain magnitude: The smaller the difference you want to detect, the larger the sample you need to reach statistical significance. But if your goal is simply to reduce uncertainty, there are diminishing returns to collecting additional sample data after your first few observations. In other words, the amount of uncertainty you’ll reduce by increasing your sample size from seven to 700 is actually quite small—usually not worth the time, money, and energy you’d spend collecting those additional samples.
Let’s say a restaurant chain with 10,000 customers wants to measure customer satisfaction. They’re debating between surveying six randomly selected customers versus conducting a comprehensive survey of 700 customers. Before they’ve surveyed anyone (i.e., a sample size of zero), they’re completely in the dark: They don’t know anything about customer satisfaction. But after they’ve surveyed six random customers, they get satisfaction scores ranging from six to nine (out of 10). This immediately tells them customers are generally satisfied (with an average score of 7.5). The jump from zero knowledge to this insight is enormous.
But let’s say they kept going and decided to sample 700 customers: After spending weeks and thousands of dollars on the survey, they find an average satisfaction of 7.6 with the same range of six to nine out of 10. The key insight remains the same across both sample sizes, but the small sample provided the same decision-making value at a fraction of the cost and effort.
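You can watch these diminishing returns directly with a quick simulation (a sketch using simulated scores, not real survey data):

```python
import random
import statistics

random.seed(0)
population = [random.randint(4, 10) for _ in range(10_000)]  # simulated scores

for n in (6, 30, 100, 700):
    sample = random.sample(population, n)
    sem = statistics.stdev(sample) / n ** 0.5
    width = 2 * 1.645 * sem  # approximate width of a 90% interval
    print(f"n = {n:3d}: mean = {statistics.mean(sample):.2f}, "
          f"interval width = {width:.2f}")
```

Because the interval width shrinks only with the square root of the sample size, each additional observation buys less uncertainty reduction than the last: The jump from zero to six observations transforms what you know, while the jump from 100 to 700 merely tightens the estimate.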
Measurement Technique #2: Controlled Experiments
Hubbard identifies controlled experiments as another powerful measurement tool. A controlled experiment is any test where you deliberately change one thing and observe what happens, comparing the results to what would have happened without that change. Controlled experiments are best used when you want to figure out which of several plausible factors produced a certain result. A proper experiment has three key elements:
- Intervention: You actively do something different.
- Control: You use a comparison group or baseline.
- Measurement: You observe and quantify the results.
Importantly, Hubbard notes that an experiment doesn’t require laboratory conditions, statistical sophistication, or a large research and development budget—it just requires systematically changing one specific variable while keeping everything else constant and measuring what happens. Many useful business experiments can be conducted with simple changes to existing processes. For example, a restaurant might test customer satisfaction by having servers offer complimentary appetizers to randomly selected tables, then measuring return visit rates. Likewise, an email marketing team might test subject line effectiveness by sending different versions of the same email with different subject lines.
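Here’s what the restaurant’s appetizer experiment might look like as a Python sketch, with assignments and outcomes simulated for illustration:

```python
import random
import statistics

random.seed(1)
tables = list(range(200))      # one entry per dining party
random.shuffle(tables)
treatment = set(tables[:100])  # offered a complimentary appetizer
control = set(tables[100:])    # served as usual

# Stand-in for the observed outcome: did the party return within 60
# days? (In reality you'd pull this from reservation or loyalty data.)
returned = {t: random.random() < (0.35 if t in treatment else 0.25)
            for t in tables}

treat_rate = statistics.mean(returned[t] for t in treatment)
control_rate = statistics.mean(returned[t] for t in control)
print(f"Return rate with appetizer:    {treat_rate:.0%}")
print(f"Return rate without appetizer: {control_rate:.0%}")
```

Random assignment is what makes the comparison fair: On average, the two groups differ only in the one variable you changed, so any gap in return rates can be attributed to the appetizer.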
Measurement Technique #3: Monte Carlo Simulations
Hubbard notes that running a Monte Carlo simulation can be an effective technique to reduce your uncertainty. In a Monte Carlo simulation, a computer generates a large number of different “what-if” scenarios by randomly varying each uncertain input within its realistic range. Basically, you’re running the same experiment over and over again, tweaking the inputs each time. The Monte Carlo simulation is effective because it reveals patterns and probabilities that would be impractical to calculate by hand. You can run a Monte Carlo simulation through a wide range of software options, from simple Excel add-ins to more sophisticated, specialized programs.
But the principle is the same whichever program you use. When you run the simulation, you don’t get a single answer; instead, you get a complete picture of what could happen. You’ll see not just the most probable outcomes, but also how likely the extreme scenarios are. This tool is especially valuable when you have a lot of different uncertainties, any of which can influence your decision.
For example, let’s imagine that Sarah, the CEO of a coffee shop chain, is deciding whether to open three new locations. Each store involves uncertain costs: Construction might run $180,000 to $280,000, daily customers could range from 300 to 500, and monthly operating expenses might fall between $12,000 and $18,000. Using traditional planning, Sarah’s team would plug in middle estimates—say $230,000 for construction, 400 customers daily, and $15,000 monthly costs. This yields a single projected return of 18%, which looks promising enough to proceed.
Instead, Sarah runs a Monte Carlo simulation with 10,000 scenarios, each randomly combining different values from these ranges. The results reveal the full picture: While average returns still hit 18%, there’s a 25% chance of spectacular success above 30% returns, but also a 15% chance of losing money entirely. Most importantly, the simulation shows that construction cost overruns pose the biggest threat to profitability.
Armed with this insight, Sarah negotiates fixed-price construction contracts to eliminate her biggest risk, chooses locations with predictable demographics, and plans a phased rollout. Per Hubbard’s explanation, the Monte Carlo simulation didn’t just give her an average—it showed her which uncertainties mattered most and guided her toward a safer strategy.
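Here’s a minimal Python version of Sarah’s simulation. The input ranges come from the example above, but the $5 average ticket and 30% margin are our invented assumptions, added just to make the arithmetic run:

```python
import random
import statistics

random.seed(7)
returns = []
for _ in range(10_000):  # 10,000 what-if scenarios
    construction = random.uniform(180_000, 280_000)
    customers_per_day = random.uniform(300, 500)
    monthly_costs = random.uniform(12_000, 18_000)

    # Assumed economics: $5 average ticket, 30% gross margin
    annual_gross = customers_per_day * 5.00 * 365 * 0.30
    annual_profit = annual_gross - monthly_costs * 12
    returns.append(annual_profit / construction)

returns.sort()
print(f"Average annual return: {statistics.mean(returns):.0%}")
print(f"Chance of losing money: {statistics.mean(r < 0 for r in returns):.0%}")
print(f"90% of scenarios between {returns[500]:.0%} and {returns[9500]:.0%}")
```

Instead of a single 18% figure, the output is a distribution: an average, a probability of losing money, and a range covering 90% of scenarios, which is exactly the fuller picture a single-point estimate hides.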
