PDF Summary: The Lean Startup, by Eric Ries
Below is a preview of the Shortform book summary of The Lean Startup by Eric Ries. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of The Lean Startup
In The Lean Startup, entrepreneur Eric Ries argues that your startup’s primary objective should be discovering what customers actually want and are willing to pay for, in an efficient and cost-effective way. Ries writes that this entails constant experimentation—you put something in front of customers, pay close attention to how they respond, and use that information to decide what to do next. This cycle repeats continuously as you feel your way toward a product people actually want.
In this guide, we’ll explore this repeating cycle of experimentation, covering how to 1) form a hypothesis about your customers, 2) put out a simple version of your product or service to test that hypothesis, 3) collect data and observations from how your customers react, and 4) adjust your strategy based on observed results. As we go through Ries’s experimental cycle, we’ll complement his analysis with insights from other experts in entrepreneurship and startup formation.
(continued)...
To illustrate, let’s return to our hiking app example: You’ve hypothesized that hikers care most about trail crowdedness, and you’ve predicted that adding a crowdedness indicator will increase app use before weekend mornings. Now you need to build something to test this. The temptation might be to develop a sophisticated real-time crowdedness system that pulls data from trail cameras, parking lot sensors, and user check-ins—but that would take months and significant resources.
Instead, you create a simple feature where you manually update estimated crowdedness levels for a handful of popular local trails based on historical patterns and weather forecasts. It’s imperfect and limited in scope, but it lets you put something in front of real users within a week or two. And it can test your hypothesis: If hikers don’t engage with this rough version of the feature, you’ve at least learned something valuable without having invested heavily in infrastructure you might not need.
(Shortform note: One potential disadvantage of putting out a bare-bones test product is that its streamlined nature may not provide a comprehensive user experience or fully represent your idea. This could damage your brand if early adopters associate it with a primitive, stripped-down product. One alternative is the minimum marketable product or MMP. Rather than stripping a product down to its barest testable form, this strategy aims to deliver something polished enough to compete from day one. The product includes complete functionality rather than partial features, presents an attractive and sophisticated design rather than bare-bones usability, and launches with substantial marketing support rather than quiet experimentation.)
Experimental Cycle Step #3: Collect Your Data
Ries writes that once you’ve given customers the chance to interact with your test product, it’s time to collect data about how it performs. This is where you find out whether your hypothesis holds up in the real world. Let’s explore Ries’s tips for collecting actionable metrics—data you can use to prove or disprove your hypothesis. Specifically we’ll look at avoiding misleading metrics, tracking cohorts of customers, and using split testing.
Data-Collection Tip #1: Beware Misleading Metrics
Ries warns that it’s important to watch out for misleading metrics—measurements that create a false sense of progress by tracking numbers that naturally increase over time without reflecting actual improvement. Cumulative totals are a common pitfall: A product might gain the same number of users each week, making the total count grow steadily larger. While this expanding figure feels impressive, it masks the reality that the underlying growth rate hasn’t changed at all.
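To make the pitfall concrete, here is a minimal Python sketch (the signup numbers are hypothetical, not from the book) showing how a cumulative total climbs steadily even when the underlying growth rate is perfectly flat:

```python
# Hypothetical weekly signup counts: the same 100 new users every week.
weekly_signups = [100, 100, 100, 100, 100]

# The cumulative total grows every week, which looks like progress...
cumulative = []
total = 0
for n in weekly_signups:
    total += n
    cumulative.append(total)
print(cumulative)    # [100, 200, 300, 400, 500]

# ...but the week-over-week ratio of new signups is flat: no real growth.
growth_rates = [
    weekly_signups[i] / weekly_signups[i - 1]
    for i in range(1, len(weekly_signups))
]
print(growth_rates)  # [1.0, 1.0, 1.0, 1.0]
```

A chart of the cumulative list would slope upward forever; a chart of the growth rates would be a flat line, which is the signal that actually matters.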
(Shortform note: In How to Measure Anything, management consultant Douglas Hubbard writes that one way to avoid useless or misleading metrics is to define the scope of your measurement—what data you will and won’t collect. If you define your scope too broadly, you run the risk of collecting excessive data that doesn’t provide meaningful insights. If you define your scope too narrowly, you might miss important factors and fail to address the problem you’re trying to solve. Hubbard emphasizes three factors you must examine to define your measurement scope: 1) your current state of knowledge, 2) what you plan to do with your measurement, and 3) the degree to which reducing uncertainty about a specific factor would change your decision.)
Data-Collection Tip #2: Track Cohorts
According to Ries, an effective way to gauge progress is to organize your data into cohorts—groups of users who joined during the same time period—and examine each cohort individually. For example, you’d track all January signups as one group, February signups as another, and so on. This method tells you whether your recent work is actually making things better, or whether you’re just accumulating results from earlier momentum while your current performance stagnates.
Returning to our hiking app example, you’re not just celebrating that total feature views keep climbing week after week—logically, that number would have to go up as long as you have any users at all. But you’re testing your specific hypothesis by tracking whether users check the crowdedness indicator right before busy hiking times. To do that, you separate users into weekly groups based on when they downloaded the app: Week 1 users, Week 2 users, Week 3 users, and so on. Then you measure what percentage of each group checks crowdedness on Friday evenings and Saturday mornings.
If Week 1 shows 12% of users checking at those peak times, Week 2 shows 11%, and Week 3 shows 13%, your hypothesis isn’t being validated—you’re essentially flat despite your efforts to promote the feature. But if those numbers read 12%, 18%, then 24%, then you’re seeing genuine improvement. Each new group of users is engaging with the feature more than the last, which suggests your recent changes are actually working.
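Ries doesn’t prescribe any particular tooling for this, but the cohort arithmetic above can be sketched in a few lines of Python (the user counts are hypothetical, chosen only to match the percentages in the example):

```python
# Hypothetical per-cohort data: users grouped by download week, plus how many
# in each cohort checked crowdedness at peak times (Friday evening / Saturday
# morning). The key point: each cohort is measured separately, not cumulatively.
cohorts = {
    "week_1": {"users": 200, "peak_checkers": 24},
    "week_2": {"users": 150, "peak_checkers": 27},
    "week_3": {"users": 100, "peak_checkers": 24},
}

for name, data in cohorts.items():
    rate = data["peak_checkers"] / data["users"]
    print(f"{name}: {rate:.0%} checked crowdedness at peak times")
# week_1: 12% checked crowdedness at peak times
# week_2: 18% checked crowdedness at peak times
# week_3: 24% checked crowdedness at peak times
```

Because each week’s rate is computed only from that week’s signups, a rising sequence (12%, 18%, 24%) reflects genuine improvement rather than the momentum of an ever-growing total.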
(Shortform note: While separating users into time-based cohorts can reveal whether your metrics are genuinely improving, this approach has an important limitation: It shows you patterns but can’t definitively explain what’s causing them. Was it a specific feature you launched? A change in your onboarding process? Different marketing channels attracting higher-quality users? External factors like seasonal trends or economic conditions? The data reveals correlation—that something changed between these time periods—but not causation. We’ll explore this correlation/causation issue in greater detail later in the guide.)
Data-Collection Tip #3: Use Split Testing
Ries recommends using split testing: showing different versions of your product to different groups of users, then comparing what happens. This is because when your metrics improve after you make a change, you can’t automatically assume your change caused the improvement. Maybe external factors like seasonal trends or unexpected media attention are at play, or maybe it’s just a random variation. In other words, the timing of the improvement might be coincidental rather than causal.
By splitting your audience and measuring the results from each group separately, you eliminate the guesswork. Any outside factors will affect both groups equally, so the difference between them will reveal what your change actually accomplished. This approach gives you concrete evidence about what’s genuinely driving user behavior rather than just a hunch based on numbers that happened to move in the right direction.
For your hiking app, you might apply this method as follows: Half your users see a new notification system that alerts them when their favorite trails are less crowded, while the other half continue using the app without these alerts. After two weeks, you discover that the alert group opens the app four times per week on average, while the no-alert group opens it only 2.5 times per week. This gap tells you the alerts themselves are driving more engagement. You now have solid evidence that this specific feature changes how people use your product.
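The book describes split testing conceptually rather than as code, but the mechanics can be sketched as follows. Everything here is a simulated assumption (user IDs, group sizes, and the behavioral difference between groups), not real data:

```python
import random
import statistics

random.seed(42)  # make this simulated sketch reproducible

def assign_group(user_id: int) -> str:
    """Deterministically split users 50/50 by their ID.

    In a real system you'd typically hash a stable user identifier so each
    user always lands in the same group across sessions.
    """
    return "alerts" if user_id % 2 == 0 else "no_alerts"

# Simulated app-opens-per-week observations for each group. We build in the
# assumed effect (alert users open the app more) purely to illustrate the
# comparison; a real test would record actual user behavior instead.
opens = {"alerts": [], "no_alerts": []}
for user_id in range(1000):
    group = assign_group(user_id)
    base = 4.0 if group == "alerts" else 2.5
    opens[group].append(random.gauss(base, 1.0))

for group, values in opens.items():
    print(f"{group}: mean opens/week = {statistics.mean(values):.2f}")
```

Because assignment is independent of anything about the users, seasonal trends or media attention hit both groups equally, so the gap between the two means isolates the effect of the alerts themselves.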
(Shortform note: Many experts believe randomized controlled trials (RCTs) offer more accurate results than split testing. According to Matthew Syed (Black Box Thinking), an RCT involves establishing a control and introducing a variable to measure its impact against the control. For example, to test a new landing page, you’d compare the performance of your current page, the control, with the new experimental design. In How to Measure Anything, Hubbard notes that this kind of experimentation doesn’t require laboratory conditions, statistical sophistication, or a large research and development budget—it just requires systematically changing one specific variable while keeping everything else constant and measuring what happens.)
Experimental Cycle Step #4: Adjust Your Strategy Based on Results
After you’ve collected your data, writes Ries, you step back and evaluate the results of your experiment honestly. How accurate was your original belief? What surprised you? Based on what you’ve learned, should you continue your approach (maybe with a few small adjustments), or does the evidence suggest you should head in a different direction?
Ries writes that the feedback you get from your experimental cycle informs different strategic redirections you’ll need to make in the next round of experimentation. Strategic redirections take various forms, including feature concentration, scope expansion, and audience replacement.
Strategic Redirection #1: Feature Concentration
According to Ries, feature concentration means narrowing your product down to focus exclusively on one specific capability, rather than offering multiple features or broader functionality. You conduct this type of redirection when customers demonstrate strong enthusiasm for one particular capability within your broader offering. For example, say your hiking app started with multiple features—trail maps, weather forecasts, crowdedness indicators, and user reviews—but after testing, you discover that users overwhelmingly engaged with the crowdedness indicator while ignoring everything else. You then rebuild the app to focus solely on showing real-time trail crowdedness, eliminating the other features entirely.
(Shortform note: Concentrating on one feature also gives you the opportunity to gain a competitive advantage by offering something unique to the market. In Competitive Strategy, economist and Harvard Business School professor Michael Porter argues that the main advantage of a strategy based on novelty is that it acts as a defense against buyers looking for the cheapest deal. This is because, without comparable alternatives, buyers have no choice but to purchase from you—even if your prices are high. Additionally, offering something distinctive captures customer attention and cultivates loyalty among those who value uniqueness and are willing to pay premium prices for it.)
Strategic Redirection #2: Scope Expansion
According to Ries, scope expansion means broadening your product to incorporate substantially more capabilities, rather than keeping it minimal or focused on a single function. You conduct this type of redirection when customers find the basic version insufficient. For example, let’s say your hiking app initially just showed crowdedness indicators, but testing revealed that users kept opening other apps to check weather, trail conditions, and parking availability—then returning to your app. Based on this, you would expand the product to include all these additional features in one place, creating a comprehensive trail-planning tool rather than a single-purpose crowdedness tracker.
(Shortform note: Although it can make sense to expand the scope of your product, it’s possible to overdo it. In an effort to create the best product and please the widest audience, designers sometimes unintentionally overcomplicate a product by adding too many features. This is called “feature creep” or “scope creep,” and it can make your product harder to use than it was before. If you find that expanding your scope leads to worse metrics during your next experimental cycle, consider whether this is the cause.)
Strategic Redirection #3: Audience Replacement
According to Ries, audience replacement means redirecting your product toward a different customer type rather than continuing to serve your original target market. You conduct this type of redirection when your solution works well but serves the wrong market—often when you’ve exhausted the pool of early enthusiasts and reaching a broader audience would require a fundamentally different approach.
For example, your hiking app initially targeted casual weekend hikers who wanted to avoid crowds, but testing revealed lukewarm engagement from this group. However, you noticed that trail maintenance crews and park rangers were using the crowdedness data to plan their work schedules and allocate resources. You then reposition the app to serve these professional land managers, adding features like historical traffic patterns and predictive analytics that help them optimize staffing and maintenance operations.
(Shortform note: Alexander Osterwalder and Yves Pigneur (Business Model Generation) suggest that you may have more than one core audience, what they call “customer segments.” This may be the case if you find that you need to create different products and services to meet their needs, reach them through different channels of distribution, develop different types of relationships with them, or adapt your pricing structures to accommodate their needs.)