So you want to test a key hypothesis about your product or company. How do you get the right data? Through split testing, also called A/B testing. Learn the key principles of split testing from The Lean Startup here.
Intro to Split Testing
Let’s say you develop a new feature for your product and release it to all your users. Suddenly your metrics improve. But how do you know seasonal effects aren’t at play – that the users who joined later aren’t just naturally more engaged? Or that you got a burst of users from an unexpected news article?
An A/B test, or split test, avoids this bias by randomly splitting users into two groups, each seeing a different version of your product. By comparing the metrics from the two groups, you get strong quantitative evidence about which version users prefer.
For example, let’s say you have a landing page MVP listing your potential features and a signup form. You’re not sure which of two features your users will like more. So you set up an A/B test – half of your visitors see feature A on your landing page; the other half see feature B. You measure the difference in signup rate. If feature A gets a 5% signup rate, but feature B gets 2%, this is evidence that your users may prefer feature A!
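The mechanics of the landing-page example above can be sketched in a few lines of Python. This is a minimal illustration, not the book's own method: `assign_variant` is a hypothetical helper that splits visitors 50/50 by hashing their ID, and the two-proportion z-test is one standard way to check whether a 5% vs. 2% signup-rate difference is likely to be real rather than chance.

```python
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically assign each visitor to A or B by hashing their ID.

    Hashing (rather than random.choice) means a returning visitor
    always sees the same variant.
    """
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def two_proportion_z(signups_a, visitors_a, signups_b, visitors_b):
    """Two-proportion z-test: is the signup-rate gap bigger than chance?"""
    p_a = signups_a / visitors_a
    p_b = signups_b / visitors_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (signups_a + signups_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example numbers from the text: 5% vs 2% signup rate, 1,000 visitors each
z, p = two_proportion_z(50, 1000, 20, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p → gap is unlikely to be chance
```

With 1,000 visitors per variant, the 5% vs. 2% gap comes out highly significant; with only a few dozen visitors the same rates would not, which is why sample size matters before declaring a winner.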
A/B testing should be applied at critical stages of your progress. You can test variations in your marketing pages, different signup and payment processes, different product interfaces, even turn features on and off in your product.
As described, one serious benefit of A/B split testing is that it reduces time confounds. Since users from the same pool are randomly assigned to the two groups at the same time, you don’t need to worry that an earlier group of users differs from a later group.
Another benefit of A/B split testing is that it reduces politics. You don’t have to squabble with your team over which features are better – you can put it to the test with an MVP. Split A/B testing also lets you assign credit where it’s due. If you launch a bunch of marketing and product changes at once, and your metrics improve, who’s responsible? Without independent experiments, it’s difficult to tell. But by running separate A/B tests, you would be able to see that marketing’s redesigned pages caused the signup boost, not the new product feature.
Finally, split testing lets you gauge the real effect your work is having on users. The product team may obsess over features that users absolutely don’t care about.
Useful Reporting in Split Testing
To make A/B split testing work, you need to have discipline around analyzing your experiments and planning the next step. Often this means producing reports, which Eric Ries believes should fit the three A’s:
Actionable: correctly designed experiments will show a clear cause and effect, and the right metrics will show whether you’re really making progress. From here, you can iterate through the Build-Measure-Learn loop.
Accessible: simplify the metrics and help people understand what they mean and why they’re important. Consider making metrics publicly viewable by any member of your team.
Auditable: people should be able to dig into the raw data and trace how the metrics are compiled. If someone doesn’t like the result of an experiment, she may be tempted to question technicalities of the data. You need to be able to prove that the analysis is faultless.
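One way to keep metrics auditable is to derive every reported number directly from the raw event log, so any teammate can rerun the calculation and trace how it was compiled. The sketch below assumes a hypothetical event format (one record per user action); it is an illustration of the auditability principle, not a format prescribed by the book.

```python
def signup_rate(events, variant):
    """Recompute the reported signup rate straight from raw events.

    Because this works from the raw log rather than a precomputed
    dashboard number, anyone who doubts a result can audit it.
    """
    visitors = {e["user"] for e in events
                if e["variant"] == variant and e["action"] == "visit"}
    signups = {e["user"] for e in events
               if e["variant"] == variant and e["action"] == "signup"}
    return len(signups) / len(visitors) if visitors else 0.0

# Hypothetical raw event log: one record per user action
events = [
    {"user": "u1", "variant": "A", "action": "visit"},
    {"user": "u1", "variant": "A", "action": "signup"},
    {"user": "u2", "variant": "A", "action": "visit"},
    {"user": "u3", "variant": "B", "action": "visit"},
]

print(signup_rate(events, "A"))  # 1 signup out of 2 visitors → 0.5
print(signup_rate(events, "B"))  # 0 signups out of 1 visitor → 0.0
```

Counting distinct users (sets, not raw rows) prevents double-counting someone who visits twice – exactly the kind of technicality a skeptical teammate will probe when questioning a result.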
Now when you’re facing the data, you need to decide what to do. Is there still promise in your direction, and should you keep trying to iterate through the Build-Measure-Learn loop? Or have the metrics come back so disappointing so often that it’s time to change your strategy entirely – to pivot?
———End of Preview———
Like what you just read? Read the rest of the world's best summary of "The Lean Startup" at Shortform. Learn the book's critical concepts in 20 minutes or less.
Here's what you'll find in our full The Lean Startup summary:
- How to create a winning Minimum Viable Product
- How to understand how your startup will grow
- The critical metrics you need to track to make sure your startup is thriving