Experiments allow us to measure the impact of our products and determine what works and what doesn’t. They should be a quick and inexpensive way for product managers to validate and prioritise their ideas. In the first of this two-part series we’ll dig into the types of experimentation product managers can use and reveal some of the pitfalls and challenges.
- Generative experiments are qualitative and start with learning about your customers’ context and problems.
- Evaluative or quantitative experiments start with a hypothesis which defines the type of test you use.
- Make sure you understand the value of qualitative research and know how to analyse metrics before you embark on quantitative research.
- Every experienced product manager should know how to run a high-quality customer research interview and how to conduct an A/B test.
- Document your biases and get others to review your experiments before you put anything into production.
- Make sure you always refer back to the bigger picture.
What kinds of experimentation can we/should we do?
Experiments are either generative, in that they don’t necessarily start with a hypothesis and generate many ideas, or evaluative, where they test a hypothesis for a clear yes or no result.
Rosemary King, product consultant and former director of training at Mind the Product, comments that the place to start in any business is learning about and understanding your customers’ context and problems. That’s done through generative qualitative research.
The first experiment for any product manager, then, is to validate customer problems through customer interviews. Says Rosemary: “It’s very generative. You’re not asking anything specific, you’re asking what problems they have and how they solve them currently. You generate a lot of information around the challenges and opportunities that exist for clients and that moves you on to brainstorming and ideation about what you can build.”
From there you can move to evaluation and experiments. “In the evaluative stage you want to gather as much realistic data as possible without having to build anything really expensive,” says Rosemary.
In a startup especially it’s important to be able to build up a community and have a group of early adopters who pay attention to what you say. As first steps Rosemary suggests doing things like creating a landing page for people to sign up, or a Facebook group to see how many people you attract. “For a startup, building a community is your first experiment. Once you’ve built up a semblance of a community, you can start testing the content, to see what content resonates, what makes people engage, what drives behaviour.”
How do we know what test to use when?
The hypothesis defines the type of evaluative test you use. So make sure you start with a hypothesis, and don’t fall into the trap of getting excited by a methodology while failing to take a holistic view of what you need to learn. Terry Lee, Senior Product Manager at Flo Health, comments: “Don’t start with ‘I want to run a fake door test’. Start with the hypothesis that you think ‘you can improve retention by 5%’, or whatever it may be.”
Sometimes it may not be obvious what test to use, even with a strong hypothesis. So draw on the expertise of the people around you. If the business has analysts or UX designers who work closely with the product team, then discuss your potential experiments with them.
Terry says that you must also understand and document your biases along the way so that anyone else looking at the test understands they have been taken into account. “At Flo, our analysts independently review all our experiments,” he says. “As a product manager I can look at the live data coming in. But I will always discuss with our analysts before saying this is the winner and putting something into production.”
Terry doesn’t try to match specific experiments to a given situation, but rather to keep in mind a general idea of what’s appropriate where: “For example, if you’re working on a low traffic product or a low traffic area of the product, then you should be running things like A/B tests – just this or that. You should be limiting your variation because you’ll struggle to get to statistical significance in an acceptable time with multivariate testing.”
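Terry’s point about statistical significance and traffic can be made concrete with a quick back-of-the-envelope calculation. The sketch below is a generic two-proportion sample-size approximation in plain Python – the function name and the default z-values (5% two-sided significance, 80% power) are illustrative assumptions, not anything from Flo’s tooling:

```python
from math import sqrt

def sample_size_per_variant(baseline, relative_uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per variant to detect a relative uplift
    in a conversion rate (defaults: two-sided alpha=0.05, 80% power)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    p_bar = (p1 + p2) / 2  # pooled rate under the null hypothesis
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# e.g. 5% baseline conversion, hoping to detect a 10% relative uplift
n = sample_size_per_variant(0.05, 0.10)
```

At a 5% baseline, detecting a 10% relative uplift needs roughly 31,000 users per variant – which is why a low-traffic product should stick to a single A/B comparison rather than splitting that traffic across many variants.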
What tests should all product managers know how to run?
There are a multitude of proven tests and experiments for product managers, from straightforward A/B testing to complex and highly nuanced card sorting exercises. If they don’t have access to colleagues like analysts and UX experts, then, as Rosemary says, every product manager should at least have read a few books (Eric Ries’ Lean Startup is the obvious place to start) or blogs on experimentation. She says that the resources on the Kromatic website are very useful.
She says that a high-quality customer research interview is the first experiment everybody should know. She adds: “Usually I don’t recommend PMs jump into A/B testing or multivariate testing until they’ve become adept at running more qualitative experiments and are structuring hypotheses around what they need to learn with ease. They need to have absorbed experimentation templates and structures, and understand the boundaries of experiments.”
Rosemary recommends only moving on from qualitative to quantitative experiments once you understand:
- How long an experiment should run
- How your audience should be segmented for experimentation
- What false negatives and false positives might look like
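One way to answer the first of those questions is to fix the run length up front from the sample your test needs, rather than stopping when the numbers happen to look good – peeking at live results and stopping early is a classic source of false positives. A minimal sketch (plain Python; the function name and parameters are illustrative assumptions):

```python
from math import ceil

def min_run_days(required_per_variant, daily_traffic, variants=2, allocation=1.0):
    """Rough minimum experiment duration: days until each variant has
    seen its required sample, given daily eligible traffic and the share
    of that traffic allocated to the experiment."""
    per_variant_daily = daily_traffic * allocation / variants
    return ceil(required_per_variant / per_variant_daily)

# e.g. 31,000 users needed per variant, 4,000 eligible users a day,
# all traffic in the experiment, two variants
days = min_run_days(31000, 4000)  # -> 16 days
```

Adding a third variant, or allocating only half your traffic, stretches the same experiment out proportionally – a quick way to sanity-check whether a test design is even feasible for your product’s traffic.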
As an intermediate step between qualitative and quantitative experiments, Rosemary also recommends you develop the skills to analyse metrics. “You should be starting with qualitative experiments,” she says. “Then when you really understand how to think about metrics and how to measure if something is working, then you can move to quantitative experiments.”
But if there’s one test that every experienced product manager should know how to conduct, it’s an A/B test and its multivariate version. Says Terry: “Simple A/B tests, simple split tests, fake door tests, everything else apart from that is quite nuanced to your particular situation. And, to be honest, you could probably achieve almost anything with an A/B test.”
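Evaluating a finished A/B test usually comes down to asking whether the difference between the two arms is bigger than chance would explain. The sketch below is a generic two-proportion z-test in plain Python – not anything specific to Flo’s process, and in practice you’d lean on an analyst or a statistics library rather than rolling your own:

```python
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test comparing
    conversion counts conv_a out of n_a (control) and conv_b out
    of n_b (variant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p = ab_test_p_value(500, 10000, 580, 10000)
significant = p < 0.05
```

With 10,000 users per arm, 500 versus 580 conversions gives p ≈ 0.012, comfortably under the conventional 0.05 threshold; 500 versus 510 would not reach significance, however promising it looks in a dashboard.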
What pitfalls of experimentation should product managers be aware of?
- Resist the urge to leapfrog over qualitative data in favour of quantitative data because it feels more scientific.
- Try to keep your tests as simple as possible. Don’t jump into complex tests without a full understanding of the questions you want to ask.
- If your organisation’s data tracking is not robust enough then any testing you do won’t tell you anything. As Rosemary says, “it’s just going to make you more confused”.
- Make sure you keep the bigger picture in mind. Product teams are often working at such speed that they fail to fully address biases or take a holistic view of what they’re trying to achieve, says Terry. He’s seen teams just focus on short-term metrics and end up adversely affecting other teams and delivering an inconsistent user experience.
Look out for the next part of this series (publishing on Monday 28th March) when we’ll dig into some case studies and examine some experimentation successes and failures.