Watch this video or read on for key highlights from the talk.
- Experiments can be seen as additional work
- Challenges commonly arise around engineering, data ingestion, decision making and product culture
- Teams can find it hard to capture data and to work with it
- Psychological safety and support are needed for successful experiments
Cameron heads up the decision science team at feature management platform LaunchDarkly, where he talks to hundreds of customers of all shapes and sizes, across industries and with different goals. He says these conversations reveal a fairly consistent set of themes that prevent product teams from measuring their work in a way they feel good about. In this talk he looks at the friction areas and challenge points in experimentation, and at what you can do to get your team to start running experiments or to run more of them.
What is an experiment?
An experiment, says Cameron, is a way to compare an old version against a new version, or many versions against each other, so that you can quantify the impact of the decisions you make and increase certainty in your decision making. He says: “One of the superpowers of experimentation is that you isolate exclamation points and question marks such that you can understand the relative impact of a change on your system regardless of all the other changes in your system.” A product manager’s goal is to identify the right decisions to make, he adds, and experiments remove subjective noise to give you a higher level of certainty.
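As an illustration (not from the talk), quantifying the impact of a change between a control and a variant is often done with a two-proportion z-test on a conversion metric. The sketch below is a minimal, self-contained version; the function name and the example numbers are hypothetical.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of control (A) and variant (B).

    Returns (lift, z, p): the absolute lift in conversion rate,
    the z statistic, and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p = math.erfc(abs(z) / math.sqrt(2))
    return p_b - p_a, z, p

# Hypothetical experiment: 10% vs 13% conversion over 1,000 users each.
lift, z, p = two_proportion_ztest(100, 1000, 130, 1000)
```

A small p-value (conventionally below 0.05) is what turns a “question mark” into an “exclamation point”: it says the observed lift is unlikely to be noise.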
Why aren’t more product teams running experiments?
Cameron has found there are four friction areas that prevent teams from running experiments, or from running as many as they would like. They are:
Engineering: issues around workflow, speed and risk. Teams see experiments as additional work to implement when they need to focus on getting the things they’ve already scoped out the door. Experimentation adds complexity and delivery time, and can introduce security and performance risks.
Cameron recommends placing equal priority on tooling that fits the dev teams’ workflows; in the long run, this lets you run more experiments.
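One way tooling keeps the cost of an experiment low (a sketch, not LaunchDarkly’s actual API) is deterministic bucketing: hashing the user and experiment keys assigns each user a stable variant without storing any state, so the check adds negligible work to the request path. The function and variant names below are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment, user) means the same user always sees the
    same variant across sessions, with no database lookup required.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

In a codebase already using feature flags, an experiment becomes one more branch on an existing flag check rather than a separate delivery track, which is the workflow fit Cameron is describing.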
Data ingestion: issues around effort, reliability and trust. Teams may feel they have a hard time connecting the data they need and trusting that it is reliable enough to use for decisions. Setting up data intake is real work, says Cameron, and it needs to be properly planned. There may be challenges in ensuring data is captured in a scalable way and in validating that the data is accurate. Cameron recommends partnering with data science/engineering upfront to map out data flows; going forward, this should mean less pushback about adding new data points or locations.
Decision making: issues around comprehension and trust. Teams aren’t confident about running their own experiments. This lack of confidence may start with concern about how to interpret statistics, says Cameron, a lack of trust that teams know how to set up experiments, or doubt that there is enough data to get results. To overcome these challenges Cameron recommends bringing delivery teams into the evaluation and planning of tooling, in order to assess how the results will be used in practice. You can then better democratise data-driven decisions and free up your experts to focus on more strategic efforts.
Product culture: issues around momentum, fear and buy-in. Cameron says teams can be so focused on speed of delivery that they fear experiments will slow them down. Often there’s a lack of priority, comfort or habit around using data to make decisions: teams quantify their success by how often they launch things, and failed experiments are equated with failure. “If you’re unwilling to take the time to test and validate something that you’re launching then essentially you’re in a feature factory,” Cameron says. It’s a challenge for leadership, he adds, and he recommends giving teams safety and support so that they can take the time to validate releases.