Stay Humble or Be Humbled — Case Studies from the Experimentation Trenches by Anthony Rindone

Mind the Product, October 10, 2021

In this #mtpcon Digital Americas session, Anthony Rindone, product manager for experimentation at Split, discusses how to make better product decisions through experimentation.

Breaking metrics down

Anthony says that experimenting with products is a humbling experience: if 30% of your experiments succeed, you are in a world-class position, which means most of the experiments you invest in will fail. At Split, he explains, every change is a feature, and every feature is an experiment. He goes on to discuss three case studies from the experimentation trenches.

In his first example, he describes using the wrong overall evaluation criterion (OEC) to evaluate an experiment. A better approach is to consider a broader field of possibilities: by opening your range, you can break the OEC down into sub-metrics and track what is specifically changing.
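To illustrate the idea of breaking a top-line metric into sub-metrics, here is a minimal sketch; the funnel stages and numbers are invented for illustration, not taken from the talk:

```python
# Hypothetical sketch: decompose a top-line conversion OEC into
# funnel sub-metrics so you can see which stage actually moved.
# Stage names and counts are invented for illustration.

def funnel_rates(events):
    """Compute per-stage conversion rates from raw funnel counts."""
    return {
        "visit_to_signup": events["signups"] / events["visits"],
        "signup_to_activation": events["activations"] / events["signups"],
        "activation_to_purchase": events["purchases"] / events["activations"],
    }

control = {"visits": 10000, "signups": 1200, "activations": 600, "purchases": 150}
treatment = {"visits": 10000, "signups": 1180, "activations": 760, "purchases": 152}

for stage, c in funnel_rates(control).items():
    t = funnel_rates(treatment)[stage]
    print(f"{stage}: control={c:.3f} treatment={t:.3f} lift={(t - c) / c:+.1%}")
```

In this made-up data the top-line purchase count barely moves (150 vs. 152), yet the sub-metrics show activation improving sharply while the activation-to-purchase step degrades, which is exactly the kind of signal a single OEC hides.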

Integrating multiple variations over time

In his second case study, he explains that a newly introduced feature experiences a novelty effect, which can be misleading about how well received the feature will actually be. A brand-new addition is bound to receive more traction at first, but that reception may not last. By running multiple variations over four weeks, you can see what effect the experiments actually have. While running several variations, it's important to sweat every detail.

“When you think about your whole application from end to end, there’s going to be some overlapping through new campaigns or feature roll-outs,” Anthony says, “improving your segmentation and being more precise with testing is key to tracking new variations effectively over time.”
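One simple way to check for a novelty effect is to compute the treatment lift per week rather than pooled over the whole run; a lift that decays across the four weeks suggests the launch spike won't last. A minimal sketch, with weekly rates invented for illustration:

```python
# Hypothetical sketch: compute weekly lift of treatment over control
# to spot a novelty effect (a lift that decays over the four weeks).
# The weekly conversion rates below are invented for illustration.

def weekly_lift(control_rates, treatment_rates):
    """Relative lift of treatment over control, one value per week."""
    return [(t - c) / c for c, t in zip(control_rates, treatment_rates)]

control = [0.100, 0.101, 0.099, 0.100]    # stable baseline
treatment = [0.130, 0.118, 0.106, 0.101]  # strong at launch, fading

lifts = weekly_lift(control, treatment)
for week, lift in enumerate(lifts, start=1):
    print(f"week {week}: lift {lift:+.1%}")

# If week-1 lift dwarfs week-4 lift, a pooled four-week average
# overstates the feature's lasting effect.
print("possible novelty effect:", lifts[0] > 2 * lifts[-1])
```

The threshold (week-1 lift more than double week-4 lift) is an arbitrary illustration; in practice you would pick a decision rule, and a significance test, before the experiment starts.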

Have a specific hypothesis

Finally, Anthony turns to his third example: having a vague hypothesis. The wider you cast your net, the harder it is to reach a clear decision or answer, he explains.

“If you don’t have a clear decision-making process, it creates loads of answers of a wide variety,” he says. Consequently, you set yourself up for failure: you can end up with two opposite conclusions, and both can't be right. “The conclusion is normally decided by the highest-paid person in the room, which may not always be the right one.”

A hypothesis is more than a guess; it's a foundational element of how you make decisions, he says. Build a falsifiable hypothesis upfront and identify the target segment at the start of your experiment. Doing so provides an action plan for each outcome that everyone agrees on.
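A falsifiable hypothesis with pre-agreed actions can be written down as plain data before the experiment runs. A minimal sketch, where every field name and threshold is illustrative rather than from the talk:

```python
# Hypothetical sketch: encode a falsifiable hypothesis plus the action
# agreed for each outcome before the experiment runs, so the decision
# isn't left to the highest-paid person in the room.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    segment: str        # target segment, fixed upfront
    metric: str
    min_lift: float     # smallest lift that counts as success
    if_supported: str   # action agreed on in advance
    if_refuted: str     # action agreed on in advance

    def decide(self, observed_lift: float) -> str:
        """Return the pre-agreed action for the observed result."""
        return self.if_supported if observed_lift >= self.min_lift else self.if_refuted

h = Hypothesis(
    statement="A simplified checkout raises purchase rate for new users",
    segment="first-time visitors",
    metric="purchase_rate",
    min_lift=0.02,
    if_supported="roll out to all new users",
    if_refuted="revert and investigate the drop-off step",
)

print(h.decide(0.035))   # clears the threshold: pre-agreed rollout action
print(h.decide(-0.004))  # misses the threshold: pre-agreed revert action
```

Because both branches are written down before any data comes in, the experiment's outcome maps directly to an action instead of a debate.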

To wrap up

Anthony closes by providing some best practices with experimentation:

  1. Focus on the bigger picture
  2. …But still, sweat the details
  3. Precise, falsifiable hypotheses are foundational to effective decision-making

Explore more content like this

Ordinarily, premium content such as this can only be accessed by Mind the Product members. Today, we’re offering up this talk to everyone in the community. If you like the content and want more like it, you can check out our Mind the Product membership plans.

Still in need of inspiration? Browse even more user testing content or use our Content A-Z to find specific topics of interest.

If you’re a Mind the Product member, you can also check out more #mtpcon video content.
