The key to a great hypothesis – Mark Tsirekas on The Product Experience | Mind the Product | June 06 2022

The key to a great hypothesis – Mark Tsirekas on The Product Experience


What does a good hypothesis look like in product, and what does it include? In this week's podcast episode, Lily and Randy sat down with Mark Tsirekas, VP of Product at ZOE, to discuss both the scientific and the non-scientific methods behind hypotheses and testing.

Listen to more episodes


Featured Links: Follow Mark on LinkedIn and Twitter | Zoe are hiring – work with Mark! | Mark’s website MTsireud

Episode transcript

Lily Smith: 

Randy, do you remember back in the day in your science lesson, lighting Bunsen burners in the lab, turning the gas on so the other students freaked out at the smell, and sitting on those high wooden benches and throwing paper planes at your mates?

Randy Silver: 

Oh, golly, that sounds like a very whimsical interpretation of a science lesson from an English school. I’m from New York and I went to high school where the Beastie Boys kinda sort of went. So science was more like setting fire to doughnuts and putting them out with fire hydrants.

Lily Smith: 

I can totally imagine you doing that. Well, one thing both our classes probably had in common was writing hypotheses. And honestly, I did love that part; writing up conclusions, not so much. I mean,

Randy Silver: 

the conclusion was that the experiment didn't work, right. But today, we're going to talk to Mark Tsirekas. He's the VP of Product at ZOE, and we're going to talk to him about hypotheses and testing. So let's hear his thoughts and what his science lessons taught him.

Lily Smith: 

The product experience is brought to you by mind the product.

Randy Silver: 

Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love.

Lily Smith: 

Visit mindtheproduct.com to catch up on past episodes, and to discover an extensive library of great content and videos.

Randy Silver: 

Browse for free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, training opportunities, and more.

Lily Smith: 

Mind the Product also offers free ProductTank meetups in more than 200 cities, and there's probably one for you.

Randy Silver: 

Mark, thank you so much for joining us on the podcast this week.

Mark Tsirekas: 

Thank you very much for having me.

Randy Silver: 

So for the few people who don't already know you, can you just give us a quick introduction to yourself? What are you doing these days? And how did you get into product in the first place?

Mark Tsirekas: 

Absolutely. So I am the VP of Product at ZOE. And ZOE is a company that focuses on being the most scientifically advanced and personalised nutrition programme, where we help you understand how food affects your body so you can eat for your best health. I also advise startups, specifically from pre-product-market fit all the way to scale. And how I got into product is by accident, because I started building stuff online and learned the hard way that it's very difficult to actually create something that is valuable to people. I stumbled into product management as a more methodical way of approaching that and never looked back.

Randy Silver: 

Okay. And speaking of methodology, this is something we want to talk to you about today. You spend a lot of time working with your teams on coming up with good hypotheses and using them as part of the process. But let's start at the beginning. When do you actually use a hypothesis? Is it at the vision level, at the strategy level, at the quarterly plan, at an epic or a story? Where does it start?

Mark Tsirekas: 

If we're being liberal with the term, I think everything is a hypothesis. So a startup is a hypothesis, a bet around what the future will look like. Your vision as a company is a bet on how the future will look after you've succeeded in altering it. But in more product management terms, and what we do on a day-to-day basis, I think we're focusing more on how we can augment the customer experience, and so focused more on the if-this-then-that type of hypotheses, where we have one variable and one customer problem that we're trying to solve. And, you know, by trying different things, we're trying to figure out what works and what doesn't. So I think it's more common to use hypotheses from the quarterly level all the way down to the story level.

Lily Smith: 

So when you're planning your regular work, how much does the hypothesis fit into that planning phase? Like, does every piece of work that you're doing have a hypothesis attached to it? Or do just some have a hypothesis, depending on how confident you are? How much does the hypothesis feature in the day-to-day planning of work?

Mark Tsirekas: 

I think, fundamentally, there are problems that are pretty clear cut, right? Like they need to be done as business as usual. You know, you have to do an integration for tax purposes, right. So those kinds of things are just normal business, and they obviously don't have hypotheses attached to them. But when it comes to how we set goals, and we use OKRs for that purpose, I think everything has a hypothesis attached to it. And that hypothesis is very much focused on the customer problem. So that's how we structure both our objectives and our key results, and on that planning level, I would say that is the case. Similarly, when we have features, we always have templated tickets where effectively the hypothesis is part of it, and also the metrics and the counter-metrics and a lot of other relevant stuff. Yeah.
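The templated ticket Mark describes could be sketched as a small data structure; the field names below are illustrative assumptions, not ZOE's actual template, but they capture the idea that the hypothesis travels with the work item alongside its metrics and counter-metrics.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureTicket:
    """A hypothesis-driven feature ticket, as sketched from the episode."""
    title: str
    customer_problem: str            # the validated problem this addresses
    hypothesis: str                  # "if <change>, then <customer outcome>"
    success_metric: str              # the metric the hypothesis should move
    counter_metrics: list[str] = field(default_factory=list)  # must not regress

# Example drawn loosely from the quiz-flow story later in the episode:
ticket = FeatureTicket(
    title="Rework the opening of the quiz flow",
    customer_problem="Users don't see value early in the quiz",
    hypothesis=(
        "If value-revealing questions come first, users' knowledge "
        "measurably increases by the end of the quiz"
    ),
    success_metric="knowledge gain after the quiz",
    counter_metrics=["quiz completion rate"],
)
```

The point of the counter-metrics field is the guard rail: a hypothesis can "win" on its success metric while quietly damaging something else, and the template forces that check into every ticket.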

Lily Smith: 

Okay, great. So you have your OKRs, and then your hypotheses are connected to the different OKRs that you're looking to achieve.

Mark Tsirekas: 

Correct. So with OKRs, for us, most of the time an objective would be a validated customer problem. And then the hypothesis we're making is more focused on the solution: whether the solution will deliver the antidote, almost, to the customer problem that we see, or the opportunity that we perceive exists and that customers have validated with us already. Yeah.

Randy Silver: 

So this sounds like, if we were using Teresa Torres's approach of the opportunity solution tree, you've got your goals and your objectives, and directly under that are your hypotheses of how you're going to reach those objectives.

Mark Tsirekas: 

I think that makes sense. Often it would look like that. But it doesn't necessarily mean the structure is fixed. So you could have a hypothesis at the feature level, or you could have it at an idea level or even at a problem level, right. I think they fall into different processes to validate and approach them.

Lily Smith: 

So what does a good hypothesis look like? What does it include?

Mark Tsirekas: 

You know, ZOE is a scientific company, and we work a lot with science at our core. So I'm going to actually wear my product management hat and focus on the non-scientific part for a second. Even though it might not be obvious, I think a great hypothesis fundamentally is a hypothesis whose outcome delivers a clear action for the product team, and that action delivers a better customer experience. And I think what tends to happen very often is that we focus instead on business metrics, or on solutions that we're trying to validate and are already biased towards, and forget about the customer problem and the experience, such that at the end of the day you get the outcome of a hypothesis and you're not really sure what to do with it. To give you an example of something very relevant to that: at ZOE, we recently had to change the first part of the product, which is the quiz flow; we have a quiz where we ask relevant health questions. And our hypothesis was not around conversion optimisation or anything like that, but fundamentally around the learnings and the value the customers are getting in the very first part of the quiz. So rather than looking only at the drop-off points and optimising for conversion, what we aimed to do was hypothesise where the customers can start seeing the value, assessed by whether their knowledge increased after the quiz, which is effectively to connect the value users get from the product to the very beginning. So I guess a great hypothesis is actionable, and it helps you fall in love with the problem and focus on the customers. That's what I would define, in my mind, as a great hypothesis.

Randy Silver: 

That’s great. Okay, so how do you work with people who maybe their first take on a hypothesis isn’t quite so strong? What kind of advice do you give them to ensure that they’re formulating better hypotheses?

Mark Tsirekas: 

So I think what we do very much at ZOE, and it's the main part that I actually focus on in working really closely with the product management team, is to understand how to approach a problem. Before we even formulate hypotheses, any hypothesis is fair game, right? You can have a hunch, you might have observed something, you might have heard something in a customer interview. But when you want to propose something, presumably to dedicate time to it, there's a whole process we go through around validation, getting more and more data, so that the hypothesis becomes obvious to everyone. So we have a common base of understanding and data, and it becomes obvious to everyone. And secondarily, it's accepted by the team as a whole.

Randy Silver: 

I'm just gonna expand on that a little bit further. Each team might have a hypothesis. How do you make sure that the different hypotheses being worked on, the different team hypotheses, are working together rather than at cross purposes?

Mark Tsirekas: 

So I think that has to do a lot more with our planning rather than the hypotheses themselves, right. So what we do is, periodically, we dedicate time for user research and discovery. And every quarter we basically reassess our customer problem backlog, and these are big, hairy problems that we don't know the solution to. We don't exactly know why they're happening, but we know that they are happening. So before the next cycle starts, the teams, which are already working on different thematic topics, start picking problems from their backlog, and they start going through the customer validation. In other words, it's pretty visible across the whole company how, and when, and who is working on which problem. And fundamentally, above that, there's a level of strategy, where we know what we're going to focus on and why. And that trickles down all the way to the bottom.

Lily Smith: 

So we've talked about what makes a good hypothesis. In terms of the actual validation of the hypothesis, how do you go about deciding how to validate or invalidate the assumptions that you've made?

Mark Tsirekas: 

Yeah, so that's something we've developed quite a rigorous process for, purely from experience: we started doing it, and then things would break, and we've found a way to have something which is specific enough to guide our teams through the process, but at the same time generic enough that you can adapt it. For short, we call it CUSP. The first part is the customer journey. The second part is the user problem validation. There's a third part, which is around the solution validation. And finally, we prioritise based on what we find. I think it's extremely important to understand the customer journey, and this is where we all start from. There's a lot that comes with it: the touchpoints of the customer, the mood of the customer, their worldview about a problem. And at that point, we don't really even have a hypothesis, right? We just observe that something is happening, and we try to understand it. And we start interviewing people, and at that point we only do interviews. Pretty quickly, you start understanding both why people are doing the things they're doing, and also what the opportunities and the problems are. The head of design, who I very often run interviews with nowadays, specifically asks people to behave as if they're going to give a script to an actor or an actress to impersonate them. So we go into a lot of detail in that context, minute detail of the past days, and we look at the past. Once we detect opportunities, then we try to quantify them, right? So we send out surveys, just to try to quantify the points in that map that we think are interesting and that we can solve. So they're relevant to us, they're within the realm of our product and our mission, and we think they're interesting. Once we quantify that, we move on to potential solutions.
And just when we're starting to look at solutions, we also need to have made sure, at that point, that the customers have a problem, that they're clearly acknowledging they have a problem, and that they have tried to solve it. So it's quantified, it's known. And then we move on to solutions. The solution part is where the fun begins, because the teams get together and they brainstorm with complete autonomy. And the only rule that remains is, effectively: can you get something more than niceties from a customer? Because we're not looking for "oh, what a great idea" or "absolutely, I would use it"; we're looking for some really tangible commitment. And I think that's been baked into the culture we're building: instead of showcasing something, and maybe even comparing and contrasting, we take it to the next level, and we try to get commitment from our customers.

Randy Silver: 

What does that commitment look like? Do you measure it with a quantitative approach, or qualitative, or do you use a mix of both?

Mark Tsirekas: 

We generally use mixed methods, and we can get back into that. At that level it's qualitative, and at the same time it's pretty clear cut. What I mean by that is, we ask for commitments that are Boolean statements, yes or no, right? If you're in a B2B product, and in the past I have worked in B2B products, it's not uncommon to ask for a pre-sale, right? You're doing the demo, and the commitment then for the customer is: if I deliver this in a month, we can sign the contract today, would you be up for that? And I think that's great. In a continuous product development process, and specifically in a consumer context, that might not always be the right way to do it. But you can ask for other things. So for instance, over the last couple of weeks, we've been testing something pretty exciting for customers; we thought it was amazing, we were very excited. And then we went to customers, and we told them that there's a limited set of alpha tester seats, which is true, and that in order to get one of those, you need to spend three hours with us next week, on any given day you want, to go through the specific details of the product. And that is a big commitment, if you think about it, to find three hours within your day. And that is why we set the bar really high, because at that point we started having some people go, "oh, you know, I'll wait a little bit, that sounds great" and whatnot. But we also had customers who were really happy to commit to that, and they were enthusiastic, because they said it would actually make a difference to their daily lives and how they approach their diet. So once you realise that, then you have some insight, and you start making further hypotheses. For instance, what was the reason these customers came to the product in the very beginning? What's the job to be done?
Do they fit in a particular segment? And you start quantifying: how does that particular user proposition resonate with other segments? And there are a few ways you can do that.

Lily Smith: 

So when you're doing the user research to validate some of the hypotheses that you have, how do you decide, or how do you ensure, that you're asking the right people or testing with the right people?

Mark Tsirekas: 

At which level of the product is it? Are you asking about a particular feature? Or is it for a new initiative? Is it for a big customer problem that we've observed by accident?

Lily Smith: 

I mean, I guess it would be at whatever level your hypothesis is at, and whether it's specific to your general target market and persona, or to specific personas: maybe it's existing subscribers or churned subscribers.

Mark Tsirekas: 

Yeah, so usually we employ this kind of process for new product development. When we're doing something completely new is where we mix the methods; otherwise we might have different variations around it. It really depends, is the answer. So for instance, sometimes the product in question is something very pivotal, it's a core action, so we might want to change something that touches everyone in the product, in which case you mix and match. Just this morning, we were talking about a case where we want to test people who have not used the product, people who used the product and churned, and people who are active users. And we want to compare and contrast, because we believe we're going to identify different use cases or value propositions that these users are finding by using it. So it really depends. But it is a big part of how we structure the questions and how we recruit participants.

Randy Silver: 

So one of the things you're looking for from this, and it's something that you shared with me earlier, Mark, is that you want solid, hard validation. Can you go into what that is, and why it's so important to get that rather than something that's, well, I guess, squishy?

Mark Tsirekas: 

I think, Randy, there's this line that I really like, which is: if you're asking for compliments, you're gonna get lies. And if you're pushy, you're gonna get a fake phone number. What you really want to get at is the truth. And I think you get the truth with commitments, with people speaking through their actions. So that is what we mean by hard validation: we basically get the customers to work for it a little bit, at least in the very beginning, just to make sure that what we're giving them really is what they want and is actually going to deliver the value they need. I think there's a whole hierarchy of things you can ask for, and again, some might be appropriate in some cases and not in others. But first of all, if somebody pays you, I think there's already enough appetite. So if you can get money in advance, or a commitment to a subscription or anything like this, that's great. Then you have time; I think time is very important to people, and we safeguard it more and more. And then finally, you have social currency, right? You have people advocating on your behalf, introducing you, and sharing your message, and so on and so forth. So, in that order, those are the three most common signals we use to make sure that what we're building is hitting the target, versus people telling us what they'd like us to build, or that they would use it in a hypothetical world.

Lily Smith: 

So, Mark, you mentioned you've worked with other startups and scale-ups. I'd be interested to know how much this method is used in other businesses. And if you're introducing it to businesses, how easy is it for them to adapt to this type of approach?

Mark Tsirekas: 

Yeah, that's a great question. I think it varies a lot, and it varies for a few reasons. The first one is that there needs to be alignment from all the people who participate in the product and influence the product team. Often the product team might understand the value of having a hypothesis-driven product development process, and other parts of the business might discount that value, assuming that we already know this, or we have an entrepreneurial hunch, or this is superfluous and we don't have the time. So I think having an aligned view with your business stakeholders of the value of properly going through hypotheses is crucial. And, you know, if you don't have the time to do it right, when are you going to find the time to do it over? Because you will have to do it over. It's a very nuanced problem; we need to understand the customer's context, and we need to build around it. Otherwise, adoption is going to be low. So the first one is aligned incentives and understanding that. The second issue, which I think is adjacent to the first one, is time, and the perception of what fast means. I'm a firm believer that you have to go slow to go fast: the attention to detail in the very beginning will propel you to move extremely fast as you go along. And I think this is something that people who either don't have a traditional product management background, or who have a lot of experience or are industry experts, for example, tend to discount. That's the second thing, which I find to be quite a nuanced point, but it really affects startups, and large organisations even more. And the third thing is that a lot of the time people get excited by finding validation, or by finding an exciting customer insight, and they don't fully finish the process as it should be.
So they'll find an insight, or a customer will say something, and excitement takes over; everybody's super excited to just go and build the thing. But they don't ask the full extent of the questions, which is: how big is that problem? How frequent is it? Would they pay for it? And even more importantly, is this something that fits with our strategy? Is this something that's hard to imitate? Why are we building it? Why us? So alignment is definitely one; not having the time and being rushed is definitely the second; and this over-excitement and confusion about what validation is really about is the third. Those are the three common themes that I've seen hindering different types of companies, at different stages, from really executing well on this.

Randy Silver: 

So Mark, we all know the Deming quote: "In God we trust; all others must bring data." And the problem is, sometimes we bring the data and it's not necessarily what our boss expected, or what we wanted. Or maybe it's taking much longer to figure something out than we thought. What happens when it goes a bit wrong? If it doesn't go as expected, how do you know when it's time to do more research, or when it's time to just say, okay, it's a sunk cost, let's move on to the next thing?

Mark Tsirekas: 

Yeah, I think that's a great point. From the very beginning, when you start a process around a problem, what you really need is clarity. You need to have clarity, and you don't need to focus on being right, but you need to focus on being clear. And I think this goes back to managing stakeholders and creating alignment, right? You see an opportunity, you see a problem hypothesis, and you understand why that might be pivotal for the product. If you really have this clarity, then you can communicate it to your other stakeholders, to upper management, to your team, and to everybody, so you're all aligned. When the data comes back, there is that transparency. And if you started from that position of clarity, then you should all agree on whether the data that you have is sufficient or not.

Randy Silver: 

So what you're saying, Mark, is that at the beginning you're setting out some research objectives. You're asking: how will we know when this is validated? And you're getting that agreement up front. Is that fair to say?

Mark Tsirekas: 

Correct. So that's why you have hard validation statements. What you're doing is a mixture of quantifying the reach and quantifying the impact early on. And if you don't see the product of the two to be sufficient, then the data is out there in the open, and I think it then becomes a business decision as well. So we were recently developing a feature; we thought the reach was something like 80%. We started talking to our users, and it actually turns out that a lot of our users use different sources and different devices to get to the point of finding a recipe, and the reach of the feature we were looking to develop suddenly went down to 20%, right. Then we went for hard commitments with our users, and we sub-segmented them, and we saw that there was a particular segment that was really deeply interested in that product. So whilst we did find validation, the actual business case was not big enough. And there was no disagreement, because we all saw the truth for what it was: we found validation, but it was small, and therefore it was deprioritised. Which is the final point of the framework we're following, right? We're amassing all the data, we're finding out the truth in terms of both the problem and the solution, and then we're looking at it and deciding where it goes in terms of stack-ranking.
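The go/no-go arithmetic in the recipe-feature story is simply the product of reach and impact against some bar. The impact value and threshold below are hypothetical; only the 80% to 20% reach revision comes from the episode.

```python
def opportunity_score(reach: float, impact: float) -> float:
    """Reach times impact, both expressed as fractions of the user base/value."""
    return reach * impact

IMPACT = 0.5       # hypothetical per-user impact estimate
THRESHOLD = 0.15   # hypothetical bar for a viable business case

assumed = opportunity_score(reach=0.80, impact=IMPACT)   # before interviews
measured = opportunity_score(reach=0.20, impact=IMPACT)  # after interviews

# Same impact estimate, but the measured reach cuts the score to a
# quarter of the assumed one, dropping it below the bar even though
# one segment showed genuine, hard-committed validation.
worth_building = measured >= THRESHOLD
```

The useful property is that the disagreement disappears: once reach and impact are quantified up front, deprioritising is reading a number off, not arguing a hunch.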

Lily Smith: 

This has been such an interesting conversation, and we are running out of time, but I just have one more question for you. For those who are using hypotheses in their day-to-day, what's your top tip for anyone who's using hypotheses in their work or wants to get started?

Mark Tsirekas: 

Fall in love with the problem, not with the solution. Just focus on everything you're doing being around the customer problem, not the solution, not the business metrics. And once you do that, no matter where the hypothesis goes, no matter what the outcome is, you're going to be one step closer to delivering a great customer experience.

Lily Smith: 

Lovely. That's fantastic. I feel like that's the quote of the episode, for sure. Thank you so much for joining us; it's been a real pleasure. The Product Experience is the first and the best podcast from Mind the Product. Our hosts are me, Lily Smith, and me, Randy Silver. Louron Pratt is our producer and Luke Smith is our editor.

Randy Silver: 

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who curates both ProductTank and MTP Engage in Hamburg and who also plays bass in the band, for letting us use their music. You can connect with your local product community via ProductTank, regular free meetups in over 200 cities worldwide.

Lily Smith: 

If there's not one near you, maybe you should think about starting one. To find out more, go to mindtheproduct.com/producttank.

 

What does a good hypothesis look like in product and what does it include? In this weeks' podcast episode, Lily and Randy say down with Mark Tsirekas, VP of Product at ZOE to discuss both the scientific and the non-scientific methods behind hypothesis and testing.

Listen to more episodes


Featured Links: Follow Mark on LinkedIn and Twitter | Zoe are hiring - work with Mark! | Mark's website MTsireud

Episode transcript

Lily Smith:  Randy, do you remember back in the day in your science lesson, lighting Bunsen burners in the lab, turning the gas on? So the other students freaked out of the smell and sitting on those high wooden benches and throwing paper planes at your mates? Randy Silver:  Oh, golly, that sounds like a very whimsical interpretation of a science lesson from an English school. I'm from New York and I went to high school where the Beastie Boys kinda sort of went. So science was more like setting fire to doughnuts and putting them out with fire hydrants. Lily Smith:  I can totally imagine you doing that? Well, one thing both our classes probably had in common was writing hypotheses. And honestly, I did love that part, writing up conclusions not so much. I mean, Randy Silver:  the conclusion was that the experiment didn't work, right. But today, we're going to talk to Mark C rec s. He's the VP of product at Zoe, and we're going to talk to him about hypotheses and testing. So let's hear his thoughts and what his science lessons taught him. Lily Smith:  The product experience is brought to you by mind the product. Randy Silver:  Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love. Lily Smith:  Because it mind the product.com to catch up on past episodes, and to discover an extensive library of great content and videos, Randy Silver:  browse for free, or become a minor product member to unlock premium articles, unseen videos, AMA's roundtables, discount store conferences around the world training opportunities for Lily Smith:  mine, the product also offers free product tank meetups in more than 200 cities. And there's probably one for you. Randy Silver:  Mark, thank you so much for joining us on the podcast this week. Mark Tsirekas:  Thank you very much for having me. 
Randy Silver:  So for the few people who don't already know you, can you just give us a quick introduction to yourself, What are you doing these days? And how did you get into product in the first place? Mark Tsirekas:  Absolutely. So I am the VP of product at Zoey. And Zoey is a company that focuses on being the most scientifically advanced and personalised nutrition programme, where we help you understand how food affects your body. So you can eat for your best health. And I also advise startups specifically from pre product market fit all the way to scale. And how I got into product is by accident, because I started building stuff online and learned the hard way that it's very difficult to actually create something that is valuable to people. And I stumbled into product management as a more methodical way of approaching that and never look back. Randy Silver:  Okay. And speaking of methodology, this is something we want to talk to you about today. You spend a lot of time working with your teams on coming up with good hypotheses and using them as part of the process. But let's start at the beginning. When do you actually use a hypothesis? Is it at Division level at the strategy level at the quarterly plan is an epic story? Where does it start? Mark Tsirekas:  If if we're being liberal with a term, I think everything is a hypothesis. So a startup is a hypothesis and a bet around what the future will look like. Your vision as a company is a bet of how the future will look like after you've succeeded in altering it. But in more product management terms and what we do on a day to day basis, I think we're focusing more on how we can augment the customer experience and so focused more on the if this than that type of hypotheses where we have one variable and one customer problem that we're trying to solve. And, you know, by trying different things, we're trying to figure out what works and what doesn't. 
So I think it's more common to use hypotheses at the quarterly level, all the way down to the story level.

Lily Smith: So when you're planning your regular work, how much does the hypothesis fit into that planning phase? Like, does every piece of work that you're doing have a hypothesis attached to it? Or do just some have a hypothesis, depending on how confident you are? How much does the hypothesis feature in the day-to-day planning of work?

Mark Tsirekas: I think, fundamentally, there are problems that, you know, are pretty clear-cut, right? Like, they need to be done as business as usual. You know, you have to do an integration for tax purposes, right? So those kinds of things are just normal business, and they obviously don't have hypotheses attached to them. But when it comes to how we set goals, and we use OKRs for that purpose, I think everything has a hypothesis attached to it. And that hypothesis is very much focused on the customer problem. So that's how we structure both our objectives and our key results, and on that planning level, I would say that is the case. Similarly, when we have features as well, we always have templated tickets where, effectively, the hypothesis is part of it, and also the metrics, the counter-metrics and a lot of other relevant stuff.

Lily Smith: Okay, great. So you have your OKRs, and then your hypotheses are connected to the different OKRs that you're looking to achieve.

Mark Tsirekas: Correct. So with the OKRs for us, most of the time, an objective would be a validated customer problem. And then the hypothesis we're making is more focused on the solution: whether the solution will deliver the antidote, almost, to the customer problem that we see, or the opportunity that we perceive exists and customers have validated with us already.
Randy Silver: So this sounds like, if we were using Teresa Torres's approach of the opportunity solution tree, you've got your goals and your objectives, and directly under that are your hypotheses of how you're going to reach those objectives.

Mark Tsirekas: I think that makes sense. Often it would look like that, but it doesn't necessarily mean that the structure is fixed. So you could have a hypothesis at the feature level, or you could have it at an idea level or, you know, even at a problem level, right? I think they fall into different processes to validate and approach them.

Lily Smith: So what does a good hypothesis look like? What does it include?

Mark Tsirekas: You know, ZOE is a scientific company, and we work a lot with science at our core. So I'm going to actually put on my product management hat and focus on the non-scientific part for a second. Even though that might not be obvious, I think a great hypothesis, fundamentally, is a hypothesis whose outcome delivers a clear action for the product team, and that action delivers a better customer experience. And I think what tends to happen very often is that we focus instead on, like, business metrics or, you know, solutions that we're trying to validate and are already biased towards, and forget about the customer problem and the experience, such that, you know, at the end of the day, you get the outcome of a hypothesis and you're not really sure what to do with it. To give you an example of something that is very relevant to that: at ZOE, we recently had to change the first part of the product, which is the quiz flow. We have a quiz, you know, where we ask relevant health questions.
And our hypothesis was not "we want to optimise that part". Our hypothesis was not fundamentally around conversion optimisation or anything like that, but fundamentally around the learnings and the value the customers are getting in the very first part of the quiz. And so rather than looking at the drop-off points only and optimising for conversion, what we aimed to do was hypothesise where the customers can start seeing the value, assessed by whether their knowledge increased after the quiz in the very beginning, which is effectively to connect the value users get from the product when they use it to the very beginning. So I guess a great hypothesis is actionable, and it helps you fall in love with the problem and focus on the customers. I think that's what I would define in my mind as a great hypothesis.

Randy Silver: That's great. Okay, so how do you work with people whose first take on a hypothesis maybe isn't quite so strong? What kind of advice do you give them to ensure that they're formulating better hypotheses?

Mark Tsirekas: So I think what we do very much at ZOE, and it's the one part that I actually focus on in working really closely with the product management team, is to go and understand really how to approach a problem. So before we even formulate hypotheses, any hypothesis is fair game, right? You can have a hunch, you might have observed something, you might have heard something in a customer interview. But when you want to propose something, presumably to dedicate time to it, there's a whole process that we go through around validation, getting more and more data, so that the hypothesis becomes obvious to everyone, right? So we have a common base of understanding and data, and so it becomes obvious to everyone. And secondarily, you know, it's accepted by the team as a whole.

Randy Silver: I'm just gonna expand on that a little bit further.
So each team, though, might have a hypothesis. How do you make sure that the different hypotheses that are being worked on, the different team hypotheses, are working together rather than at cross purposes?

Mark Tsirekas: So I think that has to do a lot more with our planning rather than the hypotheses themselves, right? So what we do is, periodically, we dedicate time for user research and discovery, and every quarter we basically reassess our customer problem backlog. These are big, hairy, you know, problems that we don't know the solution to. We don't exactly know why they're happening, but we know that they are happening. And so what we do is, before the next cycle starts, the teams, which are already working on different thematic topics, sort of start picking problems from their backlog, and they start going through the customer validation. So, in other words, it's pretty visible across the whole company how, and when, and who is working on which problem. And fundamentally, above that, there's a level of, you know, strategy, where we know what we're going to focus on and why. And that trickles down all the way to the bottom.

Lily Smith: So we talked about what makes a good hypothesis. In terms of the actual validation of this hypothesis, how do you go about deciding how to validate or invalidate the assumptions that you've made?

Mark Tsirekas: Yeah, so that's something that we have developed quite a rigorous process for, purely from experience: we started doing it and then, you know, things would break. And we've found a way to have something which is specific enough to guide our teams in the process, but at the same time is generic enough that you can adapt to it. For short, we call it CUSP. The first part is the customer user journey. The second part is the user problem validation.
There's a third part, which is around the solution validation. And finally, we prioritise based on what we find. I think it's extremely important to understand, and this is where we all start from, the customer journey. There are a lot of things that come with it: the touchpoints of the customer, the mood of the customer, their worldview about a problem. And at that point, we don't really even have a hypothesis, right? We just observe that something is happening, and we try to understand it. And we start interviewing people, and at that point we only do interviews. Pretty quickly, you start understanding both why people are doing the things they're doing, and also what the opportunities are and what the problems are. The head of design, who I very often run interviews with nowadays, specifically asks people to behave as if they're going to give a script to an actor or an actress to impersonate them. And so we go into a lot of detail in that context, minute detail of the past days, and we look at the past. So once we detect opportunities, then we try to quantify them, right? So we send out surveys, just to try to quantify the points in that map that we think are interesting and that we can solve. So they're relevant to us, they're within the realm of our product and our mission, and we think that they're interesting. Once we quantify that, then we move on to potential solutions. And just when we're starting to look at solutions, we also need to have made sure, at that point, that the customers have a problem, that they're clearly acknowledging that they have a problem, and that they have tried to solve it. So it's quantified, it's known. And then we move on to solutions. And the solution part is where the fun begins, because, you know, the teams get together and they brainstorm with complete autonomy.
And the only rule that remains is, effectively: can you get something more than a nicety from a customer? Right? Because we're not looking for "oh, what a great idea" or "absolutely, I would use it"; we're looking for some really tangible commitment. And I think that's been baked in a lot in the, you know, culture that we're building: that instead of, you know, showcasing something, and maybe even comparing and contrasting, we take it to the next level, and we try to get commitment from our customers.

Randy Silver: What does that commitment look like? Do you measure it from a quantitative approach, or qualitative, or do you use a mix of both?

Mark Tsirekas: We generally use mixed methods, and we can get back into that. Basically, at that level it's qualitative, and at the same time it's pretty clear-cut. So what I mean by that is, we ask for commitments that are Boolean statements, yes or no, right? If you're in a B2B product, and in the past, for example, I have been working in B2B products, it's not uncommon to ask for a pre-sale, right? You're doing the demo, and the commitment then for the customer is like: if I deliver this in a month, can we sign the contract today, would you be up for that? And I think that's great. In a continuous product development process, and specifically in a consumer context, that might not always be the right way to do it, but you can ask for other things. So for instance, over the last couple of weeks we've been testing something pretty exciting for customers. We thought it was amazing, we were very excited. And then we went to customers, and we told them that, you know, there's a limited set of alpha testers, which is true, and, you know, in order to get one of those seats, you need to spend three hours next week, on any given day you want, with us, to go through the specific details of the product.
And that is a big commitment, if you think about it, to find three hours within your day. And that is why we set the bar really high, because at that point we started having some people go, "Oh, you know, I'll wait a little bit, that sounds great", and whatnot. But we also had customers that were really happy to commit to that, and they were enthusiastic, because they said that it would, you know, actually make a difference to their daily livelihoods and how they approach it. So once you realise that, then you have some insight, and you start making further hypotheses. So for instance: what was the reason that these customers came to the product in the very beginning? What's the job to be done? Do they fit in a particular segment? And you start quantifying how that particular user proposition resonates with other segments. And there are a few ways you can do that.

Lily Smith: So when you're doing the user research to validate some of the hypotheses that you have, how do you decide, or how do you kind of ensure, that you are asking the right people, or kind of testing with the right people?

Mark Tsirekas: At which level of the product is it? Is it for a particular feature? Or is it for a new initiative? Is it for a big customer problem that we have observed by accident?

Lily Smith: I mean, I guess it would be at whatever level your hypothesis is at, and whether it's specific to your general target market and persona, or, like, you know, specific personas; you know, maybe it's existing subscribers or churned subscribers.

Mark Tsirekas: Yeah, so we usually employ this process for new product development. So when we're doing something completely new is where we mix the methods; otherwise, you know, we might have different types of variations around it. "It really depends" is the answer.
So for instance, sometimes the product in question is something that's very pivotal, it's a core action, so we might want to change something that touches everyone in the product, in which case you mix and match. So just this morning we were talking about a case where, you know, we want to test people who have not used the product, people that used the product and churned, and people who are active users. And we want to compare and contrast, because we believe that we're going to identify different use cases or value propositions that these users are finding by using it. So it really depends, but it is a big part of how we structure the questions and how we recruit participants.

Randy Silver: So one of the things you're looking for from this, and it's something that you shared with me earlier, Mark, is you want solid, hard validation. So can you go into what that is, and why it's so important to get that rather than something that's, well, I guess, squishy?

Mark Tsirekas: I think, Randy, there's this line that I really like, which is: if you're asking for a compliment, you're gonna get lies, and if you're pushy in a bar, you're gonna get a fake phone number. And what you really want to get is the truth. And I think you get the truth with commitments, and people speaking with their actions. And so that is what we mean by having hard validation, where we basically get the customers to work for it a little bit, at least in the very beginning, just to make sure that what we're giving is really what they want and is actually going to deliver to them the value that they need. So I think there's a whole hierarchy of things that you can ask for, and again, some might be appropriate in some cases and some might not. But, you know, first of all, if somebody pays you, I think there's already appetite enough.
So if you can basically get money in advance, or a commitment or subscription or anything like that, it's great. Then you have time. I think time is, you know, very important to people, and we safeguard it more and more. And then finally, you have social currency, right? You have people advocating on your behalf, and, like, introducing you and sharing your message, and so on, and so forth. So, in that order, these are the three most common ones that we use in order to make sure that what we're getting and what we're building is, you know, hitting the target, versus people telling us what they'd like us to build, or that they would use it, in a hypothetical world.

Lily Smith: So, Mark, you mentioned you've worked with other startups and scale-ups. I'd be interested to know how much this method is used in other businesses. And, you know, if you're introducing it to businesses, how easy is it for them to adapt to this type of approach?

Mark Tsirekas: Yeah, that's a great question. I think it varies a lot, and it varies a lot for a few reasons. So the first one is that there needs to be alignment from all the people that participate in the product and influence the product team. Often, you know, the product team might understand the value of having a hypothesis-driven product development process, and other parts of the business might discount that value, assuming that, you know, "we already know this" or "we have an entrepreneurial hunch" or "this is superfluous, and we don't have the time". So I think having an aligned view, with your business stakeholders, of the value of properly going through hypotheses is crucial. And, you know, if you don't have the time to do it right, when are you going to find the time to do it over? Because you will have to do it over. It's a very nuanced problem. We need to understand the customer's context, and we need to build around it.
Otherwise, adoption is going to be low. So I think the first one is aligned incentives and understanding that. The second issue, which I think is adjacent to the first one, is time, and the perception of what fast means. I'm a firm believer that you have to go slow to go fast: the attention to detail in the very beginning will propel you to move extremely fast as you move along. And I think this is something that people who either don't have a traditional product management background, or who have a lot of experience, or industry experts, for example, tend to discount. And I think that's the second thing, which I find to be quite a nuanced point, but it really affects startups, and large organisations even more. And the third thing is, you know, a lot of times people tend to get excited by having validation, or by hearing something exciting from a customer, and they don't fully finish the process as it should be. So they'll find an insight, or a customer will say something, they'll get over-excited, and excitement is going to take over; everybody's super excited to just go and build the thing. But they don't ask the full extent of the questions, which is, you know: how big is that problem? How frequent is it? Would they pay for it? And, even more importantly, is this something that fits with our strategy? Is this something that's hard to imitate? Why are we building it? Why us? So alignment is definitely one, not having the time and being rushed is definitely the second, and I think this over-excitement and confusion about what validation is really about is the third. Those are the three common themes that I've seen hindering different types of companies at different stages from really executing on this.

Randy Silver: So, Mark, we all know the Deming quote: "In God we trust; all others must bring data."
And the problem is, sometimes we bring the data, and it's not what our boss expected, or what we wanted. Or maybe it's taking much longer to figure something out than we thought. What happens when it goes a bit wrong, if it doesn't go as expected? How do you know when it's time to do more research, or when it's time to just say, okay, it's a sunk cost, let's move on to the next thing?

Mark Tsirekas: Yeah, I think that's a great point. So I think, from the very beginning, when you start a process around a problem, what you really need is clarity. You need to have clarity, and you don't need to focus on being right, but you need to focus on being clear. And I think this kind of goes back to managing stakeholders and creating alignment, right? You see an opportunity, you see a problem hypothesis, and you understand why that might be pivotal for the product. If you really have this clarity, then you can communicate it, you know, to your other stakeholders, to upper management, to your team and to everybody, so you're all aligned. When the data comes back, there is that transparency. And I think if you started from that position of clarity, then you all should agree on whether the data that you have is sufficient or not.

Randy Silver: So what you're saying, Mark, is at the beginning you're setting out some research objectives for this. You're asking "how will we know when this is validated?" at the beginning, and you're getting that agreement up front. Is that fair to say?

Mark Tsirekas: Correct. So that's why you have hard validation statements. What you're doing is a mixture of quantifying the reach and quantifying the impact early on. And if you don't see the product of the two to be sufficient, then the data is out there in the open, and I think, you know, it then becomes a business decision as well.
So we were recently developing a feature. We thought the reach was like 80%. We started talking to our users, and it actually turns out that a lot of our users are using different sources, different devices, to get to the point of specifically finding a recipe, and the reach of the feature that we were looking to develop suddenly went down to 20%, right? And then we went for hard commitments with our users, and we sub-segmented them. We saw that there was a particular category, a particular segment, that was really deeply interested in that product. So whilst we did find validation, the actual business case was not big enough. And there was no disagreement, because we all saw the truth for what it was: we found validation, but it was small enough, and therefore it was deprioritised, which is the final point of this framework that we're following, right? We're amassing all the data, we're finding out the truth in terms of both the problem and the solution, and then we're looking at it and saying, where is that going to go in terms of stack-ranking it?

Lily Smith: Mark, this has been such an interesting conversation, and we are running out of time, but I just have one more question for you. For those that are using hypotheses in their day-to-day: what's your top tip for anyone who's using hypotheses in their work, or wants to get started?

Mark Tsirekas: Fall in love with the problem, not with the solution. Just focus on everything you're doing being around the customer problem, not the solution, not the business metrics. And once you do that, no matter where this hypothesis goes, no matter what the outcome is, you're going to be one step closer to delivering a great customer experience.

Lily Smith: Lovely, that's fantastic. Feels like that's the quote of the episode, for sure. Thank you so much for joining us; it's been a real pleasure. The Product Experience is the first and the best podcast from Mind the Product.
Our hosts are me, Lily Smith, and me, Randy Silver. Louron Pratt is our producer and Luke Smith is our editor.

Randy Silver: Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who curates both ProductTank and MTP Engage in Hamburg, and who also plays bass in the band, for letting us use their music. You can connect with your local product community via ProductTank: regular free meetups in over 200 cities worldwide.

Lily Smith: If there's not one near you, maybe you should think about starting one. To find out more, go to mindtheproduct.com/producttank
