Mind the Product | The Product Experience podcast | Metrics, North Star metric | 3 March 2022

When not to use a North Star – Maahir Shah on The Product Experience


Figuring out what to prioritise is one of a product team’s biggest challenges. Using a North Star metric to orient around can make it easier—but, as Maahir Shah, Product Manager at Facebook, tells us, it can also lead you in the wrong direction. He joined us on the podcast to talk about how to take your use of metrics to the next level.



Featured Links: Follow Maahir on LinkedIn and Twitter | Maahir’s blog | Definition of Exit Criteria | ‘How I got my job in Product’ piece by Maahir at Mind The Product

Episode transcript

Lily Smith: 

Hey Randy, how much do you love metrics?

Randy Silver: 

Oh my god. Really? Is that a trick question? You know, I can’t measure my love for metrics.

Lily Smith: 

I know, me too. They can drive you crazy, but once you get them all organised, it just makes life so much easier.

Randy Silver: 

Fair. And you know, all the best things in life can drive you crazy. But if you have a North Star, you can never go wrong, right?

Lily Smith: 

Wow, funny you should mention that. Our guest today is Maahir Shah, Product Manager from Meta. And in this chat, we talk about how North Star metrics aren't all that, and as a PM, you definitely need to dig deeper.

Randy Silver: 

So Maahir wants us to go in the other direction and follow the Southern Cross? No, don't answer that, even I know that's ridiculous. I can't wait to hear how he advises us to make better use of metrics. So without further ado, let's get right to it.

Lily Smith: 

The Product Experience is brought to you by Mind the Product.

Randy Silver: 

Every week, we talk to the best product people from around the globe about how we can improve our practice and build products that people love.

Lily Smith: 

Visit mindtheproduct.com to catch up on past episodes, and to discover an extensive library of great content and videos.

Randy Silver: 

Browse for free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, and training opportunities.

Lily Smith: 

Mind the Product also offers free ProductTank meetups in more than 200 cities, and there's probably one near you. Maahir, it's so lovely to be talking to you on the podcast today. How are you doing?

Maahir Shah: 

Not too bad. How about yourself?

Lily Smith: 

Good. Thank you. And today we’re going to be talking about metrics. But before we get stuck into that, it would be great if you could give our listeners a real quick intro into who you are and how you got into product.

Maahir Shah: 

Sure. My name is Maahir Shah. I've been in professional product management for close to a decade now, and I've worked across a range of companies, all the way from really small ones with just 50 people to really large ones, and across industries ranging from financial services, advertising, marketing, consumer software, and more. I really fell into product management, I want to say maybe a decade or a decade and a half ago, when I was an engineer. I always found myself asking why: why are we doing this? What are we actually trying to accomplish? What is value for our end users? Through that process, I was introduced to the profession of product management. I was very lucky because a mentor early in my career gave me the opportunity to break through and transition from being an engineer to being a product manager, and I've been in love with the profession ever since.

Lily Smith: 

Awesome. Thanks for that intro. And today we're going to talk about North Star metrics, and particularly why they're bad, which I think is a fantastic topic. But before we really get into the detail, tell me what North Star metrics are, for those who haven't heard of them or the concept behind them.

Maahir Shah: 

Sure. Put simply, North Star metrics are the way large organisations measure their overall success: they look at top-line success. And if you're working at a large organisation, you often have North Star metrics for sub-segments of that organisation. A couple of examples come to mind. If you think about Amazon, it's a large organisation with multiple divisions within it. For amazon.com, GMV, or gross merchandise volume, is a really good example of a North Star metric, which gives amazon.com leadership as well as investors some idea of how Amazon is doing in the amazon.com retail business. Other examples that come to mind: for an enterprise tech company or an enterprise SaaS company, I've typically seen subscriptions sold and revenue. And if you think about consumer companies like Google and Facebook and Yelp, they look at active users as a really good way of stepping back and assessing whether their overall product or product portfolio is moving in the right direction or not.

Randy Silver: 

So how could that be a bad thing? I mean, having everyone understand that there's something to shoot for... I mean, if the North Star isn't revenue itself, then it's going to be a leading metric, a leading indicator, or a proxy for revenue. So how could that be bad?

Maahir Shah: 

Yeah, great question. So it's actually not bad, right? If your revenue is trending up, or some leading indicator of revenue is trending up, that's great for the organisation, for the employees within it, and for your external shareholders and stakeholders. What I've generally seen in the industry is that it becomes bad when it becomes one of the primary or sole ways that every single product manager and product team makes big product decisions, in terms of the investments they're going to make and the products they're going to build. If you think about any organisation, it's unlikely that every single small thing you do is going to move a North Star metric. But that doesn't mean we shouldn't do all of those other things if they don't move a North Star metric. And I've seen this growing trend where every single decision is made on the basis of whether it moves the North Star metric in the near term, which often leads to very biased product investments, and also to very short-sighted, myopic near-term thinking, versus thinking about long-term value and what's good for the end user.

Lily Smith: 

So, just thinking about that: product managers are basically getting too obsessed with North Star metrics, by the sounds of it. So what should they be paying attention to instead? How do they decide which metrics to pay attention to, ones that are connected to the North Star metric but, I guess, more down in the detail?

Maahir Shah: 

Yeah. If I step back and think about our role as product managers, it's really not just to move the business forward, but to deliver an immense amount of user value and solve some real user needs and pain points. And if you apply the same leading indicator philosophy: if you're solving real user needs, delivering value and solving their pain points, that's a good leading indicator that over the long term your users will be happier, they'll stay on your platform, and they'll continue to use your platform or your products more. So I always encourage other product managers to first start with their users. Think about: what are the core needs of the users? What is the competition offering? And how can your product deliver significantly better value and solve some real pain points? Once you've identified that, find a way to measure it, versus using some North Star metric to assess whether your investment is the right investment or not.

Randy Silver: 

So I can game that really easily by saying: great, if the only thing I need to be concerned with is customer value, then let's take away price entirely, let's just give it away to them. But that's not quite right either, I'm assuming. It's going to be a balance between what's best for the customer and what's best for the company. How do you balance that appropriately?

Maahir Shah: 

Absolutely. So before I answer that, I would also argue that you could pretty much game the North Star metric and move revenue or active users up by giving away the product for free. So no metric is really perfect. What I've tried to do as a product practice, and what I encourage other PMs to do, is establish some sort of goal map that ties user value to business value. So if we can deliver user value in ways X, Y and Z, whether it's improving the user experience, getting users to adopt a new tool, or getting users to stay with our products over time, that could over the longer term lead to more users and more money. So we try to establish a goal map that links the user value we're delivering through our products to the top-line metric for the organisation.

Randy Silver: 

Can you talk a tiny bit more about what a goal map is, and what it looks like?

Maahir Shah: 

Sure. A goal map is basically a connection from your investment portfolio to the organisation's needs to the organisation's top-line metric. A concrete example I can give you: if you think about a simple app, Yelp has historically focused on helping users discover restaurants and helping them make decisions on the best restaurants to visit. Now, if you want to invest in some other vertical, say, helping users find the best plumbers, that's a completely different business model from helping users find restaurants, although there are some similarities. And it's unlikely that something new like this is going to lead to a massive spike in one of your top-line metrics, like daily active users or revenue, in the near term. So what I would do as a product manager in this instance is establish: hey, if we did help users find new plumbers really quickly, how would we measure whether we're doing that well? There are two aspects to that. The first aspect is: are users adopting this new tool, and are they retaining on this new tool? And the second: are they actually finding value? Does it reduce the amount of time it takes to find a plumber? Does it reduce the number of steps? And if it does, then over time we can expect that more users will start using Yelp more often, not just to find restaurants but also to find plumbers and hair salons and barber shops and all of these other verticals out there. So I think that's where there's a nice tie-in from solving the problem of 'can we help users find new plumbers or hair salons' to 'users will be more active and more engaged on the platform over time'.
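A goal map like the one Maahir describes can be sketched as a small layered structure, where each layer of metrics acts as a leading indicator for the layer above it. This is a minimal illustrative sketch only: the metric names and the `ladder` helper are assumptions for the Yelp plumber example, not real Yelp or Meta metrics.

```python
# A minimal sketch of a goal map: team-level value metrics lead to
# mid-level business metrics, which lead to the org's North Star.
# All metric names here are illustrative assumptions.
goal_map = {
    "user_value": ["time_to_find_plumber", "plumber_bookings_completed"],
    "business_value": ["sessions_per_user"],
    "top_line": "daily_active_users",  # the org's North Star metric
}

def ladder(goal_map: dict) -> list:
    """Return the chain from team-level value metrics up to the North Star."""
    return (
        goal_map["user_value"]
        + goal_map["business_value"]
        + [goal_map["top_line"]]
    )

print(ladder(goal_map))
```

The point of writing it down, even this crudely, is that every investment a team makes should sit somewhere on such a chain, so leadership can see how a plumber-booking metric eventually ladders up to active users.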

Lily Smith: 

And you've also mentioned, in the article you wrote about this, other metrics that we should be paying attention to as product managers to make sure that we're building a good quality product. What are some examples of those?

Maahir Shah: 

Yeah. If I think about taking any new product to market, especially a zero-to-one product, I typically see that as stages. It's often naive to go straight from building the product to making it available to thousands, millions or billions of users overnight. So I see it in stages, where stage one is, at a really small scale, understanding whether your product has high quality and there are no broken experiences. When you're launching a net new product or an ecosystem of products, you might come across unknown edge cases that you and your team didn't account for. So I really recommend looking at quality and broken experiences before trying to expose the product at a large scale. After that, once we're confident that the quality and usability are high, we start to measure: hey, is this product actually solving the pain point it was intended to solve? In the Yelp example, the goal was to make it easy for users to find plumbers, so we'd find ways to measure that value: did we help users find and complete a transaction with a plumber? Did we help them book a plumber? Actually measuring whether it's delivering the core value prop we hypothesised, even for a small segment. And once we get signal on that core value prop, the last stage I'd look at is: are we delivering this value prop to a large enough number of people on an ongoing basis? That's where I typically recommend looking at metrics like adoption and retention, where you can see: hey, are a lot of users now using Yelp to find a plumber, and are they using it on an ongoing basis to find a plumber or hair salon or other business?
So in short, I recommend breaking it up into stages and progressively moving through them, to make sure we're delivering a high-quality experience that offers value to a large enough number of people.
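The staged approach above can be sketched as an ordered list of gates, each with an exit criterion that must pass before moving on. Everything here, the stage names, thresholds and metric names, is an illustrative assumption rather than anything Maahir's team actually uses:

```python
# Sketch of a staged launch: quality first, then core value, then scale.
# Thresholds and metric names are hypothetical placeholders.
STAGES = [
    ("quality", lambda m: m["error_rate"] < 0.10),        # no broken experiences
    ("value",   lambda m: m["bookings_per_user"] > 0.5),  # core value prop lands
    ("scale",   lambda m: m["week3_retention"] > 0.22),   # adoption & retention
]

def current_stage(metrics: dict) -> str:
    """Return the first stage whose exit criterion has not yet been met."""
    for name, passed in STAGES:
        if not passed(metrics):
            return name
    return "ready_to_scale"
```

For example, a product with a 20% error rate is still stuck in the "quality" stage no matter how good its retention looks, which captures Maahir's point that you shouldn't measure value on top of a broken experience.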

Lily Smith: 

And do you think it needs to come in that order? Just thinking about, you know, the minimum viable product: quite often you try to hack your way through the first parts of that, the quality and usability and the value (the actual booking of the plumber), and try to identify whether there's a market out there that wants your product in the first place.

Maahir Shah: 

I do recommend these stages. I think, depending on the product, if it's not a big zero-to-one product but an optimisation of an existing product, or something with a much smaller scope, you can fast-track the stages. I've seen situations where we go through these stages in a 14-day or 21-day timeframe, and I've seen situations where it takes six to nine months to go through them. But in principle we still try to go through these stages, because just from, you know, product sense, it doesn't make sense to deliver a broken product to millions of users.

Lily Smith: 

Yeah, although I suppose in some ways there's a method of being able to do that, by just putting a beta tag on it, delivering it to everyone and going: it's fine, it's beta, we'll polish it later if people actually use it.

Maahir Shah: 

Yeah, that's fair. I mean, you could put a beta tag on it, but ultimately that beta tag needs to come off, especially if you want your product to be used by a large volume of people. What I've seen, at least in my PM experience, is that quite often broken experiences tend to reduce the retention, adoption and participation in a product, which makes us believe, hey, the product isn't doing what it's supposed to, when it's actually just broken in the first place.

Randy Silver: 

You talked a little earlier about how not everything you do necessarily aligns to a top-line, or North Star, metric. And there are lots of different teams working on lots of different things all at the same time. It sounds almost like you're describing something akin to, you know, OKRs used well, or an opportunity solution tree, where you do have a top-line thing, but you have lots of pieces and lots of, not competing, complementary metrics that all make up pieces of it and all work together. Is that what you're talking about: finding and optimising lots of little pieces, but having a cohesive vision of how it all fits?

Maahir Shah: 

Absolutely. I think seeing how all of these tie together and ladder up long term to the org's OKRs and metrics is critical. I think it's also important just to build internal alignment within your organisation. Frankly, if you're doing something where there's no path for how it ladders into the OKRs of the overall organisation or company, it's going to be hard to justify a case for the investment. But at the same time, we shouldn't stop doing some things just because they're not going to immediately move top-line OKRs in a three or six month period. I generally think of this from an organisation leadership perspective: you have different product investments in your portfolio, all with different maturity life cycles. There are some that are going to mature and hit your top-line metrics in the next six to nine months, others where you need to focus on much earlier stages of the funnel, and some that are even two to five years out.

Lily Smith: 

Yeah, that makes sense. And so, just thinking about your ideal metrics scenario: what do you do now, or what would be your ideal scenario, for making sure that the product you're working on is focused? Do you always go through a North Star metric and a bit of goal mapping to identify those value metrics, and then those stages of quality and usability and then value? Or is there some other process as well that you go through?

Maahir Shah: 

Typically, we always start with our end users: what their needs and pain points are. Then we assess which of those needs and pain points we think will help us deliver significant value to a large volume of users and provide significant differentiation. Then we look at what metrics we'd use to measure that we're delivering value, and that's typically three forms of metrics: direct value metrics, which help us understand how much value we're delivering to users, and then adoption and retention metrics, which help us understand whether we're delivering this value to a large volume of users, and whether we're delivering it consistently. And then we have a goal map that ties into: if we do this well, how will that impact the organisation's OKRs and top-line metrics over whatever period of time? In some cases that period of time is three weeks, six weeks, eight weeks, and in other cases it's one year, two years, even three years.

Randy Silver: 

So given all those different horizons, what's your approach to testing all this, to making sure that it's actually working?

Maahir Shah: 

Yeah, so we try to make sure, based on the product we're launching, that we have a very staged testing framework. If it's a big zero-to-one product, that might start off with a really small alpha, where we're just getting usability feedback from 20 users and talking to them every day. And if it's a very simple product, that might just be testing it with 1% of our user base. So we try to make sure we have a testing framework that starts with getting signal on quality and usability, and we have very clear exit criteria for each phase. For this phase, the exit criteria might be that six out of 10 alpha users did not have any broken experiences, or that we logged less than 10% errors when we were testing at scale via an A/B test. Then we move on to the next phase, which might be a version of scale testing, where we look at our value metrics: what value we're delivering, what the adoption is, what the retention is, in whatever time horizon makes sense. Typically, you don't want to wait 60 or 90 days to understand retention, so you look at short-term retention in the range of two to three weeks. And once we've proven all of that out and we're ready to go big, then we start looking at how this is impacting the org's OKRs and top-line metrics, if at all.

Lily Smith: 

And do you also consider, because it sounds incredibly data-led, which is cool, but do you also consider any other types of data, like how people are feeling about it within the business, or the more qual data that you get from user research?

Maahir Shah: 

We absolutely do. Again, it depends on the product we're launching, and whether it's a big change or just a minor optimisation. If it's a big change, then we would definitely look at some qualitative user research to understand not just how people are using the product, but also why people are using the product the way they do, because quite often the data gives you the what, but not the why. And if it's a small optimisation, we try to be really specific about exactly what we're trying to solve. So if we're solving a problem for a certain user base, or certain sets of user bases, we look at how those different sets of users are interacting with our product. If a certain metric isn't that great, let's say retention, you can slice and dice it further, into power users and less active users, to understand what's working for whom and what's not, before we assess whether we've successfully hit the exit criteria for each stage of testing.

Randy Silver: 

So we can absolutely focus on some of these metrics. We can work to optimise them, and we can even do a really good job with it. But how do we know we've done the best possible job? How do we know that we haven't passed up a much bigger opportunity, that we're prioritising the right ideas for these things?

Maahir Shah: 

Yeah, good question. Generally, when we look at it, we try to assess: do we think this makes a significant difference for our users, our product and our business over a three to five year horizon? Will it help us really win a particular user segment over? Then we size that segment, and there's a lot of art that comes into this. Once we've sized it, we look at what competing solutions they use, and how much of a differentiation we think we can offer with our product offering. So there's a lot of art we use to assess whether we think this is the right investment for the next two, three, five years or so. And then we have a clear testing plan with exit criteria for each stage, to assess whether we're moving in the right direction or not. So, for instance, when we're launching a big change to a product, or introducing a big new zero-to-one product, one of the important phases we go through, before we start looking at how it's impacting top-line org metrics, is adoption and retention, where we're testing the product at scale to understand what adoption and retention look like. And we establish clear criteria, where we say, for instance, that if three-week retention is over 22%, we think it's a clear success; if three-week retention is somewhere between 13 and 22%, it's signs of success; and if retention is below 13%, it's a clear failure. That effectively helps us develop a decision framework: if it's a clear success, keep going, because we know this is moving in the right direction. If it's signs of success, we may need to shift by an inch or two here or there.
And if it's a clear failure, we really need to go back to the drawing board and rethink whether we made the right decisions and targeted the right user group. So we spend several weeks on some of these big changes assessing where we are in this decision tree, and using that to inform what we do next. That's one example of exit criteria for a stage of testing.
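The decision framework described here maps cleanly onto a small function. The 22% and 13% three-week retention thresholds are the ones quoted in the episode; the function name and the wording of the returned decisions are illustrative:

```python
# Retention decision framework using the thresholds quoted in the episode:
# over 22% three-week retention = clear success, 13-22% = signs of success,
# below 13% = clear failure. Function name and return strings are illustrative.
def retention_decision(week3_retention: float) -> str:
    """Map three-week retention (as a fraction, e.g. 0.15) to a decision."""
    if week3_retention > 0.22:
        return "clear success: keep going"
    if week3_retention >= 0.13:
        return "signs of success: iterate on targeting or experience"
    return "clear failure: back to the drawing board"
```

Codifying the thresholds up front, before the test runs, is what makes this an exit criterion rather than a post-hoc rationalisation: the team agrees in advance what each retention band will mean for the launch.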

Lily Smith: 

And does this come from some sort of modelling that you’ve done?

Maahir Shah: 

Honestly, this comes from a lot of trial and error, and from working on launching big zero-to-one products over the last eight to 10 years of my career. Especially when you're working in the zero-to-one space, it's very hard to accurately use data to tell you exactly what is going to happen. If it were easy, every single startup that was founded would be successful. So it's just that, out of a bunch of different methods I've tried in the past, this is the one that has consistently helped us make the best decisions for end users as well as for the business.

Lily Smith: 

Yeah. But in terms of how you decide what that exit criteria is, how you decide what's good, what's okay, and what's bad: is there a model that you've built to help you make those decisions? And if you're building that model, that's going to be based on a load of assumptions. So do you have any tips or advice for someone who's coming at this and thinking: how do I even decide whether it's good or bad?

Maahir Shah: 

There are a couple of things we've looked at. The first one is comparables. We're lucky that we live in a world where there are so many different types of software and hardware products that you can compare yourselves to as a product manager. They might not be direct competition, but they might be in adjacent spaces. So, for instance, if you're working on Thumbtack, which is an app that helps you find plumbers and other service providers, you have comparables in your industry, like Yelp and Google Maps and all of these other players out there, and quite often it's not that hard to get a sense of comparable data through Google searches online. If you're working in a large organisation, you might see other comparable product launches or A/B tests along the way. So finding comparables is definitely something we do. The second thing we try to do is put together a mental model of what we think our user will do, what we think their behaviour will be, because there's a certain hypothesis that we've built the product around. So if you're thinking about the example of using Yelp to find a plumber: someone's probably not going to look for a plumber every week or every day, but through the period that they're trying to find one, they're probably going to be very active. So in short, we use comparables, as well as some intuition around what we expect our user to do and what use case we're building the product or service for.

Randy Silver: 

So it sounds like you're in a healthy environment now, where the use of data is well understood, people agree on it, and you have these good models. But lots of us aren't in that situation, and I'm not sure that you've always been in such situations either. When it's been more of a challenge, what kinds of strategies, what methods, have you used to try to change the organisation, to influence it to be more data-informed?

Maahir Shah: 

Great question. I've definitely been through situations, especially in my early career, where I had a hard time justifying and articulating why we should be making certain investments, even if they don't map to immediate OKRs or top-line metrics. There are a couple of things I've used so far. The first one is making sure that everyone is really indexed on the end user, knowing that at the end of the day, while we're all there to also drive business goals, one of the big jobs we have is to deliver value for the end user. So grounding everyone back in what value for our end user actually is, is the first thing I do. The second thing is the goal map. Quite often, with organisation leadership, who are maybe a few thousand feet away from the details, it can be hard to understand the tactics behind some of these metrics. So that goal map we develop, of how a certain investment and the success metrics we're using right now can map to the big organisation OKR or top-line metric over an extended period of time, has typically been helpful. And the third thing is having those clearly defined exit criteria. Most importantly, that does two things. One, it sets everyone's expectation of when we will get to impacting those top-line OKRs. And two, it gives everyone quick check-ins: hey, are we moving in the right direction? Because if we're right up until this stage, it's likely that we're going to be right in the next stage, and the stage after that. So to summarise: keep everyone focused on the end user, make sure you have a goal map, and make sure there are clear exit criteria pointing towards that goal.

Randy Silver: 

Fantastic. Thank you very much, that's been really clear and really interesting. And I love taking a more critical look at the way we use metrics in the organisation. It's something we all know we need to do, and we all try to do in some ways, but we're not always so refined about it. It's been a lot of fun talking to you about it.

Maahir Shah: 

Yeah, absolutely. It was my pleasure.

Randy Silver: 

Hey, join us again next week. We're gonna have another great guest, and we will talk to you then.

Lily Smith: 

Our hosts are me, Lily Smith, and

Randy Silver: 

me, Randy Silver.

Lily Smith: 

Emily Tate is our producer. And Luke Smith is our editor.

Randy Silver: 

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg and plays bass in the band, for letting us use their music. Connect with your local product community via ProductTank, our regular free meetups in over 200 cities worldwide.

Lily Smith: 

If there's not one near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.

Randy Silver: 

ProductTank is a global community of meetups driven by and for product people. We offer expert talks, group discussions, and a safe environment for product people to come together and share learnings and tips.

 

