An introduction to impact mapping – Tim Herbig on The Product Experience | Mind the Product | 10 October 2021

An introduction to impact mapping – Tim Herbig on The Product Experience


Evolving your organisation from a feature factory to becoming outcome-focused is a challenge many of us have faced — but how do you actually do it? Impact Mapping is a great way to help take stakeholders and teams from focusing on the how to the what and why. We asked consultant, trainer and author Tim Herbig to give us the crash course on the topic, including how to get people to understand the difference between an Impact, an Output, and an Outcome.

Featured Links: Follow Tim on LinkedIn and Twitter | Tim’s Website | Read Tim’s ‘Using Impact Mapping to Navigate Product Discovery’ piece | ‘Idea Prioritization with ICE and the Confidence Meter’ by Itamar Gilad | Gojko Adzic’s book ‘Impact Mapping: Making a Big Impact with Software Products and Projects’

Support our sponsors

Give Sprig (formerly Userleap) a try for free by visiting Sprig.com to build better products.

Episode transcript

Randy Silver: 

Hey Lily, I hear you’re going to run a workshop tomorrow. I wonder if you can help me with a problem that I have.

Lily Smith: 

Ah, Randy, I’m not going to get the whole team to work on your problem. Especially not after what happened last time. I have no

Randy Silver: 

idea where you get these ideas from. I just wanted some advice on how to run an impact mapping session. I mean, I always get bogged down in trying to get everyone to understand the difference between an outcome and an output and, and, and an impact.

Lily Smith: 

Oh, okay, that’s a different story. I can most definitely help you with that. But even better, let’s ask our old pal Tim Herbig to do it. He has been training product people and teams in this as part of his work in building better discovery practices.

Randy Silver: 

Oh, that sounds perfect. So let’s see if I’ve got this right. Us booking Tim is the output. And the impact is that we all get to learn something, is that correct?

Lily Smith: 

Let’s find out. The product experience is brought to you by mind the product.

Randy Silver: 

Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love.

Lily Smith: 

Visit mindtheproduct.com to catch up on past episodes, and to discover an extensive library of great content and videos,

Randy Silver: 

browse for free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, and training opportunities.

Lily Smith: 

Mind the Product also offers free ProductTank meetups in more than 200 cities. And there’s probably one near you.

Randy Silver: 

Thank you so much for joining us on the podcast again.

Tim Herbig: 

Thanks for having me.

Randy Silver: 

For anyone who hasn’t listened to your first episode (and, to be fair, it was one of our very first episodes), can you just give us a quick intro? Who are you? How did you get into product?

Tim Herbig: 

Sure. So let’s start with the more difficult one: how did I get into product? Like most of us, I sort of stumbled into it. The company I was working for back in the day looked for a guinea pig to try out this product owner role and thought, hey, the working student might be a good fit for that, which tells you a lot about how serious they were about the approach. And naturally, that’s how it turned out; that’s how I found my way into product. Over a couple of different industries, like publishing, professional networking, and B2B A/B testing software for enterprises, I found my way through the roles of product manager and head of product. I also worked in a couple of smaller startups; amongst others, I tried to disrupt the dog-sitting industry with a super fancy mobile application. The fact that I’m still here tells you that it didn’t work out. I like to think that it was because I’m more of a cat person and not much of a dog person, but that’s maybe a story for another podcast. So that’s what I did over the last 11 years. And for almost three years now, I’ve worked independently as a product management coach and consultant. I have the privilege of working with many great product teams pretty much across the globe, helping them to solve their customers’ problems and contribute to their business goals. That’s what I do for most of the day. I also love sharing my knowledge while being a guest on shows like this, as well as through writing on my homepage and in my weekly newsletter, Product Thoughts.

Randy Silver: 

And we will link to all of that in the show notes, so don’t worry if you missed it. One of the reasons we wanted to talk to you today is that there’s something you’re a bit of an expert in that I’m really bad at, and I’d like to get better at. And I’ve seen you talk about it. It’s not German (that’s more your territory); it’s something called Impact Mapping, which I have not been able to really use successfully even though I’ve tried. I’ve seen you give workshops on it, and it made so much sense. So we brought you back on to give us a lesson. Let’s start at the beginning: what is an impact map?

Tim Herbig: 

So what is an impact map? That’s a great question. I guess it’s many different things to many different people. To me it is first and foremost a tool for sense-making for product teams, which I know is very vague. I think the origins of impact mapping go back to 2004, as I learned, but it was popularised by Gojko Adzic back in 2012, when he published a book on the topic. Personally, I got exposed to it in 2014 when I was working on a very, let’s say, ambitious product discovery, which required lots of connecting the dots, lots of insights, and lots of navigating the problem space. And that’s what it is to me these days. At its core, it’s a tool that helps product teams to navigate the problem space and connect individual activities and features to those larger business goals, using proven user problems as a proxy. It has evolved over time. Originally, it was a four-level framework; these days, I like to use it as a five-level one, because I found that it’s more helpful and more tangible for product teams to work with. So if you would boil it down, it’s a multi-level mapping technique that helps teams to connect individual activities and features to overarching business goals and strategic priorities.

Lily Smith: 

So do you typically tend to use this mainly in the discovery period, as you said?

Tim Herbig: 

So I think that, as with so many things in product, it highly depends. My preferred use case is using it with slightly more mature teams who have understood that navigating discovery is not a rigid process where just following a blueprint equals success. It can be a companion to document what you’ve learned, to frame why you would want to embark on a discovery, and to make deliberate choices about what kind of options you might be pursuing and how you validate these. But I’ve also seen great success for teams who might not be that fluent in product discovery, if you want to say so, who could use it as a one-off exercise to identify: okay, what are all the things we’re doing right now, and all the things we know? Trying to connect these, they realise: oh, there’s a big gap in our understanding of what kind of problems this user segment is facing; this feature doesn’t connect to any user problem; and we don’t even know what the overarching business goal is. So it exposes these blind spots and encourages you to hopefully take more educated next steps.

Lily Smith: 

So in the first instance, where you say a maybe more mature product function might use it as an ongoing tool as part of their product discovery work: are they looking at the impact map after product research interviews, and then taking the outcome of those interviews and mapping it as part of the impact mapping exercise? How does it work on a sort of ongoing basis? And are you building one very large artefact, or does it break down into smaller components?

Tim Herbig: 

So what I would recommend teams do is to start even before doing the research, because from my perspective, specifically the first two levels of the impact map, which are the impact level and the actor level, are quite helpful questions to have clarity about when it comes to structuring your research, right? Because the first level, the impact, is a lot about: what high-level company metric mirrors success for us, three to 12 months into the future? And this could be one of multiple metrics. So a company or department, depending on your size, might be focused more on monetisation, user growth, churn, or tech quality: those very lagging, high-level metrics. And you might use one of these as the main framing for a product team or a given discovery effort. So depending on what business goal is most important to you, you might want to think about: okay, who should I talk to? Whose problems are actually relevant in the context of that impact? So you wouldn’t just use the same segmentation over and over, saying, oh, new user, returning user, churned user, free user. You want to look at what kind of attributes are relevant for that impact, so to say. And then, once you have talked to the people, or done whatever kind of qualitative or quantitative research you had to do, you can distil the insights or the spotted patterns from those efforts into the third level, the outcome level of the map, and thereby articulate: okay, how would I have to change the behaviour of a given actor so I can achieve my overarching impact, or contribute to it?

Randy Silver: 

Let’s just make sure we have that all in order, so we know what we’re talking about. You said there were four levels, and then you’ve added a fifth. So can you just detail what those levels are?

Tim Herbig: 

So it starts at the top. The first level I call the impact, which is, as I said, this high-level company metric measuring success. These can typically be found either coming straight from your product strategy, if it’s a good one, or from your company-level OKR sets, at either the quarterly or the yearly level. Typically, these key results have this lagging nature and mirror those high-level company priorities. Another attribute of the impact is that it typically requires multiple teams and multiple features coming together in order for it to change; it doesn’t just change because of one feature. Below that you have the actor level, and I’m sure we can go a little deeper into what the actor means, but essentially it’s the level of outlining who has a problem that is relevant in the context of the goal, a problem that is preventing us from achieving this goal or contributing to it, and making sure that you list the segments that have these traits, these attributes, so to say. The third level I like to call the outcomes, borrowing from Josh Seiden here, seeing an outcome as a change in human behaviour that creates value or contributes to impact. It’s about synthesising those research insights, turning problems into necessary changes in behaviour. Basically, these first three levels are more focused on the problem side of things, as you can probably tell; it’s a lot about strategic framing, selecting the user segments, and distilling research insights. And then it gets easier for most product teams, because then it’s time to start talking about solutions and all those fun experiments.
So the fourth level is called outputs, meaning we now talk about the specific features, projects, epics, however you want to slice them, that you would want to pursue to drive one of the outcomes you have prioritised. So: okay, this is how the behaviour has to change; what feature has the highest chance of creating this change in behaviour? And the fifth level, which I added, is there to give teams even more structure, being aware that just because an output, a feature, is a potential idea doesn’t mean you have to build it right away. There are probably some more lightweight steps to increasing your confidence in a given idea. So I added a fifth, experiment level, where it’s about structuring the quantitative or qualitative experiments you could run to further validate whether an idea really drives the outcome you set out to achieve.
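The five levels Tim describes form a simple tree: a lagging business metric at the root, lightweight experiments at the leaves. As a rough illustration only (none of the names, metrics, or features below come from the episode; they are invented for the sketch), the structure could be modelled like this:

```python
# A minimal sketch of a five-level impact map as a tree.
# All concrete values here are hypothetical examples.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Experiment:            # level 5: lightweight validation step
    description: str


@dataclass
class Output:                # level 4: a feature or project idea
    feature: str
    experiments: List[Experiment] = field(default_factory=list)


@dataclass
class Outcome:               # level 3: a change in user behaviour
    behaviour_change: str
    outputs: List[Output] = field(default_factory=list)


@dataclass
class Actor:                 # level 2: a segment relevant to the goal
    segment: str
    outcomes: List[Outcome] = field(default_factory=list)


@dataclass
class Impact:                # level 1: a lagging company metric
    metric: str
    actors: List[Actor] = field(default_factory=list)


impact_map = Impact(
    metric="Increase trial-to-paid conversion by 5% within 6 months",
    actors=[Actor(
        segment="Trial users with more than 3 sessions in week one",
        outcomes=[Outcome(
            behaviour_change="Account managers share reports with clients faster",
            outputs=[Output(
                feature="One-click report sharing",
                experiments=[Experiment("Fake-door test on the report page")],
            )],
        )],
    )],
)

# Walking the tree shows how every feature traces back up to a business goal,
# which is the blind-spot check Tim describes: an output with no path to an
# outcome or impact is a red flag.
for actor in impact_map.actors:
    for outcome in actor.outcomes:
        for output in outcome.outputs:
            print(f"{output.feature} -> {outcome.behaviour_change} -> {impact_map.metric}")
```

The nesting makes the "connect activities to goals" idea mechanical: any `Output` you cannot attach to an `Outcome`, or any `Actor` you cannot attach to an `Impact`, exposes exactly the kind of gap the map is meant to surface.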

Randy Silver: 

Okay, that all makes sense. And it makes even more sense when you read the article or see the diagram and put it all together. And fortunately, we have that link in the show notes. But I have spent I don’t know how long in workshops with stakeholders trying to explain the difference between an outcome and an impact, and it’s just gone wrong. So help me with this; this is the entire reason we brought you on, Tim, no pressure. The difference between an impact and an outcome: how do you get people to understand that quickly?

Tim Herbig: 

So the reason it’s so hard to articulate quickly is that the impact is more about the company perspective, so to say, and the outcome is more about the user perspective. That’s probably the first, high-level differentiation that makes the most sense to people. The second attribute that differentiates these two elements is how leading or lagging they are. As I mentioned, the impact typically is fairly lagging, as in you can only measure it in hindsight; it requires multiple things to fall into place before it changes significantly, before you can detect a change. And that’s where the big problem comes in when teams try to map individual features and experiments to those large, lagging company metrics. For few companies is it the case that you can tie a feature release to company revenue, or maybe even share price, which would be even more lagging. Therefore, obviously, teams need more guidance: okay, what kind of metric tells me if the stuff that I’m doing actually moves the needle in the right direction? So that you don’t have to wait for those annual shareholder reports to figure out whether you made an impact, you need something more tangible, and that’s where the outcome comes in. Typically, those outcomes you’ve identified through research change at a faster pace. So once identified, these outcomes can also be used for your team-level key results, or team-level OKRs, to measure the success of your product as well. Because once you build a feature and you want to determine not just the result of an experiment but the ultimate success of a product, you want to look at the outcomes and how they have changed in particular.

Lily Smith: 

So with the outcomes, are those coming out of the research that you’re doing? And are you making assumptions or hypotheses from your research, in order to have some sort of confidence that they’re the outcomes you should be focusing on in order to achieve the impact? Do you see what I mean?

Tim Herbig: 

Actually, that makes total sense. I think it comes down to two layers. The first layer is making sure that you’re only doing research on those segments that actually matter in the context of the impact. So for example, let’s say you’re pursuing an impact focused on monetisation. You probably want to talk to people who share attributes like a given paid membership situation, a certain average revenue per customer, a given churn rate, or trial conversion rate. These segments then become more specific, and therefore your corridor of insights to consider has become significantly more narrow, because you don’t consider all people’s problems, so to say. And then, obviously, as the second layer, you want to make sure that you only list those outcomes that are based on actual patterns. Just because one or two people have articulated, oh, it’s hard to complete the checkout, doesn’t necessarily mean that this is an outcome you should be prioritising. So I would say have those two levels. First: is the actor relevant to the goal in terms of quantifiability (if that’s even a word): how large the portion is, how big the leverage is if you change something for that segment? And second: how prominent is it? We’ve probably all heard product teams say: no, but customers tell us they need that. And once you drill into that (okay, how many people have said that?), it’s like: yeah, I heard it from a salesperson over lunch, who heard it from a friend. So you want to avoid those anecdotal proxies at this point.

Lily Smith: 

Sprig, formerly Userleap, is an all-in-one product research platform that lets you ask bite-sized questions to your customers within your product or existing user journeys.

Randy Silver: 

Companies like Dropbox, Square, Opendoor, Loom, and Shift all use Sprig’s video interviews, concept tests, and micro surveys to understand their customers at the pace of modern software development, so they can build customer-centric products that deliver a sustainable competitive advantage.

Lily Smith: 

So if you’re part of an agile product team that wants to obtain rich insights from users at the pace you ship products, then give Sprig a try for free by visiting Sprig.com. Again, that’s Sprig: S-P-R-I-G dot com.

Randy Silver: 

I mean, if I can’t prioritise the things that salesperson told me over lunch, what am I going to prioritise? Okay, don’t answer that. So one of the levels you’ve got is actor. And I think there might be a subtle difference between a persona and an actor, and I’m not 100% sure what that is.

Tim Herbig: 

Neither am I, but let’s try to untangle it together. So here’s how I think about it. Obviously, there are different ways a persona might be set up; we have those super flashy, gut-feeling-based marketing personas, but let’s put these aside for a bit. What I typically see is that the persona is based on, let’s say, static attributes that are sort of stuck with this person, and they’re more humanised; we’re giving them a name. You might hear something like: Tim, 32, a bit too serious about coffee, listens to five podcasts per week. That’s the mindset of a persona. And that means a person is only considered if these broad attributes seem to apply, according to marketing or product. The actor categorisation, as I see it, is much more flexible and dynamic, because it’s not based on static attributes a person might have, but on those quantifiable behaviours that are relevant in the context of the goal. It’s really about the context: just because you like coffee doesn’t make you worth considering for our goal. So the goal might be engagement, or might be activation; then we want to look at things like sign-in rate, mobile usage, push notification click-through rate. We look at these quantifiable attributes, and from there we start to build our actor segments. This, again, can be very flexible, and I like to think of it as not as static as the typical persona categorisation, because it’s primarily based on the context of the overarching goal.

Randy Silver: 

So when we do impact mapping, is there a limit to how many actors we should concentrate on? Should we just pick one for each experiment, or is it worth looking at it from a couple of different perspectives?

Tim Herbig: 

That’s a really good point. So in general, I wouldn’t necessarily set a limit to the number of actors you should map out on this level, because it’s basically about documenting what you have learned. That doesn’t mean you have to solve the problems of all the actors. Probably the second biggest thing that, to me, separates this actor perspective from the persona is that, by mapping actors out horizontally on that level, you’re acknowledging the relationships and the differences these actors can have. So for example, originally in his book, Gojko Adzic also talked about onstage and offstage actors, which I think is a brilliant metaphor to explain it to people. Those onstage people are probably your primary users; these are the people who are really using your product, who you’re engaging with on a regular basis. Those offstage actors might not be as prominently visible to you, but they play a role in your quest to achieve the overarching goal. One prime example for me: once, when I was building a Jira integration, I was primarily talking to the users of the integration, but it turned out the IT administrators of my customers were very relevant actors, because I needed to consider their problems in order to get my solution adopted, which was one of the larger overarching goals. Those offstage actors are typically, as I said, not as top of mind for most product teams. But these are the things you pick up when talking to customers; they maybe mention on the side: oh yeah, and then I need to talk to so-and-so, or department XYZ. And you’re like: oh, that could be an interesting actor to consider, because it might stand in my way of achieving the overarching goal. And then you can map them out.
And then when revisiting your interview insights, you can still decide, okay, is this really a crucial role? Will their problems really prevent us from achieving the goal? Or do I just not pursue them anymore?

Lily Smith: 

Yeah, I think that’s really interesting as well. I think you call them adjacent actors in the blog where you mention some of this. Quite often, you know, when you’re trying to sell a product, or use an online product, there might be other people in the background that have an impact on whether it’s successful and delivering value. So yeah, identifying those and calling them out. And I suppose you can only really do that through user interviews?

Tim Herbig: 

Right, or maybe through a couple of open-ended questions in a survey. But for the good stuff, you probably really want the qualitative side of things. Then it will be up to you to dig into: okay, does this actor actually exist, and are they relevant for my discovery, or my mission?

Lily Smith: 

So, we’ve got to the point where we have a very nice impact map that has an impact, actors, outcomes, and some outputs. Or actually, how do we get to the outputs? I don’t think we’ve covered that.

Tim Herbig: 

Yeah. I mean, it’s quick to cover, I guess. Once you’re at the point of, okay, now we’ve proven outcomes, then again it’s about asking the question: which of these outcomes should we focus on first, because it poses the biggest lever for the overarching goal, due to being shared by the largest actor segment, or whatever? Basically, you then want to engage in more structured ideation. The beauty of using the impact map for this process is that you can bring people on board fairly quickly by giving them the right context: okay, here’s why we care about this challenge, about this outcome, for this actor, in the context of this larger business goal you all might have heard of from the last all-hands meeting. So now let’s frame that as a How Might We statement and start to generate some ideas we can continue working with. Then you really want to go through the motions of a structured ideation process, with a little bit of voting, until you end up with, I don’t know, five or six manageable outputs or feature ideas that come out of such a session. You then want to place them on the output level of the map, where you can connect them back to the outcome and make the case for why you would want to pursue this feature in the larger context. And then, obviously, you have to make a choice about which feature you want to work on first.

Lily Smith: 

So you’re prioritising at the outcome level, and then ideating against each potential outcome to generate different outputs. And at the point where you’re ideating, do you tend to really focus the team on: right, this is the one outcome, the behaviour change, that we are trying to achieve with our users, or these actors? Rather than mixing it up and having a few different options?

Tim Herbig: 

Right. So my experience is that it depends on the setup you’re having, but assuming you might want to use this ideation phase to also bring in, let’s say, more of those supporting discovery collaborators, like marketing, sales, or C-level, to engage with them, I found it best to really focus a given ideation session on only one challenge and one outcome, so to say, so people don’t have to switch context too much and talk about different challenges. So you really want to focus on: okay, as the starting point, we’re going to focus on this outcome. This, again, symbolises the idea of the impact map being not a static artefact, but something you can continue to use over time. Because at this point in time, you might focus on this outcome, and a couple of weeks in the future, when it’s about what kind of features or outputs you want to focus on next, you might want to revisit some of the other outcomes you mapped out, either from this same actor or from different actors, and continue utilising it.

Randy Silver: 

One question about that, though. It sounds like you’re getting dangerously close (and maybe this is actually the intention) to solutionising in these meetings. Is that a good thing, or is there a way of guarding against that?

Tim Herbig: 

I think at this point in time, it might be a good thing, because after all, solutions are what will drive those business goals and these outcomes. I think if you really manage to have the discipline of going through the motions of clarifying the strategic goal, what segments are relevant, and what the problems are in the form of outcomes, you’re all set to finally start talking about solutions. As long as it’s clear to everybody that just because an idea has been generated and mapped as an output doesn’t mean it ends up on your roadmap as a promised feature that drops in the next 18 months. It’s more like: this is the range of options we have to drive this outcome; let’s make an educated decision about which one we’re going to start with.

Randy Silver: 

So what you’re saying, just to be clear, is you’re not committing to building a feature. You’re committing to an experiment to validate whether it’s going to work?

Tim Herbig: 

Right, yeah, that’s a really good differentiation. It’s not a commitment; it’s a range of options. And it’s also about showcasing what kind of decision-making criteria you’re using to say: okay, I’ll stop, or start validating, or run experiments for feature A versus feature B.

Lily Smith: 

And how does this fit in with your product roadmap? Do you see it replacing a roadmap? Or does it work alongside it?

Tim Herbig: 

I would definitely see it working alongside it, as a complementary tool. It depends on what kind of roadmap format you like to use, but let’s assume a team is using a more problem- or theme-oriented roadmap. I think it could be nice, because from the impact you can probably derive the high-level theme, as I mentioned: is it monetisation, engagement, retention, churn, all these things? You can then be more explicit with any given roadmap item, saying: okay, this is the theme, and this is the outcome this roadmap item is about. And depending on how granular you are and what your time horizon is, maybe you can already list the features or experiments you want to run to drive that outcome within your roadmap item. But obviously, the further you look into the future, the less granular you want to be. So those more high-level roadmap items in the future could be the starting point for your next impact map. Maybe in the ‘next’ or ‘later’ column of the master roadmap, you have things like international expansion or M&A activities, and these could then be your starting point for another impact map, with the impact level trying to quantify that thing.

Lily Smith: 

So how often would you have the team revisit the impact map, once you’ve got it into your discovery or product workflow? Are you revisiting it, like, once a month? Or does it depend on the cadence of how you’re working?

Tim Herbig: 

Right, I think it’s what you just said: it depends on the cadence you’re working at. Let’s say you’re really in a very intense discovery phase, where you learn new things by the day or by the week. Then I think it makes sense to use the impact map as a lightweight way for stakeholder updates or discovery check-ins, if you run those kinds of meetings, or to bring the product team up to speed during a review or backlog refinement: using it as a lightweight way to frame the work in progress, essentially. If you look at it from a more strategic planning level, you might want to use it during some kind of rolling quarterly, or maybe yearly, perspectives. So from a team perspective, if you are really in the midst of gathering new data and insights by the day or the week, use it to document insights and make decisions. Other than that, revisit it as soon as you want to or have to shift focus, or want to or have to communicate higher-level strategic priorities that are coming up.

Randy Silver: 

So for anyone getting started with impact mapping, someone who’s inspired by this conversation (and I know you’re out there; how could you not be?), what mistakes do you see people make when they get started? What’s the one piece of advice that you’d give to people to say: here’s how to start using it in, well, I’d say an impactful way, but that’s…

Tim Herbig: 

That’s pretty good. Um, so the biggest challenge, let’s say, I see for teams doing it for the first time is differentiating the outcome and the output. Very often, when I give teams the challenge of, okay, articulate how you would have to change the behaviour, they list features, and naturally, that’s the output. So that’s the biggest thing teams should be very cautious about, because if you try to visualise the map as a top-to-bottom, five-level framework, you go from the problem space at the top to the solution space at the bottom, and the outcome and the output levels in particular are the hatch from the problem to the solution space. So it’s easy to get mixed up here. Be very cautious at this level and use good facilitation and guiding questions to make sure: hey, are we still talking about an outcome, or are we already talking about an output? My favourite way of highlighting that to teams is using the How Might We test, as I like to call it. As I mentioned earlier, you should be able to rephrase an outcome as a How Might We statement, essentially. If that’s the case, if that makes sense, chances are higher that you’re still talking about an outcome rather than a solution. An anti-example would be: if you just framed it as ‘how might we build a share button’, that leaves some room for discussion about the actual execution, but doesn’t really inspire ideation and coming up with new ideas. Whereas if you rephrase it as ‘how might we enable account managers to share data with their clients faster’, that opens up the room to actually come up with solutions you would consider.

Randy Silver: 

So saying, how might we release this in Q3, is a bad one.

Tim Herbig: 

You might have to be creative, though, to come up with a couple of tactics to achieve that.

Randy Silver: 

Fair, but still not.

Tim Herbig: 

The question is, like, whose behaviour is changing in that?

Lily Smith: 

it’s been such a pleasure talking to you this evening. Thank you so much for joining us.

Tim Herbig: 

Yeah, thanks for having me on.

Lily Smith: 

So if everyone who's listening shares this podcast with three friends, then the outcome will be that we'll have more listeners.

Randy Silver: 

And the impact will be that more people build better products.

Lily Smith: 

I was thinking more along the lines that we'll just be slightly more famous. But yeah, building better products is good too. So please share this episode with three people. Do it now. Hosts are me, Lily Smith, and me, Randy Silver. Emily Tate is our producer, and Luke Smith is our editor.

Randy Silver: 

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg and plays bass in the band, for letting us use their music. Connect with your local product community via ProductTank, our regular free meetups in over 200 cities worldwide.

Lily Smith: 

If there's not one near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.

Randy Silver: 

ProductTank is a global community of meetups driven by and for product people. We offer expert talks, group discussions, and a safe environment for product people to come together and share learnings and tips.

[buzzsprout episode='9334621' player='true'] Evolving your organisation from a feature factory to becoming outcome-focused is a challenge many of us have faced — but how do you actually do it? Impact Mapping is a great way to help take stakeholders and teams from focusing on the how to the what and why We asked consultant, trainer and author Tim Herbig to give us the crash course on the topic, including how to get people to understand the difference between an Impact, an Output, and an Outcome. Featured Links: Follow Tim on LinkedIn and Twitter | Tim's Website | Read Tim's 'Using Impact Mapping to Navigate Product Discovery' piece | 'Idea Prioritization with ICE and the Confidence Meter' by Itamar Gilad | Gojko Adzic's book 'Impact Mapping: Making a Big Impact with Software Products and Projects'

Support our sponsors

Give Sprig (formerly Userleap) a try for free by visiting Sprig.com to build better products.

Episode transcript

Randy Silver:  Hey Lily, I hear you're going to run a workshop tomorrow. I wonder if you can help me with a problem that I have Lily Smith:  a brandy, I'm not going to get the whole team to work on your problem. Especially not after what happened last time. I have no Randy Silver:  idea where you get these ideas from. I just wanted some advice on how to run an impact mapping session. I mean, I always get bogged down in trying to get everyone to understand the difference between an outcome and an output and, and, and an impact. Lily Smith:  Oh, okay, that's a different story. I can most definitely help you with that. But even better, let's ask our old pal Tim habit to do it. He has been training product people and teams in this as part of his work in building better discovery practices. Randy Silver:  Oh, that sounds perfect. So let's see if I've got this right. Us booking Tim is the output. And the impact is that we all get to learn something, is that correct? Lily Smith:  Let's find out. The product experience is brought to you by mind the product. Randy Silver:  Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love. Lily Smith:  Because it mind the product.com to catch up on past episodes, and to discover an extensive library of great content and videos, Randy Silver:  browse for free, or become a mind the product member to unlock premium articles, unseen videos, ama's roundtables, discounts to our conference conferences around the world training opportunities. Lily Smith:  Mind product also offers free product tank meetups in more than 200 cities. And there's probably one for you. Randy Silver:  Thank you so much for joining us on the podcast again. Tim Herbig:  Thanks for having me. Randy Silver:  For anyone who hasn't listened to your first episode. And you know, to be fair, was one of our very first episodes. Can you just give us a quick intro? Who are you? 
How did you get into product? Tim Herbig:  Sure. So let's start with the more difficult one. So how did I get into product like most of us was sort of stumbled into it, like the company I was working for back in the days, look for a guinea pig to try out this product owner role and was like, hey, the working student might be a good fit for that, which tells you a lot about how serious they were about that approach. And naturally, that's how it turned out. That's how I found my way into product. And you know, over a couple of different industries like publishing, professional networking, b2b, AV testing software for enterprises, I found my way through the roles of Product Manager heads of product, also worked in a couple of smaller startups, amongst others, I wanted to try to disrupt the dark setting industry with a super fancy mobile application. But and I'm still here tells you that it didn't work out. I like to think that it was because I'm more of a cat person and not much of the product. But that's maybe a story for another podcast. So that's what I did over the last 11 years now. And since almost three years, I work independent. As a product management coach and consultant, we have the privilege with working with many great product teams pretty much across the globe, and helping them to solve the customer's problems and contribute to their business goals. And that's what I do for most of the day. And also love sharing my knowledge while being a guest on shows like this was also through writing on my homepage, or co or from our weekly newsletter called product thoughts, which is shared with folks. Randy Silver:  And we will link to all of that in the show notes if you missed the link. So don't worry. One of the reasons we wanted to talk to you today is there's something that you're a bit of an expert in that I'm really bad at. And I get better at it. And I've seen you talk it's not German, this is your Telly. 
But it's it's something called impact mapping and which I have not been able to really use successfully even though I've tried, and I've seen you give workshops on it and it made so much sense. So we brought you back on to give us a lesson. So let's start the beginning. What is an impact map? Tim Herbig:  So what does it impact map? That's a great question. I guess it's it's many different things to many different people. To me it is first and foremost like a set a tool for sense making for product teams, which is I know very vague. I think the original of impact mapping go back to 2004 as I learned and but it has been like popularised by Goku a church back in 2012. I guess we also published a book on the topic. Personally, I got exposed to it in 2014 when I was working at a very, let's say, optimistic product discovery, which required lots of connecting the dots and lots of insights and lots of navigating the problem space. And that's what it is to me these days. And it's at its core. It's a tool that helps product teams to navigate the problem space and connect individual activities and features to those larger business. This goals using user problems prove user problems as like a proxy. So it has evolved over time. Originally, it was a, like a four level framework. These days, I like to use it as like a five level way, because I found that it's more helpful, more tangible for proteins to work with. But yet, so if you would boil it down, it's a multi level mapping technique that helps teams to connect individual activities and features to overarching business goals and strategic priorities. Lily Smith:  So do you typically tend to use this mainly in the discovery period? he said. 
Tim Herbig:  So I think that, as with so many things in product, it highly depends, it's definitely my preferred use case of using it as like a first you could say for slightly more mature teams who have understood that it's that the navigation of discovery is not necessarily a rigid process. And just by following the blueprints, you will equal success. It helps you to be like a companion to document what you've learned to frame why you would want to embark on a discovery, and to make deliberate choices about what kind of options you might be pursuing and how you're better than these. But I've also seen great success for teams who might not be that fluid and pro discovery, if you want to say so who could use it as like a one off exercise to pretty much identify, okay, what are all the things we're doing right now, and all the things we're knowing and trying to connect these and realising, oh, there's a big gap in our understanding of what kind of problems this user segment is facing. This feature doesn't connect to any user problem. And we don't even know what the overarching business goal is. So it exposes these blind spots and encourages you to hopefully take more educated next steps. Lily Smith:  So in the in the first instance, where you say they're kind of maybe more mature product function might use it as a, an ongoing tool as part of their product discovery work. They kind of looking at the impact map after product research interviews, and then taking that outcome of those interviews and mapping them as part of the impact mapping exercise. How, like, how does it work on a sort of ongoing basis? And are you building like, one very large artefact? Or does it break down into smaller components. 
Tim Herbig:  So what I would recommend teams doing is to start, even like before doing the research, because from my perspective, that specifically, the first two levels of the impact map, which is called the impact, though, and the the actor level, these are quite helpful questions to have clarity about when it comes to even approaching structuring your research, right? Because the first level, the impact is a lot about what's the what high level company metric, mirror success for us, three to 12 months into the future. And this could be one of multiple metrics, right? So a company or department, depending on your size, might be focused more on monetization, user growth, churn, tech quality, those very lagging, high level metrics. And you might hand or you might use one of these as the main framing for a product team or give them discovery effort. And so depending on what business goal is most important to you, you might want to think about, okay, who should I talk to, like whose problems actually relevant in the context of that impact? Right? So you wouldn't just use the same segmentation over and over over just saying, oh, new user, returning user shouldn't use a free user, but you want to look at, okay, what kind of attributes are relevant for that, for that impact? So to say, and then, once you have talked to the people, or did whatever kind of qualitative quantitative research you had to do, you can then distil the insights or the spotted patterns from those efforts into the the third level, the outcome level of the map and thereby articulate, okay, how would I have to change the behaviour of a given actor so I can achieve my overarching impact or can contribute to it? So Randy Silver:  let's, let's just make sure we have that all in order. So we know what we're talking about. So you said there were order? Well, you said there were four levels. And then you've added a fifth. So can you just detail what those levels are? Sure. 
Tim Herbig:  So it would start at the top. The first level I call electrical the impact, which is, as I said, like this high level company metric measuring success, those can typically be found either coming straight from your product strategy, if it's sort of like a good one, or these can come from your company level OPR sets either the quarterly or the yearly level. Typically, these key results are have this lagging nature and mirror those high level company priorities. Another attribute of the impact is that it requires typically multiple teams multiple features coming together in order for it to change, right, it doesn't just change because of one feature. Below that you have the actor level and I'm sure we can go a little deeper into what the actor means but essentially it's it's the level of outlining who has a problem that is relevant in the context of goal that is preventing us from achieving this goal or contributing to this goal. And making sure that you list the segments that have these traits, these attributes. So to say, the third level, I like to call the, the outcomes and borrowing from Josh Sidon here, seeing an outcome as a change in human behaviour that creates value or contributes to impact. It's about synthesising those research insights, turning problems into necessary changes in behaviour. And basically, these first three levels are more focused on the problem side of things, as you can probably tell, so it's a lot about this strategic framing, selecting the user segments distilling research insights, and then get it let's get to it easier for most product teams, because then it's it's time to start to talk about solutions and all those fun experiments. 
So the fourth level is the called outputs, meaning we now talk about the specific features, projects, epics, whatever you want, however, you want to slice these that you would want to pursue to drive one of the given outcomes you have prioritised, right, so Okay, this is how the behaviour has to change, what feature has the highest chance of creating this change in behaviour. And the fifth level, as added is to give teams even more structure for being aware that just because an output a feature is a potential idea doesn't mean you have to build it right away. But there are probably some more lightweight steps to increasing your confidence in a given idea. So added a fifth experiment level where it's about structuring quantitative or qualitative experiments, you could run to further validate if an idea really drives this outcome you set out to achieve. Randy Silver:  Okay, that all makes sense. And it makes even more sense when you read an article or seed the diagram and put it all together. And unfortunately, we have that link in the show notes. But I have spent, I don't know how long in workshops with stakeholders trying to explain the difference between an outcome and an impact. And it's just gone wrong. So help me with this is this is the entire reason we brought you on Tim, no pressure. The difference between an impact and outcome? How do you get people to understand that quickly. Tim Herbig:  So the reason it's so hard to try to articulate that more quickly is the impact is more about the company perspective. So to say, and the outcome is more about the user perspective, that's the probably the first the high level differentiation that makes most sense to people. The second, I would say, attributes that differentiate these two elements is their, how leading or lagging they are. 
So as I mentioned, the impact typically is fairly lagging as in, you can only measure it in hindsight, it requires multiple things to have more to fall into place until it changes significantly until you can detect a change. And that's what the big problem comes in when teams try to map individual features experiments to those large company lagging metrics, right? So for few companies, it's the case that you can tie in a feature release to company revenue, or maybe even share price, right, which would be more lagging. And therefore, obviously, teams need more more guiding like, Okay, what kind of metric tells me if the stuff that I'm doing actually moves the needle in the right direction, and so that you don't have to wait for those annual shareholder reports to figure out whether you made an impact, you need something more tangible, and that's where the outcome comes in, right? Because typically, those these outcomes you've identified for research, change at a larger pace. So once identified, these outcomes can also be used for your team level key results are team level okrs, to measure the success of your product as well, because then once you build a feature, and you want to determine not just the result of an experiment, but the ultimate success of a product, you want to look at the outcomes and how the outcomes have changed in particular. Lily Smith:  So So with the outcomes, are those coming out of the research that you're doing? And are you making assumptions or hypotheses from your research, in order to kind of have some sort of confidence that they're the outcomes that you should be focusing on in order to achieve the impact? Do you see what I mean? It's like, Tim Herbig:  actually, no, that makes total sense. I think, again, I think it comes down to two to two layers. I think the first layer is making sure that you're only doing research on those segments that actually matter in the context of the of the impact. 
So for example, let's say your impact is focused on monetization, which means you probably want to utilise you look at attributes like paid membership saturation, churn rates trial. So let's say you're pursuing an impact focus on monetization, you probably want to talk to people who share attributes like a given paid membership situation or a certain average revenue per customer or given churn rate or try conversion rates right so these segments then become more simple. And, and therefore already your your corridor of insights to consider has significantly more has become more narrow, right? Because you don't consider all people's problems, so to say. And then obviously as the second layer, you want to make sure that you only list those outcomes that are based on actual patterns, right? So just because one or two people have articulated, oh, it's hard to complete the checkout doesn't necessarily mean that this is an outcome you should be prioritising. So I would say having those two levels of saying, okay, is like, is the group of actors? Is the actor relevant to the goal in terms of quantify ability? That if that's the words, probably not like, how, but quantifying them? how large the portion is, how large how big the leverage is, if you change something for that segment? And second, how you how prominent is it? Right? We probably have heard many proteins, say you like, no, but customers tell us they need that. And once you'd like, drill into that, like, Okay, how many people have said that? It's like, Yeah, I heard that from a salesperson over lunch, that they heard it from a friend, right? So you want to avoid this those anecdotal proxies at this point. Lily Smith:  Sprague Sprake, formally, usually is an all in one product research platform that lets you ask bite sized questions to your customers within your product, or existing user journeys. 
Randy Silver:  Companies like Dropbox, square, open door loom, and shift all use springs, video interviews, concept tests, and micro surveys to understand their customers at the pace of modern software development. So they can build customer centric products that deliver a sustainable competitive advantage. Lily Smith:  So if you're part of an agile product team that wants to obtain rich insights from users at the pace you ship products, then give sprig a try for free by visiting sprig calm. Again, that's sprig. Sp r I g.com. Randy Silver:  I mean, if I can't prioritise the things, that salesperson told me over lunch, what am I going to prioritise him? Okay, let's don't answer that. So one of the levels you've got is actor. And I think there might be a subtle difference between a persona and an actor, and I'm not 100% sure what that is. So is Tim Herbig:  there neither mine, but let's try to let's try to untangle it together. So here's what I think about it. Obviously, there are different ways of how persona might be set up, right? We have those like super flashy gut feeling based marketing personas. But let's put these aside for a bit. But what do you typically what I typically see is that the persona is still based on let's say, static attributes that are like sort of stuck with this person, right. And they're more humanised, we're giving them a name. And you might hear something like Tim 32, a bit too serious about coffee listens to five podcasts per week, that doesn't change the mindset of a persona. And that means that I as persona I'm only considered if these broadly match attributes seem to appeal to marketing or product. The actor categorization as I see, it is much more flexible and dynamic, right? Because it's not, it's not based on ultimate attributes a person might be having, but on those quantifiable behaviours that are relevant in the context of the goal. 
So it's really just about the context and saying, like, just because you, like coffee doesn't make you something worth considering for our goal must be looked at, okay? So the goal might be, might be might be engagement, or might be activation. So we want to look at things like sign in rate, mobile usage, push notifications, click through rate, and we look at these quantifiable attributes. And from there, we start to build our extra segments. And this is, again, can very, very flexible. And I like to think of it as not aesthetic as the typical persona categorization, because it's primarily based on the context of the overarching goal. Randy Silver:  So is there a limit when we do impact mapping? How many actors we should concentrate on? Should we just pick one for each experiment? Or is he looking at from a couple of different perspectives? Tim Herbig:  That's a really good point. So in general, I wouldn't set necessarily a limit to the number of actors you should map out in theory on this level, right? Because it's about basically documenting what you have learned. That doesn't mean that you have to solve the problems of all the actors, because that's probably the second biggest trade off what, to me separates this actor killer actor perspective from the persona is that it's also about mapping them out, particularly by mapping them out horizontally on that level, acknowledging the relationships and the differences these actors can have. So for example, originally in this book, Gregoire church also talked about onstage and offstage actors Which I think is a brilliant metaphor to explain that to people because it's about those onstage people are probably your primary users, right? These are the people who are really using your product, who you're engaging with on a regular basis. Those offstage actors might not be as prominently visible to you, but they play a role in your quest to achieve the overarching goal. 
So one prime example for me was once when I was building a JIRA integration, I was primarily talking to the users of the integration later on, but turned out the IT administrators of my customers were very relevant actors, because I needed to consider their problems in order to get my solution to be adopted, which was one of the larger overarching goals. So those offsets actors are typically as I said, not as not as top of mind for most product teams. But these are the things you you pick up when talking to customers. And they mentioned maybe it mentioned on the side, like, Oh, yeah, and then I need to talk to so and so or department XYZ, and you're like, oh, that could be an interesting actor to consider, because it might stand in my way of achieving the overarching goal, and then you can map them out. And then when revisiting your interview insights, you can still decide, okay, is this really a crucial role? Will their problems really prevent us from achieving the goal? Or do I just not pursue them anymore? Lily Smith:  Yeah, I think that's really interesting as well, because those, I think you call them adjacent actors in the blog, where you mentioned some of this. And quite often, you know, when you're trying to sell a product, or you know, use a an online product, there might be other people in the background, that have an impact on whether it's successful, and it's delivering value and everything. So yeah, identifying those and calling them out. And I suppose you can only really do that through user interviews, Tim Herbig:  right? Or like maybe maybe a couple of open ended questions in the survey, right? But all the good, probably you really want the qualitative side of things, then it will be up to you to dig into like, okay, is this vector existence, I do exist and other relevant for my, for my discovery, or my mission? Lily Smith:  So, you, we've got to the point where we have a very nice impact map that has impact actors, outcomes, some outputs? 
Or actually, yeah, how do we get to the outputs? But were we? I don't think we've covered that. But yeah. Tim Herbig:  Yeah. I mean, it's, it's a, it's, it's quick to cover I guess. So it's like, once you're at this point, okay, now we've really, we've proven outcomes. And then again, it's still it's about asking this question. Okay, what are these outcomes? Should we focus on first because it poses the biggest lever for the overarching goal due to being lamed by the largest customer, extra segment or whatever? Basically, can you want to engage in more like structured ideation, right? To say, like, okay, we have this outcome. The the beauty of using the impact map for this process is that you can bring people on board fairly quickly, by giving them the right context, like, okay, here's why we care about this challenge about this outcome for this actor in the context of this larger business goal you all might have heard of, from the last all hands meeting. So now let's use let's frame that as a whole might we statement and start to generate some ideas we can continue working with, then you really want to go through the motion of like this, you know, structured ideation process a little bit of coding, until you end up with, I don't know, five, six manageable outputs, or feature ideas that come out of such a structured ideation session, and you then want to place them on the output level of the map. And then you can really connect them again to the outcome, you can make the case for why you would want to pursue this feature in the larger context. And then obviously, you have to have to make a choice, what kind of feature you want to work on first. Lily Smith:  So you're prioritising the outcome level, and then ideating against each potential outcome, to generate different outputs. 
And at that point where you're ideating do you tend to just like really focus the team on right this is the one outcome that behaviour change that we are trying to achieve with our users and or these actors rather than kind of mixing it up and having a few different options. Tim Herbig:  Right so my experience is that depending on the setup you're having but assuming you might want to use this ideation phase to also bring on let's say, like more like of those supporting discovery collaborators like marketing sales or sea level to engage with them. I found it best to really focus a give my ideation session only on one challenge and one outcome so to say, so people don't get like don't have to switch context too much right and talk about different challenges. So really want to focus on okay as the starting point we're going to focus on this outcome. And this is Again, which also symbolises the idea of the impact map being like not a static artefact, but something you can continue to use over time. Because at this point in time, you might focus on this outcome at a couple of weeks in the future, where it's about like, Okay, what kind of features or outputs which you want to focus on next, you might want to revisit some of the other outcomes you mapped out either from this same actor or different actors, and you want to continue utilising it. Randy Silver:  One question about that, though, this, it sounds like you're getting dangerously close. And maybe this is actually the intention to solution arising in these meetings. And is that is that a good thing? Or is there a way of guarding against that? Tim Herbig:  I think at this point in time, it might be a good thing, because after all, right, it's like solutions is what will drive those business goals and these outcomes. 
I think if you really managed to have the discipline of going through the motions of clarifying the strategic goal, what segments are relevant, what are the problems in the form of outcomes, I think you're all set to finally start talking about solutions. As long as it's clear to everybody that just because like an idea has been generated and has been mapped as an output doesn't mean that it ends up on your roadmap, and is this promised feature that drops in the next 18 months? Or more like, this is the range of options, we have to drive this outcome? And let's make an educated decision about with which we're going to start. Randy Silver:  So what's your solution? I'm saying just to be clear, is not committing to building a feature. It's so it's you're committing to an experiment to validate if that's going to work? Tim Herbig:  Right? Yeah, that's a that's a really good differentiation, I think to say like, yeah, it's not it's not a commitment. It's like a range of options. And showcasing also like, what kind of decision making criteria are you using to say, like, okay, I stop, or start validating, or running experiments for feature a versus feature B. Lily Smith:  And how does this fit in with your product roadmap? Do you see it replacing a roadmap? Or does it work alongside it? Tim Herbig:  I definitely would see it's working along the side of it as like a complimentary tool, because I think, depending on what kind of roadmap format you'd like to use, but let's assume teams using a more problem or theme oriented roadmap, I think it could be nice, because from again, from the impact, you can probably derive the high level theme, as I mentioned, is it monetization, engagement, retention, churn all these things, you can then be more explicit, with any given roadmap item of saying, Okay, this is the theme. And this is the outcome this roadmap item is about. 
And maybe, depending on how granular and what your time horizon is, you want to look at maybe you can already list the feature or experiments you want to run to drive that outcome within your roadmap item. But obviously, the further you look into the future, the less granular granular you want to be. So in this, then those more high level roadmap items in the future could be the starting point for your next impact map, right? So maybe, in I don't know, in the next or later column off of the master roadmap, you have things like international is like international expansion, or m&a activities. And these could then be your starting point for another impact map on the impact level of trying to quantify that thing. Lily Smith:  So How often would you have the team revisit this? The impact map once you've kind of got it into your discovery? Or like product workflow? You visiting it, like once a month? Or does it depend on the cadence of like how you're working? Tim Herbig:  Right? I think what you just said like that depends on the cadence you're working. Probably in problems that the most let's say, you're really in a very more like, very intense discovery phase, where you will learn new things by the day or by the week, I think it makes sense to use the impact map as like a lightweight way for stakeholder updates or discovery check ins if you run some kind of meetings or to bring the product team up to speed during a review of backlog refinement. So using that as a lightweight way to frame the work in progress, essentially. I think on a more if you would look at it from a more like tactical, sorry, strategic planning level, you might want to use laser during some kind of like rolling quarterly, or maybe yearly perspectives. I think from a team perspective, if you are really in the midst of gathering new data and insights by the day or the week, use it for that to document insights and make decisions. 
And other than that, revisit it as soon as you want to, or have to, shift focus, or communicate higher-level strategic priorities that are coming up.

Randy Silver:

So for anyone getting started with impact mapping, anyone who's inspired by this conversation, and I know you're out there: what mistakes do you see people make when they get started? What's the one piece of advice you'd give people on how to start using it in, well, I'd say an impactful way, but that's...

Tim Herbig:

That's pretty good. So the biggest challenge I see for teams doing this for the first time is differentiating the outcome and the output. Very often, when I give teams the challenge of articulating how they want to change behaviour, they list features, and naturally that's the output. So that's the biggest thing teams should be very cautious about. If you visualise the map as a top-to-bottom, five-level framework, you go from the problem space at the top to the solution space at the bottom, and the outcome and output levels in particular mark the transition from the problem to the solution space. So it's easy to get mixed up here. Be very cautious at this level, and use good facilitation and guiding questions to check: hey, are we still talking about an outcome, or are we already talking about an output? My favourite way of highlighting that to teams is what I like to call the 'How might we' test. As I mentioned earlier, you should be able to rephrase an outcome as a 'How might we' statement. If that makes sense, chances are higher that you're still talking about an outcome rather than a solution.
So an anti-example would be: if you frame it as 'How might we build a share button?', that leaves some room for discussion about the actual execution, but it doesn't really inspire ideation or coming up with new ideas. Whereas if you rephrase it as 'How might we enable account managers to share data with their clients faster?', that opens up the room to actually come up with solutions you would consider.

Randy Silver:

So saying 'How might we release this in Q3?' is a bad one.

Tim Herbig:

You might have to be creative, though, to come up with a couple of tactics to achieve that.

Randy Silver:

Fair, but still not...

Tim Herbig:

The question is: whose behaviour is changing there?

Lily Smith:

It's been such a pleasure talking to you this evening. Thank you so much for joining us.

Tim Herbig:

Yeah, thanks for having me on.

Lily Smith:

So if everyone who's listening shares this podcast with three friends, then the outcome will be that we'll have more listeners.

Randy Silver:

And the impact will be that more people build better products.

Lily Smith:

I was thinking more along the lines that we'll just be slightly more famous. But yeah, building better products is good too. So please share this episode with three people. Do it now. The Product Experience is hosted by me, Lily Smith, and me, Randy Silver. Emily Tate is our producer, and Luke Smith is our editor.

Randy Silver:

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg and plays bass in the band, for letting us use their music. Connect with your local product community via ProductTank, our regular free meetups in over 200 cities worldwide.

Lily Smith:

If there's not one near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.

Randy Silver:

ProductTank is a global community of meetups driven by and for product people.
We offer expert talks, group discussions, and a safe environment for product people to come together and share learnings and tips.