Rerun: Making Better Predictions – Margaret Heffernan on The Product Experience | Mind the Product | January 01 2022

Rerun: Making Better Predictions – Margaret Heffernan on The Product Experience


If the last couple of years has taught us anything, it’s that predicting the future is hard. In this podcast episode, sponsored by Amplitude, we turn to Margaret Heffernan to learn how to do it better. She’s a captivating speaker with more than 12 million views of her TED talks and a prolific writer who recently published Uncharted: How To Map The Future – if she can’t help, there’s no hope!

Listen to more episodes…


Featured Links: Follow Margaret on LinkedIn and Twitter | Margaret’s Website | Margaret’s page at TED | Margaret’s latest book Uncharted: How to Map the Future


Episode transcript

Randy Silver: 

Happy Birthday Lily.

Lily Smith: 

Thanks, Randy. Um, but it’s not my birthday until later in the month, the 17th. Actually, you know, if anyone wants to send me birthday wishes or anything,

Randy Silver: 

I’m sensing something like a hint here, or something I should be doing something about. But what I actually meant, Lily, was happy podcast birthday. We’re a healthy three years old, if you can believe it.

Lily Smith: 

oh, and what an amazing three years it’s been, I couldn’t have predicted that the podcast would grow and develop in the beautiful way that it has.

Randy Silver: 

And it wouldn’t have been possible without the fantastic team at mind the product, our great editor, Luke, thank you, Luke, and our listeners,

Lily Smith: 

Not to mention our lovely guests, of course. And speaking of not being able to predict things, while you’re all making your plans for 2022, we thought this would be a great time to revisit our chat with Margaret Heffernan, acclaimed author and businesswoman.

Randy Silver: 

You know, this is the one episode that my wife has actually listened to, Lily, and she thinks it’s amazing. And that’s because Margaret explains why we can’t predict the future, and what we can and should do instead. Let’s get right into it.

Lily Smith: 

The product experience is brought to you by mind the product.

Randy Silver: 

Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love,

Lily Smith: 

Visit mindtheproduct.com to catch up on past episodes and to discover an extensive library of great content and videos.

Randy Silver: 

Browse for free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, and training opportunities.

Lily Smith: 

Mind the Product also offers free ProductTank meetups in more than 200 cities, and there’s probably one near you. Hi everyone, welcome to the product experience podcast. Randy, do you know that our guest this week has more than 12 million views of her TED Talks?

Randy Silver: 

Wait a moment, Lily. That means that she’s even more popular than we are. Who could it possibly be?

Lily Smith: 

Well, this week we have the very great pleasure of welcoming Margaret Heffernan to join us and talk about forecasting, prediction, and how we shape our work through our assumptions of what’s going to happen. Margaret’s book signing was actually the last event I went to before lockdown kicked in.

Randy Silver: 

Margaret’s an amazing speaker, a fantastic author, and a business leader with a huge amount of experience, and we really enjoyed this interview. Although she says we can’t predict the future, her new book is so timely, because just as it was launched, the global pandemic hit, and I’m pretty sure that wasn’t on anyone’s roadmaps. I’m not

Lily Smith: 

sure if she predicted that, Randy, but I’m comfortable predicting that you’ll love our chat with her. So let’s not waste any more time and get straight to it. The product experience is brought to you by Mind the Product.

Randy Silver: 

Every week, we talk to the best product people from around the globe about how we can improve our practice and build products that people love.

Lily Smith: 

Visit mindtheproduct.com to catch up on past episodes and to discover an extensive library of great content and videos. Browse for

Randy Silver: 

free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, training opportunities, and more.

Lily Smith: 

Mind the Product also offers free ProductTank meetups in more than 200 cities, and there’s probably one near you. Hi Margaret, thank you so much for joining us on the product experience podcast. It’s really lovely to have you here.

Margaret Heffernan: 

Thank you so much. It’s really fun to be talking to you.

Lily Smith: 

So for those that haven’t come across you before in your various different guises of author and TED speaker, do you want to give us a really quick intro into your background and what you’ve done?

Margaret Heffernan: 

Sure. I started my professional life as a producer for BBC Radio, and then television, and then I ran the trade association for independent film and TV producers. Then I moved to the US, where I started and ran three different tech companies. And then I moved back to England, and since then I have written and published six books and three plays, and I mentor a number of entrepreneurs and also senior and chief executives in large corporations. I am a professor of practice at the University of Bath, and I occasionally write for the Financial Times.

Lily Smith: 

and have four incredibly successful TED Talks, with an amazing number of views as well.

Margaret Heffernan: 

It’s, at last count, about 12 million.

Lily Smith: 

Yeah, incredible. So I first came across you when I read Beyond Measure, and what I love about your writing is the insights that you have into the businesses that you speak to and that you do your research with. And the one that we’re going to cover today is your latest book, where you talk about forecasting and predicting the future. So at the moment we’ve got more data than ever before, and incredible computing power. So tell us, why can’t we predict the future still?

Margaret Heffernan: 

Yes, it’s interesting, isn’t it? I think we had this fantasy, or it was pretty blatantly sold to us, that if we had enough data, we could predict everything. And strangely enough, it doesn’t seem to be coming true. There are several reasons. One is that if we really wanted the perfect model, it would have to be as big as the world, and that’s obviously never going to be the case. So what happens is that usable models, you know, smaller than the world, have a bunch of systemic problems. One is that, of course, in order to be useful, they have to leave things out. And what they leave out is based on assumptions and value judgments that we make, which may be right and may be wrong. And they’re certainly going to be informed by our biases, by our opinions, by our life experience, and often by the past. And since the past doesn’t repeat itself, that’s also a problem. So the problem with models is that they are incomplete, almost by definition, that they are biased, and that they often have agendas; in other words, we’re using them to sell ideas that we already have. And as such, while they definitely have some utility, what they don’t have is rock-solid credibility. There’s another difficulty with models. One of the most interesting people I talked to was the very impressive Richard Hatchett, who runs the Coalition for Epidemic Preparedness Innovations. And he said, you know, one of the problems with models is the numbers look so concrete, you start to mistake the model for reality. And you start to think that they give you an absolute truth, and they don’t; they’re at their most useful for provoking discussion. But what this really means in the end is that however much data you pour in, the model is going to give you a slice of future possibilities. It is not going to give you, it cannot give you, a rock-solid 100% probability forecast.

Randy Silver: 

You’ve talked about the different kinds of data as well, the difference between complex data and complicated data. Can you go further into that? What is the difference?

Margaret Heffernan: 

So this is really interesting, and it’s something that I think has got a lot of traction, because it sort of changes the way you see everything. We live in a world that is both complicated and complex. Complicated things tend to be linear, they’re very cause-and-effect, and they tend to be very controlled environments; I often think of it as a bit like an assembly line, for example. And you can control those things because you have control over the inputs and pretty much all the influences on the inputs, and so you can predict the output. These are systems that are very, very well honed for efficiency. Complex environments are quite different, in that they have lots and lots and lots of factors operating on them, some of which you might be controlling, some of which you can’t. And so what that means is that while there may be patterns within these complex systems, they don’t repeat themselves predictably. It also means that very small things can have a gigantic impact. For example, a very small thing like a virus can bring the world’s economy to its knees. The other thing about them is that because they’re so dynamic, often expertise can’t keep up, because the system is changing too fast. And if you try to force these systems to become efficient, (a) it doesn’t work, and (b) what happens is you eliminate all your margin for adaptation and response. So when you apply efficiency to a complex system, you actually make it more dangerous. So it’s really important in companies or projects to understand which kind of system you’re in right now. Because one kind of system you will want to manage to be efficient, and the other kind you will want to manage to be robust; in other words, where there is a margin for error and surprise.

Randy Silver: 

I love that, the difference between efficiency and robustness. That’s fantastic.

Margaret Heffernan: 

So a great example is flying; you remember the days when we used to fly. You go to the airport, and there are lots of systems around you that are complicated. Checking your bag is complicated, but it’s pretty much the same every single time, there’s a lot of control over it, and it’s very, very susceptible to great technology to make it more efficient. Pretty much the same thing with getting the food on board, right? But once you get into the air, there are all sorts of forces acting on the plane, some of which you have some control over and some of which you don’t, and which are inherently unpredictable. You don’t know if there’s going to be a bird strike, for example; you don’t know if there might be a bug in one of the operating systems, or in one of the parts of the plane. And the consequence of that is that these planes typically have four engines even though they don’t need four engines. And the engines run on different operating systems, which is really expensive, because it means you have to maintain these different engines and the different operating systems, but they’re robust, because they’re designed so that if one of them fails, they can fail safely. Because the impact of a tiny mistake could be a gigantic tragedy. So you would never want to get on a plane designed for efficiency. And in fact, I’d argue that that’s one of the problems that Boeing encountered, which is that it started to run everything for efficiency.

Lily Smith: 

So when we’re developing products, I guess we can look at how predictable our product is. If it’s just a simple completed form to send off to order something, that feels like it could benefit from some efficiency. But if it has a lot more complexity to it, then we need to ensure that it’s more robust, I guess.

Margaret Heffernan: 

So for example, the other thing you’d want to think about is, if this were to go wrong, what’s the impact? If I complete a form incorrectly, it’s unlikely to be a matter of life and death, probably, right? But if I am ordering a prescription online, and I don’t understand the different permutations of my prescription, the consequences of that could be really terrible. So you want to think about that, especially because user interfaces and user behaviour are intrinsically complex; there is no one user. And even though you can design all sorts of test cases for multiple user profiles, you and I both know that if you design a user interface, the first person who uses it is going to use it in a way you never ever, ever dreamt up, right? So you want to make it reasonably robust, so the person who likes doing it backwards can do it, or if they can’t do it, the cost to the business isn’t gigantic. Right? Yeah, that’s what you have to think about. If you think of a really simple product, like a ladder, a ladder probably doesn’t need all the safety devices, you know, like the kind of catch to keep it exactly right, and all the rubber feet and handles and all that kind of stuff, but it’s there to make it safe. But it’s also not, you know, 70 feet high, because that would never be safe enough as the product to solve ‘how do I reach something 75 feet up?’

Lily Smith: 

So if we can’t predict or forecast what’s going to happen with our complex systems, and there was a phrase in your book that I really loved, it was ‘in a context saturated with ambiguity, resisting the gravitational pull of certainty remains paramount’. And it’s like, yes, this is so true. So tell me more: how should we behave when we’re faced with such a large amount of uncertainty, when we don’t know how people are going to respond when we change things in our products and release new features, or talk about things in a different way, in terms of our language?

Margaret Heffernan: 

So a lot of the things that you do already are really designed to address this. You’re doing a lot of user testing, you know, to see all the crazy things people think of doing with your product that you never imagined. You’ll probably have beta testers and people like that to test your product, and you’ll try to think of all the possible permutations. But then you may also get a lot of feedback once you release the product, and what you accumulate is all sorts of feedback that shows you where the problems are. And then you’ll triage those mostly by impact, right? So the problems that have a big impact get addressed first. But the mistake I would expect you and your colleagues never to make, although absolutely brand-new tech entrepreneurs do make this mistake, is to think that you’re going to get it right first time. And you’re just not; stuff is gonna happen. I mean, I remember this a lot, because in the early days of the web, when I was running tech companies, you had servers, which were a relatively new technology and quite young and unstable. And on those were operating systems, which were constantly changing. And on top of that, you would have browsers, which were constantly changing and didn’t necessarily interoperate with all kinds of websites very well. And then you had all sorts of new, completely funky, often insane website designs that were quite unstable, running on laptops that were all wildly incompatible. So this was a fantastically unstable environment in which to develop software. And you know what it left me with? A complete understanding that you’re never going to get it all right. Just never, ever, ever, ever. So you have to think hard about: what are the things I must get right, because if I don’t, people die? What are the things that I must get right, because otherwise the product doesn’t function and we go bust?
You know, and after that, everything else is kind of a nice-to-have. But what really fascinated me was that there was a belief, and I think it’s still around, that if we just keep at it hard enough, and we fix all the bugs and keep fixing all the bugs, we will get it perfect. I think it’s madness. It’s madness. And that fantasy is why people become so gullible when you start selling them nonsense, like driverless cars. I mean, if a driverless car had the same stability that my laptop does, which is a relatively mature technology, I’d die a couple of times a week. And then they say driverless cars are going to be so great because they’ll be able to go in convoy, they’ll be able to be closer together, because they’ll be so finely tuned, and they all run off the same system. And you think, well, first of all, this is mad. And secondly, you think, hmm, if I were a hacker, oh my God, this would be my wet dream. I don’t just get to crash one truck, I could crash 50 trucks, because all I’d have to do is

Randy Silver: 

just think of how efficient they are. I know.

Margaret Heffernan: 

I mean, this notion of perfectibility, I think, you know, it’s just propaganda. It’s a way of selling stuff: this is all going to be so wonderful, trust us, we only have your best interests at heart, after the share price. So I think we have a big kind of mental adjustment to make in life and in business, which is that there is no product that has ever been 100% safe, even the simplest thing. And so we have to ask different questions about which are the things that must be got right, and which are the things where we can actually afford some margin.

Lily Smith: 

I think that thing of, you know, driverless cars are going to take over the world in ten years’ time, 70% of the population will be in, I don’t know, whatever it was. We tend to get caught up in these technology trends, and also societal trends, of ‘this is going to happen, so therefore go and build products that take advantage of this’. What, you know, what should we be doing at this point? Do we just not believe any of it, and then go purely based on the evidence that we have, very much with our product mindset, or should we look at it in a different light?

Margaret Heffernan: 

Well, I think, you know, the first thing to ask is: these great predictions, where are they coming from, and what agenda do the people issuing them have? Because most of them have agendas, and most of them are propaganda or salesmanship. So all the very overheated rhetoric around driverless cars is partly driven by people who are trying to corner the market and scare off competitors, by people in the market trying to scare off investors, and by people who don’t believe in public transport, don’t believe in public infrastructure, and want to persuade governments that they don’t have to provide any of those things. So you have to, you know, put your brain in gear and think about where this stuff is coming from, and why you should believe these people. You also have to use some common sense, so that when they start saying the blind will drive, and then they also say the driver is the last point of intervention, you have to notice that those two statements don’t sit comfortably together. But also, if we only ever did the possible, we would never come up with great inventions. So you can also think about: okay, what about this concept might be viable or possible? What do we gain by trying it? If we’re working on a new product or a new form of engineering or whatever, can we learn a lot along the way? So for example, if you look at CERN, CERN is doing impossible things, you know, twelve impossible things before breakfast. They’re looking for neutrinos, which everybody thinks should exist. But do they exist? And how do they behave? We don’t know. So billions are invested in searching for neutrinos. And it may turn out that, like bosons, they can be found. They may also turn out, like string theory, to be unprovable. And you don’t know when you start. But at CERN, what they believe is that the pursuit of knowledge for its own sake has value.
And you have to trust that along the way you discover other things that may have more immediate value. But what CERN doesn’t do is promise what it’s going to find, because the whole point of discovery and exploration is that you don’t know.

Lily Smith: 

Do you know your product usage interval?

Randy Silver: 

Do you know what percentage of your active users are new, current, or resurrected?

Lily Smith: 

If not, it’s time to find out.

Randy Silver: 

For years, product teams have used Amplitude’s Mastering Retention playbook to understand their user base.

Lily Smith: 

And now Amplitude has updated the playbook for 2020 and released it for free on their website. Visit amplitude.com forward slash MTP to get your copy today.

Randy Silver: 

So Lily and I both come from the frame of reference of working in product management, as does much of our audience. And part of the job description that’s never stated in the role requirements doc is that we’re supposed to be psychic: we’re supposed to know exactly what we’re going to deliver and when, and we’re also expected to know where the market’s going to go and plan for it perfectly. So this is a real treat to talk to you. And another thing you’ve said is that to think better requires more options, not fewer. But that feels a little bit counterintuitive. So how do we make better decisions? And how do we work better with our salespeople and customers and stakeholders, to either give them the illusion of certainty or ensure that we’re making better predictions?

Margaret Heffernan: 

Yeah, fantastic question. I mean, I was laughing about this the other day, because apparently the government’s track and trace app isn’t going to ship on time. You know, software never ships on time, right? It’s famous for not shipping on time. I remember when I first moved into software, having come from broadcasting. The Nine O’Clock News is on at nine o’clock; you can’t put up a piece of paper that says, oh, really sorry, we’ll be with you in a few minutes. So to me, the notion that you wouldn’t meet a deadline was incomprehensible. And then I realised, oh, but the thing about software is you’re doing something that’s never been done before, so there’s uncertainty. So you can have a deadline, and you have to have a deadline, otherwise you never make the thing at all. But you have to acknowledge that that is a hoped-for date, and everybody’s gonna bust a gut to get there. But then there are going to be a couple of trade-offs you’re gonna have to make: is it budget, is it features, or is it time, really, right? So if you want it on time, it might not be on budget. If you want it on time, we might not be able to do all the features that we would like to do. So you have to be really transparent, I’d say, about the trade-offs. And I learned this the hard way, right, because coming from my broadcasting background, I would say we’re going to ship on June the first, and in my world, that means I’m going to ship on June the first. And then June the first would come and we wouldn’t ship the product, and it wouldn’t ship on the second either. And I had a fantastic chief technology officer who said, look, here’s the deal, Margaret: we can cut all these features and ship it by June the seventh. Or we can carry on as we’re doing, and we can choose to ship it maybe somewhere around the end of the month.
So that’s a choice, it’s a trade-off. Or we can throw a lot more people at it, which might speed it up, or it could very easily slow it down. So, your call. And I think I had a magical belief that if I just hammered the deadline loudly enough, it would magically happen. And what I learned from my fantastic CTO and his fantastic team was that that was totally irresponsible and naive, and that actually we had to have grown-up conversations about what matters most here: is it time, is it budget, is it features? And we have to have a really good conversation with our customers, which is: what matters more to you? If this is so mission-critical that you’ve got to have the basic thing on June the first, we can do that. It’s not an efficient way to build the product, but if it’s mission-critical to you, we can do it the inefficient way, in order that you can get up and running. But what a lot of people do, and I’ve seen this up close and personal, is they try to fudge it. You also get a little bit of internecine warfare, which is that the marketing people want one thing, and the engineering team wants another, and the QA team wants another, and they all lobby. And so nobody’s terribly straight with each other about how bad it could be. So you really need to build teams where people feel really aligned to the whole project, they understand the customer needs, and they’re really honest with each other. Because if you can’t have those conversations with complete honesty, you just get into such bad blood and such bad feeling that you can poison your whole culture.

Lily Smith: 

And I just want to go back to something you mentioned earlier. As product people, we kind of do user testing, and we’re used to developing hypotheses and then trying to validate our assumptions, or invalidate them. And there was an interesting point in the book, with a story about Daryl Plummer from Gartner. He said that with his forecasting, 60% accuracy is optimal. And I thought this was really interesting, because we don’t ever really think about whether we should always be trying to prove ourselves right, I think, as product people. And I thought that’s quite interesting: if you were aiming for a kind of 60% accuracy, you know, if I could be right 60% of the time, then maybe that would embolden more risk-taking in your experimentation. So do you think this applies, or have I completely missed the point?

Margaret Heffernan: 

I think it depends a lot on the nature of the product and the nature of the company. If you’re working with a company that really prioritises innovation above everything else, then it should anticipate a much bigger margin for error, because the more innovative a product, the more unknowns it’ll have. So that means that your budget and schedule, for example, are going to be best guesses and no better; if you’re building something that’s never been built before, how could you possibly know how long it’s going to take, or what it’s going to cost? So that requires being very, very frank with the customer or the client, and saying: since you’ve never done this before, and we’ve never done this before, we’re going to do our very, very best, but it is intrinsically unknowable. So we have to be quite honest about that. And if the innovation aspect is the thing that matters most to you, we need a bigger margin than if you just want to get something that’s sellable. So, you know, I think in many of these businesses, where they come unstuck is in not having these, what I think of as, come-to-Jesus moments early; they often come too late, after all sorts of giddy promises have been made. If you’re white-labelling something that already works, that’s a completely different ballgame. It’s a relatively simple thing, and your schedules and so on are very likely to be right. I mean, the other thing, of course, is to have gates, where you re-evaluate: is this ever going to work? And if so, what’s it going to take? But the more unknowns there are, the bigger the margin for surprise you have to build in to the budget. And the more creative and innovative a product is, the more unknowns it will have in it, you know, just in terms of suppliers.
And can you actually design a part, and does the part work with the materials that you can afford, or do you need a more expensive metal? All sorts of things like that. But what you can’t do, even though people do do it, is what I think of as gaming the game: pretending to an expertise that isn’t available anywhere in the world, and pretending to a level of control which, given the forces of nature, you can’t have.

Randy Silver: 

You were just talking there about come-to-Jesus moments, and you talked earlier about one of your own, in dealing with your CTO and understanding how quickly you could ship something. You also talked at the beginning about how you work with a number of business leaders as a mentor. One of the common complaints from product managers is that they work with business leaders who haven’t had that same come-to-Jesus moment as you did, who don’t understand the trade-offs. So now that you’re on the other side, and you’re mentoring people and trying to get them to the same level of understanding as you, what would you suggest is the right tactic for people who are in that situation and trying to get that point across?

Margaret Heffernan: 

I think that actually product managers have a part of their role which is not sufficiently appreciated or celebrated, which is that the great product managers I’ve been lucky enough to have work with me were actually fantastic teachers, and they taught me my business, really. You know, I thought I knew my business, and I knew quite a lot about it, but they taught me all this stuff about the trade-offs between time, budget, and features. They taught me about how much clients can change their minds, and about the degree to which clients often have demands that transcend reason. And once I understood that, I could think about how to deal with it. But I think, you know, one reason I so like working with Lily, and with product managers, is because I think that at their best they are cultural translators. They are translating from one expertise group to another, from one interest group to another, and the very best of them see through all those different perspectives and try to bring them into some kind of alignment. But I think the very best of them are great teachers.

Lily Smith: 

And another area that you cover is this idea of superforecasters, and how, at best, some forecasters are good at predicting 100 days into the future, and maybe the superforecasters can predict 400 days into the future. Or not even predict 400 days into the future, but give a fair enough idea of the future. So as product managers, if we’re asked for a 12-month roadmap or a three-year plan for the product, how should we go about doing that, if some of the best people at doing this in the world can only go 400 days? Should we just say, no, I’m not going to do that, which I’ve done before? Or should we get something really rough together and go, it could look something like this?

Margaret Heffernan: 

I think there are two things that you need to do. First of all, you have to acknowledge that all of these plans are provisional. And it's very helpful to try to unpack with clients what the working assumptions are on which the plans are based, because once the plans are turned into numbers, the assumptions disappear, and it's really important to keep them front of mind. You know, the working assumption for most plans in 2020 did not include "we think there won't be a pandemic". But they did include an assumption, which was: we assume that there will not be any huge interruptions to business as usual, which might have been terrorism, or ash clouds, or extreme weather events. None of these things are that hard to imagine, actually. So the first thing you have to do is think: all of those contingencies remain possible. And it's not unhelpful to say the plan is based on assuming that. And from a contractual position, sometimes you've got to say that. I mean, I know companies who are approaching the end of an earnout, and the pandemic has completely messed that up. Had they made it explicit contractually, you know, "this is all based on the assumption that business as usual will continue as usual", they'd be in a much stronger position than they are now. The other thing is, I think you have to think in what I think of as two different time zones. So yes, we can plan pretty meticulously what will happen in the next three months, but we have to recognise that the further out we get, the less certain the estimates are. What we have to have, far out in the distance, is the guiding principle for why we're doing this. So if we say we're doing this because we want a product that, I don't know, helps people do something important, we keep thinking: okay, that's where we're heading.
We keep thinking about how we build this product that's really going to help people, because that's what makes our trade-offs along the way internally consistent. But the fundamental mental model, I think, is this. A couple of months ago, before the lockdown, I went to do some teaching in northwest Wales. My GPS told me it was going to take three and a half hours to drive there; to my horror, it took five and a half. I had a pretty good instinct it was going to take five, because I know north Wales quite well. But my GPS is fantastic at mapping the journey to my nearest post office in seven minutes. Now, things can go wrong in those seven minutes, but there are only seven minutes in which they can go wrong. So the closer the destination and the shorter the period of time, the less risk there is. The further out it gets, the riskier it becomes, and the riskier it becomes, the more contingencies you have to build in. Because when you take those contingencies out, which you usually do when you're negotiating to get the contract, you haven't taken the uncertainties out. The uncertainties are still there, and they're going to get you. That's why one of my favourite lawyers once said, every contract is an unexploded time bomb, because there are uncertainties inside it. And if you haven't really thought about them, and about what the three-year or five-year contract looks like, then you just have to pray the bomb doesn't go off. So I think it's this thing of saying: okay, with this sort of product, how far out do we think we really have quite a lot of certainty? And I think that estimate of 140, 150 days isn't far off. After that, you have to realise: okay, now we're in north Wales, where it's dark, and it's confusing, and the roads are narrow, and you just don't know what's going to happen.

Lily Smith: 

Yeah, there was one thing that kept coming to mind. I don't know if you're a Star Trek fan, but there's a character in Star Trek called Data, who's an android, and he constantly talks about the probability of things happening. And I thought, oh, actually, Star Trek got it right. He was never certain. It was always, well, the probability is such-and-such, and it was never 100%.

Margaret Heffernan: 

That's really interesting, too, because actually people find probability really hard to understand. I remember somebody saying to me, when they heard I was writing about forecasts, well, you know, they forecast yesterday that it wasn't going to rain, and it did. And I said, I'll bet they didn't. And I looked it up, because I'm that sort of nerdy person that does that, and it was a 90% chance of no rain. That doesn't mean it's not going to rain, right? So here was somebody who had misunderstood the forecast. And it's really interesting, because I remember talking to the statistician, the mathematician David Spiegelhalter, about this, one of the biggest mathematical brains in the universe. He probably is Data, actually. And he said, well, you know, none of us really finds probability very intuitive. And I thought, thank God. If he thinks it's hard, no wonder we all suffer. We tend to think something's true or it isn't, right? But, you know, the weather forecast tomorrow says it's going to be another beautiful day. It probably will. That isn't 100% probability.

Lily Smith: 

Yeah, that's so true. And actually, I always look at the forecast and think, there's still a 3% chance it could rain. Margaret, thank you so much for joining us on the podcast. It's been really fun.

Margaret Heffernan: 

Well, thanks for inviting me, and good luck to you and your listeners in all the work that you do. I think you have some of the best jobs on Earth. Yeah, I do too.

Lily Smith: 

The Product Experience is hosted by me, Lily Smith, and me, Randy Silver. Emily Tate is our producer, and Luke Smith is our editor.

Randy Silver: 

Our theme music is from Hamburg-based band PAU, that's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg, for getting the band's permission to use their music. Connect with your local product community via ProductTank, regular free meetups in over 200 cities worldwide.

Lily Smith: 

If there's not one near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.

Randy Silver: 

ProductTank is a global community of meetups driven by and for product people. We offer expert talks, group discussion, and a safe environment for product people to come together and share learnings and tips.

If the last couple of years has taught us anything, it’s that predicting the future is hard. In this podcast episode, sponsored by Amplitude, we turn to Margaret Heffernan to learn how to do it better. She’s a captivating speaker with more than 12 million views of her TED talks and a prolific writer who recently published Uncharted: How To Map The Future – if she can’t help, there’s no hope! Listen to more episodes…
Featured Links: Follow Margaret on LinkedIn and Twitter | Margaret's Website | Margaret's page at TED | Margaret's latest book 'Uncharted: How to Map the Future'

Episode transcript

Randy Silver:  Happy Birthday Lily. Lily Smith:  Thanks, Randy. Um, but it's not my birthday until later in the month, the 17th. Actually, you know, if anyone wants to send me birthday wishes or anything, Randy Silver:  I'm sensing something like a hint here or something I should be doing something about this. But what I actually meant Lily was happy podcast birthday. We're unhealth three years old, if you can believe it, Lily Smith:  oh, and what an amazing three years it's been, I couldn't have predicted that the podcast would grow and develop in the beautiful way that it has. Randy Silver:  And it wouldn't have been possible without the fantastic team at mind the product, our great editor, Luke, thank you, Luke, and our listeners, Lily Smith:  not to mention our lovely guests, of course. And speaking of not being able to predict things, well, you're all making your plans for 2022. We thought this would be a great time to revisit our chat with Margaret Heffernan acclaimed author and business woman. Randy Silver:  You know, this is the one episode that my wife has actually listened to Lily, and she thinks it's amazing. And that's because Margaret explains why we can't predict the future and what we can and should do instead, let's get right into it. Lily Smith:  The product experience is brought to you by mind the product. Randy Silver:  Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love, Lily Smith:  because they mind the product.com to catch up on past episodes and to discover an extensive library of great content and videos. Randy Silver:  Browse for free, or become a mind the product member to unlock premium articles, unseen videos, amas roundtables, discounts to our conferences around the world training opportunities. Lily Smith:  mining product also offers free product tank meetups in more than 200 cities, and less property. 
Hi, everyone, welcome to the product experience podcast. Randy, do you know that our guest this week has more than 12 million views of her TED Talks? Randy Silver:  Wait a moment, Willie. That means that she's even more popular than we are. Who could it possibly be? Lily Smith:  Well, this week, we have the very great pleasure of welcoming Margaret Heffernan to join us and talk about forecasting prediction, and how we shape our work through our assumptions of what's going to happen. Margaret's book signing was actually the last event I went to before lockdown kicked in Randy Silver:  Margaret's an amazing speaker, a fantastic author and a business leader with a huge amount of experience. And we really enjoyed this interview. Although she says we can't predict the future. Her new book is so timely because just as it was launched the global pandemic it and I'm pretty sure that wasn't on anyone's roadmaps. I'm not Lily Smith:  sure if he predicted that Randy but I'm comfortable predicting that you'll love our chat with her. So let's not waste any more time and get straight to the product experience is brought to you by mind the product. Randy Silver:  Every week, we talk to the best product people from around the globe about how we can improve our practice and build products that people love. Lily Smith:  Because it mind the product.com to catch up on past episodes and to discover an extensive library of great content and videos, browse for Randy Silver:  free, or become a mind the product member to unlock premium articles, unseen videos, AMA's roundtables, discounts to our conferences around the world training opportunities, and more. Lily Smith:  My new product also offers free product tank meetups in more than 200 cities. And there's probably one near you. Hi, Margaret, thank you so much for joining us on the product experience podcast. It's really lovely to have you here. Margaret Heffernan:  Thank you so much. It's really fun to be talking to you. 
Lily Smith:  So for those that haven't come across you before in your various different guises of author and TED speaker, you want to give us a really quick intro into your background and and what you've done. Margaret Heffernan:  Sure. I started my professional life as a producer for BBC Radio, and then television and then I ran the trade association for independent film and TV producers. And then I moved to the US where I ran, I started and ran three different tech companies. And then I moved back to England and since then I have written and published six books. Three plays and I mentor a number of entrepreneurs and also senior and chief executives in large corporations. I am a professor of practice at the University of Bath and and I occasionally write for the Financial Times Lily Smith:  and have Four incredibly successful TED Talks and think about with an amazing number of views as well. Margaret Heffernan:  It's about at last count about 12 million. Lily Smith:  Yeah, incredible. And what I love about so I kind of first came across you when I read beyond measure, what I love about your writing, and is the kind of the insights that you have into the businesses that you speak to and that you do your research with. And the one that we're going to cover today is your latest book where you talk about forecasting and predicting the future. So at the moment, we've got more data than ever before, and like incredible computing power. So tell us, why can't we predict the future still? Margaret Heffernan:  Yes, it's interesting, isn't it? I think we had this fantasy, or just pretty humbly sewn to us that if we had enough data, we could predict everything. And strangely enough, it seems to be coming true. There are several reasons. One is because if we really wanted the perfect model, it would have to be as big as the world. And obviously, never going to be the case. 
So what happened, they said, so kind of usable models, you know, smaller than the world have a bunch of kind of systemic problems. And one is that, of course, in order to be useful, they have to leave things out. And what they leave out, you know, is based on assumptions and value judgments that we make, which may be right and may be wrong. And they're certainly going to be informed by our biases, by our opinions by our life experience. And often by the past. And since the past doesn't repeat itself, that's also a problem. So the problem with models is that they are incomplete, almost by definition, that they are biassed, and that they often have agendas in in other In other words, we're using them to sell ideas that we already have. And as such, while they might be they had definitely have some utility, what they don't have is rock solid, credibility. At there's another difficulty with models, as one of the most interesting people I talked to him was very implementing Richard Padgett, who runs the Centre for epidemic preparedness. And he said, You know, one of the problems with models is the numbers look so concrete, you start to see reality, the model for reality. And you start to think that they give you an absolute truth, and they don't, they're at their most useful for provoking discussion. But what this really means in the end is however much data you pour in, the model is going to give you a slice of future possibilities, it is not going to give you it cannot give you a rock solid 100% probability forecast. Randy Silver:  You've talked about the different kinds of data as well the different string complex data and complicated data. What can you go further into that? What is the difference? Margaret Heffernan:  So this is really interesting. And it's, it's something that I think has got a lot of traction, because it sort of changes the way you see everything. We live in a world that is both complicated and complex. 
So complicated things are tend to be linear, they're very cause and effect. They tend to be very controlled environments, I often think of it as a bit like an assembly line, for example. And you can control those things because you have control over the inputs and all the pretty much all the influences the inputs. And so you can predict the output. And these are systems that are very, very well honed by efficiency. complex environments are quite different, which is that they have lots and lots and lots of factors operating on them, some of which you might be controlling, some of which you can't. And so what that means is that while there may be patterns within these complex systems, they don't repeat themselves. Predictably. It also means that very small things can have a gigantic impact. For example, a very small thing like a virus bring the world's economy to its knees. The other thing about them is that because there's so dynamic, often expertise can't keep up because the system is changing too fast. So it's a you have an end if you try to to kind of forced these systems to become efficient. And it doesn't work and be what happens is you eliminate all your margin for adaptation. and response. So when you apply efficiency to a complex system, you actually make them more dangerous. So it's really important in companies or projects, to understand which kind of system ally in right now. Because one system, you will want to manage to be efficient. And one part of the system, you will want to manage to be robust. In other words, where there is a margin for error and surprise, Randy Silver:  love that the difference between efficiency and robustness, that's fantastic. Margaret Heffernan:  So a great example is, you know, if you fly, you know, you remember the days when we used to fly, and you go to the airport, and there are lots of systems around you that are complicated. Checking your bag is complicated, it's pretty much the same every single time. 
And there's a lot of control over it, and very, very susceptible to great technology to make it more efficient. Pretty much the same thing with getting the food on board, right. But once you get into the air, there are all sorts of things acting on forces acting on the plane, some of which you have some control over and some of which you don't, and which are inherently unpredictable, you don't know if there's going to be a key strike, for example, you don't know if there might be a bug in one of the operating systems, or one of the parts of the plane. And the consequence of that is that you these planes typically have four engines if they don't need four engines. And the engines run on different operating systems, which is really expensive, because it means you have to maintain these different engines at the different operating systems, but they're robust, because they're designed so that if one of them fails, they can fail safely. Because the impact of getting a tiny mistake could be this gigantic tragedy. So you would never want to get on a plane designed for efficiency. And in fact, argue that that's one of the problems that Boeing encountered, which is it started to kind of ruin everything, for efficiency. Lily Smith:  So when we're developing products, I guess we can look at whether you know how predictable our product is, whether it's just a simple completed form, to send it off to order something that feels like it could, you know, benefit from some efficiency. But if it has a lot more complexity to it, then we need to ensure that it's more robust, I guess. Margaret Heffernan:  So for example, the other thing you'd want to think about is if this were to go wrong, what's the impact? So if I complete a form incorrectly, it's unlikely to be a matter of life and death, probably, right. But if I am ordering a prescription online, and I don't understand the different permutations of my prescription, the consequences of that could be really terrible. 
So you want to think about what, especially because user, you know, user interfaces, and user behaviour is intrinsically complex, there is no one user. And even though you can design all sorts of test cases for multiple user profiles, you and I both know that if you design a user interface, the first person who uses it is going to use it in a way you never ever, ever dreamt up, right? So you want to make it reasonably robust. So the person who likes doing it backwards? Do it? Or if they can't do it, the cost to the business isn't gigantic. Right? Yeah, that's what you have to think about. You know, for if you think of a really simple product, like a ladder, a ladder probably doesn't need all the safety devices, you know, like the kind of catch to keep it exactly right. And all the rubber feet in handles and all that kind of stuff, but it's there to make it safe, or, but it's also not, you know, 70 feet high. Because that never be safe enough. The product to solve the how do I reach something 75 feet up. Lily Smith:  So if we can't predict or forecast what's going to happen with our complex systems, and there was a phrase in your book that I really loved, it was in a context saturated with ambiguity resisting the gravitational pull of certainty remains paramount. So it's like, Yes, this is surgery. So tell me more like how should we behave when we're faced with such a large amount of uncertainty like we don't know how people are going to respond when we change things in our products and release new features or talk about things in a different way, in terms of our language, Margaret Heffernan:  so a lot of the things that you do already are really designed to address this, which is you're doing a lot of user testing, you know, to see all the crazy things people think of doing with your product that you never imagined, you'll probably have beta testers and people like that, to test your product, you'll try to think of all the possible permutations. 
But then you may also get a lot of feedback once you release the product. And what you're accumulate is all sorts of feedback that shows you where the problems are. And then you'll triage those mostly by impact, right. So the problems that have a big impact, look for the address first. But the mistake I would expect you and your colleagues never to make, although absolutely brand new tech entrepreneurs do make this mistake, is to think that you're going to get it right first time. And you're just not just stuff is gonna happen. I mean, I remember this a lot, because in the early days of the web, when I was running tech companies, you had servers, which was a relatively new technology, and were quite young and stable. And on those were operating systems, which were constantly changing. And on top of that, you would have browsers which were constantly changing, and didn't necessarily interoperate with all kinds of websites very well. And then you had all sorts of new completely funky, often insane, you know, website designs, that were quite unstable, wanting on top running on laptops, that were all wildly incompatible. So this was statically, unstable environment in which to develop software. And you know, what it left me with, with a complete understanding that you're never going to get it all right. Just never, ever, ever, ever, ever. So you have to think hard about what are the things I must get, right? Because if I don't people die, right? What are the things that I must get? Right? Because otherwise, the product doesn't function and we go bust? You know, and after that, everything else is kind of a nice to have. But I think what's really what really, I mean, lots, what really fascinated me was that there was a belief. And I think it's still around that if we just keep at it hard enough. And we fix all the bugs and keep fixing all the bugs, we will get it perfect. I think it's madness. It's madness. 
And it's why that that fantasy is why people become so gullible when you start selling them nonsense, like driverless. I mean, if my if a driverless car had the same stability that my laptop does, which is a relatively mature technology, I die a couple times a week. And then when they say, they're going to be so great, there's driverless cars, because they'll be able to go and convoy. So they'll be able to be closer together, because there'll be so finely tuned, and they're all run off the same system. And you think, well, first of all, this is mad. And secondly, think, Hmm, if I were a hacker, oh, my God, this would be my wet dream. I don't have get to crash one truck, I could crack 50 trucks, because all they have to do is Randy Silver:  just think of how efficient they are. I know. Margaret Heffernan:  I mean, this notion of this perfectibility, I think, you know, it's just propaganda. It's a way of selling stuff, you know, this is all going to be so wonderful. Trust us, you, we only have your best interests at heart after the share price. You know, it's so I think, I think we have a big kind of mental adjustment to make in life and in business, which is there is no product that has ever been 100% safe. Even the simplest thing. And so ask different questions about which are the things that must be got right, and what are the things that actually we can afford some margin? Lily Smith:  I think that thing of, you know, driverless cars are going to take over the world in 10 years time 70% of the population will be in I don't know, whatever it was. Drive. We tend to you know, we tend to get caught up in these technology trends and also societal trends of like, this is going to happen, so therefore go and build products that take advantage If this what, you know, what should we be doing at this point? 
Do we just not believe any of it and then go purely based on the evidence that we have and very much with our product mindset, or should we look at it in a different light? Margaret Heffernan:  Well, I think, you know, the first thing is to ask is these great predictions? Where are they coming from? And what agenda the people issuing them have, because most of them have agendas, and most of them are propaganda or salesmanship. So all the very overheated rhetoric around driverless cars is partly driven by people who are trying to corner the market and scare off competitors. People are in the market trying to scare off investors. And so very idiot, people who don't believe in public transport, don't believe in public infrastructure, and want to persuade governments that they don't have to provide any of those things. So you have to be you know, put put your brain in gear and think about where's this stuff coming from? And why should I believe these people, you also have to use some common sense, you know, so that when they start saying the blind will drive, and then they also say, but the driver is the last point of intervention, you have to notice that actually, those two segments do it comfortably. But, but also, if we all only ever did the possible, we would never come up with great inventions. So you can you can also think about, Okay, what about this concept might be viable or positive possible? You know, what do we gain by trying? It we're working on a new product or a new form of engineering or whatever? Can we learn a lot of along the way? So for example, you know, if you look at CERN, you know, CERN is doing impossible thing, you know, 12 impossible things before that die. They're looking for neutrinos, which everybody thinks, you know, should exist. But do they exist? And how do they behave? We don't know. So, mil billions are invested in searching for neutrinos. And it may turn out that like bosons, they can be found. 
They may also turn out like string theory to be unprovable. And you don't know when you start. But at CERN, what they believe is the pursuit of knowledge for its own sake has value. And you have to trust that along the way you discover other things that may have more immediate value. But what so it doesn't do is promise what it's going to find? Because the whole discovery and exploration is that you don't know. Lily Smith:  Do you know your product usage interval? Randy Silver:  Do you know which percentage of your active users are new current or resurrected? Lily Smith:  If not, it's time to find out. Randy Silver:  For yours product teams have used amplitude to mastering retention playbook to understand their user base. Lily Smith:  And now amplitude has updated the playbook for 2020 and released it for free on their website, visit amplitude.com forward slash MTP to get your copy today. Randy Silver:  So Billy and I both come from the frame of reference of working in product management as much of our audience. So we are part of the job description that's never stated in the role requirements, Doc, is that we're supposed to be psychic, we're supposed to know exactly when we're going to deliver that. And when someone washing their hands, we're also expected to know where the markets going to go and plan for it perfectly. So this is a real treat to talk to you. And one of the things that you you another thing you've said, is that to think better requires more options, not fewer. But that feels a little bit counterintuitive. So how do we make better decisions? And how do we work better with our salespeople and customers and stakeholders to try and give them either the illusion of certainty or ensure that we're making better prediction? Margaret Heffernan:  Yeah, fantastic question. I mean, I was laughing about this the other day, because, you know, apparently the government's track contact and trace track and trace app isn't going to ship on time. 
You know, software never ships on time, right? It's famous for not shipping on time. I remember when I first moved into software, you know, we're having come from broadcasting. The Nine O'Clock News is on at nine o'clock. He can't put up with me A paper that says, Oh, really sorry, we'll be with you in a few minutes. So to me, the notion that you wouldn't met me to deadline was incomprehensible, you know, and then I realised, oh, but the thing about software is you're doing something it's never been done before. So now there's uncertainty. So you can have a deadline, you have to have a deadline. Otherwise, you know, you never make the thing at all. But you have to acknowledge that, that is a hope for date, and everybody's gonna bust the gut to get there. But then they're going to be a couple of trade offs you're gonna have to make one is, is it budget? Or is it features? Is it budgets, features or time really, right. So if you want it on time, it might not be on budget. If you want it on time, we might not be able to do all the features that we would like to do. So you have to be really transparent, I'd say about the trade offs. And I learned this the hard way, right, because coming from my broadcasting background, I would say we're going to ship on June the first and in my wall, that means I'm going to ship on June the first and then choose the first would come and we wouldn't ship the product when it shipped in the second either. And I had a fantastic piece of chief technology officer who said, look, here's the deal, Margaret, we can cut all these features, and ship it by June the seven. Or we can carry on as we're doing. And we can choose to ship it maybe somewhere around the end of the month. So that's a choice, it's a trade off, or we can throw a lot more wanted, which might speed it up, or it could very easily slow it down. So your call. And I think I had a magical belief that if I just you know hammered the deadline, loudly enough, it would magically happen. 
And what I learned from my fantastic CTO and his fantastic team was that that was totally irresponsible and naive. And that actually we had to have grown up conversations about what matters most here is it time, is it budget is it features. And we have to have a really good conversation with our customers, which is what matters more to you. If this is so mission critical that you've got to have the basic basic thing on during the first we can do that, it's not an efficient way to build the product. But if that's if it's mission critical to you, we can do it the inefficient way, in order that you can get up and running. But what a lot of people do, and I've seen this up close and personal is they try to fudge it. You also get kind of a little bit of internees side warfare, which is the marketing people won't want the thing. And the engineering team wants another, and the QA team wants another and they all lobby. And so nobody's terribly straight with each other about how bad it could be. So you really need to build teams, where people feel really alignment, aligned to the whole project, they understand the customer needs, and they're really honest with each other. Because if you can't have those conversations with complete honesty, you just get into such bad blood and such bad feeling that you can poison your whole culture. Lily Smith:  And I just want to go back to something you mentioned earlier, around, you know, as as product people, we kind of do user testing. And you know, we're used to kind of developing hypotheses and then trying to validate our assumptions or in invalidate them. And there was an interesting point in the book with a story about Daryl Plummer from Gartner. He knew he said that with his forecasting. 60% accuracy is optimal. Um, and I thought this was really interesting, because, you know, we don't we don't ever really kind of think about the weather, whether we should be always trying to prove ourselves, right, I think, as product people. 
And I thought that's quite interesting. If you were aiming for a kind of 60% accuracy, if you could be right 60% of the time, then maybe that would embolden more risk-taking in your experimentation. So do you think this applies, or have I completely missed the point?

Margaret Heffernan:

I think it depends a lot on the nature of the product and the nature of the company. If you're working with a company that really prioritises innovation above everything else, then it should anticipate a much bigger margin for error, because the more innovative a product, the more unknowns it'll have. So that means that your budget and schedule, for example, are going to be best guesses and no better. If you're building something that's never been built before, how could you possibly know how long it's going to take, or what it's going to cost? So that requires being very, very frank with the customer or the client, and saying: since you've never done this before, and we've never done this before, we're going to do our very, very best, but it is intrinsically unknowable. So we have to be quite honest about that. And if the innovation aspect is the thing that matters most to you, we need a bigger margin than if you just want to get something that's sellable. I think in many of these businesses, where they come unstuck is in not having these, what I think of as come-to-Jesus moments, early; they often come too late, after all sorts of giddy promises have been made. If you're white-labelling something that already works, that's a completely different ballgame. It's a relatively simple thing, and your schedules and so on are very likely to be right. The other thing, of course, is to have gates, where you re-evaluate: is this ever going to work?
And if so, what's it going to take? But the more unknowns there are, the bigger the margin for surprise you have to build into the budget. And the more creative and innovative a product is, the more unknowns it will have in it, just in terms of suppliers: can you actually design a part, and does the part work with the materials that you can afford, or do you need a more expensive metal? All sorts of things. But what you can't do, even though people do do it, is what I think of as gaming the game: pretending to an expertise that isn't available anywhere in the world, and pretending to a level of control which, given the forces of nature, you can't have.

Randy Silver:

You were just talking there about come-to-Jesus moments, and you talked earlier about one of your own, dealing with your CTO and understanding how quickly you could ship something. You also talked at the beginning about how you work with a number of business leaders as a mentor. One of the common complaints from product managers is that they work with business leaders who haven't had that same come-to-Jesus moment as you did, who don't understand the trade-offs. So now that you're on the other side, and you're mentoring people and trying to get them to the same level of understanding, what would you suggest is the right tactic for people who are in that situation, who are trying to get that point across?

Margaret Heffernan:

I think that product managers have a part of their role which is not sufficiently appreciated or celebrated, which is that the great product managers I've been lucky enough to have work with me were actually fantastic teachers. They taught me my business, really. I thought I knew my business; I knew quite a lot about it. But they taught me all this stuff about the trade-offs between time, budget and features.
They taught me about how much clients can change their minds, about the degree to which clients often have demands that transcend reason. And once I understood that, I could think about how to deal with it. I think one reason I so like working with you, Lily, and with product managers generally, is because at their best they are cultural translators. They are translating from one expertise group to another, from one interest group to another, and the very best of them see all those different perspectives and try to bring them into some kind of alignment. But the very best of them are great teachers.

Lily Smith:

And another area that you cover is this idea of super forecasters. Some forecasters are good at predicting 100 days into the future, and maybe the super forecasters can predict 400 days into the future; or not even predict 400 days into the future, but give a fair-enough idea of the future. So as product managers, if we're asked for a 12-month roadmap or a three-year plan for the product, how should we go about doing that, if some of the best people at doing this in the world can only go 100 days? Should we just say no, I'm not going to do that, which I've done before? Or should we get something really rough together and go, it could look something like this?

Margaret Heffernan:

I think there are two things that you need to do. First of all, you have to acknowledge that all of these plans are provisional. And it's very helpful to try to unpack with clients the working assumptions on which the plans are based, because once the plans are turned into numbers, the assumptions disappear.
And it's really important to keep them front of mind. The working assumption for most plans in 2020 did not include "we think there won't be a pandemic". But they did include an assumption, which was: we assume that there will not be any huge interruptions to business as usual, which might have been terrorism, or ash clouds, or extreme weather events. None of these things are that hard to imagine, actually. So the first thing you have to do is recognise that all of those contingencies remain possible, and it's not unhelpful to say the plan is based on assuming that. From a contractual position, sometimes you've got to say that. I have companies who are approaching the end of an earnout, and the pandemic has completely messed that up. Had they made it contractually explicit that this was all based on the assumption that business as usual would continue as usual, they'd be in a much stronger position than they are now. The other thing is, I think you have to think in what I think of as two different time zones. Yes, we can plan pretty meticulously what will happen in the next three months, but we have to recognise that the further out we get, the less certain the estimates are. What we have to have far out in the distance is the guiding principle for why we're doing this. So if we say we're doing this because we want a product that, I don't know, helps people do something important, we keep thinking: okay, that's where we're heading. We're thinking about how we build this product that's really going to help people, because that's going to make our trade-offs along the way internally consistent. But the fundamental mental model I think you have to have is this. A couple of months ago, before the lockdown, I went to do some teaching in north-west Wales.
And my GPS told me it was going to take me three and a half hours to drive there from my home. It took five and a half hours. I had a pretty good instinct it was going to take me five, because I know north Wales quite well. But my GPS is fantastic at mapping out the journey to my nearest post office in seven minutes. Now, things can go wrong in those seven minutes, but there's only seven minutes in which they can go wrong. So the closer the destination, the shorter the period of time, the less risk there is. The further out it gets, the riskier it becomes, and the riskier it becomes, the more contingencies you have to build in. Because once you take those contingencies out, usually when you're negotiating to get the contract, you haven't taken the uncertainties out. The uncertainties are still there, and they're going to get you. That's why one of my favourite lawyers once said, every contract is an unexploded time bomb, because there are uncertainties inside it. And if you haven't really thought about them, and about what the three-year or five-year contract looks like, then you just have to pray the bomb doesn't go off. So I think it's this thing of saying: okay, in this sort of product, how far out do we think we really have quite a lot of certainty? And I think this estimate of 140, 150 days isn't far off. And after that, you have to realise: okay, now we're in north Wales, where it's dark and it's confusing and the roads are narrow, and you just don't know what's going to happen.

Lily Smith:

Yeah, there was one thing that kept coming to mind. I don't know if you're a Star Trek fan, but there's a character in Star Trek called Data, who's an android. And he constantly talks about the probability of things happening. And I thought, oh, actually, Star Trek got it right. He was never certain. It was always, well, the probability is such-and-such; it was never 100%.
It was always, like, nine million to one or whatever. So, you know, really...

Margaret Heffernan:

Which is really interesting, because actually people find probability really hard to understand. I remember somebody saying to me, when they heard I was writing about forecasts, well, they forecast yesterday that it wasn't going to rain, and it did. And I said, I'll bet they didn't. And I looked it up, because I'm that sort of nerdy person that does that, and it was a 90% chance of no rain. That doesn't mean it's not going to rain. So here was somebody who had misunderstood the forecast. And it's really interesting, because I remember talking to the statistician and mathematician David Spiegelhalter about this, one of the biggest mathematical brains in the universe. And he probably is Data, actually. And he said, well, none of us really finds probability very intuitive. And I thought, thank God. If he thinks it's hard, no wonder we all suffer. We tend to think, well, something's true or it isn't. But the weather forecast for tomorrow says it's going to be another beautiful day. It probably will, but that isn't a 100% probability.

Lily Smith:

Yeah, that's so true. I always look at the forecast and think, there's still a 3% chance it could rain. Margaret, thank you so much for joining us on the podcast. It's been really fun.

Margaret Heffernan:

Well, thanks for inviting me, and good luck to you and your listeners and all the work that you do. I think you have some of the best jobs on Earth.

Lily Smith:

Yeah, I do too. The Product Experience is hosted by me, Lily Smith, and me, Randy Silver. Emily Tate is our producer, and Luke Smith is our editor.
Randy Silver:

Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg, and to Pau for their willingness to let us use their music. Connect with your local product community via ProductTank, our regular free meetups in over 200 cities worldwide.

Lily Smith:

If there's not one near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.

Randy Silver:

ProductTank is a global community of meetups driven by and for product people. We offer expert talks, group discussions, and a safe environment for product people to come together and share learnings and tips.
