Monika Turska defines the job of a Product Manager as someone who helps make better decisions, faster. But making the right decisions—whether it's the ones that we're actively making, or the ones we're empowering our algorithms to make—requires a real understanding of the environment they're being made in. We asked Design for Cognitive Bias author, speaker, and filmmaker David Dylan Thomas to join us on the podcast for a chat about how to ensure that the best possible decisions are being made.
Mind the Product is offering The Product Experience listeners $20/£20 off Communication & Alignment workshops in May and June. You can choose your workshop date here and enter code MTPPOD to claim your offer at checkout.
Featured Links: Follow David on LinkedIn & Twitter | Design for Cognitive Bias | Design for Cognitive Bias: Using Mental Shortcuts for Good Instead of Evil | Iris Bohnet | What Works: Gender Equality by Design | SXSW Interactive 2016 | Bit Flip | Radiolab's episode on voting machines | Timnit Gebru (@timnitGebru) was the scientist fired from Google
Discover more: Visit The Product Experience homepage for more episodes
Episode transcript
Randy Silver:
Hey, Lily, did you see the report that came out from the UK government the other week that said there’s no racism in this country?
Lily Smith:
Oh yeah, of course. Yeah, of course. No, literally no English people are racist at all. And in case you’re wondering, yes, I am being sarcastic.
Randy Silver:
But it’s a good thing that we had this report because it means we don’t have to run today’s episode, because today’s episode is all about cognitive bias, but apparently it’s not a problem.
Lily Smith:
I think the people who wrote the report should definitely come back and listen to this episode. They will for sure find some of it very interesting and useful. And today, we are talking to David Dylan Thomas, amazing name, by the way, all about cognitive bias.
Randy Silver:
Yeah, he is the author of the book, Design for Cognitive Bias. He’s a content strategist and a speaker as well. And it was a really fantastic conversation. And not only did we talk about this, we talked about the roots of it. And we talked about one of my favourite things ever about why we all get the scientific method wrong in product development.
Lily Smith:
So, without further ado, let’s chat to David. The Product Experience is brought to you by Mind the Product.
Randy Silver:
Every week we talk to the best product people from around the globe about how we can improve our practice and build products that people love.
Lily Smith:
Visit mindtheproduct.com to catch up on past episodes and to discover an extensive library of great content and videos.
Randy Silver:
Browse for free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to conferences around the world, training opportunities and more.
Lily Smith:
Mind the Product also offers free ProductTank meetups in more than 200 cities. And there's probably one near you.
Randy Silver:
Dave, thank you so much for joining us on the podcast today.
David Dylan Thomas:
Oh my pleasure. Thanks for having me.
Randy Silver:
So, for anyone who hasn't already read Design for Cognitive Bias, who doesn't already know you, can you just give us a quick introduction? Who are you? How did you get into the product world? And tell us a tiny bit about the book, and then we'll jump in and ask you lots more questions about it.
David Dylan Thomas:
Sure. So, I’m David Dylan Thomas. I’m an author, a speaker, workshop giver. And I came from the world of content strategy and UX. I’ve been working at different firms and agencies over the years. But at about the same time, I started getting into cognitive bias, thanks to this talk by Iris Bohnet called The Gender Equality by Design. And she was linking up a lot of the implicit racial and gender bias we see with simple things like pattern recognition. And I don’t know, that made the task of taking on racial and gender bias, seem a little more manageable when you can tie it to something like concretes and human like pattern recognition. So, I decided I just need to learn everything I can about cognitive bias. And I literally looked up one cognitive bias a day off this list that had like a 100 or 200, and turned into the guy who wouldn’t shut up about cognitive bias.
So, my friends were like, "Dave, please just get a podcast." So, I did that, and I recommend that as a way to learn about anything: do a podcast about it, because you really have to know your stuff if you're going to talk to other people about it. But eventually, those two worlds collided: my day job in UX and content strategy, and all this stuff I was learning about cognitive bias. And it became, "Hey, here are all these ways that our users are making decisions below the threshold of conscious thought, ways they don't even realise. And here are all these design and content choices that can influence those biases. So, let's write a book about that." And eventually, I hooked up with A Book Apart, who are amazing partners. And we put this book out, Design for Cognitive Bias, that's available now.
Randy Silver:
Fantastic. I love that you’re the only person who had to be encouraged by other people to get a podcast.
David Dylan Thomas:
No, they really nagged me. One friend in particular, who worked for TED at the time, was like, "Dave, you really got to think about doing this as a podcast." And I'm like, "Somebody who works at TED says you should do a podcast. Okay, I'll take your word for it."
Randy Silver:
Okay. So, before we jump into all the talk about cognitive bias, it’s a term that’s thrown around a lot, and being that you’re the person who literally wrote the book on it, what is cognitive bias? Can you give us a little bit of a definition for anyone who doesn’t fully understand it?
David Dylan Thomas:
Sure. So, think about it like a shortcut gone wrong. Basically, you have to make something like a trillion decisions every day. Even right now, I'm making decisions about what to do with my hands and where to look, and how fast to talk. And if I thought carefully about every single one of those decisions, I'd never get anything done. So, it's actually a good thing that most of our lives are on autopilot, but the autopilot sometimes gets it wrong, and those errors are what we call cognitive biases.
Lily Smith:
So, a lot of the time when I’ve heard about cognitive bias and about biases in general, it’s also in relation to how AI is trained. How does it connect up with AI?
David Dylan Thomas:
Absolutely. So, we’d like to think of AI as this very objective cold rational thing. And if the AI says it’s right, it must be right. It must be objective. But the thing is, the AI is just doing what it was told. And it’s told what to do by humans who are biassed. Right?
So, one of my favourite examples is Amazon had this hiring bot that was basically meant to help sift through thousands of resumes, but it kept recommending guys, right? It was extremely sexist. And it would even downgrade resumes that had the names of women's colleges on them, like really rabidly sexist. And they tried to figure out, how did this happen? And they said, "Okay. Well, we trained it on 10 years of resumes submitted to Amazon. Well, guess what most of those resumes had in common, right? It was mostly guys." And so, the AI took a look at that and said, "You sure must like dudes." And then just kept recommending guys.
So, if you point an AI at a racist, sexist world, it's going to make racist, sexist predictions. So, we need to be very careful and thoughtful: what are we asking the AI to do? And are we giving it data that's going to skew it towards doing it in a certain way?
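[Editor's note: here is a minimal sketch, in Python, of the dynamic David describes: a model trained on biased historical hiring decisions learns the bias. All data, feature names and numbers are invented for illustration; this is not Amazon's system.]

```python
# Hypothetical illustration: train a classifier on simulated hiring
# decisions that were biased against one group, then inspect what it learns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

womens_college = rng.integers(0, 2, size=n)   # 1 = resume mentions a women's college
skill = rng.normal(0.0, 1.0, size=n)          # the genuinely job-relevant signal

# Simulated past decisions: penalise women's-college candidates regardless of skill.
hired = (skill - 1.5 * womens_college + rng.normal(0.0, 1.0, size=n)) > 0

X = np.column_stack([womens_college, skill])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: a strongly negative
# weight on the women's-college feature, even though it says nothing about skill.
print("learned weights [womens_college, skill]:", model.coef_[0])
```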
Lily Smith:
So, is the utopia just no bias at all in anything? And is that really ever going to be achievable?
David Dylan Thomas:
So, I wish I could remember where I read this, but there was something basically saying, look, there are so many different ways to get it wrong, so many different decisions, right, that go into any choice, that even the world's greatest AI couldn't do it. Even the world's greatest AI needs shortcuts. So, it's not really achievable, and not even necessarily desirable. Because I think that the one thing that humans can do that AIs can't really do is have lived experiences that are completely different from one another. And I think the utopia isn't zero bias because, like I said, you need these shortcuts to live. I think the utopia is we work on destructive biases, and that's a long-term project, which I could talk more about later.
But in the short term, we make sure that we're getting multiple perspectives, which is another way of saying multiple biases that are different from each other. Right? So, if I put three people in a room with the same lived experience, they're probably going to exhibit the same biases, and that bias will be inherited by whatever product they're working on. But if I get three people in a room who have completely different lived experiences, those biases ideally, let's talk about the ideal world, ideally will counteract each other or even inform each other in a way that produces a more inclusive product or a less harmful product.
A perfect example is Twitter. Twitter launched without a block feature. And it was launched by a bunch of white dudes who never had to worry about harassment. So, of course, it launched without a block feature. But if there was one additional perspective in that room from someone who maybe had experienced harassment, they might say, "Maybe we don't launch this yet, maybe there's one more feature we need."
Lily Smith:
That’s a really interesting point. And it’s something that I’ve not really considered before. And I wonder if you can take it even further, when people think about AI, is there a way of having three different AI perspectives, and then allowing people to make a judgement or understand? Sorry, I feel like we’ve gone really deep, really quick.
David Dylan Thomas:
I love these geeky conversations, so I'm totally cool with that. But it's funny, there's a version of that, right? So, Radiolab did a great episode about this. We have this problem with voting machines, right? And the voting machines were basically making these really weird errors. And without going into all the math, the solution they ultimately decided on, to make the machines more reliable, was to basically make them three machines in one. And they would only verify the result if all three parts of the machine agreed on the result. So, rather than one part of the machine saying, "Yeah, this person got a thousand votes," all three parts would have to read, "They got a thousand votes." So, in a way they were getting multiple AIs to check in on each other and be like, "Are you sure that's the answer?" So, yeah, even when you're dealing with AI, it's a good idea to get multiple perspectives.
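[Editor's note: a toy sketch, in Python, of the agreement check described above: a tally is accepted only when independent counters all report the same result. The function names are illustrative, not from any real voting system.]

```python
from collections import Counter

def verified_tally(ballots, counters):
    """Run each independent counter; accept the tally only if all agree."""
    tallies = [count(ballots) for count in counters]
    if all(t == tallies[0] for t in tallies[1:]):
        return tallies[0]
    return None  # disagreement: accept nothing, flag for investigation

# In the real machines these would be three independent implementations,
# so a correlated error is unlikely; here we stand in with three copies
# of the same trivial counter.
ballots = ["alice"] * 600 + ["bob"] * 400
print(verified_tally(ballots, [Counter, Counter, Counter]))
# Counter({'alice': 600, 'bob': 400}) is returned only because all three agreed
```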
Randy Silver:
So, is the starting point for getting this right, having a strict definition for what good looks like, forget about how you get there, but agreeing on what the result should be in terms of, you want it to work certain ways, but you also specifically don’t want it to work in other ways. Is that… So, I feel like I’m asking that terribly. I’m sorry.
David Dylan Thomas:
No, no, no. So, I’m the content strategist, so I’m always going to say the most important thing to figure out is the goal. So, I would settle for any definition of good, a much less the strict definition of good.
So, let's go back to the example of the Amazon AI, right? If that AI's job was to detect bias in the hiring process, then mission accomplished, right? That's actually a great result. If they said, "It's only recommending guys," okay, now we can say maybe we have some kind of bias in our hiring pool, if that's all we're hiring. But if you're asking it to make a fair "judgement" about who to hire, okay, well then you have to think about all the different ways the information you're giving it, and how you ask it to interpret that information, could possibly skew it. Right?
So, if my goal is a more diverse workforce, then I'm actually going to have it optimise for, "Hey, AI, this is what our current staff looks like," from a demographic standpoint, whatever diversity dimension you're looking for, from a gender standpoint, from all these different standpoints. "Don't hire for this. Right? I want you to look at this pool of people we haven't hired yet and optimise for the opposite, right?" And again, same data set, but I'm giving the AI different instructions around that data set, right?
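[Editor's note: a hypothetical sketch of "same data, different instructions". Instead of scoring candidates by similarity to past hires, score them by how underrepresented their group is on the current staff. Everything here, names and numbers alike, is invented for illustration.]

```python
from collections import Counter

def underrepresentation_score(candidate_group, staff_groups):
    """Score candidates higher the rarer their group is on the current staff."""
    share = Counter(staff_groups)[candidate_group] / len(staff_groups)
    return 1.0 - share

staff = ["group_a"] * 8 + ["group_b"] * 2   # a heavily skewed current team
print(underrepresentation_score("group_b", staff))  # 0.8 -> boosted
print(underrepresentation_score("group_a", staff))  # 0.2 -> not boosted
```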
Randy Silver:
So, we’re doing this all by running experiments, and we use the scientific method. Why does that fail us?
David Dylan Thomas:
Because we’re not really using the scientific method. So, I personally totally got that wrong. I thought the scientific method was like, “Hey, I’ve got an idea about how the world works and I’m going to test that idea. And then if I get a good result, I’m going to have a bunch of other people try the same thing. And if they get the same result, yay, we’re done. Write it down. It’s a law.” But what it actually is, is I go through all of that, it’s like, do the experiments, get a result. Other people try it. They get a good results, great. I now have to do everything I can to prove myself wrong. I have to ask myself, if I’m wrong, what else might be true? And then go and try and prove that. That’s the real scientific method, and it’s much more rigorous. Right? And it’s much more humble, right?
I’m not out there to prove how awesome I am. I’m actually there to try to find where I got it wrong. I’m assuming that I probably got it wrong, and then trying to find it, which is really, it’s difficult for us, right? That’s not how our minds usually work. We’re used to, I mean, it’s fighting confirmation bias. We’re used to saying, “Hey, I get it. I get how the world works. I don’t need anybody telling me difference. I’ve got enough to do in a day. I don’t need to be going out and trying to prove, no, I’m actually wrong about this fundamental thing that has gotten me where I am.”
Randy Silver:
Yeah. You’re not getting promoted for that.
David Dylan Thomas:
Yeah, exactly. We don't incentivize disproving the fundamental business model of the company. I mean, literally, right? We just had… I'm blanking on her name, but the woman who just got fired from Google for basically doing her job and finding all the bias in their approach to AI and language. And it's like, "Hey, I found all these holes." And they're like, "Oh, those holes go against our business model. I think that's a bad study. Get out." Right? But that is exactly what we're not incentivized to do. She was actually doing proper scientific method, and Google was saying, "We don't really want that."
Randy Silver:
It’s Timnit Gebru.
David Dylan Thomas:
Thank you. Yeah.
Lily Smith:
So, how does that fit in with the content side of things? We’ve talked a lot about AI and training AI and bias, but how do you approach your content strategy in terms of understanding how bias comes through and trying to avoid it, or maybe not trying to avoid it, but trying to use it in the right way?
David Dylan Thomas:
Yeah. Ultimately, you’re always going to veer towards trying to use it in the right way, because it’s always present. Humans are pattern making machines. So, I can’t produce a neutral design. I can produce a neutral content strategy because humans aren’t neutral. Humans are going to look at whatever I design and interpret it somehow. And if I know certain things about that interpretation, I’m responsible for it. So, an easy example is, we believe things that rhyme more than things that don’t, right? So, if I say an apple a day… What is the phrase? Oh my God, I’m forgetting it now.
Randy Silver:
Makes the doctor go away.
David Dylan Thomas:
Okay. I was like, is that really it? It’s one of those things it’s like, “That can’t be right.” No, it is. Okay. Did you pause?
Randy Silver:
My wife is a dentist and she gets pissed at me every time I use that line.
David Dylan Thomas:
Oh, that’s funny. So, an apple a day keeps the doctor away. All right. So that is more convincing than, “You should really eat more apples.” Right? I’m saying the same thing, but it’s one of them is catcher, and it’s because our minds like things that are easy to process. Right?
So, if I know as a content strategist that things that rhyme are more believable, if I make something rhyme, I am on the hook for making sure it should be believed. Right? I shouldn't make lies rhyme. Right? That's where the design ethics of it all starts to come in. As soon as I know I have this backdoor into your brain, I can't mess with it. Great power, great responsibility. I now have to be very careful. If I'm going to make this thing rhyme, can we make sure that's absolutely true before I basically convince a lot of people it's true? And by the same token, if I know something's true, right? Like, hey, wearing masks can help save lives. Okay. Now, I can think, "Okay, a mask or a casket, what can I do with that?" Right? I now have this tool in my tool belt that can make it more impactful.
Randy Silver:
So, this is sounding like a real struggle. Are we ever able to use this power for good? You talked a bit earlier about the Radiolab episode with voting machines. And that sounded like it was using it to mitigate bad, but is there anything about using it to actually promote good, or is the concept of good a little bit too hard to get into?
David Dylan Thomas:
I mean, it’s not easy. Now, so one of my favourites is the framing effect, which is an extremely, I think the most dangerous bias. And the basic concept is that the way you frame something totally changes your decision around it. So, if I said, “Okay, I have one pack of condoms that’s 99% effective. And this other pack of condoms that has a 1% failure rate.” People would be like, “Oh, we definitely want the 99% effective.” And I’m like, “It’s the same thing.” Right? But I made one sound like something you want more. Right? So, you can frame things in really dangerous ways to make it seem like, should we go to war in April? Should we go to war in May? Right? And now, we’re not even talking about whether we should go to war in the first place. Or should teachers have guns in schools? Right?
We skipped over a whole lot of steps in the gun control conversation, and now I'm framing it purely in terms of guns in school or no guns in school. That's it. Right? By that same token, I can also frame things in ways that improve collaboration, right?
So, there’s an experiment where I show an audience, a photo of a senior citizen behind the wheel of a car. And I asked them, “Should this person drive this car?” And you end up with this policy discussion of people are like, “Oh, old people are bad at everything. Don’t let them drive.” And other people are like, “That’s ages, people should do what they want.”
And all I learn from that is who's on what side. If I show the exact same photo, and the question is, "How might this person drive this car?", all of a sudden it's a design discussion. And some people are like, "Oh, what if we changed the shape of the steering wheel? Or what if we moved the dashboard?" Right? And what I'm learning from that conversation is different ways that person might drive that car. And I only changed like two words in the entire question, but that changed the frame of the conversation, which means it changed the whole conversation. And there's lots of clever ways to apply that in design, especially to the design of collaborative spaces, to improve your odds of getting collaboration and put friction around people just yelling at each other.
Lily Smith:
Design, in general, seems like such a minefield when you think about all of these biases that you have to try and navigate. How do you coach people in this field when they’re starting out? Or they might not necessarily have all of that background information.
David Dylan Thomas:
Yeah. That’s a good question. I mean, I personally do a workshop that I go around like, “Here’s everything I can teach you in four hours about this, that you can then apply later.” But I think at the simplest level is a matter of just making sure, inviting as many perspectives as possible, because often, whether you know the name of the bias or not, someone who has a different lived experience or who has a different power set, so to speak, is going to look at the same thing and be like, “Well, yeah, but what about this?” And I’m like, “Oh, I didn’t even think of that.” Right?
So, I feel like if all else fails, if you don't have time to read my book or dig into and study a bunch of biases, if you do nothing else, bring other people into the room who have a different perspective, so they can catch this bias or that bias, even if you don't know the name of that bias. Right? Or how it particularly plays out.
Lily Smith:
There was one other technique [inaudible 00:18:13], I've just remembered it. I remember seeing in your talk the red versus blue team, which I thought was really great.
David Dylan Thomas:
Yeah. That’s a really good, low-cost way to implement what I’m talking about. Right? So, red team, blue team, which is actually implemented by both the military and journalists, which I love saying, because there are so few things you can say that about, but it’s basically this technique where you have like a blue team, who might do the initial research and maybe they get as far as a prototype. But before they go any further, you have a red team coming for one day. And that red team’s job is to go to war with the blue team, and really look for any hidden biases, any more elegant solutions, any potential causes of harm that the blue team missed because of confirmation bias. They fell in love with their first idea.
And it’s a fairly elegant approach because I don’t have to tell my boss, “We need to hire two teams for every project to check each other’s work everyday.” I just need one team for one day to come in and make it a little less likely, we’re going to put something harmful out in the world. And again, it’s that way of introducing that outside perspective, because the red team has no stake in this at all, right? They have not seen the work that of the blue team. They haven’t been living and breathing this project every day for a month. It’s like, I’m just looking at this with fresh eyes.
I mean, it’s really not that different from the concept of a design critique, right? Which we’re very used to in the world of UX and experience design. It’s just a matter of adding a little bit of layer of like ethics, right? And hopefully, some of the people in that red team are from a group that might be impacted by that design, right? So, we’re getting a little bit, we’re broadening the definition of what we’re critiquing, but it’s still the same basic concept that 10 eyes are better than five.
[Advert]
Randy Silver:
As product managers we’re responsible for so much, but often have little authority. And that can be tricky, but that’s where the power of influence comes in. It’s one of the most important skills for any product manager to learn.
Lily Smith:
Mind the Product’s Remote Communication and Alignment Workshop will help you go from simply doing your job to elevating your influence over your stakeholders, and having a bigger impact on your team.
Randy Silver:
You’ll dive into key product, skills and concepts, including stakeholder management, reporting out, evaluating opportunities, prioritisation and facilitation, and leave with lots of actionable frameworks and tools you can put to work right away.
Lily Smith:
Get £20 or $20 off your Communication and Alignment Workshop this May and June. Just go to mindtheproduct.com/communication-workshop, and use the code MTPPOD at checkout. That's mindtheproduct.com/communication-workshop, using the code MTPPOD. Offer ends on May 31st.
—-
Randy Silver:
So, you’ve talked about two different ways of broadening the representation on the team. So, one way is by having more people of different backgrounds on the development team in the first place, whether that’s developers or product development team, whatever you want to call it. And the other is bringing in a whole separate team, again, a different perspective for a short period of time. Is there anything else that we should be bringing into our development process that’s just better practise to help us fight some of these biases?
David Dylan Thomas:
Sure. I mean, I think that the practice of participatory design in general could be much more prominent in product design. So, the basic concept there is that it's not super different from human-centred design, but it is much more conscious of the role power plays in the design process. Right? So, before you even start, I might do an assumption audit, where I take my team and we ask each other, "Who's on this team?" Right? "What perspectives do we represent?"
And you’re only identifying as you feel comfortable, but you’re going to think about things like gender and age, and neurodiversity, and class. And then you’re going to say, “Okay. Well, how might those identities influence the outcome of this product design?” And then you ask, “Who isn’t here, right? Anybody here ever been incarcerated? Anybody here ever had their immigration status questioned?” Right? And you say, “How might the absence of these perspectives influence the outcome of this product?” And then you ask, “Okay. Well, what might we do to invite that perspective and give that perspective some power in this design process?” And really you’re looking for, are there voices here that are going to get impacted by the outcome of this product design, but don’t have very much say in the product design, right?
And the whole project of participatory design from that point becomes, well, how do we give them more power? Because they have to live with what we make. So, how do we give them more say in what we make? That's the ethos there. And there are many different techniques and approaches to making that happen and giving that power. But that's the general gist of it. And I think it would be really interesting to see that approach carried out more in product design, which usually, I think, is only about a very limited set of stakeholders. Right? You're really thinking about the user. You're thinking about the stakeholder. You're thinking about the investor. There's a very tight set of people whose interests you're focused on when you're doing the design, but there's not really a place for, "Well, this person isn't actually going to use my product, but they're going to be very impacted by it. Right? So, how do I invite that into the conversation? Because my product doesn't live in a vacuum, my product lives in a society. So, how do we think about that?"
Randy Silver:
We talked to a service designer who talked a lot about that. That makes a lot of sense.
Lily Smith:
I’m loving the working to test some of the designs that we have with people. Is there a way that we can identify whether some of these biases that we have are coming through, when we’re actually doing face-to-face qual testing? Are there questions that we should be asking at this time?
David Dylan Thomas:
I mean, aside from general best practice in user testing and research, no leading questions, right? All of that kind of stuff. I think, again, I would think very much about power. I would think about, am I inviting people to test this product who look just like me, who act just like me? Right? Even if they don't have the same job as me. Right? Or am I inviting people who I just hadn't thought of? Right? And hopefully, you've already done that work at the beginning, with that assumption audit. There's an exercise called power mapping, where I'm deliberately identifying groups that are going to be impacted by this thing, but who don't necessarily have a say in the thing.
And I mean, you get into some very interesting places when you take that approach. Because if you think of something like Uber or Lyft, if I were building that from scratch and I were doing some of this power mapping, I would be noting, “Okay. I definitely want to talk to gig economy workers because they’re probably going to be my drivers. I definitely want to talk to people who don’t have cars because they’re probably going to be my customers.”
But if I’m really thinking about who’s going to be impacted by this, I’m also going to be talking to taxi drivers, because they’re definitely going to be impacted by it. They’re not my customer to an extent they’re actually my competitor. Right? And traditionally, it wouldn’t even occur to you to talk to them, but we live in a society and they are going to be very impacted by it. And honestly, long-term, if you don’t talk to them, you’re going to talk to them, it will just be in a courtroom instead of in a user test, right? Or at a city council meeting, right? This is what actually happens.
So, I think it sounds dangerous to do that, but I think if you include them in the process, you actually find some really interesting solutions. Long story short, there's a thing that played out in Taiwan, where after going through this really interesting process, which I'd be happy to talk about, they arrived at a solution where Uber technology was put into taxis. Right? And if you think about that, that makes Uber happy, because now they're selling to a huge group of buyers, right? Lots and lots of taxis in Taiwan. And it makes the taxi drivers happy, because now more people are going to use their service. And from an exploitation standpoint, what Uber is selling exploits fewer people, because the business model isn't relying on overworking people or paying people less than they're worth. It's like, "No, here's the product, you bought it. I got my money. Let's move on." It's not that same kind of, "I need to keep getting you to work for as little money as possible" kind of thing. So, there's all these benefits, but you never get there if you never talk to taxi drivers.
Randy Silver:
Okay. So, the reaction I’ve got as someone who’s tried to fight the fight internally in companies, is this is going to sound expensive, and it’s going to sound like it’s disproving things that my stakeholders told me that they want, it’s adding time, it’s adding friction. And sometimes they’ll tell them things that they don’t want to hear. So, even if I want to do, even if I want to push for this inside of my organisation, how do I advocate for it? How do I get people to buy in? And how do I convince them that this is the right thing to do?
David Dylan Thomas:
Yeah. So, this is literally the topic of my new talk, which is literally called, That's Great, but How Do I Convince My Boss? Because that question, right? That's the question. Every conference I've gone to, no matter what the talk is about, there's always, during the Q&A, someone who says, "Okay, that was really cool what you said, but how do I convince my boss?" And it's a legit question, right? Because there's a lot of great ideas out there. But if the boss or the stakeholders don't buy in, what difference does it make? So, literally the middle of my book, I think it's chapter three, is basically, it's not called this, but it's basically how to Jedi mind trick your boss. And it's going along-
Randy Silver:
I’m sorry, you said, are you using bias for good or for bad there?
David Dylan Thomas:
Well, I’d say, it really depends how you use it. Right? I mean, you said you were fighting the good fight, so I’m taking you at your word. No, but especially just acknowledging that, just like your users, just like you, yourself, your stakeholders are also making 95% of the decisions below the threshold of conscious thought. They’ve got these tonnes of biases, they don’t even realise they have. And they’re just acting that out. So, know that before you go in and talk to them.
So, there’s several different ways this plays out, right? There’s one thing of most large companies are risk averse, because they have more to lose than a small company. And that risk aversion comes with certain techniques, certain things that are more persuasive than others. So, they discovered a long time ago that if someone is risk averse and you want them to make a decision, they might think is risky, it’s better to point out the downside of not taking the risk than the upside of taking the risk. Right?
So, if you have this lousy CMS, that software that people have been using for years and everybody hates, and I go in there and I say, "Hey, there's this shiny new CMS we should use. And if you buy it, we're going to make all this money. And unicorns are going to walk down the hallway and we're going to hire all these people," to the boss, it's not super convincing. Or it's not as convincing as going in and saying, "Hey, if we keep using this crappy CMS, here's how many people are going to leave. Here's how much money we're going to lose. Here's how long it's going to take to find the replacements. Here's how much money we're going to lose while we train the replacements. Here's how much productivity is going to go down."
And by the time I finish painting that apocalyptic picture, all of a sudden that shiny new CMS seems like a really good idea. And I liken it to myself: I am a homebody. Even if it weren't coronavirus, it'd still take a lot to get me to go, "Hey, David, it's beautiful outside. You should go outside." And I'd be like, "Yeah, but Netflix and blanket." If, on the other hand, you said, "Hey, Dave, the kitchen's on fire," I'd be like, "Oh, tell me more about your going-outside idea. I'm really interested now." Right?
Lily Smith:
Okay. So, I always liked doing the… Is there a time when we don’t need to worry about this question? Is there a time when we can relax and be like, “Nah, it’s fine. We don’t need to worry about this because it’s not an issue for this product or this point at which the product is being developed.”
David Dylan Thomas:
I think you can relax a little when it comes to positive biases that you inherit from leadership. Right? So, I’ve worked at places where the work-life balance is respected because the people who founded the company hated 80 hour work weeks. Right? So, it was in their DNA to write SOWs that only allowed clients to check in 8:30 to 5:00 PM, Monday through Friday. Right? And not expect their workers to be doing 80 hour work weeks and have all sorts of red flags in place to make sure nobody was being overworked. Right? So, that was something you didn’t really have to convince anybody about. And it was baked into the DNA. And there’s a little upkeep there, but for the most part, you can be like, “You know what? I bet they’re not going to work anybody to death. They’re generally going to fall inside of that.” Because that is one of the biases they came with. Right?
On the other hand, that same company might not be down with salary transparency. Right? And it's because it's just not a thing that they grew up with. It's not something they've encountered before. And it could have proven effects, which it does, around reducing pay disparity between men and women. But that's not something they inherited. So, they need that outside perspective to come in and say, "Hey, I think you might have a blind spot around how you're paying your workers. Right? So, how do we think about that?"
So, the short answer is no, there's really never a time you can't think about it, but I do think it's worth knowing what you're getting "for free". I'm making air quotes for those of you listening to this and not seeing me make air quotes. Because maybe the people you're dealing with already believe in salary transparency, or already believe in always posting the salary when they post the job, or whatever that thing is. And then saying, "Okay. Well, where are…" There are always going to be blind spots, but they're not always going to be the same blind spots. So, what can you actually leverage that you've already got going for you? What values are they already aligned to that you can then build on?
Lily Smith:
So, is that assessing the culture of your business to understand how that’s going to be applied to product development? Is that what you mean?
David Dylan Thomas:
Yeah. And I think that’d become part of the secret sauce when you’re trying to make your argument to stakeholders, right? Because no one likes to see themselves as the bad guy. And if you can point to this one thing and say, “Hey, look, we’re really great in this one area. And I know it’s one of our core values. I know you believe.” One of the best ways to start an email is, “I know we both agree that…” Right? Because you’re already starting from a place of, “Look, we’re reasonable, we get each other. We’re cool. We get this, right? I know we both agree that blah, blah, blah. What if, right? Maybe doing X gets us closer to that thing, that we both believe in, because we both believe in it. Right?”
Frame it not in terms of, I'm coming out with this wild hippie ask that no one's ever heard of before, that I'm just trying to look like a social justice warrior, which, by the way, I've never understood why that's a bad thing, [inaudible 00:32:38] social justice. But fine, if that's the parlance. I'm not trying to do the right thing just to do the right thing here. It's like, "No, we both believe in this, this company was founded on the principle." Right? It's like a political speech. "This company was founded on the principles of this. And if we do that, we're just going to get even closer to that. We're already doing great. And here's how we get just a little bit closer. And isn't that what you want, and aren't you being judged on this?"
That’s another great thing. Find out how people are being compensated or how they’re getting their bonuses, or whatever that thing is. There’s a lot of companies now where it’s, inclusivity has been added to performance metrics. So use that. It’s like, “I know you got this inclusivity goal right here. I can help you get it.” Right? Whatever that piece is.
But so, I think, yes, it's very much about assessing and understanding, a, what the actual stated cultural values are, and then seeing that delta. It's like, "Well, here's where they actually are aligned to that. And here's this place where they say they want this, but they don't really perform that way." And pointing that out, again, in a very nurturing way, like, "We all agree that value number five is a good value. Maybe we could get a little closer if we did this." Right? I think that's a very important part of the fight.
Randy Silver:
Yeah. Stakeholder motivation mapping is one of the things I've learned over the past couple of years that's been the most valuable. Are there any other biases that you want to talk about, that we should know about, something that's maybe not quite as prominent or not talked about as much? Any other favourites?
David Dylan Thomas:
Sure. So, I say that I'm out to change hearts and minds and budgets, because the truth is, until something is in the budget, it doesn't really exist. Right? Because once it makes it into the budget, it makes it into the project plan, and it makes it into all these other things… It seeds its way out from there and becomes routine. That's the ideal: if I decide to do a power map or a red team, blue team, it's not a one-off that we did one time on a project. It's like, no, it's standard operating procedure.
And the trick to make something standard operating procedure, one of the tricks anyway, is to leverage something called consistency bias. Right? So, at the last place I worked, this is actually part of how we started to try to roll out inclusive design. One approach would've been to say, "Okay, we're going to form the inclusive design committee and we're going to meet every Monday. And we're going to drop a big plan for how we're going to roll out this entire new design process around inclusive design." And we thought about that for five minutes and we were like, "Okay, that is basically a meeting engine." Right? We're not going to change anything so much as we are going to set up lots of meetings.
So, instead we took an approach that was more like, "What's literally one thing we can do that's more inclusive than what we do now?" And so, we settled on assumption audits, which is the thing I was telling you about before: this is who we are, this is how that's going to influence the design, this is who is not here. That little thing, that's like, maybe a two-hour meeting. Right? And so, we said, "Okay, let's introduce that on a few projects. And if that goes well, then we'll add a red team, blue team."
And the reason that approach works is because of consistency bias. And the way that works is, there's an experiment where I can go up to you and say, "Hey, I want to put this big honking sign on your lawn." And you'll say no. Right? But if instead of that, I say, "Hey, I've got this really tiny sign, you won't even notice it, can I put this itty bitty tiny sign? You won't even notice it's there." Right? You're more likely to say yes. If I then come back two weeks later and I say, "Hey, can I put this big honking sign on your lawn?" You're actually more likely to say yes, because you're sign people now. We like to make decisions that are consistent with our former decisions.
And so, if I go to my boss and say, "Hey, can I take up a whole day to do a red team, blue team?" It's like, "I don't know." But if I'm like, "Hey, can I just take two hours, this whole project is like a thousand hours, can I just take two hours and do a little assumption audit on this whole thing?" They're more likely to say yes. And if that goes well on a few projects, then I can come back and say, "Hey, can I get just one day, it's a 200-day project, can I get just one day?" And they'll be like, "All right. I guess so, because I guess we're inclusive design people now."
Lily Smith:
Yeah. That’s a great tip.
Randy Silver:
Dave, this has been absolutely fantastic. I’ve got one last question for you. It’s a little bit off the topic that we were covering. But you’ve made your living for a number of years now as a content strategist. A lot of product people haven’t had the chance to work that closely with one. And I’m just curious, what’s the thing that you see product teams consistently get wrong? If there’s one thing we should know about working with content strategy and content strategists, what would you recommend?
David Dylan Thomas:
Have us at the kickoff. No, seriously, to be honest, I've done way more, like, website content strategy than product content strategy. But the product content strategy I've done is usually, "Okay, we've got the flow. Give us some words now." And it's funny, as an exercise to point out the flaws in a content strategy, or even in a UX flow, that's actually a really great way to do it, if you don't mind that you wasted your time creating this flow. Because as I try to create copy, or as I try to create these interactions for the product, I notice, "Okay, so they're supposed to produce this video as part of the flow, but you didn't tell them that they needed the video before they started the flow. Maybe you should do that." Right? Ideally, you work that out earlier. But if now's when you want to do that, okay, we can do that now.
I’m not saying anything one million UX writers haven’t said before, but yes, have us there kickoff, actually, have us there at the estimating session, have us there when you have the dream of creating the product. The earlier you have us in there, the more value we can add because we’re basically approaching the exact same set of problems just with a slightly different toolset in a slightly different way of thinking about information. Right?
I mean, again, it’s nothing new. It’s not an assembly line. It’s more like, “Hey, there’s this one thing we’re all working on. And UX has this point of view. The research has this point of view. Engineering has that point of view. Content has this point of view. And if you get us all there working on it together, pretty much at the same time, you get…” And again, it’s that diverse perspectives. You get way better results, if you’re like, “Okay, you take a shot. All right, you take a shot. Okay. Now you take a shot.” And that’s just an easy way again, to just generate meetings and be like, “Oh yeah, we didn’t see that. Okay. Start over.”
Lily Smith:
Yeah. Dave, it’s been so great talking to you today. Thank you so much for joining us. It’s been an absolute pleasure.
David Dylan Thomas:
Thanks so much for having me.
Randy Silver:
So, Lily, I’m curious, we always end the intro saying with no further ado. What’s your problem with ado?
Lily Smith:
There’s definitely an acronym, I read somewhere like agile doughnuts or an orangutan. I don’t know, I have no idea.
Randy Silver:
Okay. If you’re talking agile and doughnuts, that means you’re talking about Ken Norton. And we may have to invite him back on one of these days. In the meantime, we’ve got other fabulous guests coming up that we’re not going to tell you about today, you’re just going to have to subscribe and get next week’s episode.