The hidden UX of AI: What every PM needs to know - Nina Olding (Weights & Biases, Gemini, Meta)

November 19, 2025 at 04:46 PM

In this episode, Nina Olding, Staff Product Manager at Weights & Biases and formerly of Google DeepMind, where she worked on trust and compliance for AI, joins Randy to explore the UX challenges of AI-driven features. As AI becomes increasingly woven into digital products, the traditional UX cues and trust signals that users rely on are changing. Nina introduces her framework of the three "A's" for AI UX: Awareness, Agency, and Assurance, and explains how product teams can build trust into their AI-enabled products without launching a massive transformation programme.

Chapters

  • 00:00 – Intro: Why AI products are failing on trust
  • 00:47 – Nina Olding's journey from Google DeepMind to Weights & Biases
  • 03:20 – The UX of AI: It's not just a chat window
  • 04:08 – Introducing the Three A's framework: Awareness, Agency, Assurance
  • 08:30 – Designing for Awareness: Visibility and user signals
  • 14:40 – Agency: Giving users control and escape hatches
  • 21:30 – Assurance: Transparency, confidence indicators, and humility
  • 28:05 – Three key questions to assess AI UX
  • 30:50 – The product case for trust: Compliance, loyalty, and retention
  • 33:00 – Final thoughts: Building the trust muscle

Key Takeaways

  • As AI features proliferate, the UX challenge is less about the technology and more about how users perceive, understand and trust the interactions.
  • Trust in AI-enabled products rests on three foundational dimensions: Awareness, Agency, and Assurance.
  • Awareness: Make it clear when AI is involved (and when it isn't). Invisible AI = risk of misunderstanding. Magical AI without context = disorientation.
  • Agency: Give users control, or at least the ability to opt out, define boundaries, and choose between simple defaults and advanced settings.
  • Assurance: Because AI can be non‑deterministic, you must design for confidence—indicators of reliability, transparency about limitations, ability to question or override outputs.
  • Many organisations treat AI as a "feature" rather than a shift in UX paradigm. But the shift is major: the interactive cues and trust signals we've relied on may no longer apply in the same way.
  • Simple implementations matter. You don't need a massive replatform to start building trust. Even small design changes (badges, disclosures, data-handling transparency) can start to build the muscle.
  • From a product-management perspective: ask the three assessment questions (do users know when AI is active? can they control it? do they understand why it did what it did?) early and often. If any answer is "no", map out remediation.
  • The business case: building trust is both a retention and competitive advantage today and a hedge against the regulation that will arrive, so investing in it now is strategic.
  • For PMs working on AI features: collaborate closely with design, data, engineering and governance to ensure the UX signals for the "three A's" are baked in, not an afterthought.


Episode Transcript

Intro: Why AI products are failing on trust (00:00)

Nina Olding: This is growing. Like as AI adoption is growing, AI distrust is also growing. Like your product is going to fail.

Nina Olding: Working with AI is complicated for all kinds of reasons, but one of the most exciting parts of that is that we're still in the foothills of understanding what the UX of a fully dynamic experience should be.

Nina Olding: It can be very magical, but it's also like disorienting because you're missing a lot of the interactive signals that you usually get when you're deciding whether or not to build that trust. Does your user know when AI is in your product? With AI, we don't actually even always know why the product is doing what it's doing.

Randy Silver: Hey, it's The Product Experience, and I'm Randy Silver. Nina Olding left Google DeepMind to take on the challenge of the hidden UX of AI at Weights & Biases. And her model focuses on three things: awareness, agency, and assurance.

Nina Olding: It doesn't need to be a gigantic initiative that your corporation undertakes. No, it should just become a muscle that you build.

Nina Olding's journey from Google DeepMind to Weights & Biases (00:47)

Randy Silver: Nina, thank you so much for joining us here in Cleveland. How are you doing?

Nina Olding: I'm doing really well. Thank you so much for having me.

Randy Silver: How's the conference going for you so far?

Nina Olding: It's awesome. There is a lot of energy and excitement and yeah, it's been really fun.

Randy Silver: So, you're doing a fantastic talk here at the conference, but for anyone who's not lucky enough to be here with us, just do a quick introduction if you don't mind. What are you doing these days and how did you get into the world of product and design in the first place?

Nina Olding: Yeah. So, like many people, I've had kind of a roundabout entry to product. I've been a PM for over 10 years at a variety of different places. Right now I'm three weeks into my new gig as a staff PM at Weights & Biases, which was recently acquired by CoreWeave, so I'm building products primarily for ML engineers, researchers, and folks who are building AI. I'm transitioning from three years at Google and Google DeepMind, where I worked on Gemini, on a lot of user trust and compliance features, and I helped build the temporary chat feature that just shipped, which was very cool to work on. Before that I worked in ML at Zendesk and some startups, and, yeah, I fell into product.

Nina Olding: You also asked how I got into product, and that is the million-dollar question. I have people asking me all the time how to break into product, and I think everyone's got a different answer. For me, I got really lucky, which I think happens to most people who end up in product, because it's awesome and it does require a degree of luck. I was working in—I have a Scottish law degree and I—

Randy Silver: As one does.

Nina Olding: As one does. I didn't want to live in Scotland or be a lawyer, unfortunately, although it was a fun experience, and so I was kind of aimless for a little while. For a couple of years I worked in client services and account management. I didn't know exactly what I wanted to do, but I really liked working on the data storytelling side, doing software implementations for clients, and I realized, "Oh, I think I'm more technical than I thought." So I ended up doing a coding boot camp and gave notice to my employer, a services company, and they said, "Wait, stay and help us build this app," which hadn't really been defined. And that was it. I loved it and I've been doing it ever since.

Randy Silver: Fantastic.

The UX of AI: It's not just a chat window (03:20)

Randy Silver: So your talk here is about the UX of AI, and I'm going to be slightly facetious with this, but obviously we all know it's just a chat window, right? That is what people think of: an LLM interface. But AI shows up everywhere, all the time now, and it's going to get more and more prevalent and pervasive until it's just everywhere. There's a tipping point somewhere in the future, and we're not there yet. And the UX of AI is how your users perceive and interact with AI in your product. The way you frame it, though, I really liked. You've got, well, a framework or model, whatever you want to call it, a pyramid or a triangle of A's. Tell us what those are.

Introducing the Three A's framework: Awareness, Agency, Assurance (04:08)

Nina Olding: I do love a good bit of alliteration. So the three A's are basically, and I'll back up a little bit, about making sure your user has all of the context and the clues and the signals that humans rely on when they decide whether or not they are going to put their trust in someone or something, in this case your product. With AI, a lot of that gets abstracted away. It can be very magical, but it's also disorienting, because you're missing a lot of the interactive signals you usually get when you're deciding whether or not to build that trust. So the three A's are: awareness, which is basically, does your user know when AI is in your product? Lots and lots of people are afraid that AI is active when it's not, and they'll use that to inform their decision-making, so be really clear with your users so that they can be aware when AI is there. Agency, which is about giving your users control: making sure they have the power and the ability to set boundaries, that you provide escape hatches, and that you allow your users to dictate when and where the AI, within reason, is operational within your product. And then assurance, which gives your users confidence and the context to understand why the AI is behaving the way that it is.

Randy Silver: I love it, because there are things there that are really explicit, broadcasting out what you're doing with intent, but you're also covering all the implicit signals as well. How did you get to this model?

Nina Olding: Yeah, it's such a new way of interacting. We've developed all these best practices around UX and UI over the past however many decades of human-computer interaction, and now there are these new non-deterministic products and modes of interaction that the old stuff just doesn't necessarily cover. So some of the explicit things are just reminding people of best practices: use labels, use persistent reminders, ask people for permission when you do stuff with their data. This is very obvious, but the acceleration of AI has made it even more critical. And then there's so much now that's more under the hood and hidden, and that makes the implicit stuff even more important.

Randy Silver: I saw a really lovely talk years ago, I think it was Amber Case who gave it, about ambient awareness. It had nothing to do with AI, but it was about what you can do with, say, the color of the lights in a room to signal what kind of weather it's going to be or how much warmer it will get, or about being intentional with the sound design of things. Some things need to be a really discordant, sharp, annoying sound, while others just tell you in the background: there's a message, it's not important, look when you're ready.

Nina Olding: 100%. I feel like users don't even necessarily know why they're uncomfortable, or why they're really comfortable, why they like your product; it's these cues that they get. Do you remember when tap to pay with credit cards was rolling out, and you would tap, and if you hadn't held it long enough it would be like [makes sound]? I chatted with someone once who was working on this, and they had to do a large-scale rollback because the sound wasn't appropriate for the type of interaction.

Randy Silver: But yeah, I think that's a really good example.

Nina Olding: And you mentioned ambient too. When I worked at Google, I worked on Assistant, and a lot of what I spent my time thinking about was how to make users aware of when we are and are not listening, and what data we're processing, to help them understand. That informs a lot of my thinking now: okay, users are trusting us with their data, so let's make sure we're deserving of that trust and that we make them feel good about it too.

Designing for Awareness: Visibility and user signals (08:30)

Randy Silver: Okay, let's dig into each of these three A's. The first one is awareness. Tell me about one of the metaphors you use: invisible versus magical.

Nina Olding: Yes. So, two things. One is that AI is not a product or a feature. I think a lot of people are like, I'm going to add AI to my product. Like it—

Randy Silver: No, it's magic pixie dust.

Nina Olding: It's magic pixie dust, but you use it when it makes sense, right? So it shouldn't be "our product is AI now." It should be "we built this really great feature, and it works because we use AI in this way." But AI is often in the plumbing of your product, like I said earlier, under the hood: there are things we're doing with data processing and data handling, making predictions, combining different models, and that is pretty abstract to a user. They can't see it happening. So whatever the product is that you're working on, say you're generating insurance quotes, you put in your information, some magic happens, and it delivers you the quote. As a user you don't understand what the mechanics behind that were, and to a degree you don't care, but you do want to know that it was fair and what happened; otherwise you're going to make up stories to explain it.

Randy Silver: So actually, let's be more specific about AI for a moment, because we keep using that term. Earlier we were talking about chatbots as an interface, a chat interface, which is LLMs, but there's also RAG, there's RPA, there's ML. Does this approach work equally well, is it equally important, no matter which mechanic in the AI tool set we're using, or is it more specific to one or two?

Nina Olding: That's a really good question. I don't think the specific type of ML technology is super relevant; if you're using ML at all, I think that's fine. But I think it's really, really relevant when you are not an AI-native, core AI product. It's kind of hard to opt out of using ML if you're, you know, talking to Claude.

Randy Silver: Yeah.

Nina Olding: But if you are, you know, going to book a flight through a flight booking platform, you don't know if it's an AI product or not, and it kind of doesn't matter, but I'm sure that users think about it and that they do make things up to explain what they're seeing.

Randy Silver: Is there a fundamental difference? Let's use flight booking, because that's been algorithmically generated forever, and whatever the technology under the hood is, the user experience is largely the same, unless you're doing it in an agentic way. So is it AI-specific that we need to adapt our approach in this case, or does it not matter whether it's algorithm-based?

Nina Olding: I don't think it matters much whether it's algorithm-based. You can build a very straightforward rules engine that just says, okay, show this type of flight first (and actually I worked on a hotel booking platform that did exactly this), but I do think that AI has heightened users' sensitivity and awareness around this. I'm here in beautiful Cleveland, and when I arrived at the airport my Uber driver was like, "Oh, you're giving a talk, what's it about?" And I said, "It's about building trustworthy AI products." And he was like, "Oh, what is AI? I keep seeing this. Is that the same as A1? What is this A1?" And also, from having worked in this space (this was before I was deep in ML), I did work on some internal ML tooling for safety at Facebook, and the stuff that I saw people saying about the way we did things at Facebook... they just don't trust us. So building in these signals where we're using AI allows us to earn that trust. The New York Times does this so well. They are so clear about disclosing when they've used AI in synthesizing data and sources for a story. Or, I play Connections, and when you get to the end it uses AI to group the wrong guesses. And I trust now that when I'm reading the New York Times and they don't disclose that they're using AI, they haven't used AI. The presence of their disclosures makes the absence of a disclosure also feel comforting to me.

Randy Silver: It's interesting. Listening to you say this, I'm thinking, okay, but Nina, you're a very high-information user of this, a high-information consumer of these experiences. And you're also talking about Facebook, where nobody trusted them. So people are going to be making up reasons for things in their minds; they're going to assume AI regardless. So I'll just ask this one more time and then we should absolutely move on and dig further into what you're doing: should we be applying these rules whether or not we're using AI, because people might assume we are? And I said rules, but I don't mean rules.

Nina Olding: Yeah. I feel like what you're suggesting is that, to go back to the flight booking example, you would say at the top, you put in your flight details, and we would explain why they're getting these results first. I'm not 100% sure.

Randy Silver: I'm just asking in terms of the best practices that we'll get into: should we only be using these when we know behind the scenes that we've used AI approaches, or should we be applying them regardless, because some people might assume we are?

Nina Olding: I think in an ideal scenario we would be clear with users about how we're processing their data and how we're making recommendations to them. Practically speaking, that's not always possible, and I think the key for me comes down to deterministic versus non-deterministic use cases. If you have a rules engine, you know: these are the inputs, these are the outputs. We've been doing this since before computers; you would apply for a loan and someone would look at a list of criteria and spit out a result for you.

Agency: Giving users control and escape hatches (14:40)

Nina Olding: The difference to me is that with AI we don't actually even always know why the product is doing what it's doing.

Randy Silver: And well, we didn't always know what was in the person's head either.

Nina Olding: That's true, that's true, and I do take your point. But I think that they could be held responsible, whereas the black box can't be.

Randy Silver: Exactly.

Nina Olding: And I think there's the user's fundamental lack of trust. There was a Pew Research study that said something like 50% of Americans think AI is present when it's not, and they worry about that. This is just a time when we have to be really, unusually attuned to our users' needs around these disclosures.

[Advertisement break for Mind the Product AI training courses]

Randy Silver: That's a really good answer. Let's go back to talking about awareness. There were a few practical tips you had about how to apply this, how to do it well. Can you share a couple of those, if you don't mind?

Nina Olding: Yeah, totally. The ways that I've seen folks doing this really well are around disclosures, both in your docs (though, let's be honest, a lot of people don't look at disclosures in docs) and in product: disclosures when you're surfacing results, badges, LED lights, which I think we talked about a little earlier. Just make sure there's something very visible, or a little sound, something that is going to be noticeable and persistent for your users when AI is active. The other thing that's really important for users to be aware of is data: be sure your users understand what you're collecting and when you're collecting it. If you have time limits around how long you're going to retain something, is it stored in memory? Your users don't need to understand the mechanics, but they do need to understand whether you're going to keep it persistent beyond a session. Is it five years? Is it forever?

Randy Silver: Exactly. Exactly.

Nina Olding: There was a big data breach (I've never used this app) with Temu, I guess, where they were just keeping everyone's everything forever. Don't do that, probably. But be really clear with your users about what you're using, and about the underlying model: where you are getting your data from and what it's being trained on. That helps your users to be more comfortable, because I think the biggest things users are concerned about are accuracy, fairness and bias, and then their data privacy and security. So if you can cover those two pillars, awareness and how you're using and treating your users' data, you'll be in really good shape.
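As a rough sketch of the lightweight, in-product disclosure Nina describes here, the TypeScript below shows one way a team might centralise "is AI active, and what happens to the data" messaging so every surface renders the same badge copy. The types, names, and retention wording are illustrative assumptions, not code from any product mentioned in the episode.

```typescript
// Hypothetical sketch: a single source of truth for AI-awareness messaging.
// Names and fields are illustrative, not from any product discussed in the episode.

type RetentionPolicy =
  | { kind: "session-only" }       // discarded when the session ends
  | { kind: "days"; days: number } // kept for a bounded period
  | { kind: "indefinite" };        // kept until the user deletes it

interface AiDisclosure {
  featureName: string;     // the user-facing feature, e.g. "Smart reply"
  aiActive: boolean;       // is a model actually involved right now?
  usesUserDataForTraining: boolean;
  retention: RetentionPolicy;
}

// Turn the structured disclosure into the short, persistent copy a badge or
// tooltip would show, so every surface says the same thing.
function describeAiUse(d: AiDisclosure): string {
  if (!d.aiActive) {
    return `${d.featureName} does not use AI.`;
  }
  const retention =
    d.retention.kind === "session-only"
      ? "Your data is discarded at the end of this session."
      : d.retention.kind === "days"
      ? `Your data is kept for ${d.retention.days} days.`
      : "Your data is kept until you delete it.";
  const training = d.usesUserDataForTraining
    ? "It may be used to improve our models."
    : "It is not used to train models.";
  return `${d.featureName} uses AI. ${retention} ${training}`;
}

// Example usage
console.log(
  describeAiUse({
    featureName: "Smart reply",
    aiActive: true,
    usesUserDataForTraining: false,
    retention: { kind: "days", days: 30 },
  })
);
```

The point of the structure is the one Nina makes: the disclosure is defined once, so the badge, the settings page, and the docs cannot drift apart.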

Randy Silver: Okay. So you've made people aware of what's being used, how, and when. Now you want to give them some agency. How do you do that?

Nina Olding: Yeah. I think about this in terms of users being able to define the boundaries of the AI use, and it's sort of a multi-dimensional thing. So: your users can turn the AI on and off. Obviously this is not going to apply to everyone's product all the time, but maybe they can turn the AI off when they need to. Provide them escape hatches so that they can toggle it, and have that actually be operational, not just a placebo toggle.

Randy Silver: Yeah.

Nina Olding: Then, simple defaults and deep optionality. A lot of this is just product best practices that we need to be reminded of, but meet your users where they are. Some users will jump in ready to go and customize all their settings. That's probably not most of your users, but those users will really value and appreciate that ability. So, for example, you might want really atomic, discrete settings around data use: yes, you can use my data for training in these circumstances, but not in those. Allow your users to customize and configure, and then, like I said, meet your users where they are by setting the defaults in a way that respects them and matches what a user might reasonably expect when they're first using the product. One example of this that I brought up in my talk (I was going to talk about it before GPT-5 came out) was model routing, because I think it's a really interesting problem, and then GPT-5 came out and made it even more relevant. For anyone who isn't aware of what happened: before the release of GPT-5 there was a model picker, so the user would go in and select whatever model they thought was most relevant for their use case. They haven't released this data publicly, but I'm pretty sure most people didn't use that model picker. You had most people using the vanilla model, which probably meant they weren't getting the most bang for their buck: they weren't using thinking models where it would have made sense to use a smarter model, and we were probably also wasting a ton of compute on very straightforward queries that could have used a more lightweight model. So what OpenAI did with ChatGPT was essentially take away the model picker and implement routing based on the user's query, which is super smart and probably worked really, really well for like 90% of people.

Assurance: Transparency, confidence indicators, and humility (21:30)

Nina Olding: You were giving people the better model for their use case, but for the 10% of people who actually used the model picker, you took away this incredible tool they were using all the time, and you took away the ability to choose their models. They've sort of fixed this now by making it so that you have the auto picker by default, and the user can then go and select the right model.

Randy Silver: I think that's really interesting. By definition, anyone who is working in this space at this point is highly educated about this and is potentially the kind of person who wants to pick their own model. We are advanced in this; most people aren't. I mean, I wouldn't know which model to use 99% of the time, and even if I thought I did, I'd probably be wrong a lot of the time.

Nina Olding: It's changing every week anyway.

Randy Silver: And depending on which question I have, there are all kinds of benchmarks, and I'd potentially have to change the model picker three times in a prompt session.

Nina Olding: Totally. And you do have power users toggling back and forth between the models, but for most people that's not necessary.
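Nina's description of the routing fix, auto-routing by default with the model picker kept as an escape hatch, could be sketched roughly as below. The heuristic, model names, and function names are invented for illustration; a real router would classify queries with far more signal than a keyword or length check.

```typescript
// Hypothetical sketch of "auto by default, override if you want to":
// the user's explicit choice always wins; otherwise a cheap heuristic routes.

type ModelId = "lightweight" | "standard" | "reasoning";

interface RoutingRequest {
  query: string;
  userSelectedModel?: ModelId; // set only when the user opened the picker
}

// Toy heuristic standing in for a real query classifier.
function autoRoute(query: string): ModelId {
  const looksComplex =
    query.length > 400 || /step by step|prove|analyse|compare/i.test(query);
  return looksComplex ? "reasoning" : "lightweight";
}

function chooseModel(req: RoutingRequest): { model: ModelId; reason: string } {
  if (req.userSelectedModel) {
    // Agency: an explicit choice is an escape hatch the router never overrides.
    return { model: req.userSelectedModel, reason: "chosen by you" };
  }
  const model = autoRoute(req.query);
  // Awareness: surface why this model was picked instead of hiding the routing.
  return { model, reason: "picked automatically based on your request" };
}

// Example usage
console.log(chooseModel({ query: "What's the capital of Ohio?" }));
console.log(chooseModel({ query: "Prove this step by step...", userSelectedModel: "standard" }));
```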

Randy Silver: So when we're providing people agency, there's one trap: I have good intentions, but I'm modeling this on the knowledge that I have, and I'm advanced, so I'm either overestimating or underestimating my consumers. And then there's the other way of going about it: I'm going to be maliciously compliant and give them nine gazillion options, so that yes, they have all the power and it's on them, but they're not informed enough or they don't have the time to use it. Is there an example of good in this space? Who's doing this well?

Nina Olding: Well, I think what ChatGPT actually landed on after the huge uproar was amazing. Now, when you log in, the default experience is that there's the auto picker and you don't need to worry about it, but you can still go and choose a model if you want to. And then (I don't want to misspeak on this, so I won't say the name of the company) I did see a company with a very popular text editing and refinement product, a household-name brand, that was training on all of their users' data by default. They still are, actually, but you couldn't turn it off without turning off a bunch of functionality, and I think they changed that and made it so you could turn it off. But you still have to go and look for that, and I—

Randy Silver: Yeah. Yeah. It doesn't feel great.

Nina Olding: Yeah. And every once in a while something will come out, whether it's Slack doing some training on stuff, with settings and various things, and you'll see posts in forums for admins, or for anyone who's got a paid account: these are the settings you want to change because the terms and conditions have changed. You see stuff on tech blogs, for non-techy people I guess, that are like instruction manuals for how to change your settings so that they respect your privacy, and that's a little depressing.

Randy Silver: It is. But, you know, people have always wanted to play with things. It's the fine balance of whether you're allowed to play with things or not. I remember I went for an Android phone because, as good as iOS is, I didn't like the fact that it wouldn't let me install whatever app I wanted, that I couldn't break it if I wanted to. I wanted to know that it's my device and I could play with it, even if I know what iOS is doing is protecting me and is objectively an excellent experience. But—

Nina Olding: Totally.

Randy Silver: Anyway, well, this is another example.

Nina Olding: My husband's an Android user. I actually love my iPhone, it's very easy, but my husband does all kinds of funky stuff with his phone, and he was very irritated with some of the Gemini rollout, which was funny because I was working on Gemini at the time. But he really appreciated that there was an escape hatch and he could opt out of, you know, Circle to Search and that type of thing.

Randy Silver: So that brings us to the third A, which is assurance. You're making people aware of what's there, you give them the agency to do things, but you want to make sure that they trust what you've been doing.

Nina Olding: Yeah. I think what we've seen a lot of in the last couple of years is overconfident AI, and it has been pretty comical at times. Hallucination has gotten a lot better. I've seen a lot of people object to the term hallucination because it's anthropomorphism, but I'm going to keep using it anyway, because everyone knows what it means and we don't have anything better.

Randy Silver: And we don't have anything better. It works fine.

Nina Olding: But when AI hallucinates, you can sort of research it, but you have to know what you're looking for. I love it when I use a product and it tells me that it's not sure.

Randy Silver: Yeah, it's the same thing with people.

Nina Olding: Yes. Someone who's overconfident, who is mansplaining things, that's not a good experience.

Nina Olding: Can you imagine having a junior employee, and you bring them into the board meeting to report on the news direct from the front lines of your company, and they come in and they just make things up—

Randy Silver: Well, my son tries this all the time. I love him, he's a really smart kid, but every once in a while he's just confidently telling us something that's complete crap, and we look at him and say, "Stop child-splaining to us."

Nina Olding: I love that. Yeah. My kids are really little, but they'll sometimes say, "I know more than you." And I'm like, "You just don't."

Randy Silver: Possibly about Paw Patrol, but I mean, definitely.

Nina Olding: But yeah, AI is super, super smart and also so, so dumb. And this is actually really hard to implement, but any sort of confidence indicator or threshold, or even refusing to give an answer, is better than giving the wrong answer, and will actually inspire more trust from your users than giving the wrong answer.

Randy Silver: Is humility a good word in that?

Nina Olding: Yeah, I think humility is a really great word. Yeah.

Randy Silver: Okay. Yeah, your product should have some humility. It doesn't know everything, and it shouldn't pretend to know everything.
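Nina's point about confidence indicators and refusals can be read as a simple confidence gate: below one threshold the product hedges, below a lower one it declines rather than asserting an answer. The sketch below is purely illustrative; the threshold values, and the assumption that you even have a calibrated confidence score to gate on, gloss over the hard part she alludes to.

```typescript
// Hypothetical confidence gate: prefer hedging or refusing over a confident wrong answer.
// Assumes the model (or an evaluator on top of it) returns a calibrated 0..1 score,
// which in practice is the genuinely difficult piece.

interface ModelAnswer {
  text: string;
  confidence: number; // 0..1, assumed calibrated
}

type Presentation =
  | { kind: "answer"; text: string }
  | { kind: "hedged"; text: string }   // shown with an "I'm not sure" indicator
  | { kind: "abstain"; text: string }; // refuse rather than guess

function present(a: ModelAnswer, hedgeBelow = 0.75, abstainBelow = 0.4): Presentation {
  if (a.confidence < abstainBelow) {
    return { kind: "abstain", text: "I'm not confident enough to answer that." };
  }
  if (a.confidence < hedgeBelow) {
    return { kind: "hedged", text: `I'm not certain, but: ${a.text}` };
  }
  return { kind: "answer", text: a.text };
}

// Example usage
console.log(present({ text: "The meeting is at 3pm.", confidence: 0.92 }));
console.log(present({ text: "The meeting is at 3pm.", confidence: 0.55 }));
console.log(present({ text: "The meeting is at 3pm.", confidence: 0.2 }));
```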

Three key questions to assess AI UX (28:05)

Randy Silver: Well, hey, I can add something to this conversation. All right. So, for people who are getting started with this, and this is actually one of the reasons I really loved your talk, you have three very specific places where people can start, three questions that they can ask. So let's go through those. What's the first one?

Nina Olding: Yes, and I hope these are quite practical and actionable. The first question is: do your users know when there is AI in your product and when they're using it?

Randy Silver: Okay. Well, that seems pretty obvious.

Nina Olding: It's pretty self-explanatory. I'm hoping there are PMs out there who are listening or watching and can turn to their product and say, "Oh, shoot, I've missed a badge indicator here," or, on a very practical level, "I have models running under the hood and I need to be clearer with my users here." It's about respecting your user that way, and I think it's really obvious, but it's not actually always executed on.

Randy Silver: Well, let's come back to executing on these in a minute. So the first question was: do users know when AI is active? The second question is: can they control it? Can they turn it on and off?

Nina Olding: Can they understand the boundaries, what it's doing, and what data it's using? It's about showing them the contours of how the AI exists in relation to them in the product.

Randy Silver: And the third question: why did it do what it did?

Nina Olding: And that is maybe the hardest one for people to actually implement, but it's about helping your users to understand, whether that's introducing pointers to sources, giving confidence indicators, or helping them understand what actions the AI took, if any, and why it made the decision it did.

Randy Silver: I love those questions; they're simple and easy to ask. But if the answer to any of them is no, then you're going to want to take action. Ideally, you're going to want to remedy it. But it kind of feels like the old discussions I've had with people about accessibility, where you know you should do these things, but it's sometimes expedient not to, at least in the short term. There was an oversight, and now you have to make the business case that we need to fix this, that we should be thinking about this. How do you have that discussion?

The product case for trust: Compliance, loyalty, and retention (30:50)

Nina Olding: There are a couple of ways I would approach this. The first thing is that you mentioned accessibility. A lot of folks still don't want to care about accessibility (I actually do care a lot about it, and I think it's really important), but they've been regulated into caring. They have to. And I think this is coming for you whether or not you want it to, and you will be in so much better shape if you plan for it now. If you implement these things, you can almost demonstrate compliance before it arrives. The second thing is that your users will love you, and that is worth so much. As PMs, this is very, very important: you want your users to trust you. If your users don't trust your product, and they won't (we've talked about the growing distrust: we went from 19% of users distrusting AI just a couple of years ago to around 50% now), your product is going to fail. As AI adoption grows, AI distrust is also growing, and your product is going to fail if you don't implement these trust mechanisms. So that's two things. One is you will have to care, because someone will make you care. Two is you have to care because your users rely on you; they won't want to use your product, and like we said, they might not even know why they don't want to use it. They just won't.

Randy Silver: So you've got a carrot there and a stick. The carrot is that you're going to increase the trust and loyalty of your users, creating a defensible moat and retention; it's a retention play. And then you've got the stick of regulation coming in some way, shape, or form, whether it's GDPR or something like that. Something will come, and if we are ahead of the curve and doing it well, then maybe it won't be as onerous or as bad, or we'll be showing the legislators of the world what good looks like.

Nina Olding: Yes, that's exactly right. There's a carrot and a stick.

Final thoughts: Building the trust muscle (33:00)

Nina Olding: And then the third thing is that it's actually going to be very easy. The tools we've talked about are really straightforward to implement. There are obviously ways of doing it that are harder than others; showing your users the exact underlying function, you don't want to do that. But implementing a badge where there's AI, or telling your user we're only going to keep this data for X amount of time, or pointing them toward a source when you give them information: these are very straightforward things to implement, and the cost-to-value is so obvious.

Randy Silver: It's not about building a whole new system or rejigging things. This is putting a notification somewhere, putting something into your terms and conditions in plain English.

Nina Olding: Yeah, they're very basic, and to a degree it can be done as and when you have time. It doesn't need to be a gigantic initiative that your corporation undertakes and spends a year on. No, it should just become a muscle that you build, so you're like, "Okay, well, we're going to put this here, because this is the design pattern for our product." Makes sense, yeah.

Randy Silver: Nina, thank you so much. This was a fantastic talk. I really enjoyed it. Enjoy the rest of the conference.

Nina Olding: Thank you so much for having me. It was a pleasure to chat.

Randy Silver: The Product Experience hosts are me, Lily Smith, host by night and chief product officer by day, and me, Randy Silver, also host by night. I spend my days working with product and leadership teams, helping them to do amazing work. Louron Pratt is our producer and Luke Smith is our editor.


About the author

The Product Experience


Join our podcast hosts Lily Smith and Randy Silver for in-depth conversations with some of the best product people around the world! Every week they chat with people in the know, covering the topics that matter to you - solving real problems, developing awesome products, building successful teams and developing careers. Find out more, subscribe, and access all episodes on The Product Experience homepage.
