The Hidden UX of AI - How to build trustworthy AI products: Nina Olding at INDUSTRY 2025

December 22, 2025 at 09:30 AM

Only one in three Americans trusts AI today, down from roughly 50% just a few years ago. While AI adoption is skyrocketing across the tech industry, user trust is heading in the opposite direction.

At INDUSTRY 2025, Nina Olding, Staff PM at Weights & Biases and former Google DeepMind product manager, tackled a challenge that many product teams aren't prioritising: closing this widening trust gap. Watch the video in full, or read on for her key takeaways.

The trust crisis in AI products

When Nina polled the room, nearly everyone raised their hand when asked who was building AI products. But when asked how many organisations had standards or governance around AI implementation, only about a quarter of hands went up.

This lack of governance matters because consumer anxiety centres around two fundamental concerns: security and privacy (what data are you collecting and how are you using it?) and accuracy and bias (is the AI making fair, reliable decisions?). These concerns are amplified by the fact that nobody really knows how AI works.

The danger of the vacuum. When companies don't intentionally build trust patterns into their products, users fill that vacuum with their own mental models, and those models are almost always negative.

"If you show a user a ranking of policies that we recommend for you, they'll think, oh, they're showing me the most expensive policy first. They're trying to rip me off because of AI," Nina explains. "Even if it's not there, they are adding it."

The technology has become a scapegoat for any confusing or frustrating experience. Even when AI isn't actually involved, users attribute negative outcomes to it.

The hidden UX framework

To combat this trust issue, Nina introduces what she calls "the hidden UX", a framework built on three pillars: Awareness, Agency, and Assurance.

"It seems really, really straightforward, but actually applying it in practice is the tricky part," she notes. She breaks down how each pillar works:

1. Awareness: Do users know when and where AI is active?

About 50% of Americans worry that AI is acting when it isn't. Until we reach saturation where AI is truly everywhere, transparency is critical.

Nina says to make AI presence visible through badges, watermarks, LED lights, and clear language. But visibility alone isn't enough.

Be crystal clear about data. "Data is giving everyone anxiety right now," Nina stresses. What are you collecting? Why? Are you using it for personalisation or training? How long will you keep it? "Being as crisp as you can in the product, not just in your docs."
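To make that concrete, here's a minimal sketch in TypeScript of how a product might model an in-product disclosure so the AI badge and the data notice are driven from one place. The type and field names are illustrative assumptions, not anything from the talk:

```typescript
// Hypothetical sketch: a typed, in-product disclosure describing where AI is
// active and what data it touches. All names are illustrative.

type DataUse = "personalisation" | "model-training" | "analytics";

interface AIDisclosure {
  feature: string;          // user-facing feature name
  aiActive: boolean;        // is AI involved in this surface at all?
  dataCollected: string[];  // plain-language description of what is collected
  usedFor: DataUse[];       // why it is collected
  retention: number | "until-deleted-by-user"; // days kept, or user-controlled
}

// Render the disclosure as short, plain-language copy shown next to the
// feature itself (a badge plus tooltip), not buried in documentation.
function disclosureCopy(d: AIDisclosure): string {
  if (!d.aiActive) return `${d.feature} does not use AI.`;
  const retention =
    d.retention === "until-deleted-by-user"
      ? "kept until you delete it"
      : `kept for ${d.retention} days`;
  return (
    `${d.feature} uses AI. We collect ${d.dataCollected.join(", ")} ` +
    `to support ${d.usedFor.join(" and ")}; it is ${retention}.`
  );
}

console.log(
  disclosureCopy({
    feature: "Smart reply",
    aiActive: true,
    dataCollected: ["your draft and the incoming message"],
    usedFor: ["personalisation"],
    retention: 30,
  })
);
```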

2. Agency: Can users control it?

Awareness without control leaves users feeling observed but powerless. Control gives users comfort by allowing them to establish their own boundaries.

Nina's design principle: Simple defaults + Deep optionality

Meet users where they are. Some users will never visit settings, so set them up optimally out of the box: don't record without permission, and don't keep data in perpetuity. For power users who want granular control, provide deep optionality over every setting.
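One way to express that principle in code, sketched below with hypothetical setting names, is a single set of conservative defaults that power users can override field by field:

```typescript
// Hypothetical sketch of "simple defaults + deep optionality": privacy-preserving
// defaults out of the box, with every setting individually overridable.

interface AISettings {
  aiEnabled: boolean;
  recordAudio: boolean;         // never record without explicit permission
  useDataForTraining: boolean;  // opt-in, not opt-out
  personalisation: boolean;
  retentionDays: number;        // finite by default, not "in perpetuity"
}

// Safe defaults most users will never need to touch.
const DEFAULTS: AISettings = {
  aiEnabled: true,
  recordAudio: false,
  useDataForTraining: false,
  personalisation: true,
  retentionDays: 30,
};

// Power users can override any subset; everyone else keeps the defaults.
function resolveSettings(overrides: Partial<AISettings> = {}): AISettings {
  return { ...DEFAULTS, ...overrides };
}

const casualUser = resolveSettings();
const powerUser = resolveSettings({ useDataForTraining: true, retentionDays: 365 });
```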

3. Assurance: Do users understand why it did what it did?

If you have awareness and agency, you're in a good place, Nina says. But assurance is what takes trustworthy AI from good to great.

She shares two ways to build assurance:

  • Show your work: Explain reasoning in plain language. If you show users where you're getting the information so they can validate it, you're solving a worry for them.
  • Help users calibrate: Show confidence levels. Plant identification apps do this brilliantly by displaying uncertainty when appropriate. 

"Being uncertain and showing your users a little bit of uncertainty can make them much more confident," Nina explains. "If you had an employee who was consistently very confidently delivering you incorrect information, that would be awful. Your product is the same."

From magical thinking to magical products

"With AI, a lot of the complexity and how it works is just even more abstracted. It creates this vacuum. It makes it feel magical, right?" Nina says. "But your users are not going to think that your product is magical because they don't understand how it works. They're not going to think it's magical because it's invisible. They will think it's magical because it's good."

To make your AI product good, ask yourself three questions:

  • Do users understand where AI is active? Can they see it? Do they have the awareness they need?
  • Can they control it? Are you allowing them to establish their own boundaries and guardrails?
  • Will they understand why it took the actions it took? Can they validate the reasoning?

"If you can’t answer all of those questions, then you may have some UX debt that is waiting to surface and bite you," Nina warns. The good news? You can tackle this incrementally, as long as you're working systematically through the framework.

"This is sort of a moment of inflection in human-computer interaction where we are developing these design patterns for the first time," Nina concludes. "This is a great opportunity and I'm very excited for you all that you get to build products with AI."

Want more practical product insights? Access the full INDUSTRY 2025 recap to discover more talks.


About the author

Louron Pratt

Louron serves as the Editor at Mind the Product, bringing nearly a decade of experience in editorial positions across business and technology publications. For any editorial inquiries, you can connect with him on LinkedIn or Twitter.
