Artificial Intelligence (AI) & Machine Learning (ML)
NOV 6, 2024

How Google makes AI work at Enterprise scale – Miku Jha (Director, AI/ML and Generative AI, Google Cloud)


In this week's conversation, we speak with Miku Jha, Director for AI/ML and Generative AI at Google Cloud, about the practical applications of AI in enterprises, live from Pendomonium 2024. They discuss Miku's journey into AI, the importance of addressing real business problems with AI solutions, and the key success factors for using AI effectively. The conversation also covers the challenges of AI hallucinations, the need for responsible AI practices, and the significance of data strategy when adopting AI technologies. Miku highlights the accessibility of AI tools for entrepreneurs and the potential for solving complex challenges with generative AI.


Featured Links: Follow Miku on LinkedIn | Google Cloud | '#mtpcon @ Pendomonium 2024 Encore' recap feature

Episode transcript:

Randy Silver: 0:00
Heya, Randy here, and we've got another great chat lined up for you today on The Product Experience. I mean, I say we; there's a whole team who work together to bring you these episodes every week, but it feels really weird to say we whenever I do an intro without Lily. We split our responsibilities recently: each of us recorded some great chats with speakers from the Mind the Product stage at the conference, while the other emceed the actual stage. So today it's me firing up the infinite improbability drive to talk to someone with a brain the size of a planet, for a conversation that's truly fascinating. I got the chance to sit down with Miku Jha. She's the Director for AI/ML and Generative AI at Google Cloud, and we talked about how they actually make this stuff work at enterprise scale.

Lily Smith: 0:55
The Product Experience podcast is brought to you by Mind the Product, part of the Pendo family.

Randy Silver: 1:05
Every week we talk to inspiring product people from around the globe. Visit mindtheproduct.com to catch up on past episodes and discover free resources to help you with your product practice.

Lily Smith: 1:14
Learn about Mind the Product's conferences and their great training opportunities. Create a free account to get product inspiration delivered weekly to your inbox. Mind the Product supports over 200 ProductTank meetups from New York to Barcelona. There's probably one near you.

Randy Silver: 1:36
Hey, we're live here at Pendomonium in Raleigh. I'm here with Miku. She was on stage earlier and gave a really interesting talk about AI and how to make it actually work. But, Miku, for anyone who wasn't here and didn't get a chance to hear your bio and your introduction, can you just do a quick introduction? What do you do these days, and how did you get into this world in the first place?

Miku Jha: 1:58
Sure. Thanks, Randy, for having me. I'm having too much fun at Pendo, so I'm really happy to be here. Currently I run Google Cloud's partner engineering group, and it's very much focused on AI, in the sense that we are working closely with our partners to take generative AI-based solutions and applications from interesting concepts to actual, real deployments at scale, so they can start delivering business value and business outcomes.

Miku Jha: 2:31
I'm very excited about the chance I have in my role. In terms of how I got into AI, that's a very different story. I'm a serial entrepreneur. I founded multiple startups, and then one fine day in 2014, I got up saying, I want to do something to reduce food waste. At that point, a lot of the articles and mindshare said that we need to grow more food to feed people, and my take was: what if we wasted less food? You can get to the same outcome. So that's where my journey started. I started doing rounds with farms in Capay Valley, back in the Bay Area, and after seven or eight months I decided to build a solution which could help the supply chain reduce food waste by automating the food grading process.

Randy Silver: 3:21
Oh, wow.

Miku Jha: 3:22
So you know how, when you buy something, you see the USDA Grade 1 or Grade 2? Assigning that is a very manual, tedious, subjective process when it comes to the supply chain. And that's the solution I ended up building, using predictive AI, because back in 2014 there was not much generative AI. I built that solution to help businesses become more objective in how they grade food and, in the process, reduce waste. The reason I succeeded, which holds even today and maybe holds more today, is that I started from a business problem. I wanted to reduce food waste, and AI just ended up being the best tool to accomplish that outcome. Today, in a lot of my interactions as I assess the ecosystem, I see that we are starting from AI for the sake of AI.

Randy Silver: 4:16
Yes.

Miku Jha: 4:17
And I think we need to change that, and then we will start seeing better outcomes.

Randy Silver: 4:21
It's interesting. In the UK, where I live, there's a company that took a similar problem and attacked it in a very different way: they started marketing wonky-looking fruit and vegetables as attractive and useful, saying don't waste it. So no technology, just marketing, labeling and positioning, and it's a really interesting way of doing it. You don't always need to throw the magic pixie dust of AI at a problem just for the sake of it.

Miku Jha: 4:49
Exactly, and that's something we understand today. With generative AI, we understand there is something very real here. There are a lot of applications, but now we are seeing that it's not a one-size-fits-all paradigm. There's no such thing as one single model which will solve every problem for an enterprise. So, again, you have to figure out what the use case is, what the business problem is, and then make a practical decision on which model would be the right one to solve it. That's a lot of what we are doing at Google too. For example, at Google we have our own Model Garden as part of our Vertex AI platform, and that Model Garden today has more than 150 foundation models and other models, built by Google itself, by open-source partners, and by third-party partners, so that you can choose the right model for the right use case.

Randy Silver: 5:51
Okay, before we even get to choosing an AI model, or choosing AI as the thing at all, I'm curious. We were talking, before we turned the cameras on, about how there is a little bit of a hype cycle around this. More recently, we saw this with things like crypto and blockchain, and as far as I can tell, the best business case for those technologies was often separating VCs from money rather than solving a real problem.

Randy Silver: 6:15
And I see the same mistake being made with AI. AI, actually, I see real practical use cases for; I think there is good stuff there, as opposed to, sometimes, the other things. But most of the companies, most of the people I know, are still taking a very reactive approach. It's sales or leadership coming and saying, can we use AI in this? Being reactive to a perceived problem rather than asking, is this the best tool? So, you had four key success factors you were talking about for making enterprise AI work. Let's start there, if you don't mind.

Miku Jha: 6:52
Yeah, absolutely. I think it's really important to figure out what the recipe for success is, right? What does a playbook look like? Especially when, in many ways, it's a very overwhelming time from a technology perspective, where every day there are just so many new things and new capabilities announced. That's great for the pace of innovation, but business leaders and decision makers need to cut through it and say, okay, what should my strategy be here to get to my outcomes?

Miku Jha: 7:26
The reason I talked about these success factors is that they come from our experience of working with enterprise customers over the last 18 months, of what is needed. So, the first thing, like I was touching on before: businesses today have multiple scenarios in which Gen AI specifically can help them. Whether it's your marketing department, or your HR, or your IT, or your developer productivity, there is a role that Gen AI can play in any of these. But it also means you would be dealing with multiple models, right? So then the decision on a platform becomes really crucial, because you're not going to be maintaining the entire lifecycle of a model in-house for each single model. You need a platform strategy.

Lily Smith: 8:19
Right.

Miku Jha: 8:19
So that's one. But then you say, okay, what should I be thinking of when I'm investing in a platform? Because that's probably one of the most strategic decisions you have to make. You're not making this investment in Gen AI, or the platform, for one problem; you are future-proofing your investment for the next phase of innovation of digital transformation for your business, right? So the most important thing, I think, is that when you make a decision on which platform to go with, you have to be sure you have optionality and choice in the platform. Why? For two reasons. One, we don't know what we don't know, because we haven't been through deploying enough Gen AI applications into production. What happens a quarter from today is a narrative yet to be written. And second, that optionality will future-proof your investment over the coming years. So choice and optionality are really important when you think about which platform to invest in for your strategy.

Miku Jha: 9:26
And then the next thing which is important is knowledge and data: what differentiates you as a business from everybody else? They're all going to have access to platforms, capabilities and models. That democratization is happening, right? So that's not where the differentiation comes from. It will come from your business. What's the unique value of your business? What is the unique data that you have? What's the unique knowledge that you have? Can you bring all that together and then go for that application which would be your IP? So that differentiation is number two. And number three is enterprise readiness. If you think about it, today not a lot of applications which are in early exploration get deployed in production, and the problem is that we are dealing with a very different world: what happens if there is a leak in my data? What happens if there is a hallucination in the model? How will it impact my brand? Am I ready to make those decisions from a business perspective? So that enterprise readiness is the next factor, which is crucial.

Randy Silver: 10:36
And you just talked about hallucination, and this comes up a lot. This is one of the most dangerous things, because in almost everything we've done in the past, at least everything automated, there's an audit trail. We can map how processes work, and they can be complicated, they can even be complex, but we'll have a record of how a decision was made and be able to iterate and change for the future. It's not so easy with AI. How do we handle that? How do you protect yourself?

Miku Jha: 11:08
It's a tough one, you know. On a lighter note, I say that hallucination is more of a feature as opposed to a bug when it comes to these models.

Randy Silver: 11:21
Okay, you're going to have to explain that one.

Miku Jha: 11:23
What I mean is that it's the outcome of the inherent way in which the language models, or foundation models, learn. At the core of it, you can't just decouple it. That's how the model learns. That's how we have gotten to how powerful the models are in terms of reasoning and advanced capabilities. But we do have to figure out ways to put guardrails around it, right? And one of the ways is by grounding it on data. So, for example, I'll give you an example of one of the journalism organizations we worked with. Journalism has to be probably one of the hardest verticals when it comes to "I cannot tolerate hallucinations."

Randy Silver: 12:11
Well, some of what we call journalism, unfortunately.

Miku Jha: 12:14
But if you want to be reputable, yes. So the idea there would be that you ground the responses in the clean, curated data from your own journalism repository, say over the last 24 months. That way, when the model is giving you the output, it's cross-checking against that repository, and that serves as a way to reduce hallucinations. So grounding is a very important aspect. In fact, with Google's foundation models, you can ground your data on Google Search, so you get a fresher source of information. Otherwise, you know, you must have come across: hey, I can't answer this question because my training cutoff was 2021.
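For readers who want to try that Google Search grounding, here is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, location, model name and prompt are placeholder assumptions, not details from the episode:

```python
# Sketch: grounding a Vertex AI foundation model on Google Search.
# Assumes `pip install google-cloud-aiplatform` and GCP auth are configured;
# the project, location, model and prompt below are illustrative placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="your-gcp-project", location="us-central1")

# Grounding on Google Search gives the model a fresher source of information
# than its training cutoff, and the response carries retrieval metadata.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Summarize this week's produce supply-chain news.",
    tools=[search_tool],
)
print(response.text)
```

Grounding on your own curated repository follows the same tool pattern, pointed at a Vertex AI Search data store instead of Google Search.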

Randy Silver: 13:01
I've gotten that one, but I've also gotten the other: the lack of the ability to say "I can't answer this question" at all. I don't want to anthropomorphize, but the incentive model within the LLM is often to provide an answer, because the KPIs are around that, rather than saying, I'm sorry, I can't give you a good answer to that one.

Randy Silver: 13:25
I've got a 12-year-old at home, and the way we do it with him... he's not an artificial intelligence, but he does a lot of the hallucination. We tell him to stop "child-splaining" to us. And effectively, it's probably the wrong choice of words, but effectively it's shame and humility that we're trying to teach him, so that he knows it's okay to say "I don't know" rather than making things up. Unfortunately, he's sometimes gotten the reinforcement that if he says something confidently, he can get away with it, and I think a lot of generative AI just says things confidently and hopes to get away with it. But there's no sense of shame or humility that I can see to appeal to. So what do we do?

Miku Jha: 14:08
Yeah, so that's what I'm saying: we are now coming up with many different ways in which we can address this.

Miku Jha: 14:14
One is the grounding: you can ground it on your own data, you can ground it on curated data, you can ground it on Google Search. The other aspect is essentially building those guardrails into your model, right? And that's not just for hallucination, but even for questions like: is the content harmful? That's a big thing. And it's not only whether the content is harmful from the output perspective; it's also on the way in. What if the query itself doesn't fit how your enterprise thinks about compliance or guidelines? This gets us into something equally important, which is responsible AI.

Randy Silver: 15:03
Yes.

Miku Jha: 15:04
So that's where you can have the safety filters, the tagging, the categorizations, additional knowledge that says: hey, this content is harmful, here is how you categorize it, here is the response you take if it makes it into the input or the output. I think, as an ecosystem, and especially from Google's perspective, that's what I meant when I said enterprise readiness. A lot of these capabilities are now an inherent part of the platform, so that you have ways to address these concerns and get to the point where you can take applications into production.
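As a toy illustration of that filter-on-the-way-in, filter-on-the-way-out pattern, here is a hedged Python sketch. The keyword blocklist merely stands in for a real safety classifier of the kind platforms provide, and every name in it is hypothetical:

```python
# Toy sketch of input/output guardrails around a model call.
# A real deployment would use the platform's safety classifiers and
# category taxonomy; this keyword lookup is only a stand-in.
BLOCKLIST = {"build a weapon", "steal credentials"}  # illustrative only

def looks_harmful(text: str) -> bool:
    """Stand-in for a safety classifier with categories and thresholds."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def guarded_generate(query: str, model_call) -> str:
    # Guardrail on the way in: does the query violate enterprise guidelines?
    if looks_harmful(query):
        return "Request declined: it falls outside our usage guidelines."
    answer = model_call(query)
    # Guardrail on the way out: filter harmful content before it ships.
    if looks_harmful(answer):
        return "Response withheld by safety filters."
    return answer

# Works with any model callable, stubbed here for the example:
print(guarded_generate("Summarize our Q3 campaign", lambda q: f"Summary of: {q}"))
```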

Randy Silver: 15:39
This episode is brought to you by Pendo, the only all-in-one product experience platform.

Lily Smith: 15:45
Do you find yourself bouncing around multiple tools to uncover what's happening inside your product?

Randy Silver: 15:50
In one simple platform, Pendo makes it easy to both answer critical questions about how users engage with your product and take action.

Lily Smith: 15:58
First, Pendo is built around product analytics, enabling you to deeply understand user behavior so you can make strategic optimizations.

Randy Silver: 16:05
Next, Pendo lets you deploy in-app guides that lead users through the actions that matter most. Then Pendo integrates user feedback, so you can capture and analyze how people feel and what people want. And a new thing in Pendo: session replays, a very cool way to experience your users' actual experiences.

Lily Smith: 16:25
There's a good reason over 10,000 companies use it today.

Randy Silver: 16:28
Visit pendo.io slash podcast to create your free Pendo account today and try it yourself.

Lily Smith: 16:35
Want to take your product-led know-how a step further? Check out Pendo and Mind the Product's lineup of free certification courses, led by product experts and designed to help you grow and advance in your career.

Randy Silver: 16:46
Learn more today at pendo.io slash podcast. Enterprise readiness comes in two ways, though. There's what you're talking about, and there's also what I've seen in the past, back when we were taking huge corpuses of knowledge within organizations and throwing Google or somebody else's search into our own internal thing, with everyone expecting it to work the way it does in the consumer world. But it's a garbage-in, garbage-out problem, because there is no market within a company, no motivation inside a company, to do good tagging and SEO, because you're not going to get the return the way you do in the commercial world.

Miku Jha: 17:25
Yeah.

Randy Silver: 17:25
So how do we ensure that this corpus of knowledge, and that attitude about what is responsible, our tolerance of risk and all that... how do we build that? How do we get everyone there when you say "enterprise ready"? How do we ensure that it's not seen as, again, magic pixie dust? That we don't just say: we have all this, let's throw one of the big AIs at it and say, there, that's your corpus now.

Miku Jha: 17:52
Yeah, I think a lot of the effort to get this right, to your point, relies on data.

Randy Silver: 17:59
Yes.

Miku Jha: 17:59
And what is your strategy around data? I always share this: there's no good Gen AI without data, but there's also no good way to utilize the insights from that data without Gen AI.

Randy Silver: 18:13
Yeah, fair enough.

Miku Jha: 18:14
So it's like a cycle, right? I think businesses have to, at the end of the day, answer some really basic, fundamental questions, to your point about enterprise readiness. First of all: are you data ready? Is your data in a shape, state or form that the models can consume? Second is responsible AI. What are your inherent corporate guidelines or policies in terms of what you define as responsible AI?

Lily Smith: 18:43
right.

Miku Jha: 18:44
As Google, we were one of the first companies to publish that: these are the things we are going to do, and these are the things we are not going to do, when it comes to AI. So you need an equivalent of that. And, most important, and I think a lot of enterprises miss this today: responsible AI is not the last mile before you take something to your hundreds of users. It should be a day-zero strategy. That's the miss we see, because it's not a check mark; it's not a feature that you put in as the last thing before you take it to scale.

Randy Silver: 19:19
So you shouldn't be doing a proof of concept and then adding this. It's an inherent part of your proof of concept.

Miku Jha: 19:24
Yeah, because it's part of your entire principle of how you want to take AI to scale, right? And it's a paradigm shift, because we are not used to doing it. But that's what is needed from a responsible AI perspective. And then, equally, on the other side of it: what is your end goal in terms of how many users? What is the volume of that workload? What will it look like at scale? Plan for that early enough. What is the cost you're going to deal with? What is the price, the performance, the latency? Have you done that back-of-the-envelope analysis early on? It will help you evaluate whether you're on the right track in choosing the model, or the platform, or the solution.
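That back-of-the-envelope analysis can literally be a few lines of arithmetic. Here is a hedged sketch in Python, where every number (query volume, token counts, per-million-token prices) is an illustrative assumption rather than a real price:

```python
# Back-of-the-envelope serving-cost estimate for a Gen AI workload.
# All figures below are made-up assumptions for illustration.
queries_per_day = 50_000
tokens_in, tokens_out = 1_500, 400      # avg prompt / response tokens per query
price_in, price_out = 0.50, 1.50        # $ per 1M tokens (hypothetical rates)

daily_cost = queries_per_day * (
    tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out
)
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 30:,.0f}/month")
# -> ~$68/day, ~$2,025/month: compare that against the value delivered,
# and the latency/performance you measured, before committing to a platform.
```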

Randy Silver: 20:13
This goes into an interesting place: when do you use somebody else's model as a commodity, and when do you start fine-tuning and building your own?

Miku Jha: 20:24
Yeah.

Randy Silver: 20:24
And you had a few different fine-tuning options. Can you talk us through that approach?

Miku Jha: 20:29
Yeah, this is really interesting. A year back I would have said, yes, for selected use cases, which may be true even today, you can fully fine-tune a model; that means training the model from scratch. But we have made so much progress in the last 12 to 16 months, and one of the philosophies we have, at least at Google, is that you want to make these foundation models so powerful out of the box that when you as a business want to use one for a certain use case, you don't have to go through all of that. You only have to do a very minimal delta of effort to adapt it to your use case. So today, except for maybe some corner cases, you as a business really don't need to think about full fine-tuning anymore. That's number one. Number two, there's a whole spectrum of choices when it comes to fine-tuning, and you again have to answer some questions to figure it out.

Miku Jha: 21:26
What is the skill set you have in-house? How much data do you have in-house? What is your ability to absorb the cost and the time to market? Whichever option you choose, each of these parameters is a factor, right? So you make the decision through that lens, and then you choose. In many scenarios you might not even need to do any fine-tuning, because today a lot of solutions work well out of the box with what we discussed earlier, the RAG architecture, where you're essentially just making that specific data available to the model. That's pretty much it; the model is able to factor it in and generate more informed responses. So you might not need to do any fine-tuning at all. Again, this goes back to what the use case is. Start from the most non-intrusive option. If you don't get the performance, then go incrementally to the next option, as opposed to saying, I'm going to invest $10 million and build a model from scratch. You don't need that today.
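One way to picture that "most non-intrusive option first" rule is as an escalation ladder. This is a hypothetical sketch: `evaluate` stands in for whatever eval harness you run against your use case, and the 0.85 quality bar is an arbitrary assumption:

```python
# Escalate only when the cheaper adaptation option misses your quality bar.
OPTIONS = [
    "out-of-the-box prompting",
    "RAG (expose your data at query time)",
    "parameter-efficient tuning (adapters / LoRA)",
    "full fine-tuning (rare corner cases only)",
]

def choose_adaptation(evaluate, bar: float = 0.85) -> str:
    for option in OPTIONS:
        if evaluate(option) >= bar:
            return option            # cheapest option that clears the bar
    return OPTIONS[-1]

# Example: suppose prompting scores 0.70 and RAG scores 0.90 on your evals.
scores = {"out-of-the-box prompting": 0.70,
          "RAG (expose your data at query time)": 0.90}
print(choose_adaptation(lambda opt: scores.get(opt, 0.0)))
# -> RAG (expose your data at query time)
```

The point is not the code but the ordering: each rung down the list costs more in skills, data, money and time to market.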

Randy Silver: 22:33
No, right. Okay, you just mentioned RAG architecture.

Miku Jha: 22:37
Yeah.

Randy Silver: 22:38
And I think that's a phrase that has become very popular without everyone really understanding it, so can we just go into it for a moment?

Miku Jha: 22:46
Yeah. So RAG, first of all, stands for Retrieval-Augmented Generation. There are two aspects to it: retrieval, and then generation. But at the macro level, the idea is very simple. Like I just touched on, you have a foundation model and you want to give it your corpus of data, for any of the reasons we discussed. It could be to reduce hallucinations. It could be so that the responses comply with your corporate guidelines. It could be because there's a unique data set that you want the model to pick from. For any of these reasons, we are typically seeing that businesses are having a lot of success with the RAG architecture.

Miku Jha: 23:23
So the idea is: imagine you have 100 PDF files for a marketing campaign, and you want the model to factor those in when you ask it queries. You'll take those files. Now it gets a little bit complicated in terms of how we make that happen. You'll have these files, and you'll chunk them, break them into chunks. Then you will embed them, because you are only dealing in bits and bytes when it comes to the model. Then you will generate the vectors, and then you will store them.

Miku Jha: 23:54
Now that you have stored them, when a query comes in, you do the same thing the other way around. You generate the embedding for the query, then you have a way to match it. You get the top hits, figure out those chunks, bring them back, and blend them with the prompt. Now you give this revised query to the model, and you get the response. So that's a very high-level overview of how RAG works. But we at Google have simplified it a lot, so that for all these steps I talked about, you just need a single API from Google's Vertex AI Search, which we offer for RAG, and underneath, all of it is taken care of. So essentially we are simplifying it, so that you as a business can hit scale on RAG much faster and in a much more efficient way than before.
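To make those steps concrete, here is a minimal, framework-free Python sketch of the pipeline Miku walks through: chunk, embed, store, then embed the query, retrieve the top hit, and blend it into the prompt. The hashing embedder and sample documents are stand-in assumptions; a real system would call an embedding model and a vector store (or, as she notes, a single managed API):

```python
# Minimal RAG sketch: chunk -> embed -> store -> match query -> blend prompt.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words hashing embedder; real systems call an embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Steps 1-3: chunk the corpus (one chunk per snippet here) and store vectors.
chunks = [
    "Campaign A targets grocery retailers in the Northeast.",
    "Campaign B launches in Q3 with a produce-waste theme.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Steps 4-5: embed the incoming query and retrieve the best-matching chunk.
query = "When does the produce-waste campaign launch?"
q_vec = embed(query)
best_chunk, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# Step 6: blend the retrieved context into the revised prompt for the model.
prompt = f"Answer using this context:\n{best_chunk}\n\nQuestion: {query}"
print(prompt)  # hand this revised query to your foundation model of choice
```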

Randy Silver: 24:39
Okay. We started this conversation in a different place, saying we don't want to approach this as a hammer looking for nails. So let's take a step back. What needs to change to make this more successful, in terms of people understanding when to use these tools and making the right choices, making intelligent choices and really enjoying the benefit, rather than just saying, we're starting with AI because buzzword?

Miku Jha: 25:07
Exactly, and you know, it's surprising, but we are still there. I think what needs to change is that decision makers have to really answer: what is the business problem? What is it solving? What is the success criteria? What is it that you're measuring? If you can answer those four questions, and it might sound very simple, that serves as an anchor for all the other decisions that have to be made. I also think we have to get to a point where we start seeing some wins.

Miku Jha: 25:41
So, for example, start with your internal applications, right? Start with bringing Gen AI, infusing Gen AI rather, into your developer productivity. You can measure the ROI there; it's probably one of the most straightforward ways to measure ROI. That's one use case. Then you bring Gen AI into your IT ops, into your help desk. Why am I talking about these applications? Because they are internal, but they give you that first base of getting the operational efficiencies, measuring the operational efficiencies, and they give you the DNA, the skill set, that you need to take applications into production. Once you do that, you're ready for the next phase of taking those applications to your external user base.

Randy Silver: 26:31
So right now we're making the decision about where we infuse it, often based on sexiness. We like playing with toys.

Miku Jha: 26:41
Yes, it's changing, but it needs to.

Randy Silver: 26:43
But the four questions that you asked are very sensible: what are we trying to achieve, and how do we know that we're successful? What are the criteria for choosing a tool? So who should actually be choosing when we implement AI versus something else, or which tools we use? Who should be making that choice responsibly?

Miku Jha: 27:02
You mean, what is the profile of the user?

Randy Silver: 27:05
Who in the company should be making that choice?

Miku Jha: 27:08
I think, honestly, it's a collective decision, because today what we are seeing is that you have all these different line-of-business decision makers. You might be responsible for marketing; your decisions, your guardrails might be completely different from those of the person heading QA, right? So at the line-of-business level, I think there needs to be that decision making. But here's the important thing: when you are taking that to scale, the entire change management has to come together, because if there is an impact on your brand, it's not mapped to a line of business; it's mapped to your entire organization. So change management is the biggest thing.

Miku Jha: 27:49
So, like we discussed: the responsible AI aspect, the governance aspect, the data aspect, the ethical aspects, what you are comfortable with from your brand perspective, how you are going to process hallucinations, what guidance you have put in place. All of this is a common set of changes that organizations typically put together in what we call a center of excellence, right? Then you have all the different stakeholders align on that, and then you can start cranking on specific use cases.

Randy Silver: 28:20
And this all comes back to responsibility, and the fact that, because these tools are becoming easier and easier to implement, and people get frustrated with proper governance processes, there's shadow IT, and shadow IT will now start using these things and it becomes embedded. We could talk for hours on that, but I don't think that's the topic for today.

Miku Jha: 28:39
No, I think, see, we are making progress as a whole. We have made amazing progress in terms of the capabilities of the foundation models themselves. Today, and I encourage you to do this, go bring up NotebookLM from Google and upload any file, and you will get the most realistic two-person podcast out of that file, without any effort.

Randy Silver: 29:02
Not as realistic as this, sorry.

Miku Jha: 29:04
But my point is that we have made a lot of progress, at probably the fastest pace of innovation I have experienced. And I've been through four waves: I've been through cloud, I've been through mobile, I've been through SaaS, so this is my fourth one. There's nothing which comes close to the pace of innovation I'm seeing with Gen AI. So we are getting there, in terms of extracting the ROI, deploying it in production, getting to scale. It's fascinating because, if you think about mobile, it took us seven years to get to the point where you could start seeing the first flavor of mobile-first applications. We are, at most, maybe two years into this.

Randy Silver: 29:40
Yeah. I remember, and I think we only have time for one more bit, but I remember early on in all of this, the dream was: we had all these different silos of data, and if only we could come up with common data standards, then things would actually work. Now we don't have to, because we've got RPA, we've got AI to do lots of these things and fix the fact that we are inherently imperfect in the way we store things and deal with them, and there are some things out there that can make educated guesses and build the connective tissue the same way our brains will.

Randy Silver: 30:11
So what's the one thing that you wish people would take away from all this? One thing that you think would make the world better if people knew it and started applying it tomorrow?

Miku Jha: 30:23
I think it's this: we have something so powerful with us today, and it's completely accessible, like you said, to anybody. It's the first time. When I was thinking of reducing food waste, it took me years to get to a point that today you could reach in months. You can have single entrepreneurs solving some of the biggest challenges we have, because a lot of that power and capability is served up by the progress we have made, especially in foundation models and Gen AI.

Miku Jha: 31:02
So I think the one takeaway is to take all of this as a tool and start solving some really crucial problem statements that are out there. You don't need an army today. You don't need a massive team. And honestly, to get to the first set of that experience, you don't need to wait for years and years. The whole process of taking a solution and applying it to solve a real problem has changed for the better, and that's what I think we should capitalize on: solving some really amazing, complex challenges we are facing, with Gen AI.

Randy Silver: 31:39
Miku, thank you very much.

Miku Jha: 31:40
Thank you. Thank you for having me.

Lily Smith: 31:52
The Product Experience hosts are me, Lily Smith, host by night and chief product officer by day.

Randy Silver: 31:59
And me, Randy Silver, also host by night, and I spend my days working with product and leadership teams, helping them to do amazing work.

Lily Smith: 32:08
Lauren Pratt is our producer and Luke Smith is our editor.

Randy Silver: 32:12
And our theme music is from product community legend Arne Kittler's band Pau. Thanks to them for letting us use their track.
