Mind the Product, 22 December 2020

Solving Ethical AI Problems, When Algorithms Go Wrong, and the Skills Needed To Work on AI Solutions: Insights From Kriti Sharma

In our final AMA of the year, exclusively for Mind the Product members, Kriti Sharma, VP Product at GfK, shares her AI insights.

Watch the session again for real-life examples from Kriti’s awesome back catalogue of AI work and learn about AI trends, tackling ethical AI questions as a product manager, the skills needed to work on AI products, when algorithms go wrong, and why we should all stop watching Black Mirror (head straight to 41:44 for that!).

You can also read on for a few quick highlights.

About Kriti Sharma

Kriti Sharma is the Founder of AI for Good, an organisation focused on building scalable technology solutions for social good. In 2018, she also launched rAInbow, a digital companion for women facing domestic violence in South Africa.

Sharma was recently named in the Forbes “30 Under 30” list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and to the UK Government’s Centre for Data Ethics and Innovation.

AI Trends

Timestamp: 2:37

Asked about AI trends, Kriti says she’s noticed a positive change in who’s now involved in AI conversations: specifically, more appreciation for a design-led, human-centred approach to AI.

Over the last 15 years, Kriti says, the topic of AI and Machine Learning (ML) was very much driven by Chief Data Officers, CTOs and CIOs. “It was seen as very much a tech thing,” she says, “but now there’s a very strong appreciation that any AI or data technologies, for them to truly drive value, need to be human-centric and designed from a customer experience layer and not just purely from a data point of view.”

In addition, Kriti’s seen a change in how bigger companies (even large fintech companies and banks) handle AI/tech ethics questions that used to be met with a ‘we’ll figure it out when the regulator comes in’ attitude. Today, more of them are asking how to build ethical technology that creates trust in the systems and in the company itself. “It’s no longer the legal or compliance departments who are driving those conversations, which is an interesting change to see.”

Solving Ethical Problems – Product Managers Can’t Do It Alone

Timestamp: 7:29

A number of early questions during the AMA suggest to Kriti that product managers can feel pressure to tackle ethical product questions solo but, as she explains, we should not assume that a product person can solve them singlehandedly. Often, she says, they can be about the “entire system”. Providing the example of launching rAInbow in South Africa, a tool designed to help people who are in abusive relationships, Kriti explains how she learned this lesson the hard way.

When working through the needs of the user, namely help to take the first step in reporting abuse, the team identified that, on average, when somebody reported abuse for the first time, it was actually the 36th time they’d been abused. However, they had no record of what had happened the previous 35 times, meaning they had no proof when they were ready to take action. “We thought ok, that’s a problem we could help solve,” says Kriti, but when they started investigating, they experienced huge backlash from the authorities who didn’t want those numbers to be made transparent. The reason why, she explains, is that in doing so, and in making it easier to report incidents, it “made their numbers look bad”.

What this highlighted was that often there are fundamental, systemic challenges you need to tackle when you’re pushing for more transparency. “Often ethical issues go beyond the constraints or remit of a product.” Kriti’s advice? Surround yourself with diverse people, and never assume that, as a product person, you can solve the ethical questions alone because very often they can be about the entire system.

When Algorithms Go Wrong

Timestamp: 11:54

We’ve all heard about algorithm disasters, and Kriti talks about a recent one in particular – the UK A-level grading fiasco. As a result of the coronavirus pandemic, exams were cancelled and students’ grades were replaced by teacher assessments and algorithms. The decision to do this, says Kriti, was made on a “systemic level” and “the impact was that those in more disadvantaged communities, or in the wrong postcode so to speak, were penalised in a very disproportionate manner.”

Kriti’s since spoken to the team behind this decision, who explained there’d been three potential solutions on the table. The team strongly advised against one of the three but, in the end, that was the solution rolled out, and the impact was huge.

“If an algorithm had decided what my future was going to be when I was 18, I would not be here”

Big decisions made using automated technologies, that are about peoples’ futures, and which can have such a harmful impact, must be made in a different way, holding people to account, says Kriti. Considering how a decision like this might have impacted her, she says: “If an algorithm had decided what my future was going to be when I was 18, I would not be here.”

“I think as product managers we should always keep in touch with the constantly changing and evolving regulatory space, but regulation is nothing but a way to change behaviours, and if we start to do that more proactively, it makes a lot more sense.”

To help us all to think about this differently, Kriti shares a practical tool that can be helpful during the early discovery stages for new products.

“Map out unintended consequences or misuse cases by playing devil’s advocate”, she says. “Think about the worst thing a product can do, like destroy democracies for a few years or whatever. It’s a very creative way to think through potential scenarios, and then start to map out and build the right systems in place.”

Setting Objectives for AI and Non-AI-Based Solutions – What’s the Difference?

Timestamp 23:49

Kriti explains that, in AI products, it is often important to consider the following when setting objectives and goals:

  • How accurate the solution is (what level of accuracy makes the system meaningful)
  • How rapidly the system gets smarter/becomes a better system
  • How far you want the solution to go

But, she says, there’s something else you always need to factor in.

To explain this, she offers an example relating to a big product rollout for an automated AI system, designed to handle complex customer support queries, helping small businesses figure out how to be more productive.

Kriti’s team felt a machine could automate a number of the support queries and quickly built a model that could automatically handle 50%-60% of them. In two weeks, it could handle 65%-70%, and within a month, 85%. This meant the remaining 15% (the more advanced, complex problems) could be routed to human helpers.

“It met the objective that I thought it was meant to: automate, self-serve, reduce time.” At the same time, says Kriti, the machine improved the experience of the humans, who could now solve more complex problems. However, Kriti realised she’d made a big mistake when she spoke with the human experts, those taking the remaining 15% of queries. “They said they didn’t like what I did at all and I went ‘what are you talking about, your workload is reduced’ and they said, not really.” Previously, only a small part of their job was handling complex problems, but with the new solution, 100% of the role involved handling complex problems.

“That’s one of those powerful times that I realised I didn’t set the right objective with the full picture in mind.” So, in addition to thinking about the system’s accuracy, how rapidly it improves, and how far you want it to go, Kriti recommends thinking about the overall impact on everyone who interacts with it.
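The triage pattern in Kriti’s anecdote, automating the queries a model can handle confidently and routing the rest to human experts, might be sketched roughly like this. This is a hypothetical illustration, not her team’s actual system: the `classify` function, the 0.8 threshold, and all names here are assumptions.

```python
# Illustrative sketch (hypothetical, not the actual system described above):
# triage support queries by model confidence, automating high-confidence ones
# and escalating the rest to human experts.

AUTOMATION_THRESHOLD = 0.8  # assumed cutoff; would be tuned against accuracy goals


def triage(queries, classify):
    """Split queries into auto-handled and human-routed buckets.

    `classify` is assumed to return an (answer, confidence) pair for a query.
    """
    automated, escalated = [], []
    for query in queries:
        answer, confidence = classify(query)
        if confidence >= AUTOMATION_THRESHOLD:
            automated.append((query, answer))  # machine handles it
        else:
            escalated.append(query)  # complex case: goes to a human expert
    return automated, escalated
```

Note that raising the threshold shrinks the automated share and grows the experts’ queue of complex cases, which is exactly the “full picture” trade-off the anecdote warns about: the objective has to account for what the escalated workload does to the humans who receive it.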

What Skills Do You Need To Build an AI Product?

Timestamp: 40:12

Kriti’s number one piece of advice is simply to build your first one, and she assures us that, even in her team, the majority of new hires over the last year had never previously worked with AI products. “I don’t think that good PMs, who are working on AI or ML products, need to have hands-on AI or ML experience. It’s not really necessary. It’s helpful because then you can speak the same language.”

Typically, she says, some of the same rules of product management apply. “It’s really figuring out the right ways of working with data science teams by defining the outcomes and acceptance criteria.” These skills are the important ones – being able to define what good outcomes look like, testing and fine-tuning with your users to determine what’s good enough.

Timestamp: 36:22

“A great PM would solve complex problems and I have coached and trained PMs who have never done an AI product before and who are really flying in their careers and the way they’re solving problems.”

“You need more interpersonal and communication skills there because there’s a lot more coordination, with data science teams or ad platform teams often involved.” And, something else Kriti has observed while working on AI products is that 90% of the product management actually happens after you’ve launched. “There’s so much tuning and testing and iterations and improvements that need to happen, even more so than with other products.”
