Mind the Product · Guest Post · July 07 2021

Should you really be using machine learning?


Used properly and in the right place, Machine Learning (ML) is an incredible tool to bring value to your product and your users. But how can you use the five product risks—value, usability, feasibility, viability, and ethics—to know if the opportunity is right for ML?

Photo by Andy Kelly on Unsplash

Whether it’s checking your online transaction for fraud or recommending songs to get you pumped up for the weekend, nearly everything we do today involves ML. This gives us a huge opportunity to help solve customer problems. If you’re not pushing for it to form part of your product, chances are someone else is.

As product people, it’s also our job to make sure our product grows in the right way. “Everybody is doing it” (whether or not that’s true) is not a valid reason to build something! Fortunately, we have the perfect framework to assess whether ML is right for our product—the five product risks.

When you go through product discovery, you’re trying to deal with the key product risks. You need to make sure that your product is:

  • Valuable – will customers choose to buy it, or users choose to use it?
  • Usable – can users figure out how to use it?
  • Feasible – can our engineers build it?
  • Viable – does it work for the rest of our business?
  • Ethical – should we build this?

Don’t fall into the trap of thinking only some of those risks are relevant. Value risk and feasibility risk are the obvious ones, and the easiest to tackle, but all five matter in product discovery, and ML affects every one of them. Let’s go through them in turn and see what kind of questions you should be asking.

Value Risk

Will customers choose to buy it, or users choose to use it?

The key here is to create a product that solves a problem that customers care about—substantially better than any of your competitors. Do you need Machine Learning to do that?

If you were building a product to stop elderly customers from receiving anonymous phone calls, a simple rule-based system would probably serve better than ML.
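To make the contrast concrete, here is a minimal sketch of what such a rule-based filter could look like. The function and field names are hypothetical, not taken from any real product:

```python
from typing import Optional

# Hypothetical sketch: a rule-based filter for blocking anonymous calls.
# The rules and parameters are illustrative, not from any real product.

def should_block_call(caller_id: Optional[str], on_contact_list: bool) -> bool:
    """Block calls with no caller ID unless the caller is a known contact."""
    if on_contact_list:
        return False  # always allow known contacts
    if not caller_id:
        return True   # anonymous caller: block
    return False      # identified but unknown caller: allow
```

A handful of rules covers the whole problem: no training data, no model drift, and every decision is trivially explainable.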

What does ML bring to the table that can solve that customer pain?
Don’t use technology for technology’s sake. Your customer doesn’t care how you solve their problem!

What are the existing benchmarks you need to substantially beat?
What is your customer measuring? How do existing solutions benefit them? Make sure you think in customer terms here – unless you’re building for Data Scientists, they probably don’t care about your F1 score!
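To illustrate the gap between engineering metrics and customer language, the sketch below computes an F1 score and a plainer "fraud caught" rate from the same confusion-matrix counts. The numbers are invented purely for illustration:

```python
# Hypothetical sketch: the same model quality expressed two ways -- an
# engineering metric (F1) and a customer-facing one. Counts are invented.

def f1(tp: int, fp: int, fn: int) -> float:
    """F1 score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

tp, fp, fn = 90, 10, 30       # outcomes over a batch of reviewed transactions
score = f1(tp, fp, fn)        # what your data scientists report
caught = tp / (tp + fn)       # "we catch 75% of fraud" -- what customers hear
```

The second number is the one a benchmark conversation with a customer is likely to turn on.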

Can you measure this value quantitatively?

If you can, how much value does a customer get from a 5%, 50%, or 500% increase? Is that worth the cost of switching to your product? If not, how will you measure success?

Do the upsides of using ML outweigh any trade-offs for your users?

The cost of any false positives will vary depending on your situation. The cost of a poor Netflix recommendation is low compared to the cost of a self-driving car missing a stop sign!
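One rough way to reason about those trade-offs is expected cost per decision. The sketch below uses invented error rates and costs purely to show why low-stakes and safety-critical products demand different standards:

```python
# Hypothetical sketch: weighing errors by expected cost per decision.
# All rates and costs below are invented for illustration.

def expected_cost(fp_rate: float, fn_rate: float,
                  cost_fp: float, cost_fn: float) -> float:
    """Expected cost per decision, given error rates and per-error costs."""
    return fp_rate * cost_fp + fn_rate * cost_fn

# A bad movie recommendation is cheap; a missed stop sign is not.
recommendation = expected_cost(fp_rate=0.30, fn_rate=0.30,
                               cost_fp=0.01, cost_fn=0.01)
stop_sign = expected_cost(fp_rate=0.001, fn_rate=0.001,
                          cost_fp=10.0, cost_fn=100_000.0)
# Even with far lower error rates, the safety-critical case dominates.
```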

Do customers need to know why a decision was made?

Depending on the industry and whether your product is B2B or B2C, explainability may be a hard requirement for a customer to buy your product.

Usability Risk

Can users figure out how to use it?

Typically, opportunities involving ML will be heavily influenced by engineering, meaning that design-led areas such as usability don’t get the attention they deserve.

For example, if you were building a product to tell a credit card company how risky a transaction is, what does a score of 80/100 mean? Will it be fraudulent 80 times out of 100, or is it just “more risky” compared to a score of 70/100?

How will users interact with the output from the system?
Is it a completely new output for them, or a behind-the-scenes change to something they already use?

How is that output different to what they have today?
Will users now be dealing with a number or probability, instead of a yes/no decision? How might that change their perception? Is there implied confidence behind that output?

Do users know what that output means?
Do they know what that output represents? How might they interpret it? As an example, would they expect a transaction with a risk score of 90% to be fraudulent 90% of the time?
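One way to answer that question is a calibration check: bucket historical predictions by score and compare against the observed outcome rate. A minimal sketch, with invented scores and labels:

```python
from typing import List, Optional, Tuple

# Hypothetical sketch: checking whether risk scores are calibrated, i.e.
# whether transactions scored ~0.8 really are fraudulent ~80% of the time.
# The scores and labels below are invented for illustration.

def observed_rate(scored: List[Tuple[float, int]],
                  lo: float, hi: float) -> Optional[float]:
    """Fraction of positives among examples whose score falls in [lo, hi)."""
    bucket = [label for score, label in scored if lo <= score < hi]
    return sum(bucket) / len(bucket) if bucket else None

# (score, was_fraud) pairs -- for a well-calibrated model, the 0.8-0.9
# bucket should contain roughly 80-90% fraud.
scored = [(0.85, 1), (0.82, 1), (0.88, 0), (0.84, 1), (0.86, 1),
          (0.15, 0), (0.10, 0), (0.12, 1), (0.18, 0), (0.11, 0)]
high = observed_rate(scored, 0.8, 0.9)
low = observed_rate(scored, 0.1, 0.2)
```

If the buckets and the observed rates line up, scores behave like probabilities and you can present them to users as such; if not, "90/100" means only "riskier than 80/100".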

Will it require users to respond?
If you are expecting users to give feedback to help train the model, is that obvious? Are they likely to do that?

Feasibility Risk

Can our engineers build this?

Alongside the usual feasibility questions you need to answer when building a product, some are especially relevant to ML. If you have never done this before, it can seem like the most daunting of all the risks. A five-second response time from a product might be perfectly acceptable if you’re analysing a company’s financials to determine if you should invest…but not if it’s a self-driving car working out if the car ahead is slowing down.

Do you have the right people?
Given an adjustment period, a good engineer can move between product teams and even into a new language. This is not as easy for ML. It requires significant upskilling, and it may make sense to get external help when you first start.

Do you have enough good-quality data?
ML systems are powered as much by their data as by the models themselves. If you don’t have enough good, clean, and properly categorised data to train and test your model, you’re dead in the water.
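A lightweight data audit can surface these problems before any model is trained. The sketch below is illustrative only; the field names, labels, and checks are assumptions, not a real schema:

```python
from typing import Dict, List, Set

# Hypothetical sketch: a minimal pre-training data audit. Field names,
# labels, and rows are invented for illustration.

def audit(rows: List[dict], required_fields: List[str],
          allowed_labels: Set[str]) -> Dict[str, int]:
    """Count rows that are unusable for training, and why."""
    report = {"missing_field": 0, "bad_label": 0, "ok": 0}
    for row in rows:
        if any(row.get(f) in (None, "") for f in required_fields):
            report["missing_field"] += 1
        elif row.get("label") not in allowed_labels:
            report["bad_label"] += 1
        else:
            report["ok"] += 1
    return report

rows = [
    {"amount": 12.5, "country": "UK", "label": "fraud"},
    {"amount": None, "country": "UK", "label": "ok"},        # missing value
    {"amount": 99.0, "country": "FR", "label": "suspicios"}, # bad label
]
report = audit(rows, required_fields=["amount", "country"],
               allowed_labels={"fraud", "ok"})
```

Even a crude count like this tells you early how much of your data is actually usable.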

Will it work as well in production?
When testing offline, your model might show excellent results. What extra constraints are there in a production environment, inside or outside of your control?

How will you handle multiple customers?
Especially in a B2B environment, you may have to support multiple versions of the product for different customers. Is that possible? Does your technology, data and infrastructure support that?

How long will it take to build?
Building and deploying ML products is not a quick process. Does that fit with your product strategy?

Viability Risk

Does it work for the rest of our business?

Machine Learning is a big commitment across your whole company, not just your product and your team. What other considerations do you need to give to how it could affect the wider company?

For example, if you were building a product that needed to launch quickly and on a shoestring budget, building a Machine Learning engine from scratch would be at odds with the wider company’s needs.

Does it fit the company strategy?
If your company is on a five-year cost-cutting strategy, that’s not a good fit. Machine Learning is expensive!

How will it impact marketing?
Does introducing ML impact your brand promise? How does the message to customers or partners need to change?

How will it change the sales process?
Salespeople will only be confident in selling something they understand. Salespeople rarely think like engineers. How will you explain it to them, and how will they explain it to their customers?

How will it impact the customer success team?
Will introducing ML lead to any customer queries? What if things go wrong? Who will be the first point of contact for customers? Will they be able to respond without immediately taking up your time or your engineers’ time?

What are the legal concerns?
If you are using certain data sets to build your model, do you have the right to do that? What are the privacy implications?

Ethical Risk

Should we build it?

Some people exclude ethical risk from product discovery. Now more than ever, especially within Machine Learning, it simply cannot be ignored.

A very real example is OpenAI’s GPT-3, which has well-documented biases in its training text.

What biases exist in your data?
As we discussed in the feasibility risk section, without data you’ve got nothing. But what’s missing or overrepresented in the data you’re using? Is the labelling correct? What biases went into that data set?
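A simple representation check is often the first step in answering these questions. The sketch below counts group shares in a data set; the group field and proportions are invented for illustration:

```python
from collections import Counter
from typing import Dict, List

# Hypothetical sketch: checking a data set for over- or under-represented
# groups before training. The "region" field and the split are invented.

def representation(rows: List[dict], group_field: str) -> Dict[str, float]:
    """Share of the data set belonging to each group."""
    counts = Counter(row[group_field] for row in rows)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

rows = ([{"region": "EU"}] * 80
        + [{"region": "APAC"}] * 15
        + [{"region": "LATAM"}] * 5)
shares = representation(rows, "region")
# A model trained on this split will inevitably perform worst for the
# smallest group.
```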

How does this bias your outputs?
Any biases in your data are reflected in your model and your product. If you can’t correct them in your training data, how can you correct them in your outputs?

How could this impact user behaviour?
Think about how your product fits into the customer’s overall setup. What impacts could a biased decision have on your customers and their customers?

Summary

When you have a very powerful machine learning hammer, everything can start to look like a nail. Don’t get caught up in the hype and FOMO—use the framework of the five product risks to make sure ML works for your customers, your team, and your company.

For more insights on discovery, browse our library of content on Machine Learning. For even more product management topics, use our Content A-Z.
