Product Design in the Era of the Algorithm by Josh Clark
Machine learning has taken over huge parts of our world, from diagnosing medical conditions to answering legal queries to beating human players at Go. How does this affect how we design, build, and manage products? In this insightful talk from Mind the Product London 2017, Josh Clark shares how we need to think about product design and product management in the era of the algorithm.
Rather than trying to design a single route through an experience or product, working with machines means working out all the possible outcomes and how to handle them. Product managers will be key in this process if we hope to create products that have a positive impact on people's lives, rather than simply embedding the injustices of the past.
At this point algorithms still make too many mistakes, and this holds up their widespread adoption. Voice interfaces, for example, are one area with many bugs – the machines struggle to pick up the nuances communicated in most languages. Microsoft's picdescbot can describe what is in an image, but it still struggles to distinguish between many types of imagery. But this is not going to last long.
So how do we design products in a world of algorithms?
Google's snippet tool gives direct answers to questions posed – for about 15% of searches. When it works it's great, but there are many instances where it gets things wrong. This can be especially risky around news events and politics, given the amount of controversial information on the web.
As with many of these tools, only a single answer is typically presented – which by default strips out the nuance that could be conveyed. We have to design interfaces to show some humility when they are not sure of the answer. Most of the image-analysis tools now available produce a confidence score, but it is rarely surfaced in their outputs. This needs to change if they are to become truly useful – when they are confused, let them say so.
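The "let them say so" idea can be sketched in a few lines. This is an illustrative sketch only, not any real product's API: the `describe` function, its inputs, and the 0.8 and 0.5 thresholds are all assumptions invented for the example.

```python
# Illustrative sketch of an interface that admits uncertainty.
# `label` and `confidence` would come from a real classifier; here they are
# hypothetical inputs, and the thresholds are arbitrary assumptions.

def describe(label: str, confidence: float, threshold: float = 0.8) -> str:
    """Return a caption that hedges when the model is not confident."""
    if confidence >= threshold:
        return f"This looks like {label}."
    elif confidence >= 0.5:
        # Medium confidence: present the guess, but flag the uncertainty.
        return f"I'm not sure, but this might be {label}."
    else:
        # Low confidence: admit confusion and invite human input.
        return "I can't tell what this is. Can you help?"

print(describe("a dog", 0.95))  # This looks like a dog.
print(describe("a dog", 0.60))  # I'm not sure, but this might be a dog.
print(describe("a dog", 0.20))  # I can't tell what this is. Can you help?
```

The design choice is simply that the confidence score, which the model already has, shapes the copy the user sees instead of being discarded.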
Following on from this, transparency and collaboration between humans and machines are where really interesting work can happen. When an algorithm fails, that's the point at which the machines should ask for human input to improve their results. Wikipedia, for example, flags a number of its pages as 'Disputed', so that the reader knows to proceed with extra caution.
Machines Only Know What We Tell Them
Garbage in equals garbage out. The goal of machine learning is often to decide what's normal and to point out when things deviate from it. The problem is that machines learn from the existing situation, which is often far from perfect. For example, Google's speech recognition is worse at recognising a woman's voice than a man's because of the data it was originally fed. It can get much worse: there are examples from across the industry of automated systems that don't recognise people with dark skin or non-caucasian eyes.
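The mechanics of that failure are easy to reproduce. The toy sketch below uses hypothetical numbers, not real speech data: a deliberately skewed dataset and a naive majority-class "model" show how a headline accuracy figure can completely hide the failure of an under-represented group.

```python
from collections import Counter

# Hypothetical, deliberately skewed data: 90 samples from group A and only
# 10 from group B. A naive model that just learns "what's normal" will
# optimise for group A.
data = ["A"] * 90 + ["B"] * 10

majority = Counter(data).most_common(1)[0][0]  # "A" - all the model knows

def predict(sample: str) -> str:
    # The model ignores its input and always answers with the majority group.
    return majority

accuracy = sum(predict(s) == s for s in data) / len(data)
group_b_error = sum(predict(s) != s for s in data if s == "B") / 10

print(f"Overall accuracy: {accuracy:.0%}")         # 90% - looks respectable
print(f"Error rate for group B: {group_b_error:.0%}")  # 100% - total failure
```

A single aggregate metric rewards the model for serving the majority; only by measuring each group separately does the bias become visible.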
We must be cognisant of the fact that we could easily code our historical biases into the machines of the future. People who have been persecuted in the past are not outliers; they must be integrated into the fabric of our societies, and we can help make that happen with technology.
Responsible Data Collection
Data input is actually UX research at an unprecedented scale. Design teams need more diversity so that we can better interpret these inputs and allow our products to meet the needs of the range of perspectives we now design for.
We need to make it easy for our users to contribute accurate data to our products. Tinder and Facebook have done this well, because their users are inherently motivated to keep their own information accurate. We need to see how else we can get our users to help us improve their experiences – but it has to be done transparently, so that everyone understands and agrees to how that data is being used.
Be Loyal to Users
Finally, be loyal to your users. We are going to be designing products that affect people's experiences, lives, and perhaps even human rights. We need to take this seriously and apply our human decency to product decisions – so that we can be kind to each other in the tools and experiences that we make.