In this ProductTank London talk, Nathan Kinch, CEO at Greater Than X, examines the issues of ethics, privacy, and trust. He looks at how product managers can kick-start their ethics journey and ship products that make life better for everyone.
The Edelman Trust Barometer is a resource that measures levels of trust in different contexts. It shows us that in 2017, trust levels were the lowest they have ever been. There is a fundamental power imbalance between the experience of individuals and the design efforts of organisations when it comes to privacy and trust. We know that trust is reflected in the behaviour of users – when we make choices between products or switch between services. When we are unable to verify something, we act in good faith. The growing trust gap is therefore a worrying trend and one that is not good for business or people.
What’s Broken About Trust?
Mistrust has Become Distrust
Data shows us that people have lost faith in the system: they now actively distrust rather than merely mistrust. Trust can be influenced but not controlled, so designing for trustworthiness means controlling organisational behaviours in ways that influence trust. By influencing trust effectively, we influence user behaviour effectively and can close some of the growing trust gap between individuals and organisations.
Design and Intention Rarely Match
The gap between the work of organisations and the experience of users is growing. A 33,500-word terms and conditions document that new customers must read and accept demonstrates this disconnect: the incentive to read it is clearly not designed to be customer-centric.
Instead of trying to “hack” trust, how can we design for it to be truly reflective of users’ expectations?
Modern Existence Demands Trust
We live in a world of surveillance capitalism. We cannot participate in the modern-day sharing economy unless we trust. Our attention is constantly captured, measured, and profiled to create economic benefits for organisations. Adtech is a prime example and one that has come under scrutiny for being a business that trades in privacy and trust.
What Shifts are we Seeing?
People want to know that they can trust you. Trust isn’t just a matter for regulators; we want business leaders to take charge. Several CEO surveys from firms such as Deloitte and PwC point to the same conclusion: trust is a big priority. Leaders need to care about taking ownership of trust and acting on it. We saw that trust in leadership was the main focus of the WEF in Davos. Facebook CEO Mark Zuckerberg wrote a 3,200-word essay in March 2019 on how privacy is our future. However, many do not believe that business leaders are doing a good job. Perception of trust is therefore becoming increasingly important as leaders try to close the gap between the intentions of their actions and the reality of people’s experiences.
Business Models are Adapting
Regulation is making it trickier for business models that capitalise on data. This has led to new models emerging where consumer trust is showcased by unveiling the outcomes that data contributes to. Users are given agency in the use of their information while at the same time businesses have to earn access to their data. These businesses actively control the flow of users’ information in specific contexts and at specific times. The result is a business that takes responsibility for and actively recognises the value of people’s trust. These are person-centric business models that bring participation into the sharing economy.
We Know More About What Trust Looks Like
We now have more nuanced data about where people feel less trusting of organisations. The data trust gap – the trust that an individual places in an organisation’s data practices – is an example. This is different from general feelings of trust: for example, we feel more general trust towards supermarkets than we do towards the way they handle our data. People may trust the brand but mistrust the way it handles information about them.
We also now know that people do not just value transparency – it is a basic expectation. Other key drivers of trust include privacy and the ability to control what is shared. Three months after GDPR came into force, Microsoft published a blog post about tools it had developed to let users control their data settings. Over five million people put in significant amounts of time to use them. The scale of this reveals a growing appetite among people to participate in deciding how much they trust organisations with their data. Another study revealed that increased transparency also makes people more willing to share their data.
How can Organisations Become Verifiably Trustworthy?
The trustworthiness of products lies on a scale that runs from unacceptable through acceptable to socially preferable. The focus should be on becoming socially preferable, and we can verifiably achieve this with the data ethics framework tool: it is used to decide, document, and verify how an organisation’s actions are socially preferable.
Step 0: Know your principles
What are you fighting to defend? Principles on their own are not specific and do not demonstrate any independent verification or customer collaboration. However, they inform the framework and form the basis for defining the purpose of verifying trustworthiness. The next step is to bring intent to these principles.
Step 1: Define the purpose – just because we can, should we?
Why does the product/feature exist? Does it match up to the organisation’s values/principles? This forms the basis for the next step.
Step 2: Test for social preferability in context
This involves creating a prototype that tests intent against outcomes for a specific product or feature. Testing it with a small group of customers and taking the results to independent groups is a great way to check whether your proposed feature or product is socially preferable before it is deployed. This does not need to be complicated – simply asking the question with a scaled response could be enough.
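To make the scaled-response idea concrete, here is a minimal sketch of how a team might score such a test. The function names, the 1–5 Likert scale, and the classification thresholds are all illustrative assumptions, not part of the framework described in the talk:

```python
from statistics import mean

def social_preferability_score(responses):
    """Average 1-5 Likert answers to a question such as
    'Is this feature socially preferable?' (scale is an assumption)."""
    if not responses:
        raise ValueError("no responses collected")
    return mean(responses)

def classify(score, acceptable=2.5, preferable=4.0):
    """Map an average score onto the talk's unacceptable ->
    acceptable -> socially preferable scale.
    Threshold values here are purely illustrative."""
    if score >= preferable:
        return "socially preferable"
    if score >= acceptable:
        return "acceptable"
    return "unacceptable"

# Hypothetical responses from a small customer test group
responses = [4, 5, 3, 4, 5, 4]
score = social_preferability_score(responses)
print(round(score, 2), classify(score))
```

In practice the same numbers would then be shared with independent groups, as the talk suggests, so the classification is not made by the organisation alone.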
Swapping Value for Data is a Tough Sell
It is almost impossible for organisations to make a viable argument that we gain commensurate value in return for the use of our data in their economic activities. While this is often used as a justification for data collection, it will always be impossible for an individual to comprehend it as a true “value swap”.
Who are we to Know?
Even if Google follows the data ethics framework perfectly and tests for social preferability flawlessly, there will always be sceptics who believe that organisations only collect evidence to justify their own trade-offs. We can work within this bias. By making the testing process a collaborative effort with customers, independent bodies, and regulators, we can increase the level of confidence in our perspective and bring trustworthiness to the industry in a collective way.