APR 25, 2024

Building products with trust and safety in the age of AI

Anuja More, Product Lead at Meta, shares insights on how product managers can build a secure environment for their user community. She draws from her experience building products with trust and safety in the age of AI.


What do marketplace apps such as Uber and Airbnb, social platforms such as Instagram and TikTok, and video/gaming platforms such as YouTube and Twitch have in common?

“An acute need to keep their community of users, viewers and creators safe, especially as modern-day AI revolutionizes online content and interactions.”

With the advent of Generative AI, bad actors now have access to large language models (LLMs) and GenAI tools such as DALL-E and Midjourney to commit a wide variety of fraud. AI-generated text, selfies, video, and audio can all be used to open fake accounts and establish synthetic IDs. For online marketplaces and other social platforms, this raises serious questions about the future of trust and safety. As the importance of trust and safety continues to grow, product managers play a critical role in creating and maintaining secure online environments. With my experience leading business integrity at WhatsApp, I want to share insights on building strategies, overcoming challenges, and implementing best practices to effectively navigate the new complexities of trust and safety in the age of AI.

Generative AI has the potential to transform the way companies interact with customers and drive business growth. It is projected that AI bots will power 95% of all customer service interactions by 2025. Companies are exploring how it could impact every part of the business, including sales, customer service, marketing, commerce, IT, legal, HR, and others.

Meanwhile, bad actors are just as quick to leverage these tools in a variety of ways to commit fraud. As much as AI is an opportunity, it also poses an immense threat through bias, misinformation, and user privacy challenges. This is especially true for online marketplaces such as Lyft and Airbnb, and social technology platforms such as Twitter, Facebook, Snap, and WhatsApp.

Businesses need a game plan for how they will deal with this threat, and this is where trust and safety in products becomes more important than ever.

Trust and safety refers to the measures and practices implemented to create a secure and reliable environment for users engaging in online platforms, services, and transactions. It encompasses strategies, policies, and tools designed to protect users from risks such as fraud, harassment, misinformation, and other harmful activities. Trust and safety efforts aim to build confidence, foster positive user experiences, and safeguard the integrity of online ecosystems.

Balancing user experience and safety: The biggest challenge in building robust trust and safety measures is maintaining a seamless user experience. At WhatsApp, we recently launched a cloud platform to enable businesses to interact with their customers on WhatsApp. We need to ensure that any business onboarding onto the platform is real and authentic by taking it through a multi-step verification process. At the same time, however, we need to make this onboarding experience seamless so businesses can get started quickly. On the other hand, we also offer a set of user controls to enable blocking or reporting of malicious activity. Achieving the right balance between these controls, which can interfere with a seamless user experience, is a formula product managers need to crack.

Legal and regulatory compliance: Building a global, social platform requires you to stay updated on relevant laws and regulations to ensure compliance. At WhatsApp, for example, we prohibit use of the platform for buying, selling, or promoting certain regulated or restricted goods and services such as firearms, alcohol, and adult products. Local country and regional laws further add to the complexity of these policies. As a product manager, building a safe platform for your users requires you and your team to stay abreast of global laws and regulations.
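
To make the idea of layered policy rules concrete, here is a minimal sketch of how a global restricted-goods list with regional overlays might be encoded. The categories, regions, and function names are illustrative assumptions for the example, not WhatsApp's actual policy data or implementation.

```python
# Illustrative policy configuration; categories and regional overlays are
# assumptions for this sketch, not any platform's real policy data.
GLOBAL_RESTRICTED = {"firearms", "alcohol", "adult_products"}
REGIONAL_RESTRICTED = {
    "US": {"fireworks"},
    "IN": {"e_cigarettes"},
}

def is_listing_allowed(category: str, region: str) -> bool:
    """Block a listing if its category is restricted globally or in the seller's region."""
    restricted = GLOBAL_RESTRICTED | REGIONAL_RESTRICTED.get(region, set())
    return category not in restricted

assert is_listing_allowed("books", "US")
assert not is_listing_allowed("alcohol", "IN")
assert not is_listing_allowed("fireworks", "US")
```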

The emerging threat of AI: Recently, it was reported that a fake version of President Joe Biden’s voice had been used in automated robocalls to discourage Democrats from taking part in the primary. AI-generated text, selfies, video, and audio can all be used to open fake accounts and establish synthetic IDs. If left unchecked, fake profiles, fake product listings, and other fake content can cause serious hardship for your users and irreparably damage the hard-fought trust that you have built. At best, this may cause your users to think twice before completing a transaction on your platform; at worst, it may send them toward your competitors.

Risk and policy frameworks: At the core of trust and safety is the development and implementation of robust policies that align with legal and ethical standards. Once policies are defined, you need a framework and the necessary operational support to enforce them and mitigate risk.

Risk assessment: As threats evolve in this new AI age, so will your risks. Leverage ML, analytics, and data science to evolve the risk assessment techniques associated with new AI-generated content, transactions, and interactions. Don’t assume you’re immune. Deepfakes and synthetic IDs can be tricky to spot, so stay vigilant and keep scanning for risky signals and suspicious connections between accounts.
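
As a deliberately simplified illustration of scanning for risky signals and suspicious connections between accounts, the sketch below applies a few rule-based checks. The field names, thresholds, and the idea of a "liveness score" from an upstream detector are all assumptions for the example; a production system would combine far richer signals with ML models.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical account record; field names are illustrative only.
    account_id: str
    device_fingerprint: str
    signup_ip: str
    selfie_liveness_score: float  # 0.0-1.0 from an assumed upstream liveness/deepfake detector

def risk_signals(accounts: list[Account], liveness_threshold: float = 0.5) -> dict[str, list[str]]:
    """Flag accounts sharing a device or IP with many others, or failing a liveness check."""
    by_device, by_ip = defaultdict(list), defaultdict(list)
    for acct in accounts:
        by_device[acct.device_fingerprint].append(acct.account_id)
        by_ip[acct.signup_ip].append(acct.account_id)

    flags = defaultdict(list)
    for acct in accounts:
        if len(by_device[acct.device_fingerprint]) > 3:   # many accounts on one device
            flags[acct.account_id].append("shared_device")
        if len(by_ip[acct.signup_ip]) > 5:                 # many signups from one IP
            flags[acct.account_id].append("shared_ip")
        if acct.selfie_liveness_score < liveness_threshold:
            flags[acct.account_id].append("possible_synthetic_selfie")
    return dict(flags)
```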

Content moderation: Explore strategies for efficient and scalable content moderation, including automation, machine learning, human review, and user reporting.
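
One common pattern for combining automation, machine learning, human review, and user reporting is confidence-based routing: act automatically only when the model is highly confident, and send uncertain or repeatedly reported content to human reviewers. The thresholds below are placeholders for the sketch, not recommended values.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"
    AUTO_REMOVE = "auto_remove"

def route_content(model_score: float, user_reports: int,
                  remove_threshold: float = 0.95,
                  review_threshold: float = 0.70,
                  report_escalation: int = 3) -> Decision:
    """Route content using an ML harm score (0-1) and the number of user reports."""
    if model_score >= remove_threshold:
        return Decision.AUTO_REMOVE      # high-confidence harm: act automatically
    if model_score >= review_threshold or user_reports >= report_escalation:
        return Decision.HUMAN_REVIEW     # uncertain or repeatedly reported: queue for reviewers
    return Decision.ALLOW                # low risk: keep up, continue monitoring
```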

Account verification frameworks: Invest in building robust verification systems such as user profiling and account verification combined with anomaly detection. You may need to incentivize your users to go through these verification steps, for example with access to advanced features or a verification symbol such as a badge or blue check.
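
A verification framework can often be modeled as a small state machine: a set of required steps, plus entitlements (badge, advanced features) that unlock only once every step passes. The step names and entitlements below are hypothetical, chosen only to mirror the incentives mentioned above.

```python
from dataclasses import dataclass, field

# Hypothetical verification steps; not a specific platform's actual flow.
VERIFICATION_STEPS = frozenset({"phone_otp", "business_document", "anomaly_check"})

@dataclass
class VerificationState:
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in VERIFICATION_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    @property
    def verified(self) -> bool:
        return self.completed == set(VERIFICATION_STEPS)

    def entitlements(self) -> list[str]:
        # Incentives (badge, advanced features) unlock only after full verification.
        return ["verified_badge", "advanced_features"] if self.verified else []
```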

User education: Building a T&S system alone isn’t enough! Invest in educating users about platform guidelines, security measures, privacy policies, and quality best practices. After all, you are building a product for legitimate, well-intentioned users. Think of creative, timely, and precise pieces of communication on the website, in-app, or triggered by a specific action.

Trust can be very subjective. It means different things to different people. So how do you measure the results of your work and make sure the features you worked on had a positive impact on the product? In other words, how do you measure trust? Spam, fraud, and abuse keep evolving, and with the advent of LLMs and GenAI tools there is a substantial risk of scaled abuse. Product managers in trust and safety have always looked at ‘effectiveness’ metrics such as prevalence, false positive rate, and precision. However, in the world of GenAI, ‘efficiency’ becomes equally important, if not more so. Cost of review, turnaround time, and ease of scaling need to be part of your core metrics to ensure you optimize not only for accuracy but also for keeping pace with the speed of scaled harm.
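
The effectiveness metrics above have standard definitions; the sketch below computes them, plus a simple efficiency measure, from hypothetical moderation outcomes judged against ground truth. The review-cost inputs are stand-ins for whatever your ops team actually tracks.

```python
def trust_safety_metrics(tp: int, fp: int, tn: int, fn: int,
                         review_hours: float, cost_per_hour: float) -> dict:
    """Effectiveness (prevalence, precision, FPR, recall) and efficiency (cost) metrics.

    tp/fp/tn/fn: counts of moderation decisions judged against ground truth.
    """
    total = tp + fp + tn + fn
    violating = tp + fn
    actioned = tp + fp
    return {
        "prevalence": violating / total,                      # share of content that violates policy
        "precision": tp / actioned if actioned else 0.0,      # share of actions that were correct
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "recall": tp / violating if violating else 0.0,
        "cost_per_actioned_item": (review_hours * cost_per_hour) / actioned if actioned else 0.0,
    }

# Example: of 1,000,000 items, 900 harmful items caught, 200 missed, 100 benign items wrongly actioned.
print(trust_safety_metrics(tp=900, fp=100, tn=998_800, fn=200,
                           review_hours=500, cost_per_hour=40.0))
```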

With the advent of GenAI, constant investment in and improvement of trust and safety across your platform is a critical factor in an organization's success. Users expect a safe and secure environment, free from fraud, abuse, and harmful content, which in turn drives trust and, potentially, repeat customers. AI is here to stay, and product managers need to adapt to these changes, be creative, make the right investments, and balance product growth with integrity and safety. Playing the long game is the only way to ensure sustainable user growth.
