As a teenager in the 90s in the US Bible Belt, you didn’t have to look far to find someone wearing an iconic bracelet with four simple letters: WWJD. It was meant to remind Christian youth to constantly evaluate decisions in their daily lives against the question, “What Would Jesus Do?” — an encouragement to be kind and treat others as you would want to be treated.
After spending over a decade in the tech industry, I have seen that humans (particularly online) do not always have treating others well as their first instinct. Between internet trolls, bot farms, black-hat hackers, and any number of hate groups that have found their place online, it can feel like the internet is getting more toxic every day. It’s no coincidence that product ethics has become a popular topic on conference stages over the last few years.
We All Need to Think About Product Ethics
It is easy to think that product ethics is something only “those big companies” need to think about: the ones whose reach can actually have an impact on things like world politics, the rise of toxic nationalism, and democracy itself. But I would contend that nearly any seemingly innocuous product or feature can be misused.
We are in an age where some of the worst of humanity is seeing the light of day, and our products are enabling them to do real harm. There have been well-documented examples of domestic abusers using smart home technology as instruments of surveillance and abuse. Our fitness apps have shared regular running routines with strangers on the internet, posing serious safety risks for vulnerable populations and even exposing detailed layouts of sensitive military bases.
I personally have had someone from my past track me down on Facebook, not because I have them in my contacts (they were deleted long ago as someone I wanted no further interaction with), but because my phone number was still in their contacts and was also in my Facebook profile. Facebook’s “find your contacts” feature was turned from an easy way to find people you know into a way of tracking someone who might not otherwise consent to be tracked. I am lucky that my situation was more of an annoyance than an actual threat, but I guarantee that is not always the case.
Evaluating Ethical Edge-Cases
A little over a year ago, I was discussing the concept of product ethics with a colleague, Eli Montgomery. He said: “Whenever I start ideating on a new product feature, I like to ask myself, ‘What Would Trolls Do?’” In other words, how would the worst people on the internet, given access to this feature, use it to harass or harm other people? As product managers, we are trained to think about the edge cases in our products. We think about how our products should behave if a user lands on a page with no data, or has a really long name, or gets annoyed and starts rage-clicking a button. So why do we spend so little time thinking about how our products might actually be misused?
I used to manage a travel app called TripCase that allowed you to track your travel. One regular feature request that we discussed was being able to automatically post your trips on Facebook. It sounds like a simple feature – “I’m taking a trip to Paris! Let me share that with my friends!” But as our team discussed it, we realized how easily this could go wrong. We could see a very short leap to a scenario where our app posted that someone was going to be out of town for the next week, giving potential thieves the exact dates during which their house would be empty. Or a user might forget to turn off this automated feature before flying to another city to interview with their company’s biggest competitor…and their boss is a Facebook friend. We decided the risk wasn’t worth the “fun”.
Mariah Hay of Pluralsight gave an example at #mtpcon of a feature request for the Pluralsight app that would share a learner’s skills assessment scores with their employer. While the request was well-intentioned on the part of companies, the team could easily see this feature resulting in biasing promotions, raises, or even hiring opportunities, which would turn the product into a weapon. They went back to the drawing board until they could find a way to solve the company’s problems without weaponizing their product against employees.
Realizations like these don’t take big leaps of imagination or a horrific loss of innocence to uncover. All they take is admitting that not everyone on the internet has the purest of intentions, or that someone might even think they are doing good but end up using your product against other people. A quick brainstorm in which you ask “What would trolls do?” could keep your product from being the stage on which someone else’s nightmare unfolds.
The WWJD bracelets of my youth were meant to be a reminder that we are supposed to act in good faith in all aspects of our lives. But as a product manager in technology, I think what we really need is a reminder that not all of our users have good intentions, and there will always be people out there looking to do harm. What we need is a bracelet that reminds us to ask ourselves, “WWTD?”