As product people, we know we’re not the customer. We also know the importance of doing product research to get to know our real customers, to figure out their problems, and how they react to the solutions we might provide.
In our book, Product Research Rules, we explain how your entire team can conduct effective product research within a couple of weeks — easily, cheaply, and without compromising quality.
Here we share Chapter 1: Prepare to Be Wrong. Read on for a taste of the book and, if you like what you read, you can buy it on Amazon.
Chapter 1: Prepare to Be Wrong
Some years ago, C. Todd worked at a biotech startup that provided products for academic research laboratories, universities, hospitals, and pharmaceutical companies. The startup wanted to develop a product that could extract DNA from plant samples, which they hoped would allow the company to expand into the agri-bio industry. They created a prototype that worked in laboratory conditions, which he thought was a great step, and they embarked on a research project to discover its viability.
The research team traveled to a number of agri-bio companies in Europe to see how the prototype would work on site. It worked well, but the team discovered that one of the companies already had its own “homebrew” method of DNA extraction that cost significantly less to implement. Despite this, the company’s senior leadership was keen to explore how the new product could work and whether it would mean they could replace their own makeshift method. After nearly eight months of further prototyping and testing and more travel to Europe, the company’s leaders decided that, although the new product was 10 times more effective, customers in that market were not willing to pay three times the price for it. The product effort was abandoned.
This research project took months, cost tens of thousands of dollars, and provided next to no insights into the suitability of the product for the market. Why did it take so long and produce such limited insights? Because C. Todd just didn’t approach it with the right mindset. He was reluctant to accept the weak interest shown by the market research. He later realized that he could have saved significant time and money by accepting those early findings and cutting the project far sooner. Why didn’t he do it? Because he (and his managers) wanted to be right. They broke Rule 1: they weren’t prepared to be wrong.
Research that leads to insight has a few properties. First of all, good product research assumes an insight-making mindset: a mindset that does not aim to be right, just to learn from our customers. This mindset is essential because you will be wrong more often than you think when doing product research. Good product research starts with a deliberate question about a problem you care about. It requires you to work with the correct participants, using the correct research methods. Product research is a collaborative effort: you conduct the research together and analyze it together. You don’t hole up in a room and write a report; you share actionable findings and possible solutions with many teams, over and over.
All successful research efforts follow this framework, no matter how big or small the effort is. The teams that follow this framework gain insights in a quick cadence and eventually make research a habit for delivering great products—which means they get used to being wrong more often than they are comfortable with. Yet they can accept this because they’ve taken on the correct mindset.
Ego Is the Enemy of Product Research
One reason research projects fail is that the researchers start with the wrong research mindset: being led by their own agenda instead of staying open to insights. The main factor in this is ego.
Ego can be the enemy of good product research. We think we know the right product to build because, based on our experiences and knowledge, we know the exact product, feature, or service our customers need. Don’t we?
Some years ago, the online marketing company Constant Contact experienced an increase in call volume from its customers, mostly small businesses. More and more callers wanted answers to their marketing questions, and data showed an increase in visits to its support pages and forums from mobile devices. This led the VP of customer success to believe that Constant Contact customers weren’t getting the answers they wanted.
An executive had the idea to package all the company’s content into a mobile app called Marketing Smarts. The idea had backing within the company (partly because, if we’re being honest, of the political ramifications of going against an executive). This executive allocated more than $200,000 to build this app and send it to the app stores. Fortunately, it never became a reality. We say “fortunately” because the mindset in the company at the time was “go build it for scale,” which meant this project went to the newly formed innovation team, the Small Business Innovation Loft. The innovation team decided to run a design sprint to test concepts with customers and observe their reactions. They found that customers were often using a desktop when they had a question, so they didn’t need a mobile app at all. The prototype the innovation team built in a few hours quickly invalidated the need for the mobile app and highlighted the underlying problem: the customer support content and forums were disorganized and difficult to navigate, causing customers to pick up the phone. Constant Contact subsequently improved and relaunched its help center.
Constant Contact’s project started because an executive thought they had a great idea. Their ego and some partially related data informed them that this was the solution to the current issue, and the rest of the company wanted to prove them right. It was only when confronted with the actual behavior of the customer that the real problem—and the solution—were found. This was one example of the mindset shift that needed to occur in the company. The Constant Contact Small Business Innovation Loft was alive for about three and a half years and helped embed a mindset of product research into the company.
You might assume that data-driven teams are immune to ego. Not quite. In the mid-1970s, researchers at Stanford University ran a study asking two groups of students to tell real suicide notes from fake ones. They gave the students 25 pairs of notes, each pair consisting of a real and a fake suicide note. The first group identified only 10 out of 25 notes correctly. Remarkably, the second group correctly guessed the authenticity of 24 out of 25 notes. The study was, of course, a setup: the researchers lied about the number of notes the students had correctly identified. When the researchers came clean and told the students that their scores had been made up, they asked the students how they thought they’d really performed in the task. The group that had been told they’d achieved 24 out of 25 said they thought they’d actually done very well, while the group that had been given a low score reported that they’d probably performed just as badly in reality. Despite being told that their results were fake, the groups still conformed to their existing beliefs. Knowing the facts didn’t change their minds; it reinforced what they already thought.
What does this mean for product research? It means that if we find ourselves to be “right” even once, we’re going to think we’ll be “right” again in the future. Challenging this egocentric mindset is a key part of doing product research well.
So what if we switched things around? What if, instead of expecting to be right, we expected to be wrong? What if we sought to be wrong? Each time we disprove a theory or belief, we actually gain more confidence in what remains, because we know the process we’re undertaking is honest.
The challenge here is that although you are reading this book, your boss, CEO, or other executives in your organization might not read it. The whole team needs to adopt the right mindset for product research. We’ll address this in Chapter 9, when we talk about how to make research a habit, but for now it’s enough to start challenging your own limiting beliefs.
Different Mindsets in Research
Next, we’re going to describe three particular mindsets that can cause product research to fail and one that brings great insights. But before we do, there’s one more thing to remember: you, as a product person, need to care. You can pay people to do lots of things they don’t want to do, but it’s really hard to pay someone to care. (The Managed Heart: Commercialization of Human Feeling by Arlie Russell Hochschild is a great book on this topic.) Product research requires the heart as well as the head, and as much as we all think everyone is rational and cerebral, people are not (more on this in Chapter 2). People are emotional and messy. Having a mindset that’s open to that mess and how wildly it may differ from your initial beliefs will set you up for product research success. You’ll never discover something new unless you’re prepared to be wrong.
Transactional Mindset: “How Can I Sell This?”
C. Todd, in his role as VP of product at MachineMetrics, a data platform for manufacturers, visited a customer with a teammate some time ago to resolve some specific issues they were having with the product. As he started to discuss a particular feature, he heard his teammate asking the customer, “Wouldn’t it be cool if you had…?” The details of the feature in this story don’t matter. What matters is that the teammate’s question was all about selling his initial idea. By prefacing it with “Wouldn’t it be cool…?” he was turning an opportunity to gather genuine insights into a leading question that would only further his own sales agenda.
The transactional mindset is all about the transaction and limits itself to whether or not a customer would buy a product. An example is asking, “Would you like to buy…?” or “What if you had…?” If this sounds more like a sales pitch than research to you, you are correct. The transactional mindset doesn’t take into account the nuances of a customer journey or the complex needs of the user. It explores the topic at a surface level, avoiding the depths at which the researcher may find evidence that they are wrong. This approach is often seen in market research that solely focuses on sales performance, and it is not useful.
Confirmatory Mindset: “Am I Not Right?”
The confirmatory mindset is where you try to get the answers you want. If you beat up the data, it will tell you what you want to hear. The problem with this approach is that it’s about confirming the ideas or beliefs you already have rather than listening to what the customer has to say. If you are in the confirmatory mindset, you might find yourself asking subtly leading questions, usually focused around the feature you are working on. These questions are laced with a secret desire to be liked by the customers, and their wording reflects your own product development world, not the customers’ world.
An example of such a question is “What do you like about the new faceted search filters?” There are many subtle leading aspects of this question. Why are you assuming that they are using these filters? What is your motivation behind stating that the search filters are new? Do you think they’ve heard the word faceted before? What if they don’t care about the filters at all but feel compelled to say something just because you are suggesting that your new search filters are likable things? This mindset is not useful for creating insights that teach you something new about your customers and your products. The only thing it creates is a false comfort zone that makes you feel better about yourself. It is common to see a confirmatory mindset in teams that prioritize building over understanding and planning. They want to be right; they want to check the “we did our research” box so that they can keep building and meet that deadline they gave themselves.
Problem-Finding Mindset: “How Can I Improve This?”
Some teams focus on finding things to fix. These teams carry out research just to find problems. They’ll treat a usability study like a driving test: there are right and wrong answers, and the student either passes or fails. If you have a problem-finding mindset, you will be watching the sessions with a very keen eye on what the participant cannot do. This drive to find an underlying problem causes you to be aggressive in interviews, which sometimes look more like interrogations. You assume that the participants are hiding a problem, which you are there to extract from them. You believe the users will give you a great insight if you continue asking, “Why? Why? Why? Why!”
What does this problem-finding mindset do to product research? It focuses the effort on problems and problems only. You aren’t genuinely interested in the experiences of the participant; you’re there to find a problem, even though there may be none. This mindset tries to reduce complex user interactions to black and white. In that light, a product that has too many problems to pass the “tests” might be abandoned along with its positive qualities. Conversely, a product that shows no problems and passes might be shipped despite being a poor market fit.
A subtle ego issue may surface in this mindset. The problem-finding mindset assumes that whoever worked on the product did a lousy job, which led to these problems the researcher so eagerly found. It assumes that the current team isn’t good enough and needs the researcher’s gracious guidance in telling them what to fix. The question “How can I improve this?” has a strong emphasis on I.
The problem-finding mindset may lead to quick improvements in the short term, but it focuses on the negative, limits learning, and can be harmful to collaborative work in the longer term.
The Right Mindset, the Insight-Making Mindset: “I Want to Understand”
We talked about teams that focus on the sales motivation, teams that focus on being right, and teams that focus on mistakes and improvements. There is a fourth type of team that focuses on learning. They are aware of their assumptions and biases, they work very hard to avoid leading their users during research, and they check their egos at the door. Their sole goal is to learn from their users. They seek to learn about users’ good and bad experiences, their favorable and critical thoughts, their suggestions and complaints. When they hear an unpleasant surprise, they stop and listen, giving users space to express themselves in their own terms. These are the teams that assume the insight-making mindset. They are the ones that are most successful with product research.
The insight-making mindset focuses on the positive and negative aspects of your product. Researchers in this mindset work with an open mind, trying their best to withhold judgment and focus on the research question at hand. Unlike researchers with a transactional mindset, they are not just interested in that “switch” moment when the purchase decision is made. Unlike researchers with a confirmatory mindset, they do not lead their participants to affirm their product’s features. Unlike researchers with a problem-finding mindset, they are not looking to fix an immediate problem. They just want to listen and understand without bias.
The insight-making mindset adopts a diagnostic approach, which is about getting to the real problem. It’s about trying to understand the needs of the customer with as little bias as possible and asking open questions, such as “Tell me about the last time you…” or “What happens when…?” Diagnostic questions are based on past events and decisions, not a hypothetical future. They allow the participants to bring in their unique, personal perspectives and share their experiences in their own language.
A friend who runs a startup accelerator remarked that of all the startups he sees, the most successful are those that take the diagnostic approach to understanding their customers. In our early years of product research, we admit that we often carried out research with a confirmatory mindset. It took time and experience to realize that all we were doing was looking to be right, and that this was making our products more and more wrong.
The insight-making mindset is where you are open to being wrong. It’s interested in generating insights, not confirming opinions. This different focus helps you keep a cool head when you discover serious issues with your product, which is especially challenging when those issues arise from features you built with your own hands. If you don’t focus all your attention on the product’s problems, you can see its strengths, so you know what to retain as you develop it further. Not only does focusing on insights with an open mind give you a better return on the time you’ve invested, it also causes less finger-pointing when the results come out.
The Value of Being Wrong Early
C. Todd learned about the value of being wrong in the early stages of product development. He was tasked with exploring how loyalty programs related to email marketing. The state of user research at Constant Contact at the time was robust: they had a dedicated user experience (UX) department and full-time UX researchers. Their projects tended to run deep and long, frequently taking three to six months to derive any insights. The marketing team had tons of data to sort through, and there was a small but growing team of data scientists to help with that analysis.
Ken Surdan, then the senior vice president of product, asked the newly formed innovation team to organize a cross-functional team to get working on customer loyalty programs. So C. Todd and a small team went off and ran a design sprint. They spent about one week preparing and organizing, one week running the sprint, and one week finalizing the findings and combining them with market research to draw conclusions about which ideas, if any, should move forward. Turns out, the answer was none of them.
You might think that they wasted three weeks! Remember, though, that the company had been trying to determine a solution for about a year with no success. That three-week effort gave the executives the data they needed to kill the project formally and finally. As much as we’d love to write about the smashing success that effort was, know that learning what not to do is also a success. Imagine if Constant Contact had funded a team of 8 or 10 people to build a product, and after three months they had nothing but a handful of code. We’ll take a failed three-week effort with a small team over a failed three-month effort with a larger team any day.
Steps for Good Insights
Starting with an insight-making mindset is crucial to making product research efficient and actionable. However, it is equally important to follow some general steps to ensure that you actually get insights instead of random conclusions or kind confirmations.
On a general level, any successful product research endeavor has six basic steps.
Step 1: Focus on a research question
The goal of product research is finding actionable insights for product development in a timely manner. Therefore, it is very important to start with a research question. A research question is a single focused question that frames your research endeavor: what is it that you want to learn? Finding one is not hard, and it guarantees quality and focus throughout the process.
Step 2: Identify your research method and participants
This is the step where you decide on your method and who you will work with. There are hundreds of methods that you can use to answer your research question. But each method answers a different type of research question. For example, if your question calls for statistically meaningful numeric data, you will use quantitative methods. If your question calls for personal interpretations of meaning, you will use qualitative methods. Be selective about what kinds of participants will give you good insights and whose data will be valuable in finding an answer.
Step 3: Collect data
It may come as a surprise that collecting data comes at such a late stage in the process! Depending on your research question and your selected method, this step can entail talking to participants, asking them specific questions, doing work together, watching them use your product, or analyzing historical data from users.
Step 4: Analyze as a team
Your data will gain meaning if you analyze it from different perspectives. The best way to do this is as a team—not necessarily your immediate team but a group of people who offer different, even opposing, viewpoints. This is almost like seeking to be wrong about your initial ideas! Involving fellow team members, peers on other teams, stakeholders, and sponsors will help you arrive at richer insights in a very short time.
Step 5: Share findings
Sharing your research findings is just as important as the previous four steps, and it is a separate step because of the effort involved. It is so sad to see teams do a very good job with the first four steps and then write a report that no one will ever read. Sharing your findings is an opportunity to share stories, judge the business impact, and show what you suggest through prototypes. Successful product research teams don’t do this just once; they do it over and over to have meaningful discussions with all affected parties.
Step 6: Plan the next cycle
No, you’re not done yet! Good product research is about continuously learning from the market and from your customers. One research endeavor leads to another, and you never stop discovering new insights. (If you are interested in more detail about continuous learning, you should have a look at Continuous Discovery Habits by Teresa Torres.)
As you develop your research muscles, the research process outlined above might change, as might the duration of these steps. You might even skip some of them entirely (more on this in Chapter 9). But if this is your first attempt at research, we think it’s a good idea to stick with these steps.
What Does Continuous Learning Look Like in Real Life?
Product research is a mix of qualitative and quantitative approaches, and the order can vary.
To highlight this point, let’s look at examples of two teams’ research cycles. Both span a long period of time: they include prototyping rounds and even feature releases. The details aren’t the point here, so we’ve left them out and skipped some steps; the point is that both teams are engaging in a cycle of continuous learning. Which steps you take in your own cycle will vary.
Example 1: Start with quantitative data
Quant: Start with sales data. Look at won/lost numbers and reasons for these. Focus on the top reason sales are lost.
Qual: Interview lost prospects and/or look-alikes.
Quant: Based on insights around won/lost reasons and interviews, examine product analytics to compare what you heard with what actually happened.
Qual: Prototype a product feature that solves an identified problem and get customer feedback.
Quant: After feature release, track analytics to check whether you are seeing the expected behavior.
Example 2: Start with qualitative data
Qual: On-site customer visits/day-in-the-life
Quant: Analytics on those customers’ product usage
Qual: Video interviews of customers
Quant: Market analysis of new opportunity
Qual: Prototype of new product area
There are endless variations of these. The cycles could start by looking at a number of customer interviews you recently conducted. They could start by hearing a few similar stories from tech support. They could start with you on a bike ride, thinking about why the last product release did not reach the anticipated adoption rates. The point is that you start and keep going!
Ego is the biggest enemy of product research. Product people want to be on point with their predictions; we want to see our ideas shine. But those expectations may blind us from seeing the actual needs and motivations of our users. Being open to being wrong is at the heart of product research, and the teams who let go of their egos are the ones who arrive at great insights.
Sooner or later, you will inevitably be wrong in your product research endeavors. You’ll be wrong more than once. Teams who are OK with being wrong get better and better with each iteration of their research, and they develop a habit of learning from their users, without being hurt when their ideas fall flat.
Rules in the Real World: Founders of Zeplin Were Very Wrong
It is impossible for designers to know the technical details of the platforms they are designing for to the same depth as developers do. Pelin is a designer who is meticulous about the little details that make an app a pleasure to use: the transitions between screens, recovery steps in error cases, the timing of animations, correct text baselines, aligning icons in list views, consistency between the iOS and Android versions of the same app. She spent hours in spec tools to document designs and then spent just as many hours with the developers to go over these little details again so that the code would reflect exactly what she intended to deliver to the end users.
She worked closely with Berk, who also wanted to write code that delivered designers’ work as intended. Pelin and Berk found there was always something missing in the static documentation that designers shared with the developers. Berk found Pelin’s detailed specs very valuable, but he always needed more information when he started coding. This often started an endless back-and-forth between design and development.
To solve this problem, Pelin, Berk, and two other cofounders founded Zeplin, a collaboration platform that makes software as a service (SaaS) tools for designers and developers. They were confident that they were addressing a sore spot but wanted to make sure that they were covering a wide variety of workflows. So Pelin, Berk, and two other Zeplin employees started interviewing designers and developers around the world to understand how they work together. They asked only two questions: “How do you share designs in the team?” and “How do you collaborate with designers (or developers)?” They talked to more than 40 designers and developers with different collaboration styles over two weeks. Then they all sat down together and identified about a dozen themes from their data. For each theme, they sought to understand the user’s challenge and who, other than the user, was unhappy about it.
This research invalidated about half of their initial ideas! The Zeplin team used this input to build their first release while keeping their ears open for customer feedback through feedback forms, a beta program, and a proactive customer success team. Setting out in the right direction, one that is firmly based on actual needs of end users and not on the whims of the founders, made Zeplin the industry standard for designer-developer collaboration.
What if Pelin and Berk had skipped doing their research at the beginning and started building instead, like most founders do? They probably would have heard the same feedback from their users in their first release. Starting with research allowed them to get the same feedback without investing huge amounts of time and effort into building a product first. Having a focus and working collaboratively allowed them to make sense of their data quickly, with high impact. Because they followed Rule 1 and were prepared to be wrong, their open-minded research approach invalidated most of their initial ideas and opened up space for new, better ideas.
- The foundation of product research is being open to being wrong. The insight-making mindset gives you the space to learn and to arrive at genuine insights.
- Good product research consists of six steps: focusing on a research question, identifying your research method and participants, collecting data, analyzing as a team, sharing findings, and planning the next cycle.
- Planning the next cycle in product research is a key behavior. Without it, you risk making research a one-time, disposable showpiece.
- Product research is an ongoing endeavor where you cycle between different types of research approaches.
Buy Product Research Rules
Drawing from decades of experience in product development, C. Todd and Aras lay out nine simple rules that combine user research, market research, and product analytics to quickly discover insights and build products customers truly need.