Qualitative versus quantitative data: we’ve all been involved in a conversation debating their respective merits at some point in our careers. We’re often flipping backwards and forwards between letting feedback from a handful of customers drive all our product decisions or requiring everything to be backed up by statistically significant data. So which type of data is better? The answer is neither.
What’s the Difference?
Let’s first get on the same page about what these terms actually mean. If we take the most literal definitions, quantitative data is information that can be measured and written down with numbers, like the number of times a certain feature was used or a page was visited. Qualitative data is information that can’t be measured and which is subjective, like how intuitive a feature is to use.
In a product management context, these terms refer to data gathered from customers about their experience with our products. Quantitative data is gathered through surveys, application metrics and A/B tests, while qualitative data is what customers say during interviews, calls, email exchanges, tickets, or casual conversation. Another angle is the softness and hardness of the data. Quantitative data is hard: it’s measurable and concrete. Qualitative data is soft: it is subjective information that is filled with opinion.
In order to make balanced product decisions, we need both types of data. They both tell us about customer behavior, but each gives a different perspective and level of detail. Hard data is more precise because it is measured: “How many customers are doing X?”. It can also give insights into our customers’ experience, such as the average latency customers experience when they visit our site. Soft data gives us much more detail and context behind the customer behavior: “This feature makes it very difficult to do my job”, “This is important to me”, “Your site is slow”, “That screen doesn’t work for me because it is too cluttered”.
On the subjectivity of soft data, Pablo Seibelt, who runs the data science team at Auth0, says: “Humans are subjective! We are not robots, but being aware of our subjectivity leads to better results.”
Here’s an example of how quantitative and qualitative data complement each other. We’re using an A/B test to figure out whether the call to action in our user interface should be orange or green. If we can reach significance in this test, we’ll have identified the color using hard data. What we don’t know is the context, why one color is preferred over another. This is something we could get from studying soft data, like what users are saying during user testing sessions (“Oh, I hadn’t spotted that button there, it kind of blends into the background”).
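To make “reaching significance” concrete, one common way to check an A/B result like this is a two-proportion z-test. The sketch below uses only Python’s standard library, and the click and visitor counts are invented purely for illustration:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test comparing click-through rates of two variants."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled click-through rate under the null hypothesis (no difference)
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: orange got 120 clicks from 2,000 visitors,
# green got 160 clicks from 2,000 visitors.
z, p = two_proportion_z(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers, p comes out below 0.05, so we would call green the winner. Note how little this tells us on its own: the test confirms *which* button performed better, not *why*, which is exactly the gap soft data fills.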
Why do you Need Both?
Taking the example above, after getting the hard data we might ask, “Who cares?” We’ve found out which of the two buttons works better, so we can just act on that. In this isolated case, that might be true. However, without going deeper we don’t understand why one is preferred over the other. Once we talk to customers, we can ask them and understand their preferences, which can then be applied in other contexts as well. If the green button won out in the test, does that mean we should change all of our buttons to green? Or is it the context of this specific test which makes it better? We’ll need more soft data to find the answer.
On the topic of balancing both types of data, Sergei Shevlyagin, group manager at Zillow says: “Qualitative helps you narrow down on the right questions by understanding what problems the customer is having, what actually matters to them and why. Hard data gets you precise answers to precise questions once you did the narrowing down.”
So Where do you Start?
Wouldn’t it be great if there was a hard and fast rule? Unfortunately, as is often the case in product management, the answer is “it depends”. Where to start depends on the situation, what we’re trying to measure, and which data we have available.
Starting with Quantitative
When making decisions where there’s a lot of uncertainty, it’s common to use hard data. Imagine we are planning to add a new product to our portfolio. We might put ideas for products to build into a survey which we send to our customers. This might help us identify how well these ideas resonate with our customers (or at least with those customers engaged enough to complete the survey), but it doesn’t tell us why they prefer one over another or whether there are more important problems we could be solving. We need soft data to make sense of the survey information.
Another place where hard data is useful is showing patterns of customer behavior. Let’s say behavioral data from our ecommerce platform tells us that users aren’t engaging with our special offers. By taking the data at face value, we may conclude that people don’t find the special offers compelling, and that we need to either improve them or de-prioritize the feature. This assumption may be wrong. Digging deeper and talking to a bunch of customers, we may discover why: perhaps the special offers are not obvious or discoverable. Thus, if we act simply on the hard data, we’ll go down the wrong path. Starting only with hard data is starting with only half the picture.
Starting with Qualitative
An alternative approach to figuring out which product to build would be to start with the soft data. If we spend some time with our customers (or people we think could become our customers), we’ll probably learn an awful lot about their problems, challenges and needs. We can use these insights to form hypotheses around the products or services that will deliver the most value to this group. We can then aim to gather hard data to validate these hypotheses. By starting with the soft data, we can figure out what questions to ask in a survey or better design experiments to validate the RIGHT hypotheses with hard data. Bruce McCarthy, founder and chief product person at UpUp Labs, explains this way of working in more detail in this interview.
If there are more “knowns” and hard data is already available, it is common to start with soft data. For example, say we’ve been given the task of identifying the best way to improve the conversion rate on an ecommerce platform that’s already getting a decent amount of traffic. At that point, we have access to hard, behavioral data to show us where the main pain points might be. However, as Lulu Cheng, product manager at Pinterest explains in this interview: “If you just see a number go up or down week to week, that tells you what’s happening, but it doesn’t tell you why users are doing that particular action.” Soft data will uncover the reason. Without soft data, we could very well end up solving the wrong problem, or solving the wrong part of the right problem.
The Customer Data Cycle
Both hard and soft data feed into each other. Starting with hard data, we can uncover significant patterns in our product usage that lead us to do interviews to understand more context. We may uncover new clues while interviewing, and we can then go back to our hard data to measure their significance. In some cases getting some hard data will lead us to other related hard data to make more sense of what we are seeing. The same applies to starting with soft data.
A great illustration of this cycle is Calm, a mobile app for meditation. After launching a meditation reminders feature, the team at Calm looked at their data and noticed only 1% of their users took advantage of reminders. Further investigation showed that those 1% had a retention rate three times as high as the rest of their users. Why weren’t reminders used by more customers? They looked at their settings page and realized reminders were deeply buried. Based on the hypothesis that this was the reason for low adoption, they experimented with increasing the feature’s discoverability by prompting new users to create reminders. Through testing they found that 40% of users who saw the new prompt then set a reminder, with the same threefold retention rate! As you can see, Calm’s continual delving into soft and hard data was critical in helping to arrive at the right decision.
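A quick back-of-the-envelope calculation shows why this finding mattered so much. The 1%, 40%, and threefold figures come from the story above; the cohort size and the 10% baseline retention rate below are hypothetical, chosen only to illustrate the scale of the effect:

```python
# Hypothetical cohort and baseline retention; the reminder-adoption shares
# (1% before the prompt, 40% after) and the 3x retention multiplier are
# taken from the Calm example.
cohort = 100_000
baseline_retention = 0.10                    # hypothetical
reminder_retention = 3 * baseline_retention  # "three times as high"

def retained_users(reminder_share):
    """Blend retention across users who did and didn't set a reminder."""
    with_reminders = cohort * reminder_share
    without_reminders = cohort - with_reminders
    return (with_reminders * reminder_retention
            + without_reminders * baseline_retention)

before = retained_users(0.01)  # only 1% found the buried setting
after = retained_users(0.40)   # 40% set a reminder after the prompt
print(f"retained before: {before:,.0f}, after: {after:,.0f}")
```

Under these assumptions, retained users rise from about 10,200 to 18,000, a gain no amount of staring at the original 1% adoption number alone would have surfaced without the qualitative digging into *why* adoption was so low.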
Be Careful of Your Assumptions
Both hard and soft data need to be analyzed, and the results can be wrong. We have a bunch of metrics about feature usage, but what is the meaning behind those metrics, and what action do we take? We heard a bunch of requests from different customers in interviews; how common are they? Are they really valid? Did we really hear what we thought we heard? In the same way we should question the data, we should question our assumptions.
Ultimately, when it comes to qualitative and quantitative data, it’s key to understand that we need both. Without knowing how significant the feedback of a handful of users is against our entire user base, we might overemphasize its importance. Without knowing why we’re seeing a certain pattern in behavioral data, we might try to solve the wrong problem, or solve the right one incorrectly. Without validating gut feelings and intuition, we might go completely down the wrong path.
As product managers, our job is to learn as much as possible about our customers to help us understand where to invest. Balancing qualitative and quantitative data is critical for driving product decisions.