Data-Driven Product Management by Matt LeMay

Mind the Product, 22 July 2020


In this deep dive, Matt LeMay, author of Agile for Everybody and Product Management in Practice, explains what data means to product management and how it should be used to ensure that we’re building products designed to achieve meaningful outcomes.


Nearly every conversation I have with a product team looking to take a “data-driven” approach leads to this exact question:

When do we know that we’re making the right decision?

And for teams and individuals looking to launch products into the irreducible complexity of real-world markets, it isn’t too hard to understand why. Making the wrong decision can spell disaster for a product team. And in a world of rapid change and growing uncertainty, the very notion of “data-driven product management” suggests an appealing and sorely needed sense of finite, quantitative certainty.

But that very quest for certainty is often what leaves real-world approaches to “data-driven product management” lacking in real-world impact. “Data-driven product management”–as with product management in general–requires fearlessly navigating a world of incomplete information, unanswerable questions, and best guesses.

What Is Data-driven Product Management?

Depending on who you ask, “data-driven product management” can mean anything from “product management that is accountable for developing complex quantitative models” to “product management that involves looking at a Google Analytics dashboard from time to time.”

But beneath that spectrum of definitions is a single important idea: data-driven product management means making decisions based on real-world information. This might seem broad and obvious, but in practice it often requires challenging long-held patterns of decision-making by opinion and politics.

This does not, of course, mean that opinions and politics disappear. In most cases, data-driven product management means fearlessly bringing to light the real-world information that might contradict or complicate opinions and politics–including a product manager’s own opinions. In other words, data-driven product management means taking a clear-eyed look at all the real-world information available to us and making the best decisions we can, even when those decisions are neither clear nor obvious.

Keeping It Simple and Descriptive

The very word “data” can be a dangerous one, as it carries with it great authority but highly variable specificity. For example, the statement “The data tells us that we should pursue a strategy that starts with retaining and upselling existing customers, rather than investing in the acquisition of net-new customers” sounds authoritative and convincing. But what is “the data”? And what exactly is it telling us?

Here, we can immediately see both the lure and the danger of “data-driven product management.” If our goal as data-driven product managers is to look fearlessly and comprehensively at the specific real-world information available to us, then we are not living up to that goal if we use generalized terms like “the data” to stand in for cherry-picked bits of information that appear to support a single, unassailable course of action.

Truly data-driven product managers use specific and descriptive language to share information with their colleagues. Imagine how you might rewrite the above statement about what “the data tells us” to describe:

  • What decision we set out to make
  • What a “successful” decision would be
  • What exact “data” we consulted to make that decision
  • What conclusions we drew from that data
  • The course of action we suggest as a result
  • What information is missing, incomplete, or unavailable
  • When and how we will evaluate the success of the decision and adjust course if needed

As you set about to make a “data-driven” case for any approach you plan to take as a product manager, make sure you are able to speak directly and comprehensively to each of these points.
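One lightweight way to keep yourself honest on each of these points is to capture them in a structured "decision record." The sketch below is purely illustrative: the class and field names are hypothetical, not a standard template.

```python
# A hypothetical "decision record" capturing the points listed above.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    decision: str                   # what decision we set out to make
    success_definition: str         # what a "successful" decision would be
    data_consulted: list[str]       # the exact data we consulted
    conclusions: str                # what we concluded from that data
    recommended_action: str         # the course of action we suggest
    missing_information: list[str]  # what is missing, incomplete, or unavailable
    review_plan: str                # when and how we will evaluate and adjust


record = DecisionRecord(
    decision="Prioritize retention vs. acquisition for Q3",
    success_definition="Net revenue retention above 100% by end of Q3",
    data_consulted=["Churn cohorts, Jan-Jun", "Support ticket themes"],
    conclusions="Churn is concentrated in users who never activate onboarding",
    recommended_action="Invest in onboarding before paid acquisition",
    missing_information=["No qualitative data on why onboarding is skipped"],
    review_plan="Review churn cohorts monthly; revisit decision after 8 weeks",
)
```

Writing the record down forces the "missing information" and "review plan" fields to exist at all, which is exactly where hand-wavy invocations of "the data" tend to fall apart.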

Start with Your Goals

As the above bullet points suggest, having a clear and shared definition of success is critical for cultivating a truly data-driven approach. If we do not know what goals we are working towards, then any and all information we consult can and will simply be used to justify our own opinions–or those of the people who carry the most political sway within an organization.

For that reason, it is important to know what success looks like for any product, feature, or service being developed. There are innumerable frameworks available to do so–including SMART goals, CLEAR goals, and OKRs. Any and all of these offer a great starting point for working with your team to articulate a clear sense of where you are going–the “why” behind your features, products, services, and other outputs.

Teams should be clear on why they need that data and how they intend to use it (Image: Shutterstock)

This is a particularly important step for teams that find themselves stuck on the notion that they need more data, without being clear on why they need that data or how they intend to use it. Acquiring data–especially data that can be purchased–is less difficult and less controversial than actually making a decision, and many teams wind up using a lack of available data as a generalized excuse for inaction. I have often found it helpful to run a quick “destination thinking” exercise with teams in this position, which starts with the question, “If your team had access to all the data in the world, what decisions would you make?” This kind of blue sky thinking helps realign teams to the problems they must solve and the decisions they must make before getting into the details of what data they must consult to move forward.

Hypotheses and Assumptions

Once you have aligned on your goals and objectives, then you can begin looking at how exactly you will measure progress towards these desired outcomes. This is where the greatest test of data-driven product management often presents itself. Here, teams tend to either push towards an “air-tight” case for the approach they favor by way of the elisions and generalizations discussed earlier, or do the much more challenging work of documenting their hypothesis (what they think will happen and why), and their assumptions (the things that must be true for their hypothesis to play out as expected).

Documenting hypotheses and assumptions is perhaps the most critical part of a truly data-driven approach to product management, because it allows a diverse set of stakeholders to apply their particular knowledge towards making a better decision. For example, imagine that a team working towards the general goal of increasing revenue per active user of a subscription service decides that the best course of action is simply to raise the price of each subscription by a small amount. This is by no means a sure thing, but it is a hypothesis–a theory to be tested and validated. Within this hypothesis is a series of assumptions, including but by no means limited to:

  • Users will pay a higher price rather than canceling
  • Users will accept a higher price without receiving new functionality
  • If raising the price of a subscription results in a net increase in the revenue per current active user, but a decrease in the number of new subscribers, that is acceptable to the business

The last assumption on the above list is particularly critical, because it could so easily be omitted by a team that has specifically been tasked with working towards the specific goal of increasing revenue from existing users. The assumption that working towards this goal means that other high-level measures of success can be negatively impacted is just that–an assumption–and one that is always better to document.
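The first assumption–that users will pay more rather than cancel–can at least be stress-tested with simple arithmetic before any experiment runs. The sketch below uses entirely hypothetical prices and user counts to compute the break-even churn rate: the cancellation rate above which the price increase loses money.

```python
# Hypothetical sketch: stress-testing the "raise prices" assumption.
# All prices and user counts below are made-up example numbers.

current_price = 10.00   # assumed current monthly subscription price
new_price = 11.00       # assumed price after a 10% increase
active_users = 50_000   # assumed current subscriber count

baseline_revenue = current_price * active_users


def revenue_after_increase(churn: float) -> float:
    """Monthly revenue if a fraction `churn` of users cancels after the increase."""
    return new_price * active_users * (1 - churn)


# Break-even churn: solve new_price * (1 - churn) = current_price
break_even_churn = 1 - current_price / new_price
print(f"Break-even churn: {break_even_churn:.1%}")  # ~9.1%
```

A calculation like this does not validate the hypothesis–only real user behavior can do that–but it tells you how much churn the business can absorb before the "small" price increase becomes a net loss, which sharpens the conversation about which assumptions matter most.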

Product Metrics and Leading Indicators

Testing and validating hypotheses and assumptions is one of the critical disciplines of product management, and is well documented in books like The Lean Startup and Testing Business Ideas. It is always helpful to think about quick experiments you can run to validate your hypotheses and test your assumptions–but keep in mind that the real world of product development is never as scientifically clear-cut as a laboratory. At a certain point, you will need to move forward with incomplete knowledge–and you will need to have a plan in place for how to measure the ongoing progress of your product against the goals you have set and the hypotheses you have developed.

Here, deciding on specific product metrics is an important and necessary step. These are the measurable things within your product that you can track and analyze as needed to see whether your hypothesis is being proven out, and whether your assumptions are proving to be correct (more on that soon).

Deciding upon product metrics often involves making its own set of hypotheses and assumptions–usually about the short-term measurable things that we believe will drive long-term business outcomes. These are often called leading indicators–the signals that we believe will prefigure the longer term outcomes (lagging indicators) we are working towards for our business.

Building off of our ongoing example, a team might be tasked with the lagging indicator of increasing the average revenue per user over a six-month period. But how will that team measure incremental progress towards that goal? And, perhaps more importantly, how will we measure it in a way that helps us understand whether the specific actions we’ve taken are actually driving the desired result?

Here, again, there is no absolute right answer–just the best guess we can make with the information we have. Imagine that, rather than simply choosing to increase the cost of a subscription, we chose instead to introduce a new value-add feature at an additional cost. How might we measure this feature’s success in driving revenue? We would likely choose, for example, to measure how many users click through to learn about the new feature and how many users actually convert to purchase the new feature. You might decide to look at these metrics daily, and to set specific goals for the number of daily clickthroughs and daily conversions you expect to see based on your team’s revenue goals.
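The daily check described above can be reduced to a few lines of arithmetic. The numbers and targets below are illustrative assumptions, not real benchmarks.

```python
# Hypothetical sketch: checking daily leading indicators against targets.
# All counts and target values are illustrative assumptions.

daily_feature_page_views = 1_200  # users who clicked through to learn about the feature
daily_conversions = 48            # users who purchased the feature

clickthrough_target = 1_000       # assumed daily clickthrough goal
conversion_rate_target = 0.05     # assumed goal: 5% of viewers convert

conversion_rate = daily_conversions / daily_feature_page_views

hit_clickthrough_goal = daily_feature_page_views >= clickthrough_target
hit_conversion_goal = conversion_rate >= conversion_rate_target

print(f"Conversion rate: {conversion_rate:.1%}")  # 4.0%
```

Note that in this example the clickthrough goal is met while the conversion goal is not–a split result that quantitative data alone cannot explain, which is where the qualitative research discussed below comes in.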

This all seems straightforward enough–but let’s say that, at the end of six months, you have exceeded your daily goals for clickthroughs and conversions, but the company at large has failed to hit its revenue goals.

Reflecting on and examining potential disconnects like this is exactly why it is so important for us to start with our high-level goals, and then do our best to find product metrics and leading indicators that we believe will help us achieve those goals. The more specific and systematized we can be in our approach, the better we are able to make decisions that are clearly scoped to our particular areas of focus, while working towards the broader goals of the business.

“Data” Means Quantitative and Qualitative Data

When people talk about “data-driven product management,” there is often an unspoken assumption that this only means quantitative data–information that can be captured by numbers. This assumption is profoundly dangerous, because it often leaves us unable to speak to why our assumptions or hypotheses might be playing out in the way that they are. We might be able to see, for example, that users are not clicking through on our new feature as much as we had hoped they would. But without qualitative data–information that can be captured by words and stories–we are often powerless to act upon quantitative trends.

It’s often assumed that data only means quantitative data (Image: Shutterstock)

Product managers often assume that qualitative analysis is “easier” than quantitative analysis, but this could not be farther from the truth. Qualitative research is a challenging, and in many ways counterintuitive, discipline, especially for product managers who are primarily accustomed to interviewing stakeholders as opposed to users. Here, product managers must avail themselves of as many resources as possible to skill up on qualitative research, including books like Erika Hall’s Just Enough Research and direct collaboration with trained researchers in their organizations.

As a general rule, it is helpful to keep in mind that if you don’t know why you are observing a particular trend in quantitative data, then that trend is of little use in guiding your decision making.

What Does Good Data-driven Product Management Look Like?

Simply put, good data-driven product management means that anybody on a product team is able to speak confidently to the goals they are working towards, the way they are measuring progress towards those goals, the current status of those measures and, critically, why.

Note that this in no way ensures that a product team’s work will be successful–or even that they will be working towards the right goals and outcomes for their business or their customer. It simply means that product teams are able to speak candidly to the real-world information that is informing their work and to adjust course based on what is happening–even if what is happening is not what they had planned or hoped for.

What Does Bad Data-driven Product Management Look Like?

As we have discussed, bad data-driven product management often involves the invocation of “data” as the justification for taking a politically expedient path. One reliable test of whether a product team has fallen into this trap is whether or not any meaningful retrospection or conversation takes place if a product fails to meet its stated quantitative goals. For teams that are practicing bad data-driven product management, a failed product is simply met with a noncommittal shrug. After all, somebody got what they wanted–and properly addressing the product’s failure would be too politically treacherous to be worthwhile.

Another common pitfall of bad data-driven product management involves using metrics (particularly engagement and adoption metrics) as a stand-in for more meaningful high-level goals. Teams that fall into this trap–one closely related to the Build Trap described by Melissa Perri–drive thoughtlessly towards the usage of their products and features without any clear sense of what that usage is ultimately meant to accomplish for the business or its customers.

In both of these cases, “data-driven” teams actually wind up less connected to the full set of qualitative and quantitative information that they need to make smarter decisions.


While many are drawn to “data-driven product management” in the hopes that it will minimize uncertainty and empower teams to make decisions with absolute confidence, the truth is much more nuanced. Truly data-driven product management asks us to be more direct about documenting our unanswered questions, untested assumptions, and best-guess hypotheses. Data-driven product management requires more collaboration and communication, not less. But it is the best approach we have at our disposal to build products that achieve meaningful outcomes for our business and our customers.
