Facing uncertainty is a company’s biggest challenge. Whether you’re in a startup or an established company, when you start work on a product you need to validate your problem, validate your solution, and find your market. When you scale, uncertainty takes the form of change: your market changes, your users’ needs change, and your organization changes. With that in mind, the most decisive tool you have when facing uncertainty is knowledge. Some companies rely on instinct and opinions to make decisions; this article focuses on knowledge as a decision-making model.
The Data / Information / Knowledge Paradox (Drawn From the DIKW Pyramid)
- Data: data is feedback from your users, your market, or your product. It can be a lengthy spreadsheet (quantitative) or a one-on-one interview with a user (qualitative). Data is an input and, as such, has little value in itself: it only tells you what is happening.
- Information: information is the output of data after analysis. As such, the goal of data analysis is to find out why something is happening. Information is only as good as the analyst’s capacity to understand data – or the UX researcher’s capacity to empathize. In that sense, the gap from data to information is technical. Quantitative data becomes information in the hands of the data-savvy. Qualitative data requires method and empathy.
- Knowledge: knowledge is the outcome of information. Knowledge only arises when information is digested and accepted by its recipients. Unfortunately – though data cannot be refuted – information is almost always open to doubt. And, if information can be doubted, then it’s up to the recipient to accept it or not. Thus, the gap from information to knowledge is cultural.
| | Question | People involved | Skills needed |
|---|---|---|---|
| Data | What is happening? | Individual | Technical |
| Information | Why is it happening? | Individual / Team | Analytical |
| Knowledge | What have we learned? | Team / Company | Organizational |
Going From Data to Information: the Learning Curve
Professionals spend their careers perfecting how they extract information from data. Data analysts have software and query languages to treat data at scale. UX researchers rely on empathy and method to understand user psychology. Data analysis and user research are often considered jobs that require a high degree of specialization.
However, most people don’t need to achieve that level of mastery to work with data. In fact, more and more tools come out every year that put data into the hands of newcomers. User-friendly data platforms are making the technical challenge increasingly achievable.
Knowing how to use a tool, however, is only part of the equation: you also need to know what to make of it. The uncertainty in going from data to information comes in several shapes:
| Stage | Questions |
|---|---|
| Data collection | What data should I collect? How should I collect it? Is my data reliable? |
| Data analysis | What do I measure? How many metrics do I monitor? How can I be sure that this data means that? |
As mentioned before, some people spend their careers learning all of this. Knowing how to interpret raw data, or how to choose the right graph for an analysis, takes time. If you’re the data-savvy one at your company, this is where training, documentation on how tools work, demos and so on come in handy.
Going From Information to Knowledge
Extracting information from data often means one person doing the analysis and coming up with results. Going from information to knowledge, however, involves other people – potentially many. It is exactly because other humans are involved at this stage that we now have to look at company culture:
- Organizational silos cause information to travel towards the top, and rarely across departments. Not everybody gets the same access to information (unfortunately, as we’ll see, diversity of thought is key).
- Managerial layers distort the information as it travels upwards: the decision made at the top can end up being very far from what was intended in the beginning.
- Just because you’ve made your analysis available doesn’t mean that people in your company will hear it. Companies often suffer from confirmation bias, especially when the information challenges accepted opinion.
The model below offers one approach to overcoming these obstacles. For now, we can see that while going from data to information requires expertise, going from information to knowledge relies on company culture. Hence the paradox: a company that wants to leverage data needs to work on its culture.
The Decentralized Model for Data Governance
The flow from data to information to knowledge naturally mirrors the flow from the individual to the team to the whole. So, to build your data model, you need to address three levels – the company, the product and the individual.
One of the most common mistakes when setting up a data governance model is to over-engineer it and measure everything. Building any kind of governance is deeply iterative: you should start small and work your way up. Not everything needs to be measured at first. In fact, too much noise can be worse than no data at all. Start by measuring the most important thing: your north star metric (NSM).
What is a North Star Metric?
It’s the one metric that measures your company’s impact (Amplitude wrote a great article on this). If someone asked how successful your company is at realizing its mission, which metric would you point to? For some it’s a revenue metric, for others a value metric, but it is NEVER an acquisition metric (e.g., monthly signups).
Your north star metric should be:

| Property | Meaning |
|---|---|
| Automatic | It can be accessed in minutes at any point in time. |
| Actionable | Your actions have a measurable impact on it. |
| Relative | You can compare it to competitors and over time. |
| Fundamental | It is directly linked to your product’s value proposition. |
Let’s take an example: Medium’s NSM, time spent reading.
- Automatic: Total time spent reading is something that can be easily displayed, and can be accessed by anyone.
- Actionable: When you see total time spent reading, you very quickly see what to improve. You either have to increase the number of people reading, or the number of people writing. It makes growth very explicit.
- Relative: Total time spent reading can be measured each week/month/year and measured in cohorts, to track evolution. It can also be compared to other blogging platforms.
- Fundamental: Total time spent reading is baked into Medium’s value proposition.
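To make the "Automatic" and "Relative" properties concrete, here is a minimal sketch of how such a metric could be computed from raw events. The event schema and field names are hypothetical, not Medium’s actual data model:

```python
from collections import defaultdict
from datetime import date

# Hypothetical reading events: (user_id, day, seconds_spent_reading)
events = [
    ("alice", date(2024, 1, 1), 300),
    ("bob",   date(2024, 1, 1), 120),
    ("alice", date(2024, 1, 2), 600),
]

def total_time_reading(events):
    """North star metric: total seconds spent reading, across all users."""
    return sum(seconds for _, _, seconds in events)

def time_reading_by_day(events):
    """Relative view: the same metric per day, so it can be tracked over time."""
    by_day = defaultdict(int)
    for _, day, seconds in events:
        by_day[day] += seconds
    return dict(by_day)
```

The point is that the metric is a simple aggregation anyone can query, and the per-day breakdown is what makes cohort and trend comparisons possible.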
Why you Need a Macro Metric
Call it north star metric, call it company goal if you don’t like buzzwords, what matters is what macro metrics bring to the table:
- Teams often have different objectives. But, when you collectively define a north star metric, you create a common goal that naturally aligns all teams and transcends silos.
- When you define a north star metric, teams understand what role they play in the bigger picture and it’s easier to make sense of the overall purpose of the company.
- When teams are synchronized and they understand how they can help the company’s mission, there is a lot more trust and increased autonomy.
Product/Team Level: The Funnel Approach
Once you have set up your macro goal, you are ready to play your part in moving that metric. When you look at Medium’s north star metric, you can see how each team plays a role in increasing the total time spent reading.
- You can increase traffic on your platform.
- You can work on conversion.
- You can get more of the same readers to come back.
- You can work on how users share content.
- You can work on pricing to get more users engaged.
Any of these actions would end up increasing the north star metric: total time spent reading. This is exactly what the AARRR framework is all about.
- Acquisition: How do you get more users?
- Activation: How do you get more users engaged?
- Retention: How do you get users to come back?
- Referral: How do you get users to bring more users themselves?
- Revenue: How do you monetize? (if you monetize)
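One way to see how the five questions connect is to treat them as conversion rates between consecutive funnel stages. A minimal sketch, with invented stage counts:

```python
# Hypothetical user counts at each AARRR stage (numbers invented)
funnel = {
    "acquisition": 10_000,  # visitors
    "activation":   4_000,  # signed up and engaged
    "retention":    1_500,  # came back within 30 days
    "referral":       300,  # invited at least one other user
    "revenue":        200,  # paying users
}

def conversion_rates(funnel):
    """Conversion rate from each stage to the next, in funnel order."""
    stages = list(funnel)
    return {
        f"{a} -> {b}": funnel[b] / funnel[a]
        for a, b in zip(stages, stages[1:])
    }
```

Each AARRR question then maps to one rate: improving activation, for example, means raising the `acquisition -> activation` ratio rather than just chasing a bigger top of funnel.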
Of course, these are macro questions, they’re an abstraction of what happens in the field. However, they are key in changing the way a company approaches growth. In answering the question “how do you get more users engaged?”, you come up with hypotheses. Some hypotheses may have a high degree of uncertainty, some have a high impact on your metric, others little impact.
This is where data, information and knowledge come into play. You start with a set of questions. For each question, you come up with hypotheses. You experiment, gather the data, analyze it, and find an answer to your assumptions. With that new information, you come up with new hypotheses, try them out, learn from them, and so on. You go from work based on opinions to work based on hypotheses. And it is in the process of testing hypotheses that people, teams, and the company learn. Growth becomes highly predicated on knowledge, and faster experiments mean faster growth.
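The loop of hypothesis, experiment, and answer can be made concrete with a simple A/B comparison. As a sketch only, using a standard two-proportion z-test with invented numbers (a real analysis would also pre-register the hypothesis and sample size):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothesis: the new onboarding flow (B) increases activation.
z = two_proportion_z(conv_a=400, n_a=5000, conv_b=470, n_b=5000)
# |z| > 1.96 would reject the null at the 5% level (two-sided)
```

The output of one such test is the "information" of the previous section; it only becomes knowledge once the team accepts the result and folds it into the next round of hypotheses.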
Of course, coming up with hypotheses isn’t as straightforward as saying that it comes from data analysis. If you put 10 very similar people in front of the same piece of data, you’re likely to get 10 very similar hypotheses. Coming up with ideas requires diversity of thought, and Ross Ashby’s law of requisite variety is probably the best illustration of this need. Information isn’t one of those things that should be left to the company’s data experts and executives. In fact, the more people participate in identifying and testing hypotheses, the richer your results will be, and the faster you’ll learn. In plain words, your capacity to learn as a company is determined by the number and diversity of the people involved in the experimentation process.
So, what is diversity of thought? Simply put, it’s the extent to which individuals in a company can approach a problem from different perspectives. It can come from the diversity in people – having people from different genders and backgrounds. It also comes from the diversity in know-how – working in teams made of different expertise (feature teams, squads etc.). It can also come from involving people at different stages in their careers, for example bringing junior employees together with more senior employees. Place the same set of data in front of a diverse group and you will gather rich and insightful hypotheses on how to achieve your impact.
How do you Encourage People to get Interested in Data?
The biggest challenge is technical. The learning curve is steep and can be discouraging. The tools you choose for your data stack determine how accessible your data is. User-friendly data platforms such as Amplitude or Mixpanel can play a big role in making it easier for individuals to jump on board. Such tools can also be useful playgrounds for getting individuals interested in the harder stuff.
If simple tools aren’t an option, you can document techniques, tools, and processes, and offer to organize group training on different topics. Give open access to the raw data and document its structure. Encourage people to ask questions about the data, and improve your documentation accordingly.
Perhaps the hardest part of getting people interested in data is making them curious enough to try in the first place. If you remember one point from this article, it’s that this isn’t about data, it’s about learning: learning at the individual level, at the team level, and as a company. But expecting individuals to want to learn implies that they are excited about what they do. Curiosity comes from believing in your company’s purpose and adhering to its culture, and it can’t be forced onto people. If you’ve tried everything above and people still aren’t interested in data, it isn’t because data is boring, it’s because your company is.