Don’t get me wrong, I’m an advocate of data-driven decisions and I prefer my opinions and ideas to be grounded in a robust, number-backed case. I can have moments of jubilation when analysing figures and seeing the positive impact of what I do as a product manager. This is also why I continue working out the numbers myself even when there are teams of analysts who could do it for me. And this is also why I have learned not to trust the numbers blindly.
Numbers are wrong more often than you think
What can possibly go wrong? Well, a lot. Take a look at this article on Google Analytics common mistakes or this discussion on trusting GA data. Your tracking set-up may be incorrect; the knowledge and documentation may be missing important information – for example, events may not be named in a self-explanatory fashion, or the tracking start date may not be taken into account – and the analyst may make errors when querying the data and calculating results.
You’re missing the full picture
Numbers can be misinterpreted if the full context is not understood. For example, I’ve sometimes scratched my head wondering why my conversion rate was not going up after making improvements to a purchase funnel, only to realise that the marketing team had started an acquisition campaign: a higher volume of our visitors were less ‘qualified’ than before, and consequently less likely to convert on average.
If the reverse had happened and my conversion rate had benefited from a drop in marketing acquisition volumes, I may not have questioned it, and prided myself on the full extent of the conversion rate hike.
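The mix effect described above is easy to reproduce with a few invented figures – every number below is hypothetical, chosen purely to illustrate how a funnel improvement can be masked by a shift in traffic quality:

```python
# Hypothetical illustration of how traffic mix can hide a funnel improvement.
# All figures are invented for the example.

def overall_conversion(segments):
    """segments: list of (visitors, conversion_rate) tuples."""
    conversions = sum(visitors * rate for visitors, rate in segments)
    visitors = sum(visitors for visitors, _ in segments)
    return conversions / visitors

# Before: 10,000 qualified visitors converting at 5%.
before = overall_conversion([(10_000, 0.05)])

# After: the funnel genuinely improved (qualified visitors now convert
# at 6%), but a campaign added 10,000 less-qualified visitors
# converting at 1% - so the headline rate drops.
after = overall_conversion([(10_000, 0.06), (10_000, 0.01)])

print(f"before: {before:.1%}, after: {after:.1%}")  # before: 5.0%, after: 3.5%
```

The headline conversion rate falls from 5% to 3.5% even though every qualified visitor is now more likely to convert – which is exactly why the segment-level numbers matter more than the blended one.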
The rule of one key success metric for all devices does not work
User behaviour may vary depending on the device used, and so should the metrics. Having worked with publishers, who do not have the ‘easy’ task of measuring against a purchase funnel, I know it can be challenging to find the right KPIs to worry about. Content consumption, especially, will most likely show lower KPIs on mobile, making it easy to conclude there is a problem.
However, sometimes the metric is just not adapted. I once saw the same scroll-depth target of 75% applied to both desktop and mobile users. When inspecting the layout of the pages on each device, I realised that on mobile all the left-hand and right-hand elements of the desktop page were stacked underneath the core content, meaning that a user was only 50% of the way down the page once they had read the full article.
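The arithmetic behind that layout trap is worth spelling out. With hypothetical pixel heights (the numbers are invented, but the proportions match the scenario above):

```python
# Hypothetical page heights (in pixels) showing why a single 75%
# scroll-depth target misleads across devices. Numbers are invented.

def article_end_depth(article_px, stacked_below_px):
    """Fraction of the page scrolled once the reader reaches the article's end."""
    return article_px / (article_px + stacked_below_px)

# Desktop: sidebars sit alongside the article, so the page is roughly
# as tall as the article itself - finishing the article means ~100% depth.
desktop = article_end_depth(article_px=2000, stacked_below_px=0)

# Mobile: the same sidebar elements stack underneath the article,
# doubling the page height - finishing the article means only 50% depth.
mobile = article_end_depth(article_px=2000, stacked_below_px=2000)

print(f"desktop: {desktop:.0%}, mobile: {mobile:.0%}")  # desktop: 100%, mobile: 50%
```

A mobile reader who consumed every word of the article would still fall 25 points short of the 75% target – the metric, not the audience, is what needs fixing.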
‘Good’ numbers may hide big UX issues
It is common practice to check user experience when you realise a figure does not match expectations, for example if you identify a huge drop of conversion from page 2 to 3 in your purchase flow.
But what if the conversion goes up between those two pages because the user has missed a crucial piece of information and all they did was press ‘next’? Sometimes, numbers will make you think all is fine when it is not.
Stats cannot replace the value of regularly watching customers use your service, whether face-to-face or through recorded sessions.
When KPIs do not say anything useful
One of the big culprits – thankfully slowly becoming irrelevant – is page views. If set as a target, someone will surely find ways to grow this KPI without any improvement in customer behaviour. And this is probably how these endless galleries of images were born, where each picture counts as a page view, or articles broken down in multiple pages, with no benefit to the users.
For all of these reasons, I would not take numbers for granted, and would bear the following in mind:
- If possible, verify the numbers (as a QA manager would test the code) because they can create as much harm (if not more) to your business as bugs do
- Be critical and challenge them, and not just the unfavourable ones
- Keep adapting your metrics as you enhance your own product
- Combine with qualitative insight
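To make the first point concrete, here is a minimal sketch of what ‘testing the numbers like a QA manager tests code’ could look like – the data shape and the specific checks are assumptions for illustration, not a prescribed set:

```python
# A minimal sketch of treating analytics figures like code under test.
# The row structure and the checks below are illustrative assumptions.

daily = [
    {"date": "2024-03-01", "visits": 1200, "conversions": 48},
    {"date": "2024-03-02", "visits": 1150, "conversions": 51},
    {"date": "2024-03-03", "visits": 0,    "conversions": 0},  # suspicious gap
]

def check(rows):
    """Return a list of human-readable issues found in the daily figures."""
    issues = []
    for row in rows:
        if row["conversions"] > row["visits"]:
            issues.append(f"{row['date']}: more conversions than visits")
        if row["visits"] == 0:
            issues.append(f"{row['date']}: zero visits - tracking outage?")
    return issues

for issue in check(daily):
    print(issue)
```

Even a handful of impossible-value and gap checks like these, run routinely, catches the kind of tracking breakage that otherwise quietly corrupts weeks of decisions.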