Iwan Roberts discusses data-driven product design at the BBC – ProductTank, September 2014
Mind the Product · 23 April 2016 · 3 minute read

Data-Driven Product Design at the BBC

Iwan Roberts (Business Analyst, BBC) is part of a relatively small agile team building location services at the BBC, one that has been continuously iterating for over a year now. In this ProductTank talk – “Driven By Data” – Iwan gives a whistle-stop tour of how his team has iteratively built a set of operational dashboards to help them understand their data-driven product, and unravel how users are actually behaving.

The Product

BBC Travel sounds pretty straightforward – you enter a location, you receive travel news across multiple travel modes (road, rail, etc.). The previous iteration of the product was released in 2009, and now the team (& the users!) wanted a responsive product, geolocation support, and to increase the speed with which incidents could be published and made discoverable.

The team also wanted to change how users could search for locations. Previously this was done via a simple, relatively coarse-grained list. Now, the product allowed users to search for very specific, fine-grained locations.

Editorial and Tracking Challenges

The BBC Travel team actually have very little editorial control over what they publish. Their data is almost entirely externally sourced, and so their reputation rests on the data they have access to. They also reused common components from other areas of the BBC (e.g. mapping and search) – which is fine and frugal in principle, but also means that they are even more beholden to external providers (e.g. Google for mapping).

When it comes to monitoring their systems, the team have a dashboard tracking everything from the data they ingest through to their publishing speed. It’s not pretty, but it provides realtime tracking of the health of their public-facing system, and allows them to make data-driven decisions as quickly and cleanly as possible. Even the dashboard was developed iteratively in step with the development of the platform itself, with a total focus on providing accurate, realtime data (built using the Dashy framework, in case you’re interested).

Development processes

While developing V4 of the product, V3 was constantly live, allowing the team to iteratively release new features and UI changes, testing new code and features on existing systems. V4 of BBC travel was actually in Beta for months before the team launched – just before a bank holiday. While the deployment went smoothly, the feedback was surprising (especially bearing in mind the long beta – they weren’t expecting any surprises!) There were 3 common feature requests:

  • Order incidents by road type (released – everyone happy)
  • Maps on mobile devices (already on the roadmap – the team just brought it forward)
  • Links to county-level incident list (completely against the grain of the revamped product!)

The team immediately started trying to understand why users were asking for this list, and sensibly turned to investigating actual user behaviour. How were users currently solving this problem? Were they searching for counties? Were they doing multiple searches? In both cases, the numbers couldn’t account for the volume of feedback.
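These behavioural checks boil down to a simple pass over search logs: what share of sessions searched for a county, and what share ran multiple searches? A minimal sketch – the session data, county list, and event shape are entirely hypothetical, not the BBC’s actual analytics:

```python
# Hypothetical search logs, keyed by session ID; county names are illustrative.
COUNTIES = {"kent", "surrey", "devon"}

sessions = {
    "s1": ["london bridge", "kent"],          # multiple searches, one for a county
    "s2": ["m25 junction 10"],                # single, specific search
    "s3": ["devon"],                          # single county search
    "s4": ["cardiff", "newport", "bristol"],  # multiple searches, no counties
}

# Sessions containing at least one county-level search.
county_sessions = sum(any(q in COUNTIES for q in qs) for qs in sessions.values())
# Sessions where the user searched more than once.
multi_search_sessions = sum(len(qs) > 1 for qs in sessions.values())

print(f"{100 * county_sessions / len(sessions):.0f}% of sessions searched a county")
print(f"{100 * multi_search_sessions / len(sessions):.0f}% ran multiple searches")
```

The point of the exercise is comparative: if only a small fraction of sessions show either behaviour, neither explains the volume of county-list requests – which is exactly the mismatch that pushed the team to look further.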

Then, after looking at how users interacted with the map after searching, they saw lots of panning behaviour, suggesting that “counties” were a proxy for longer journeys. Crucially, this potential need never came up in user testing!

What Can We Learn?

By dint of a large user base and investment in analytics and research, the BBC Travel team have access to a large amount of user feedback and quantitative data. This is a goldmine when it comes to understanding their users, but the team have realised that they also need a clearer definition of success in order to make sense of that data.

Unlike some other products within the BBC’s publishing “constellation”, success for BBC Travel is actually quite a fleeting interaction. To better understand what it means, they’re conducting experiments – tracking the number of searches each user performs (too many implies poor UX or missing data), or the percentage of users who don’t reach a results page. This is data that their system now allows them to collect, but the only way to make sense of it all is to understand what their users are trying to do in their own contexts, not just the context of the product.
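Both of those success metrics can be derived from raw analytics events. A sketch, using an entirely made-up event schema and sample data (the talk doesn’t specify the BBC’s pipeline):

```python
from collections import defaultdict

# Hypothetical analytics events as (user_id, event_type) pairs:
# "search" = a search submitted, "results" = a results page viewed.
events = [
    ("u1", "search"), ("u1", "results"),
    ("u2", "search"), ("u2", "search"), ("u2", "search"), ("u2", "results"),
    ("u3", "search"),  # u3 never reached a results page
]

searches = defaultdict(int)   # searches per user
reached_results = set()       # users who saw at least one results page
for user, kind in events:
    if kind == "search":
        searches[user] += 1
    elif kind == "results":
        reached_results.add(user)

avg_searches = sum(searches.values()) / len(searches)
pct_no_results = 100 * sum(1 for u in searches if u not in reached_results) / len(searches)

print(f"avg searches per user: {avg_searches:.2f}")                  # 1.67 here
print(f"users never reaching results: {pct_no_results:.0f}%")        # 33% here
```

In practice a spike in either number is a prompt for investigation, not a verdict – as the panning example shows, the explanation may lie outside the product entirely.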

If you’re not already gathering data about your users and how they’re using your product – start immediately! But more importantly, your goal should be to understand what your users are trying to achieve beyond the limited context of your product – otherwise you’ll only ever have a limited understanding of their needs.

Comments 1

The insight the BBC team got came from user journeys and heat maps. Data showing 4% or 6% of users searching for counties, and 20% or so doing more than two searches, doesn’t support the journey-planning idea that was prioritised. Did I miss what data spoke to the BBC Travel team?
