The great thing about technology is that anything is possible. But when anything is possible, how do you decide what to do? And, arguably more importantly, how do you decide what not to do? Every one of my clients struggles with this paradox of choice. Whether a small or large charity, Seed, Series A funded, or mature startup, deciding what to build is hard.
It’s hard because we’re human and we underestimate the time it takes to do things. We also place a higher value on our own ideas and don’t like making trade-offs (the list of cognitive biases on Wikipedia is bewildering). But it’s not impossible. Here are some methods that help solve the prioritisation problem, depending on how product teams work and what they need.
Effort and Reward
I believe that the simplest solution is usually the best. And the simplest way to prioritise work is by giving each idea a score related to effort and reward and plotting them on a grid, grading each as high, medium or low.
| | Low Reward | Medium Reward | High Reward |
| --- | --- | --- | --- |
| Low Effort | Idea 3 | Idea 1 | Idea 7 |
| Medium Effort | Idea 6 | Idea 9 | Idea 10 |
| High Effort | Idea 11 | Idea 4 | Idea 2 |
Once you’ve done this, it’s easy to see where you should prioritise your efforts. Anything in the top right, like ‘Idea 7’, should go straight to the top of your roadmap. If the table were a tree, that’s where you’d find the metaphorical low-hanging fruit.
This is a good method for ballpark-sized estimation where there are a lot of unknowns. And not only does it help a product team to decide what to do, it’s an enormous help to demonstrate to your wider stakeholders why you’re not doing something.
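If your ideas live in a spreadsheet export or a tool you can script, the grid sort can be sketched in a few lines. The idea names and gradings below are illustrative, not from a real backlog:

```python
# Minimal effort/reward triage sketch. Each idea gets a
# ('low'/'medium'/'high') effort and reward grading.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def triage(ideas):
    """Sort ideas so low-effort, high-reward items come first."""
    # Lower effort and higher reward both push an idea up the list.
    return sorted(ideas, key=lambda name: (LEVELS[ideas[name][0]],
                                           -LEVELS[ideas[name][1]]))

backlog = {
    "Idea 7": ("low", "high"),      # the low-hanging fruit
    "Idea 9": ("medium", "medium"),
    "Idea 11": ("high", "low"),
}
print(triage(backlog))  # ['Idea 7', 'Idea 9', 'Idea 11']
```

The tuple sort key is the whole trick: it ranks the grid top-right to bottom-left without any scoring maths.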
The ICE Method
Not all predictions of reward are equal, and when there’s a large degree of uncertainty about the chance of competing ideas having an impact, you might need more help to prioritise. This is where the ICE method helps.
ICE stands for Impact, Confidence, and Ease. With this method (made popular by Sean Ellis), you rate each idea on a numeric scale, usually 1-10 (a high Ease rating means low effort), and then calculate a score for each idea using the following equation:

ICE score = Impact x Confidence x Ease.
Here’s how that might look for your product:
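As a quick sketch, with made-up 1-10 ratings chosen to mirror the comparison below (the function name and numbers are assumptions, not a standard implementation):

```python
# Illustrative ICE scoring. Ratings are invented: Idea 1 is rated
# medium impact / low effort, Idea 2 high impact / low effort.
def ice_score(impact, confidence, ease):
    # A high 'ease' rating means the idea takes little effort.
    return impact * confidence * ease

ideas = {
    "Idea 1": ice_score(impact=5, confidence=8, ease=8),  # 320
    "Idea 2": ice_score(impact=9, confidence=7, ease=8),  # 504
}
winner = max(ideas, key=ideas.get)
print(winner, ideas[winner])  # Idea 2 504
```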
Here we see that Idea 2 is the clear winner and should be prioritised. Had we only been comparing ideas on Effort and Reward, we might have chosen Idea 1, as it would be classed as medium impact and low effort, whereas Idea 2 is high impact and low effort.
This is where adding confidence levels helps you get more granular with your decision making. Depending on your preference, you could use a 1-5 scale, or average the three ratings (add them and divide by 3) instead of multiplying them, giving a smaller score that’s easier to compare.
While ICE is definitely a cool tool to use, you might find that you need a more substantial method to meet your hunger for prioritisation. This is where RICE can help.
The RICE Method
RICE is simply the same as ICE but with an added ‘R’ for Reach. This method is helpful if it’s important to understand how many of your customers a given feature will benefit.
You can calculate a RICE rating by multiplying your ICE score by reach (Intercom recommends a slightly different formula: Reach x Impact x Confidence ÷ Effort), but it doesn’t really matter as long as you measure each idea consistently.
So let’s add Reach to our previous example and see if the RICE is right.
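As a sketch, the snippet below folds reach into an ICE-style score. All the ratings and reach figures are invented, chosen purely to show how a large reach can reorder a backlog:

```python
# Illustrative RICE scoring: the simple 'ICE score x reach' variant
# from the text. All numbers are made up for demonstration.
def rice_score(reach, impact, confidence, ease):
    return reach * impact * confidence * ease

ideas = {
    "Idea 1": rice_score(reach=100,  impact=5, confidence=8, ease=8),
    "Idea 2": rice_score(reach=150,  impact=9, confidence=7, ease=8),
    "Idea 3": rice_score(reach=2000, impact=4, confidence=6, ease=7),
    "Idea 4": rice_score(reach=1500, impact=6, confidence=5, ease=5),
}
# Ideas 3 and 4 have modest ICE ratings but reach far more customers,
# so they jump to the top once reach is included.
for name, score in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(name, score)
```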
Suddenly ideas 3 and 4 are looking better after all. Scrap that roadmap, it’s time to pivot!
Well, in the oft-used words of any product professional, it depends. While these are made-up examples and numbers, it does demonstrate that these are just ways to help you prioritise based on broad measures, and what’s most important for any given product could be different – and can certainly change over time.
Prioritising Your Own Way
I recommend that product teams start by keeping prioritisation as simple as possible, and get more granular only if the results aren’t distinct enough or the business context changes. If, for example, your Q1 priority is to increase paid users, you might add another column to ICE that records whether an idea affects paid customers. Or if your Q2 priority is increasing acquisition, you might add a filter that decreases a score if an idea doesn’t affect acquisition. Equally, your definitions of Impact or Reward could differ depending on your product goals, so you can bake your own priorities into those scores.
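As a rough sketch of that kind of customisation, a goal-weighted score might look like the following; the `affects_paid` flag and the 1.5x boost are assumptions for illustration, not part of any standard framework:

```python
# Hypothetical goal-weighted ICE sketch: boost ideas that serve the
# current quarterly priority (here, growing paid users).
def goal_weighted_ice(impact, confidence, ease, affects_paid, boost=1.5):
    score = impact * confidence * ease
    # Ideas touching the Q1 priority get a multiplier; others don't.
    return score * boost if affects_paid else score

print(goal_weighted_ice(6, 7, 5, affects_paid=True))   # 315.0
print(goal_weighted_ice(6, 7, 5, affects_paid=False))  # 210
```

The boost value itself becomes a lever the team can tune as priorities shift each quarter.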
There is no way to remove risk from a prioritisation process and ensure 100% certainty in your outcomes, but the methods above can certainly help you reduce risk and make better informed decisions about what to prioritise. But as with any methodology, the one that works best is the one that works for you and your team.