DIY User Research for Product People

Julia Shalet – a.k.a. The Product Doctor – specialises in driving and enabling people-centred product leadership for those working in fast-paced environments. She works with clients such as Pearson and Fitbug, enabling their staff to be ‘lean’ in their approach to customer and product development.

In this energetic and insightful talk, Julia offers a toolkit for those product managers who don’t have a UX designer or researcher in their team, and want to do user research themselves. In case you’re not sure why you’d want to do user research, here’s the short version:

  • You are responsible for creating value for your customers, users, and the company
  • We, as product managers, take a lot of risk based on gut feel, hearsay, or HiPPOs (the Highest Paid Person’s Opinion)
  • We cannot give product direction without understanding the people (i.e. any humans) we’re affecting, be they customers, users, colleagues, distributors, or stakeholders

The 6-Step DIY Research Toolkit

This toolkit applies to any size of organisation or team, and (at least in theory) any kind of product. The reason it is useful is very simple: you have made assumptions about your product. You may not even be aware of which assumptions are critical and which are trivial, but they are a risk to your product vision and development. User research will help you reduce that risk by quickly testing your assumptions, and this toolkit is a great framework to help you do that.

1 – Find your riskiest assumptions

Is it your USP, your business case, the product vision, the roadmap, the marketing plan, localised behaviours…? Ask yourself: “Which single thing have I assumed that, if incorrect, would kill the product?”

2 – Identify underlying questions, make your assumptions testable

Assumptions, almost by their nature, are generally either too vague or too prescriptive. Once you’ve selected an assumption to explore, dig into it and identify the specific questions wrapped up inside it.

3 – Turn it into a testable statement

You may well have done this in Step 2, but in order to explore your assumption, you need to phrase it in a way that renders it testable. For example, “We believe that people respond well to green buttons” might be rephrased as “More people will click on a green button than on a button of any other colour”. It may seem like a trivial rephrasing, but it is essential to creating a testable case. If you can’t rephrase your assumption into a testable statement, go back to Step 2 and keep working!

4 – Add success criteria

“We will know our experiment has succeeded when…” Setting criteria is part of understanding your risk boundaries: it defines the point at which you have enough data to make a clear case for your assumption being either correct or incorrect.
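To make this concrete, here is a minimal sketch (in Python) of how a success criterion for the green-button statement above might be checked once the data is in. The sample sizes, click counts, and the 95% confidence threshold are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch: did the green button out-click the control, and are we
# confident enough to call the experiment a success? All numbers below are
# illustrative assumptions.
from math import sqrt, erf

def one_sided_p_value(z: float) -> float:
    """One-sided p-value for a standard normal z score."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

def green_button_wins(clicks_green, views_green, clicks_other, views_other, alpha=0.05):
    """Two-proportion z-test of 'green gets more clicks than the other colour'."""
    rate_green = clicks_green / views_green
    rate_other = clicks_other / views_other
    pooled = (clicks_green + clicks_other) / (views_green + views_other)
    std_err = sqrt(pooled * (1 - pooled) * (1 / views_green + 1 / views_other))
    z = (rate_green - rate_other) / std_err
    p = one_sided_p_value(z)
    return rate_green, rate_other, p, p < alpha  # success if p is below alpha

# Example: 120 clicks from 1,000 views (green) vs 90 clicks from 1,000 views (blue)
rate_g, rate_b, p, success = green_button_wins(120, 1000, 90, 1000)
print(f"green {rate_g:.1%} vs other {rate_b:.1%}, p = {p:.3f}, success = {success}")
```

The exact statistical test matters less than agreeing the threshold (here, 95% confidence) before you look at the results, so that the data decides the outcome rather than your attachment to the idea.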

5 – Design the experiment in a Lo-Fi manner first

“Lo-fi” simply means designing a quick, cheap experiment before committing to anything elaborate. Does the data already exist elsewhere? What’s the least work / fastest process you can use to start learning and de-risking your assumptions? (Think MVPs.) Only once you’ve done all you can in a lo-fi manner and you still need more data should you consider more elaborate experiments.

6 – Act, record and start again

Track every experiment, and track the data. Share the findings within the team and across the business. Cross-pollinate! Designing follow-up experiments as a team will mean that your experiments are almost certainly better structured, and based on a richer understanding of who or what you’re trying to test. Crucially, now that you’ve got some data and are running regular user research, work out what you’ll do with the outcomes before you run the next test.
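As a lightweight way to “track every experiment, and track the data”, a simple structured record per test works well. The fields below are an illustrative assumption that mirrors the steps of this toolkit, not a format prescribed in the talk.

```python
# Minimal sketch of an experiment log entry - field names mirror the six steps
# above and are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExperimentRecord:
    assumption: str           # the risky assumption being tested (Step 1)
    testable_statement: str   # the rephrased, testable form (Step 3)
    success_criteria: str     # what success looks like, agreed up front (Step 4)
    method: str               # lo-fi first: survey, landing page, A/B test... (Step 5)
    result: str = ""          # what actually happened
    decision: str = ""        # what you will do with the outcome (Step 6)
    run_date: date = field(default_factory=date.today)

experiment_log: list[ExperimentRecord] = []
experiment_log.append(ExperimentRecord(
    assumption="People respond well to green buttons",
    testable_statement="More people will click a green button than a button of any other colour",
    success_criteria="Green variant click rate beats the control at 95% confidence",
    method="A/B test on the sign-up page",
))
```

Agreeing as a team what will go in the decision field for each possible outcome, before the test runs, is a useful forcing function for the last point above.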

Wrapping up

Be honest about your assumptions, and work through them rigorously. It can be difficult, because we are naturally attached to our own ideas and don’t want them to be proven wrong – or, even worse, so wrong that they end up killing our product. But remember:

Killing a product is a beautiful thing – it frees up resources [for something else fantastic]!
