Tracking your A/B tests

By Kevin Shanahan on July 27, 2017

This is the second of two posts taking a deep dive into A/B tests, expanding on a talk I gave at Google Playtime 2016 in London. In the first post I explained how to step up your A/B testing, and in this post I’ll look at how a tracker spreadsheet can help you manage your A/B tests and better retain learnings from past ones. Here’s a link to the tracker so you can follow along and make your own copy (it’s free).

Why You Need a Tracker

As mentioned in the first post, if you plan to run a lot of A/B tests then you should think about using a document to store test information, which I call the tracker. There are several benefits that a tracker gives you:

  • Forces you to think through the details of a test. Given that each test follows a similar structure, the tracker acts like a checklist to ensure you haven’t overlooked important details. This prevents costly fixes and re-runs later on.
  • Ensures everyone is on the same page. It’s easy for people to have different interpretations or recollections of what was agreed for an A/B test, but they’ll quickly identify these and converge if test information is listed in one place.
  • Gives visibility on live and upcoming tests. Testing typically affects many teams in a company, so each team needs to be able to see at a glance which A/B tests are running or planned and adjust their own plans accordingly.
  • Stores details of past tests so you can easily reference them. It’s common to want to look back on old A/B tests to inform new ones, and the tracker prevents you from having to trawl through messages and project management tickets to do this.

Everyone Loves a Spreadsheet

We use a Google spreadsheet for the tracker at Peak. Why a spreadsheet? Because spreadsheets are easy to understand, update, share, filter, and synchronise, and they support different layers of permissions. They may not be exciting but they’re the perfect format for the job.

The Parts of the Tracker

Here’s a link to the tracker that we use. It has five main parts, each of which is marked with a different coloured header.

  • Test overview. Records what the test is, where it’s happening inside (or outside) your product, why you’re doing it, when you’re doing it, and what the status is.
  • Segment info. Describes the users in a test, including the countries they’re in, the languages they speak, and the devices they use.
  • Variant info. Indicates how you will divide these users into variants and what the experience will be like for each one.
  • KPIs. Lists the metrics you will judge the success of the test on and the targets you hope to achieve if the test is successful.
  • Results and next steps. A short summary of the test results, links to more detailed analysis, and the next steps.
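To make the five parts concrete, here’s a minimal sketch of one tracker row modelled in Python. The field names are hypothetical illustrations, not the actual column names in the Peak tracker — adapt them to whatever your own sheet uses.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one row of an A/B test tracker.
# Field names are illustrative; match them to your own sheet's columns.
@dataclass
class ABTest:
    # Test overview: what, where, why, when, and current status
    name: str
    location: str            # where in (or outside) the product the test runs
    hypothesis: str          # why you're running the test
    start_date: str
    status: str              # e.g. "planned", "live", "complete"
    # Segment info: who is in the test
    countries: list = field(default_factory=list)
    languages: list = field(default_factory=list)
    devices: list = field(default_factory=list)
    # Variant info: how users are divided and what each group sees
    variants: dict = field(default_factory=dict)   # variant name -> description
    # KPIs: metrics and targets that define success
    kpis: dict = field(default_factory=dict)       # metric -> target
    # Results and next steps
    results_summary: str = ""
    next_steps: str = ""

def live_tests(tests):
    """Return tests whose status is 'live', like filtering the sheet by status."""
    return [t for t in tests if t.status == "live"]

tests = [
    ABTest(name="Onboarding copy", location="signup screen",
           hypothesis="Shorter copy lifts completion", start_date="2017-07-01",
           status="live", kpis={"signup completion rate": "+5%"}),
    ABTest(name="Paywall colour", location="upgrade screen",
           hypothesis="Higher contrast lifts conversion", start_date="2017-08-01",
           status="planned"),
]

print([t.name for t in live_tests(tests)])  # -> ['Onboarding copy']
```

The filter function mirrors what the status column gives you for free in a spreadsheet: anyone can see at a glance which tests are live, planned, or finished.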

And that’s it! The tracker is unlikely to win an award anytime soon but we’ve found it to be a great way of staying on top of lots of tests.

If you found this post and the tracker useful, or have any questions or suggestions for improvements, I’d love to hear from you in the comments section below. Here’s the link to the tracker once more, which is free to copy and modify.

About Kevin Shanahan

Kevin Shanahan is Head of Growth at Peak, the self-improvement startup that offers the #1 brain training app worldwide. He oversees product development of new apps under the Peak brand, from concept to prototype to MVP to launch.
