Conducting experiments is an essential part of product discovery for product managers. It goes hand in hand with user interviews when your goal is to understand users' behavior patterns.
However, conducting in-app experiments in B2B products, especially those targeted at enterprise personas, can be tricky and challenging. In this article, I share some of the challenges I have faced and the lessons I learned.
User interviews vs. in-app surveys vs. in-app experiments
Before I talk about in-app experiments, the challenges, and their potential solutions, here's a quick run-down of three validation methods along with some of their advantages and drawbacks.
User interviews

- Allow you to dive deep into a problem with users through open-ended questions, follow-up questions, and clarifications, leading to an in-depth understanding of the problem.
- Provides first-hand feedback with full context.
- Quite a slow process. In the B2B world, especially when you target large enterprises, it is difficult to get enough users of the desired persona to spend time with you and give feedback; it can take weeks on end to gather enough users for interviews. Some teams entice users with gift vouchers, but that is not always viable, especially when you are on a shoestring budget like I am, and it also risks introducing a positive-response bias into their feedback.
- Frequently, there's a notable gap between what users express in interviews and how they actually interact with your product. Relying only on qualitative data from interviews therefore has a good chance of being mired in confirmation bias.
In-app surveys

- Most useful when you just want to get the majority opinion among multiple choices.
- Can get a large volume of responses pretty quickly.
- Allows you to serve different questions based on different segments.
- No one-on-one interaction; therefore no follow-up questions or clarifications.
- Also gives users too little context to answer any open-ended questions in the survey well.
In-app experiments

- The best way to put your ideas to the test: you can gather actual usage patterns at scale and analyze them for insights.
- Allows you to quickly validate a solution within weeks and is also a relatively low-cost option because it’s usually only a prototype being tested.
- Fake-door tests, A/B tests, and prototype tests are some examples of in-app experiments which, when paired with user interviews and analytics, can help paint the complete picture.
- Biased. Any test needs a random sample within your target cohort, but if the target users are part of your enterprise customer base, there's a high chance that the account managers/CSMs of those customers will want to know beforehand and control whether their customers are part of the experiment. That is mainly because they don't want any surprises for their customers, nor to come across as ignorant when a customer asks about the "feature" in a meeting. They also do not want to expose customers at churn risk to such tests. But if you then hand-pick customers for tests, you introduce selection bias into your results.
- The other concern is the look and feel of the prototype. Since it is only a test, a PM is justified in not spending a ton of dev time polishing the solution before rolling the test out; sales and account managers, however, will be against putting anything out there that is not up to standard. As a result, the PM can end up investing excessive resources in refining the prototype.
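One common way to keep the sample random rather than hand-picked, while still being explainable to account managers, is deterministic bucketing: hash each account into the experiment instead of selecting accounts manually. This is a minimal sketch under my own assumptions; the function names, IDs, and hashing scheme are illustrative, not from any specific experimentation tool.

```python
import hashlib

def in_experiment(account_id: str, experiment_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket an account into an experiment.

    Hashing the account/experiment pair (instead of hand-picking accounts)
    keeps the sample effectively random across the cohort, avoiding the
    selection bias described above, while staying reproducible: the same
    account always lands in the same bucket, so you can tell an account
    manager in advance whether their customer is in the test.
    """
    digest = hashlib.sha256(f"{experiment_id}:{account_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout_pct

# Example: a 50% rollout of a hypothetical fake-door test
print(in_experiment("acct-42", "fake-door-v1", 50))
```

Because assignment depends only on the IDs and not on who happens to be online, re-running the check never reshuffles the cohort mid-experiment.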
Lessons learned

1. First, address the experiment itself. Ensure the UI clearly conveys that this is only a prototype for testing and that there is no commitment the feature will make it into the generally available feature set. Otherwise, some customers will invariably like what they see in the test and want it made available to them.
2. Run your experiment project by account managers before starting. Clearly explain why the target sample needs to be random, but be open to excluding customers who are due for renewal, to avoid serving them an experience that is deliberately a bit scrappy.
3. Provide clarity on what the experiment is about, how long it will run, and how the results will be determined, so that account managers are well informed, especially if these details become topics of discussion with their customers.
4. Make sure the experiment's target audience does not include any new customers, unless the experiment is about onboarding itself.
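Steps 2 and 4 above amount to filtering the account list before drawing a random sample. A minimal sketch, assuming hypothetical account fields (`renewal_due`, `is_new`) that stand in for whatever your CRM actually exposes:

```python
import random

def pick_experiment_cohort(accounts, sample_size, seed=0):
    """Apply the exclusions from steps 2 and 4, then sample uniformly.

    `accounts` is a list of dicts with illustrative keys:
    'id', 'renewal_due' (step 2), and 'is_new' (step 4).
    Everything that survives the filter is sampled at random,
    so the result stays as unbiased as the exclusions allow.
    """
    eligible = [
        a for a in accounts
        if not a["renewal_due"] and not a["is_new"]
    ]
    rng = random.Random(seed)  # seeded so the draw is reproducible/auditable
    return rng.sample(eligible, min(sample_size, len(eligible)))

accounts = [
    {"id": "a1", "renewal_due": False, "is_new": False},
    {"id": "a2", "renewal_due": True,  "is_new": False},  # excluded: renewal due
    {"id": "a3", "renewal_due": False, "is_new": True},   # excluded: new customer
    {"id": "a4", "renewal_due": False, "is_new": False},
]
print([a["id"] for a in pick_experiment_cohort(accounts, 2)])
```

Seeding the draw also gives you something concrete to share with account managers: the cohort can be re-derived on demand instead of being a one-off, undocumented selection.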