Mind the Product · 27 March 2023

The 9 ugliest truths about NPS in B2B

Senior Product Manager Paula Stürmer shares her thoughts for product managers who regularly use NPS as a metric in their B2B practice.


Net Promoter Score (NPS) is a metric used to measure client loyalty, and it consists of a single question: “How likely is it that you would recommend this company to a friend or colleague?” Fred Reichheld introduced it in 2003, and even today, 20 years later, NPS is a widely used measure of customer satisfaction. But the simplicity that made the survey so popular has its pitfalls, especially in B2B products:
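As a reminder of the mechanics behind the score: respondents answering 9–10 count as promoters, 0–6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch (the `nps` helper name is my own, not part of any standard library):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey answers."""
    promoters = sum(1 for s in scores if s >= 9)   # answers 9-10
    detractors = sum(1 for s in scores if s <= 6)  # answers 0-6
    return round(100 * (promoters - detractors) / len(scores))

# 2 promoters and 2 detractors out of 6 respondents cancel out:
print(nps([10, 9, 8, 7, 6, 3]))  # 0
```

Note that passives (7–8) dilute the score without moving it in either direction, which is one reason two very different response distributions can produce the same number.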

1. It’s a snapshot of the moment

After hosting an annual event for our clients, we sent out the quarterly NPS survey and received the highest score ever. While we took the survey seriously and worked hard on the feedback, it was unlikely that our efforts could have significantly improved our entire product portfolio in just one quarter. On the other hand, another team experienced a sharp decline in their NPS score after two quarters due to a major cloud provider’s instability, something that had never happened before. In both cases, it became clear that NPS only provided a snapshot of the moment and did not paint a comprehensive picture. The challenge was determining what actions to take to improve (or maintain) scores influenced by such rare events.

2. It’s not clear what the problem is

Once, I received a detractor score from a customer who had interacted with multiple products and services we offer. It was difficult to pinpoint the root cause of their dissatisfaction: Was it due to issues in product A or B? Did they have concerns about the quality of service provided by our support team? Or were they unhappy about our yearly cost increase? Relying solely on a score and a vague comment wouldn’t give us meaningful insights. We had to engage with the client to fully understand their concerns and the underlying issues behind the detractor score.

3. High time lag in feedback and action

A major challenge with NPS surveys is the time lag between receiving feedback and taking action. NPS surveys are usually conducted at regular intervals, such as quarterly or annually, which means that by the time the feedback is received, a significant amount of time has passed. This delay can lead to missed opportunities to address issues promptly and prevent further damage to customer relationships. Additionally, it can be difficult to trace the root cause of problems when there is a time gap between the feedback and the occurrence of the issue.

4. An excellent score doesn’t mean much

We were once caught off guard by a client who consistently gave us promoter scores, only to cancel their contract shortly after. When we inquired about the sudden churn, they explained that their company had been acquired and would be standardizing all software to match the parent company’s system. Another pitfall of high scores is settling into a comfort zone relative to your competitors. When you have several high grades and the client is happy with your service, what should you do with that? You can easily fall into the trap of continuing to do the same things. But your competitors are evolving and finding other ways to improve their products and services. You have to be cautious and continuously identify areas of improvement, even if your clients cannot pinpoint them.

5. The non-responders bias

Neglecting the number of non-responders is a common mistake in measuring NPS that can result in a misleading score. A decrease in the number of respondents from the previous survey can create a false impression of improvement. An Account Executive once asked a client to respond to an NPS survey, but the client declined, saying that they had the same concerns as in the previous survey and that everything had stayed the same. In this case, the client simply dropped out of the calculation, but it was clear that they were a detractor. Especially in B2B, customers with a high probability of churning are less likely to respond to the survey.
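The false-improvement effect is easy to demonstrate with a toy calculation: if the unhappiest clients simply stop responding, the score rises even though nothing got better. A sketch with made-up quarterly numbers:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey answers."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Q1: all five clients respond -> 2 promoters, 3 detractors.
q1 = [10, 9, 4, 3, 2]
# Q2: same client base, but the two unhappiest clients ignore the survey.
q2 = [10, 9, 4]

print(nps(q1))  # -20
print(nps(q2))  # 33 -- "improvement" driven entirely by non-response
```

Tracking the response rate alongside the score (here it fell from 5/5 to 3/5) is the cheapest guard against reading this kind of jump as real progress.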

6. Benchmarks are not very helpful

If it’s hard to use your competitors’ software in the B2B market, imagine getting their NPS score. Even if you find benchmarks segmented by industry, factors such as company size, customer demographics, and geographic region can affect NPS scores. It’s already challenging to compare NPS across different products in the same company because of those nuances. It gets even harder across an industry when you don’t know how the survey was administered. There is no one-size-fits-all benchmark, even for B2C.

7. Pay attention to the averages

B2B companies often have a smaller customer base than B2C companies. That directly impacts the sample size used to calculate the NPS score, which makes it challenging to obtain a reliable score. The opinion of a few customers, or a significant change in the number of respondents, can swing the overall score. Another problem with relying on averages in a B2B context is that they assign equal weight to all responses, regardless of their relevance or significance. There are often multiple stakeholders with varying needs and expectations. While it’s essential to gather feedback from all of them, it shouldn’t all carry the same weight. Positive feedback from stakeholders with little influence over contract renewals is relevant and should be taken into account, but it’s not as critical as the sponsor’s answer.
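One way to act on this is to weight responses by the respondent’s influence on the renewal decision. A sketch under assumed role weights (the `WEIGHTS` table and role names are hypothetical, not part of the NPS methodology):

```python
# Hypothetical weights: the sponsor's answer counts more than an occasional user's.
WEIGHTS = {"sponsor": 3.0, "admin": 1.5, "user": 1.0}

def weighted_nps(responses):
    """responses: list of (role, score) pairs; weight each answer by role."""
    total = promoters = detractors = 0.0
    for role, score in responses:
        w = WEIGHTS.get(role, 1.0)
        total += w
        if score >= 9:
            promoters += w
        elif score <= 6:
            detractors += w
    return round(100 * (promoters - detractors) / total)

# An unhappy sponsor outweighs two happy end users:
responses = [("sponsor", 5), ("user", 10), ("user", 9)]
print(weighted_nps(responses))  # -20, where the unweighted NPS would be 33
```

The exact weights are a judgment call per account; the point is that the blended number should reflect who actually signs the renewal.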

8. It’s not that honest

In B2B, the respondent answers the survey on behalf of their company, and it’s hardly anonymous, unlike in B2C. This doesn’t create a safe space for a sincere answer, especially when the respondent is concerned that their response will affect the work of their Account Executive. That’s why you sometimes see promoter scores with implicit problems written in the comments.

9. The customer life cycle can skew results

NPS scores can be skewed by the customer life cycle. If a client is in the discovery or adoption stage, they may enjoy the experience but may not yet have seen the product’s full value. A new client once answered our survey with a score of 6, followed by a long paragraph explaining their reasons: they were overall satisfied with the product, but it was too soon to be sure they would recommend it to family and friends. That was a fair point. Asking for a recommendation at this stage may not accurately reflect their satisfaction level. It’s important to consider where customers are in their journey before interpreting NPS scores.
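If you do keep NPS, one mitigation is to segment the score by life-cycle stage rather than reporting a single blended number. A minimal sketch (the stage labels are illustrative, not a fixed taxonomy):

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score from a list of 0-10 survey answers."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_stage(responses):
    """responses: list of (stage, score); report one NPS per life-cycle stage."""
    buckets = defaultdict(list)
    for stage, score in responses:
        buckets[stage].append(score)
    return {stage: nps(scores) for stage, scores in buckets.items()}

responses = [("onboarding", 6), ("onboarding", 7), ("mature", 9), ("mature", 10)]
print(nps_by_stage(responses))  # {'onboarding': -50, 'mature': 100}
```

A low onboarding-stage score here signals early-journey friction, not overall product dissatisfaction, which is exactly the distinction a single blended score hides.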

While NPS is still a widely used measure of customer satisfaction, it has some pitfalls in B2B products. Despite this, NPS can still be a helpful metric if used cautiously and with an understanding of its limitations. It’s essential to engage with clients to understand their concerns fully and use NPS in conjunction with other measures to obtain a comprehensive picture of customer satisfaction.

Discover more great content on product metrics
