Interpreting Net Scores and Mean Scores for the Likelihood to Recommend Metric


How do people interpret mean scores and net scores? Is there an advantage to using one metric over the other? I am studying these two ways of summarizing data to determine the usefulness of each method.

A popular metric in the field of customer experience management (CEM) is based on the following question: “How likely are you to recommend COMPANY ABC to your friends/colleagues?” Customers are asked to provide a rating on a scale from 0 (Not at all likely) to 10 (Extremely likely). Two popular ways to summarize these ratings into a metric are:

  1. Mean Scores: A Mean Score is the average rating across all respondents. The Mean Score is calculated by summing the ratings of all the respondents and dividing that sum by the number of respondents. The Mean Score can potentially range from 0 to 10 (higher scores indicate higher loyalty).
  2. Net Scores: A Net Score is a difference score. The Net Promoter Score (NPS) is calculated by subtracting the percent of respondents who give a rating between 0 and 6, inclusive (they are called Detractors), from the percent of respondents who give a rating of 9 or 10 (they are called Promoters). The resulting NPS can potentially range from -100 to 100 (higher scores indicate higher loyalty).
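The two calculations above can be sketched in a few lines of Python. The ratings list here is made-up illustrative data, not from the study:

```python
def mean_score(ratings):
    """Average rating across all respondents (0-10 scale)."""
    return sum(ratings) / len(ratings)

def net_promoter_score(ratings):
    """NPS = percent Promoters (9-10) minus percent Detractors (0-6)."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical ratings from 10 respondents
ratings = [10, 9, 8, 7, 6, 10, 2, 9, 5, 10]
print(mean_score(ratings))          # 7.6
print(net_promoter_score(ratings))  # 20.0 (5 Promoters, 3 Detractors)
```

Note that the two Passives (ratings of 7 and 8) affect the Mean Score but are ignored entirely by the NPS calculation.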

In a recent study, I compared the Net Promoter Score and the Mean Score across 48 companies. I found that the correlation between the Mean Score and the NPS across the 48 brands was .97! That is, the NPS provides little additional insight beyond what the Mean Score already tells us; both metrics were telling us essentially the same thing about how the brands are ranked relative to each other.

So, what’s the value of the NPS beyond what the Mean Score tells us? First, it appears that the NPS is difficult to interpret. Because the NPS is a difference score, an NPS value of 15 could be derived from many different combinations of Promoters and Detractors. Additionally, because of the difference-score methodology, the NPS lacks any real scale of measurement. The score itself becomes meaningless because the difference transformation creates an entirely new scale that ranges from -100 to 100.
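The interpretability problem can be made concrete with two hypothetical 100-respondent samples (the rating distributions below are invented for illustration). Both samples produce an NPS of 15, yet they describe very different customer bases and yield different Mean Scores:

```python
# Sample A: 35 Promoters (rating 10), 45 Passives (8), 20 Detractors (3)
sample_a = [10] * 35 + [8] * 45 + [3] * 20
# Sample B: 55 Promoters (10), 5 Passives (8), 40 Detractors (3)
sample_b = [10] * 55 + [8] * 5 + [3] * 40

def nps(ratings):
    """Percent Promoters (9-10) minus percent Detractors (0-6)."""
    n = len(ratings)
    return 100 * (sum(r >= 9 for r in ratings) - sum(r <= 6 for r in ratings)) / n

print(nps(sample_a), nps(sample_b))  # 15.0 15.0 -- identical NPS
mean_a = sum(sample_a) / len(sample_a)  # 7.7
mean_b = sum(sample_b) / len(sample_b)  # 7.1
```

Sample B has far more Detractors than Sample A, a fact the NPS of 15 completely hides but the Mean Scores (7.7 vs. 7.1) at least partially reveal.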

Bob Hayes


How do Users Interpret Net and Mean Scores? A Study

Effectively communicating customer loyalty results requires the use of clear and meaningful summary metrics. Metrics are used to help senior executives track company progress, benchmark against loyalty leaders and prioritize improvement initiatives. Understanding how people interpret net and mean scores would be a good first step to helping businesses select the best way to present survey metrics to end users.

Toward this end, I am seeking help from customer experience management (CEM) professionals to complete a short survey (~5 minutes) to see how they interpret net scores and mean scores. In return for your contribution to science, I will give each survey respondent a copy of my new customer experience management book, TCE: Total Customer Experience (pdf version).

To complete the survey, click the link below.

Bob E. Hayes, PhD is the Chief Customer Officer at TCELab and president of Business Over Broadway. He is a recognized expert in customer experience management, customer satisfaction/loyalty measurement, Big Data, and analytics. He is the author of TCE: Total Customer Experience, Beyond the Ultimate Question, and Measuring Customer Satisfaction and Loyalty.

