In the realm of customer experience management, businesses can employ different summary metrics of customer feedback ratings. That is, the same set of data can be summarized in different ways. Popular summary metrics include mean scores, net scores, and customer segment percentages. Prior analysis of different likelihood-to-recommend metrics reveals, however, that they are highly correlated; that is, different summary metrics for the "likelihood to recommend" question tell you essentially the same thing about your customers. This post presents information to help you compare different likelihood-to-recommend summary metrics.
The data were from three separate studies, each examining consumer attitudes toward either their PC Manufacturer or Wireless Service Provider. Here are the details for each study:
- PC manufacturer: Survey of 1058 general US consumers in Aug 2007 about their PC manufacturer. All respondents for this study were interviewed to ensure they met the correct profiling criteria and were rewarded with an incentive for filling out the survey. Respondents were ages 18 and older. GMI (Global Market Insite, Inc., www.gmi-mr.com) provided the respondent panels and the online data collection methodology.
- Wireless service provider: Survey of 994 general US consumers in June 2007 about their wireless provider. All respondents were from a panel of general consumers in the United States ages 18 and older. The potential respondents were selected from a general panel recruited in a double opt-in process; all respondents were interviewed to ensure they met the correct profiling criteria. Respondents were given an incentive on a per-survey basis. GMI (Global Market Insite, Inc., www.gmi-mr.com) provided the respondent panels and the online data collection methodology.
- Wireless service providers: Survey of 5686 worldwide consumers from Spring 2010 about their wireless provider. All respondents for this study were rewarded with an incentive for filling out the survey. Respondents were ages 18 or older. Mob4Hire (www.mob4hire.com) provided the respondent panels and the online data collection methodology.
From these three studies, spanning nearly 8000 respondents, I calculated six different "likelihood to recommend" metrics for each of the 48 brands/companies that had 30 or more responses. Most of the 48 brands were from the wireless service provider industry (N = 41); the remaining seven were from the PC industry.
The descriptive statistics for the six metrics and the correlations among them appear in Table 1. As you can see, five of the six summary metrics are highly related to each other. The correlations among these metrics range from .85 to .97 (the negative correlations with the Bottom 7 Box indicate that the bottom box score is a measure of badness; higher scores indicate more negative customer responses). The metric for the Passives segment is weakly related to the other metrics because the customer segment it represents reflects the middle of the distribution of ratings.
The extremely high correlations among the remaining metrics indicate that these five metrics tell us roughly the same thing about the 48 brands. That is, brands with high Net Promoter Scores are those that also receive high Mean Scores, high Top Box Scores (Promoters), low Bottom Box Scores (Detractors), and high Positive Scores (% ≥ 6).
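For readers who want to reproduce these metrics from their own raw data, here is a minimal sketch. The function name and example ratings are my own; the segment cut-offs follow the standard 0–10 likelihood-to-recommend convention (Promoters 9–10, Passives 7–8, Detractors 0–6), which matches the "Bottom 7 Box" label used above.

```python
def summarize(ratings):
    """Compute the six summary metrics from a list of 0-10 ratings."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / n * 100   # Top Box (%)
    detractors = sum(r <= 6 for r in ratings) / n * 100  # Bottom 7 Box (%)
    return {
        "mean": sum(ratings) / n,              # Mean Score
        "nps": promoters - detractors,         # Net Promoter Score
        "top_box": promoters,                  # % Promoters
        "bottom_7_box": detractors,            # % Detractors
        "passives": 100 - promoters - detractors,  # % Passives
        "pct_ge_6": sum(r >= 6 for r in ratings) / n * 100,  # Positive Score (% >= 6)
    }

# Illustrative data: five 10s and five 0s give an NPS of 0 and a Mean of 5
print(summarize([10] * 5 + [0] * 5))
```

Note that a brand can have an NPS of 0 with very different underlying distributions, which is exactly why the Passives metric correlates only weakly with the others.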
Comparing Different Summary Metrics
It is easy to compare your company's Net Promoter Score to other companies' when those companies also report a Net Promoter Score. When different companies summarize their "likelihood to recommend" question using a Mean Score or Top/Bottom Box Scores, comparison across companies becomes difficult. However, we can use the current findings to translate NPS scores into other summary metrics. Because the different metrics are so highly related, we can estimate the other metrics from the NPS with great precision via regression analysis. Using regression analysis, I estimated the other five summary metrics from the Net Promoter Score.
I calculated five different regression equations using NPS as the predictor and each of the other summary metrics as the criterion. I selected the form of each regression equation (e.g., linear, polynomial) to maximize the percent of variance explained by the model. The Mean Score was predicted using a linear model; the remaining scores were predicted using polynomial models. The regression equations for each of the metrics (along with scatter plots of the associated data) appear in Figure 1. As you can see in Figure 1, most of the regression equations explain a large percent of the variance in the outcome variables.
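The fitting procedure can be sketched with NumPy. The brand-level values below are hypothetical stand-ins (the 48-brand dataset is not reproduced in this post); the approach — a degree-1 fit for the Mean Score, degree-2 for the box-score metrics, compared on variance explained — mirrors the one described above.

```python
import numpy as np

# Hypothetical brand-level data points (illustrative only)
nps = np.array([-70, -40, -10, 0, 20, 50], dtype=float)
mean_score = np.array([4.9, 5.8, 6.8, 7.1, 7.7, 8.6])

# Linear fit for the Mean Score: mean ~ b1 * NPS + b0
b1, b0 = np.polyfit(nps, mean_score, 1)

# A degree-2 polynomial fit would be used the same way for the
# box-score metrics, e.g.:
#   c2, c1, c0 = np.polyfit(nps, top_box, 2)

# Percent of variance explained (R^2) by the linear model
pred = b1 * nps + b0
ss_res = np.sum((mean_score - pred) ** 2)
ss_tot = np.sum((mean_score - mean_score.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"mean_score ~ {b1:.4f} * NPS + {b0:.2f}, R^2 = {r2:.3f}")
```

Comparing R² across candidate model forms (linear vs. polynomial) is how one would choose the equation that "optimizes the percent of variance explained."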
Using these regression equations, you can calculate the expected summary score from any Net Promoter Score. Simply substitute the x value with your Net Promoter Score and solve for y. Table 2 provides a summary of predicted values of the other summary metrics given different Net Promoter Scores. For example, an NPS of -70 is equivalent to a Mean Score of 4.9. An NPS of 0.0 is equivalent to a Mean Score of 7.1.
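As a worked example of the substitution step: since the Mean Score model is linear, the two published points (NPS −70 → 4.9, NPS 0 → 7.1) imply a slope and intercept. The line below is an approximation recovered from just those two values, not the exact fitted equation from Figure 1.

```python
# Slope and intercept implied by the two published example points:
#   NPS -70 -> Mean 4.9, NPS 0 -> Mean 7.1
slope = (7.1 - 4.9) / (0 - (-70))  # about 0.031 Mean-Score points per NPS point
intercept = 7.1                    # Mean Score at NPS = 0

def mean_from_nps(nps):
    """Substitute x = NPS into the linear equation and solve for y."""
    return slope * nps + intercept

print(round(mean_from_nps(-70), 1))  # 4.9
print(round(mean_from_nps(0), 1))    # 7.1
```

Any other summary metric would be converted the same way, using its own (polynomial) equation from Figure 1 instead.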
Customer feedback data can be summarized in different ways. The current analysis showed that different summary metrics (e.g., Mean Scores, Net Scores, Top/Bottom Box Scores), not surprisingly, tell you the same thing; that is, summary metrics are highly correlated with each other. Using regression equations, you can easily transform your Net Promoter Score into other summary metrics.
In an upcoming post, I will examine how well customer feedback professionals are able to estimate different summary metrics (customer segment percentages) from Net Promoter Scores and Mean Scores without the assistance of regression equations. This upcoming post will highlight potential biases that professionals have when interpreting Net Promoter and Mean Scores.