This is Part 2 of a series on the Development of the Customer Sentiment Index (see the introduction and Part 1). The CSI assesses the extent to which customers describe your company/brand with words that reflect positive or negative sentiment. This post covers the development of a judgment-based sentiment lexicon and compares it to empirically-based sentiment lexicons.
Last week, I created four sentiment lexicons for use in a new customer experience (CX) metric, the Customer Sentiment Index (CSI). The four sentiment lexicons were empirically derived using data from a variety of online review sites: IMDB, Goodreads, OpenTable, and Amazon/Tripadvisor. This week, I develop a sentiment lexicon using a non-empirical approach.
Human Judgment Approach to Sentiment Classification
The judgment-based approach does not rely on data to derive the sentiment values; rather, this method requires subject matter experts to classify words into sentiment categories. This approach is time-consuming, requiring the subject matter experts to manually classify each of the thousands of words in our empirically-derived lexicons. To minimize the work required of the subject matter experts, an initial set of opinion words was generated using two studies.
In the first study, as part of an annual customer survey, a B2B technology company included an open-ended survey question: “Using one word, please describe COMPANY’S products/services.” Of 1619 completed surveys, 894 customers answered the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 689. These responses contained 251 usable unique words.
Also, the customer survey included questions that required customers to provide ratings on measures of customer loyalty (e.g., overall satisfaction, likelihood to recommend, likelihood to buy different products, likelihood to renew) and satisfaction with the customer experience (e.g., product quality, sales process, ease of doing business, technical support).
In the second study, as part of a customer relationship survey, I solicited responses from customers of wireless service providers (B2C sample). The sample was obtained through Mechanical Turk by recruiting English-speaking participants to complete a short customer survey about their experience with their wireless service provider. In addition to the standard rated questions in the customer survey (e.g., customer loyalty, CX ratings), the following question was used to generate the one-word opinion: “What one word best describes COMPANY? Please answer this question using one word.”
Of 469 completed surveys, 429 customers answered the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 319. These responses contained 85 usable unique words.
Sentiment Rating of Opinion Words
The list of customer-generated words for each sample was independently rated by two experts. I was one of those experts; my good friend and colleague was the other. We both hold a PhD in industrial-organizational psychology and specialize in test development (him) and survey development (me). We have extensive graduate-level training in statistics and psychological measurement principles, as well as applied experience helping companies gain value from psychological measurements. We each have over 20 years of experience developing and validating tests and surveys.
For each list of words (N = 251 and N = 85), each expert was instructed to “rate each word on a scale from 0 to 10, where 0 is most negative sentiment/opinion, 10 is most positive sentiment/opinion, and 5 is the midpoint.” After providing their first rating of each word, each rater was then given the opportunity to adjust their initial ratings. For this process, each rater was given the list of words along with their initial ratings and was asked to make any adjustments.
Results of Human Judgment Approach to Sentiment Classification
Descriptive statistics of and correlations among the expert-derived sentiment values of the customer-generated words appear in Table 1. As you can see, the two raters assigned very similar sentiment ratings to words in both sets, and their average ratings were similar. The inter-rater agreement between the two raters was r = .87 for the 251 words and r = .88 for the 85 words.
After slight adjustments, the inter-rater agreement improved to r = .90 for the list of 251 words and r = .92 for the list of 85 words. This high inter-rater agreement indicated that the raters were consistent in their interpretation of the two lists of words with respect to sentiment.
Because of the high agreement and comparable means between the raters, an overall sentiment score for each word was calculated as the average of the raters’ second (adjusted) ratings (see Table 1 or Figure 2 for descriptive statistics for this metric).
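The two calculations above (inter-rater agreement as a Pearson correlation, and the overall sentiment score as the mean of the two adjusted ratings) can be sketched in a few lines of Python. The words and ratings below are hypothetical stand-ins, not the actual study data:

```python
# Sketch: Pearson inter-rater agreement and the averaged overall sentiment
# score. All words and ratings here are hypothetical illustrations.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical adjusted ratings on the 0-10 sentiment scale.
rater_1 = {"excellent": 10, "reliable": 8, "average": 5, "slow": 3, "awful": 0}
rater_2 = {"excellent": 9,  "reliable": 8, "average": 5, "slow": 2, "awful": 1}

words = list(rater_1)
agreement = pearson_r([rater_1[w] for w in words],
                      [rater_2[w] for w in words])

# Overall sentiment score per word = mean of the two adjusted ratings.
overall = {w: (rater_1[w] + rater_2[w]) / 2 for w in words}
print(round(agreement, 2), overall)
```

With real data, the two rating dictionaries would hold the full 251-word (or 85-word) lists, and `agreement` would correspond to the reported r values.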
Comparing Empirically-Derived and Expert-Derived Sentiment
In all, I have created five lexicons; four lexicons are derived empirically from four data sources (i.e., OpenTable, Amazon/Tripadvisor, Goodreads and IMDB) and one lexicon is derived using subject matter experts’ sentiment classification.
I compared these five lexicons to better understand their similarities and differences. I applied the four empirically-derived lexicons to each list of customer-generated words, so, in all, each list of words has five sentiment scores.
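Applying a lexicon to a word list is essentially a dictionary lookup, with missing entries marking words the lexicon does not cover. A minimal sketch, using hypothetical lexicon values and words rather than the actual data:

```python
# Sketch: score one customer-generated word list against several lexicons.
# Lexicon names mirror the post; the values and words are hypothetical.
lexicons = {
    "OpenTable": {"great": 8.9, "slow": 2.1, "friendly": 8.2},
    "IMDB":      {"great": 8.5, "slow": 3.0, "friendly": 7.9},
    "Experts":   {"great": 9.0, "slow": 2.0, "friendly": 8.0},
}

# Last word is deliberately absent from every lexicon.
words = ["great", "slow", "friendly", "proprietary"]

# One column of sentiment scores per lexicon; None marks missing coverage.
scores = {name: [lex.get(w) for w in words] for name, lex in lexicons.items()}
for name, col in scores.items():
    print(name, col)
```

Correlating the resulting columns (over words covered by both lexicons) gives the between-lexicon agreement reported in Tables 2 and 3.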
The descriptive statistics of and correlations among the five sentiment scores for the 251 customer-generated words appear in Table 2; Table 3 presents the same information for the 85 customer-generated words.
As you can see, there is high agreement among the empirically-derived lexicons (average correlation = .65 for the list of 251 words and .79 for the list of 85 words).
There are statistically significant mean differences across the empirically-derived lexicons: Amazon/Tripadvisor has the highest average sentiment value and Goodreads the lowest, while the IMDB and OpenTable lexicons provide similar means. The expert judgment lexicon provides the lowest average sentiment ratings for each list of customer-generated words. The absolute sentiment value of a word thus depends on the sentiment lexicon you use, so pick a lexicon and use it consistently; changing your lexicon could change your metric.
Looking at the correlations of the expert-derived sentiments with each of the empirically-derived sentiments, we see that the OpenTable lexicon had a higher correlation with the experts than the Goodreads lexicon did. This pattern of results makes sense: the OpenTable sample is much more similar to the sample on which the experts provided their sentiment ratings. OpenTable represents a customer/supplier relationship regarding a service, while the Goodreads sample represents a different type of relationship (customer/book quality).
Summary and Conclusions
These two studies demonstrated that subject matter experts are able to scale words along a sentiment scale. There was high agreement among the experts in their classification.
Additionally, the judgment-derived lexicon was very similar to the four empirically derived lexicons: sentiment values based on the subject matter experts’ classification/scaling of words are highly correlated with the empirically-derived values. It appears that each of the five sentiment lexicons tells you roughly the same thing as the others.
The empirically-derived lexicons are less comprehensive than the subject matter experts’ lexicon regarding customer-generated words. By design, the subject matter experts classified all words that were generated by customers, whereas some of those words do not appear in the empirically-derived lexicons. For example, the OpenTable lexicon covers only 65% (164/251) of the customer-generated words in Study 1 and 71% (60/85) of those in Study 2. Empirically-derived lexicons used to calculate the Customer Sentiment Index could therefore be augmented with lexicons based on subject matter experts’ classification/scaling of words.
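The coverage figures above, and the suggested augmentation, amount to a simple fallback lookup: use the empirical lexicon where it has the word, and fall back to the expert lexicon otherwise. A small sketch with hypothetical values:

```python
# Sketch: measure an empirical lexicon's coverage of a customer word list,
# then augment it with expert scores for uncovered words. Values are
# hypothetical, not the study's actual lexicon entries.
empirical = {"great": 8.9, "slow": 2.1}
expert    = {"great": 9.0, "slow": 2.0, "proprietary": 5.5, "clunky": 2.5}

customer_words = ["great", "slow", "proprietary", "clunky"]

covered = [w for w in customer_words if w in empirical]
coverage = len(covered) / len(customer_words)  # analogous to 164/251 = 65%

# Prefer the empirical score; fall back to the expert score when missing.
augmented = {w: empirical.get(w, expert.get(w)) for w in customer_words}
print(f"{coverage:.0%}", augmented)
```

The fallback order is a design choice; one could equally average the two sources where both cover a word, given how highly correlated the lexicons are.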
In the next post, I will continue presenting information about validating the Customer Sentiment Index (CSI). So far, the analysis shows that the sentiment scores of the CSI are reliable (we get similar results using different lexicons). We now need to understand what the CSI is measuring. I will show this by examining the correlation of the CSI with other commonly used customer metrics, including likelihood to recommend (e.g., NPS), overall satisfaction, and CX ratings of important customer touchpoints (e.g., product quality, customer service). Examining correlations of this nature will also shed light on the usefulness of the CSI in a business setting.