Blog Post

Net Promoter Score (NPS): Concerns with Boiling Down Human Behavior to a Single Metric

Author: Cheryl Muldoon, Ph.D.

Since its introduction in 2003 by Fred Reichheld, the Net Promoter Score (NPS) has been widely adopted as the single best indicator of customer satisfaction and loyalty. NPS relies on a single question: “On a scale of 0 to 10, how likely are you to recommend X company?”

The responses are assigned to three categories: respondents who answer 9 or 10 are “promoters”; respondents who answer 7 or 8 are “passives”; and respondents who answer 6 or lower are “detractors.”

To compute NPS, subtract the percentage of detractors from the percentage of promoters. Companies undoubtedly appreciate the simplicity of the measure, and how easy it is to explain to stakeholders. However, from a cognitive psychology perspective, NPS is a less than ideal measure on which to hang your hat.
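The scoring rule above is simple enough to sketch in a few lines of Python (the function names here are my own, not part of any official NPS tooling):

```python
def nps_category(score: int) -> str:
    """Assign a 0-10 survey response to its NPS category."""
    if not 0 <= score <= 10:
        raise ValueError("NPS responses must be between 0 and 10")
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"


def net_promoter_score(responses: list[int]) -> float:
    """NPS = % promoters minus % detractors, ranging from -100 to +100."""
    categories = [nps_category(r) for r in responses]
    promoters = categories.count("promoter") / len(categories) * 100
    detractors = categories.count("detractor") / len(categories) * 100
    return promoters - detractors


# Ten respondents: 4 promoters, 3 passives, 3 detractors
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # 40% - 30% = 10.0
```

Note that the three passives vanish from the final number entirely; only the two tails of the distribution count.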

Category Assignment Is Arbitrary in Calculating NPS

One obvious issue with assigning respondents to the three categories based on their numeric response is scale usage bias. Each of us tends to use rating scales in particular ways, and these tendencies vary dramatically across cultures. Some people rate everything toward the positive end of the scale. Others center everything around the midpoint and remain noncommittal in either direction. Some skew all their responses negatively because of an exceedingly high internal bar against which they measure things. And, of course, some use the full range of the scale. Given this, it is common for the same numeric score to be low for one person, neutral for a second, and high for a third. NPS treats these three people identically and does not account for individual scale usage bias.
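To make the scale-usage problem concrete, here is a minimal sketch (the respondent names and response histories are hypothetical) showing that the same response of 7, which NPS would classify as “passive” for everyone, sits well below one respondent's personal norm and well above another's:

```python
from statistics import mean, stdev

# Hypothetical response histories for three respondents with different scale habits
histories = {
    "positive_skewer": [9, 10, 9, 8, 10, 9],  # rates everything high
    "midpoint_hugger": [5, 5, 6, 4, 5, 5],    # clusters around the middle
    "harsh_grader":    [2, 3, 1, 2, 3, 2],    # rates everything low
}

new_response = 7  # all three now answer 7, so NPS calls all three "passives"

# How far is a 7 from each respondent's own typical rating?
z_scores = {
    name: (new_response - mean(past)) / stdev(past)
    for name, past in histories.items()
}

for name, z in z_scores.items():
    print(f"{name}: a 7 is {z:+.1f} standard deviations from their usual rating")
```

For the positive skewer, a 7 is unusually negative; for the harsh grader, it is glowing praise. A fixed 9/7/6 cutoff cannot see this difference.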

Likelihood to Recommend Is Context Dependent

NPS assumes a given person’s likelihood to recommend is static across all situations. However, it is human nature to assess the unique context of each situation and the people involved. A brand that is a good fit for your best friend might not be a good fit for your grandmother. For example, my father-in-law’s old iPod recently ceased to work, and he was looking for a new way to listen to music before bedtime. I use Apple Music and Spotify but did not recommend either to him, as both would be too difficult for him to figure out. I would recommend Spotify to my tech-savvy sister, though not Apple Music, as I know she does not care for the Apple brand. I would recommend both to my mother. In the context of my father-in-law and my sister, my NPS category for Apple Music would be detractor, while in the context of my mother it would be promoter.

Likelihood to Recommend Is Product Dependent

Just as it assumes likelihood to recommend is static across all situations, NPS also assumes it is static across all products and services a brand offers. I may love one product from a brand and, at the same time, hate other products from that same brand. In my recent efforts to reduce my environmental impact, I purchased bamboo toilet paper, tissues, and paper towels from Lovely Poo Poo. The toilet paper is great, and I have already recommended it to people. The tissues are okay, nothing to write home about, but good enough. Just do not get me started on the paper towels, or I will rant for days: they do not tear cleanly on the perforated line, and they do not absorb anything. I have told anyone who will listen never to spend money on these paper towels. With respect to the toilet paper, I would be a promoter for Lovely Poo Poo; thinking about the tissues, I would be a passive; and thinking about the paper towels, I would clearly be a detractor.

Detractors Are Not All Created Equal

My last example with the paper towels illustrates another issue with NPS: not all detractors are the same. Indicating that you “would not recommend” a brand is not the same as indicating that you would actively speak out against it. Many people know that when all else fails, the way to get a brand to respond to your issue is to post something negative on social media. You may get the runaround or no response through traditional customer service channels, but a negative post for others to see often gets a speedy response. The power of negative word of mouth is strong, and brands know this. By grouping genuine detractors, people inclined to speak negatively about a brand and discourage others from using it, together with those who dislike a brand but never mention it to anyone, NPS overlooks this key distinction.

Tipping the Scales

Raise your hand if you have ever been to a car dealership and heard something like, “In a few days, you will get a survey. If you are not going to give us a perfect score of 10, please contact us first.” Or have you ever seen a pop-up after purchasing something online: “Refer a friend and get a coupon for your next purchase”? Intentional or not, these company and employee behaviors skew customers’ likelihood to recommend in their favor by applying pressure or offering rewards, rather than by providing a high-quality product or service that consumers genuinely want to talk about.

Moving Away from NPS

Undoubtedly, likelihood to recommend is an important metric for assessing consumer satisfaction and loyalty. However, NPS provides a metric devoid of context that is easily skewed by a variety of factors. This single number, obtained and evaluated in a vacuum, cannot provide the insights needed to understand how consumers think and feel about your brand. To determine whether your current brand initiatives are working well or need improvement, a more robust brand health tracking program provides a better road map forward.

The Aspen Finn Brand Health Metric is composed of four key intertwined measures that encompass both rational and emotional aspects of brand satisfaction and loyalty: expectations, preference, love/hate, and likelihood to recommend. While the framework remains the same, we realize each market is unique, and as such, the scoring is custom-tailored to capture the right measures. Our brand health surveys also incorporate a series of market-specific behavioral calibration questions that prompt respondents to think about prior shopping situations and get them into the right frame of mind to evaluate brands. By tracking brand health over time, companies can evaluate which campaigns and product or service improvements have moved the needle in the eyes of consumers.

Digging a Little Deeper

It is nice to get a snapshot of how your brand stacks up against the competitive set, and the Brand Health Metric scorecards provide that 10,000-foot view. However, the power of the Brand Health Metric lies in understanding why one brand scores better than another. The Competitive Action Blueprint digs deeper into the competitive space by relating various brand attributes to the overall scoring. The resulting blueprint provides guidance on prioritizing the potential improvements that would have the most impact on your competitive position.

We can also dig deeper into the emotional connection consumers have to your brand through Ambivalence Analysis. Uncover what has the power to turn consumers who are neutral toward your brand into brand lovers, and what can tip those same ambivalent folks into becoming brand detractors. Ambivalence Analysis identifies attributes where improving performance will delight consumers, and attributes where poor performance is likely to turn them against you.

Interested in learning more about Brand Health Tracking? Reach out at