Strengths and Limitations of the Customer Effort Score


The Harvard Business Review article “Stop Trying to Delight Your Customers” promotes the Customer Effort Score (CES) as a superior predictor of customer loyalty compared with two other widely used metrics—Customer Satisfaction (CSAT) and the Net Promoter Score (NPS). While the article on the whole offers strong, if not always unique, perspectives on the importance of minimizing the burden on the customer, the CES metric itself warrants additional scrutiny.

CES does appear to be a useful metric for evaluating the ease of customer interactions and the influence of that ease on loyalty, and it is easy to see why its simplicity appeals to the C-suite:

How much effort did you personally have to put forth to handle your request?

  • 1 = Very low effort
  • 2
  • 3
  • 4
  • 5 = Very high effort
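Scoring is equally straightforward. As an illustration only (the article does not prescribe an aggregation method; the simple average and the share of high-effort responses used here are common conventions, and the response values below are invented):

```python
# Hypothetical batch of CES responses on the 1-5 scale above
# (1 = very low effort, 5 = very high effort).
responses = [1, 2, 2, 1, 4, 5, 2, 1, 3, 2]

mean_ces = sum(responses) / len(responses)                    # lower is better
high_effort_share = sum(r >= 4 for r in responses) / len(responses)

print(f"Mean CES: {mean_ces:.1f}")                  # Mean CES: 2.3
print(f"High-effort share: {high_effort_share:.0%}")  # High-effort share: 20%
```

Tracking the high-effort share alongside the mean is a common refinement, since a small tail of very difficult interactions can hide inside an acceptable average.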

Yet CES is not well suited to evaluating the entirety of the customer experience or the myriad other factors that simultaneously influence customer loyalty—a point only softly acknowledged by the authors. In the end, CES may be a useful addition to a well-designed and well-executed customer experience management program, but only to the extent that the metric’s strengths and limitations are fully understood and accepted.

Making It Easy Makes Sense

The underlying premise is that, with respect to customer interactions, companies should focus on consistently meeting expectations by removing obstacles and “making it easy” for their customers, rather than on spending effort and expense to “exceed expectations” through giveaways such as refunds, free shipping, and the like. Moreover, the authors rightly note that there may be little loyalty to gain from doing customer service exceptionally well, but much to lose when customer expectations are not met or things don’t go as easily as they should. This concept, while not new (Market Strategies has long been a proponent of measuring ease of doing business with, ease of customer service resolution, etc.), is accompanied by some practical advice on how it might be applied, such as:

  • Proactively heading off future issues
  • Arming reps to address the emotional aspects of customer interactions
  • Minimizing the need for customers to switch from one service channel to another (e.g., website to telephone) to achieve resolution
  • Channeling negative feedback to drive effort-reduction initiatives
  • Empowering the front line to deliver a low-effort experience

Similarly, the authors give due attention to the concept of holistic and proactive customer interaction management, an idea that is more novel. Simply put, they recommend a well-coordinated combination of online and offline customer service channels that provides each customer a seamless resolution of his or her problem, along with advice on how to avoid potential pitfalls. Indeed, when properly enacted, such a well-integrated system might effectively drive more self-service volume and prove more cost-effective—all the while leaving customers feeling “rewarded” by their ability to resolve their own issues on their own time.

The CES Metric—What It Is and What It Isn’t

In many ways, the buzz around CES is not surprising. The metric is elegantly simple, both in concept and in the scale itself, and one can certainly understand why customer-experience-minded executives are taking notice. “Effort” is an exciting metric, particularly in light of the expanding self-service environment and the “instant” culture in which we live. Moreover, the metric certainly seems right in several regards:

  • CES measures the degree to which the basics are well executed. This aligns well with Barwise and Meehan, who noted that “customers usually choose the brand that they expect to give them the basics—the generic category benefits—better and more reliably than the competition.”
  • It is more specific than either overall customer satisfaction or NPS and, hence, more actionable within its realm (i.e., the customer service experience itself).
  • It enables easy linkage to internal operations metrics such as the number of calls or website visits within a given period, and the number of transfers on a given call.

However, there are some notable limitations to the metric that the authors have not fully acknowledged. Most striking among these is the inability of CES to measure aspects of the customer experience outside of customer service and other direct interactions.

In particular, the direct comparison between CES, NPS, and CSAT is misleading. Whereas CES is designed to evaluate the “micro-experience” of a given customer service event, both NPS and CSAT are measures of the “macro-experience,” the overall sentiment, and take the entirety of the customer experience into consideration—not just a single interaction or event. This results in a biased and inappropriate argument that CES is more predictive of customer loyalty than either NPS or CSAT. It is not surprising that this may be the case among customers who have had a recent customer service interaction—but they are in the minority for most industries. CES would likely fall short in its predictive nature when the entire customer population is considered, including the majority who have not had a recent interaction. While the importance of seamless customer service is without question, one must remember that there are other factors that typically matter even more in most industries:

  • Product (features, quality, uniqueness)
  • Price compared to acceptable alternatives
  • Competitive environment
  • Emotional connection to the brand and its products
  • Prior “goodwill” established through previous interactions
  • Other internal/external factors (scandals, economic and political environment, etc.)

Because CSAT and other more global loyalty measures naturally include these aspects, they can do a better job of predicting behavioral loyalty across the entire customer base.

So, claims with respect to “loyalty” prediction warrant deeper examination. The authors’ cited research evaluates the extent to which CES predicts customer-stated intent to repurchase or to increase spending. Neither of these outcomes is fully synonymous with actual behavioral loyalty. It remains unclear how useful CES is for predicting actual repeat purchase and/or increased spending behaviors, or whether it is superior to either CSAT or NPS in practice.

Additionally, as a single-item measure, CES will be more volatile and inherently less predictable than a multi-item composite (such as an index built from CSAT, likelihood to continue using, and likelihood to recommend).
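The volatility argument can be sketched with a quick simulation. Assuming, purely for illustration, that each survey item observes a respondent’s true sentiment plus independent measurement noise, averaging three items shrinks the noise by a factor of roughly √3:

```python
import random
import statistics

random.seed(0)

NOISE_SD = 1.0  # assumed item-level measurement noise (illustrative)

def observe(true_sentiment, n_items):
    # Average of n noisy readings of the same underlying sentiment,
    # e.g., CSAT + likelihood to continue + likelihood to recommend.
    return statistics.mean(
        true_sentiment + random.gauss(0, NOISE_SD) for _ in range(n_items)
    )

truths = [random.gauss(0, 1) for _ in range(5000)]
single_item_error = [observe(t, 1) - t for t in truths]  # CES-style single item
composite_error = [observe(t, 3) - t for t in truths]    # three-item index

print(statistics.stdev(single_item_error))  # ~1.0
print(statistics.stdev(composite_error))    # ~0.58, about 1/sqrt(3)
```

This sketch assumes the item errors are uncorrelated; in real surveys the items are correlated and the reduction is smaller, but the direction of the effect holds.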

Furthermore, the overall lack of transparency regarding the study’s methodological details is troubling. Good science is rooted in theory and substantiated with evidence; in this case, the evidence is virtually non-existent, requiring potential users of CES to take a “leap of faith” as to whether it will hold up as a strong predictor of loyalty across all types of relationships, interactions, and industries.

What Really Matters Is How You Use It

It must be noted that the usefulness of CES, or any other metric for that matter, is largely dependent on how it is used, and on the extent to which it is embraced within an organization. In our experience, organizations that have made the greatest strides in improving customer loyalty have done so with the following pieces in place:

  1. A top-to-bottom cultural embrace of the importance of improving the customer experience, and buy-in on the measurement program and metrics used
  2. Cohesion and harmonization between tactical (interaction) and strategic (relationship) customer experience measurement
  3. Clear linkage between survey results and other metrics, reflecting a consistent view of company goals
  4. A research design that includes sufficient granularity to support tactical decisions and line-level accountability
  5. Flexible measurement vision and strategy that stands up to business changes and evolving needs.


CES is emerging as the latest “it” metric and will likely continue to garner attention over the coming years as its use becomes more widespread. It has the potential to add value to a strong customer experience measurement program, but as with NPS before it, it is not the Holy Grail. Organizations should consider both its strengths and limitations in evaluating how and if CES might fit within their current customer experience measurement program.

While it provides a useful and easy-to-understand framework for measuring and understanding customer effort, CES is but a single lens focused on one aspect of the customer experience.

Aaron Turner is a former research director in the Healthcare Division at Market Strategies International and originally wrote this for Market Strategies’ customers. He is now the director of consumer insights at Blue Cross Blue Shield of Michigan.

