4 Steps to the Right Customer Effort Score (CES) Question

The Customer Effort Score is one of the newest and strongest indicators of customer satisfaction and loyalty. Used to its full potential, it helps uncover the weak spots in your customer experience.

CEB introduced a new paradigm for customer service in Stop Trying to Delight Your Customers in 2010. Their research shows that there isn’t much to gain by exceeding customer expectations, but plenty to lose by disappointing them. Going above and beyond means high expenses with low rewards.

Instead, CEB introduced CES, a metric that quantifies the effort a customer experiences. They found that customers were loyal when their experience was effortless, not when it was fancy.

[Graph: customer satisfaction in relation to service performance and costs]

The metric proved extremely reliable. 96% of customers with a bad effort score were less loyal in the future; only 9% of those with a good score showed similar tendencies.

According to its originators, CES is “1.8x more predictive of customer loyalty than customer satisfaction (CSAT) measures, plus it is two times more predictive than Net Promoter Score (NPS)”. Its customer loyalty predictions are also consistent across industries, regions and languages.

Here are four steps to get started with CES and measure your customers' effort levels.

1
Choose the Right Wording

CEB directed various possible customer effort score questions at almost 50,000 customers from different regions and industries. They found a single wording to be about 25% more predictive of customer loyalty than the second best version:

“To what extent do you agree with the following statement: The company made it easy for me to handle my issue.”

Assuming the second best question wasn’t gibberish, 25% is a significant gap. It shows the importance of the right choice of words.

The original CES question of 2010 went “How much effort did you personally have to put forth to handle your request?”. It had some flaws, as CEB found over the years, like implicitly making low effort the customer’s responsibility. The winning question quoted above, “CES 2.0”, was CEB’s reaction.

The researchers recommend their version if you have to settle on a single question. But they don’t consider theirs the “one question to rule them all”. Drawing from CEB’s own experience, we can identify attributes that make a question win or fail. A good Customer Effort Score question…

  • is in Likert (agree/disagree) format. It’s easy to think up and sounds natural.
  • is unambiguous: it asks about effort and nothing else.
  • is simple: it leaves out the obvious and redundant.
  • is neutral and doesn’t favor an answer direction.
  • marks off an area of analysis, anything from the full customer experience to a single item like the chat interaction that helped solve the issue.
  • is translatable for global comparability. It therefore avoids the word “effort”, whose meaning differs between languages.

Here are some example questions that follow the rules above:

  • “How much do you agree with the following statement: The company’s service made it easy for me to make the purchase.”
  • “How much do you agree with the following statement: The company’s website makes shopping easy for me.”
  • “How much do you agree with the following statement: This service chat helped me resolve my issue easily.”

You can of course tailor the question to your area of analysis and target group. Still, the original format has gone through some serious testing and came out a winner. There’s no compelling reason to deviate too far from it.

2
Time Your Questions

The best moment to send your CES question is right after your customer has concluded the experience of interest. At that moment, her memory is fresh and unbiased by irrelevant happenings. If you ask later, the responses are likely to include mental bycatch that distorts the results.

You can pair your query with events that routinely mark the completion of an ‘experience cycle’. These could be the end of a service interaction, e.g. when a chat window is closed, or the reaching of a certain URL, like your order confirmation page.
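
As an illustration, here is a minimal sketch of such event pairing in the browser. The showCesSurvey function, the “chat:closed” event name and the confirmation URL are assumptions; swap in whatever your chat widget and survey tool actually expose.

```typescript
// Minimal sketch: trigger a CES survey on two common completion events.
// showCesSurvey() stands in for your survey tool's embed call; the
// "chat:closed" event name and the URL below are assumptions.

function showCesSurvey(context: string): void {
  // Hand off to your survey tool's embed API here.
  console.log(`CES survey triggered after: ${context}`);
}

// Trigger 1: the customer reaches the order confirmation page.
if (window.location.pathname.startsWith("/order/confirmation")) {
  showCesSurvey("purchase");
}

// Trigger 2: a service chat window is closed.
document.addEventListener("chat:closed", () => showCesSurvey("service chat"));
```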

Most survey tools allow you to build a simple CES query. The metric works on various channels: in-app - right on a website, desktop or mobile - or via email attached to a swift follow-up. An email constitutes extra effort for the customer, so aim for the survey to be a one-click task.
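
One way to keep the email survey a one-click task is to make each scale point a link that records the score the moment it’s clicked. A minimal sketch, assuming a hypothetical /ces endpoint that stores the score from the query parameters:

```typescript
// Build one-click survey links for a 7-point CES question. The
// https://example.com/ces endpoint and its query parameters are
// placeholders for your survey backend.

function cesEmailLinks(responseId: string): string[] {
  const labels = [
    "1 - strongly disagree", "2", "3", "4", "5", "6", "7 - strongly agree",
  ];
  return labels.map((label, i) =>
    `<a href="https://example.com/ces?response=${responseId}&score=${i + 1}">${label}</a>`,
  );
}

// Paste the links into your follow-up email template:
console.log(cesEmailLinks("order-4711").join("<br>"));
```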

3
Choose the Right Scale

CEB’s 2010 question came with a 5-point scale. The scale was inverted, ‘1’ meaning good and ‘5’ bad. This caused some trouble since most people associate a higher score with a positive experience. Some respondents intuitively clicked the wrong number. In this way, the short inverted scale hurt CES 1.0’s precision.

Again, the statistically validated CES 2.0 with its 7-point scale is the benchmark. The CES 2.0 scale includes choices from ‘1’ (strongly disagree) to ‘7’ (strongly agree). Not only is this a classic positive scale, it also offers the surveyed more options than the previous 5-point scale. This softens the effects of the acquiescence response bias, people’s tendency to agree rather than disagree, no matter the question. Even if they show this tendency, they’re still forced to distinguish between three possible answers on the agreement side.

Another common issue with Likert scales is the error of central tendency. It suggests an alteration to CEB’s scale. Especially when people don’t feel strongly about the question, they tend to choose the answer that relieves them of picking a side, the safe middle ground. That’s number 3 on a 5-point scale and number 4 on a 7-point scale. The fix is an even-numbered scale. Consider this when you’re getting the “Neutral” answer all too often.

Even a 3-point scale can make sense. Especially on mobile devices, spacious-looking surveys can deter users. Choosing from three options makes answering easier, but the results less exact.
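
To make the scale variants concrete, here is a small sketch that builds the answer options for any n-point agree/disagree scale; an even n simply drops the safe middle ground:

```typescript
// Generate the options for an n-point agree/disagree scale. With an
// even number of points there is no neutral midpoint, countering the
// error of central tendency described above.

function likertOptions(points: number): string[] {
  return Array.from({ length: points }, (_, i) => {
    const score = i + 1;
    if (score === 1) return `${score} - strongly disagree`;
    if (score === points) return `${score} - strongly agree`;
    return `${score}`;
  });
}

console.log(likertOptions(7)); // classic CES 2.0 scale, midpoint at 4
console.log(likertOptions(6)); // even scale: respondents must pick a side
```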

4
Final Step: Interpret the Results

A ‘5’ is a good target for your CES on a 7-point scale. According to CEB, “moving a customer from a ‘1’ to a ‘5’ boosts their loyalty by 22%”. After that, the returns drop dramatically: moving from a ‘5’ to a ‘7’ adds only 2% to customer loyalty. That’s in line with CEB’s main hypothesis that it’s all about relieving the customer’s pain, not about spoiling them.

It’s the lower outliers that should make you sit up. Anything below the neutral middle deserves a follow-up email exploring the cause of the effort. Also, as Virginie Kévers from emolytics suggests, the follow-up is a way to filter out negative ratings that don’t concern you, like those from customers looking for a product you just don’t sell.
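
In code, the evaluation boils down to an average plus a filter for the lower outliers. A minimal sketch, with the response shape assumed for illustration:

```typescript
// Average 7-point responses into a CES and collect everything below
// the neutral midpoint (4) for a follow-up email.

interface CesResponse {
  email: string;
  score: number; // 1 (strongly disagree) to 7 (strongly agree)
}

function summarize(responses: CesResponse[]) {
  const average =
    responses.reduce((sum, r) => sum + r.score, 0) / responses.length;
  const followUps = responses.filter((r) => r.score < 4);
  return { average, followUps };
}

const { average, followUps } = summarize([
  { email: "a@example.com", score: 6 },
  { email: "b@example.com", score: 2 },
  { email: "c@example.com", score: 5 },
]);
console.log(average.toFixed(2));            // "4.33" - below the target of 5
console.log(followUps.map((r) => r.email)); // ["b@example.com"]
```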

As with any measurement, the most important part is how you act on the result. Read our post “The 9 Levers for Improving Customer Satisfaction” to learn how to reduce customer effort and raise satisfaction.