Customer Effort Score (CES)

A metric that measures how much effort a customer had to exert to get their issue resolved — a stronger predictor of loyalty than satisfaction alone.

Customer Effort Score (CES) measures how easy or difficult it was for a customer to get their issue resolved. Originally introduced in a 2010 Harvard Business Review study, CES has become one of the three core CX metrics alongside CSAT and NPS, and research consistently shows it is the strongest predictor of customer loyalty and churn.

What Is Customer Effort Score?

Customer Effort Score (CES) is a customer experience metric that measures how much work a customer had to do to resolve their issue. It answers a simple question: was it easy, or was it hard?

CES was introduced in a 2010 study which found that reducing customer effort is a stronger predictor of loyalty than increasing customer delight. The research showed that customers who experienced high-effort interactions — having to call back, repeat themselves, transfer between agents, or navigate confusing processes — were dramatically more likely to churn, regardless of how satisfied they were otherwise.

In short, customers don't need to be delighted to stay loyal. They need things to be easy.

How Is CES Measured?

CES uses a single survey question deployed immediately after a specific interaction. There are two common methods for calculating the score, both valid — the key is picking one and applying it consistently.

Standard survey question: "How easy was it to resolve your issue today?"

Scale: Most commonly a 7-point Likert scale, from 1 ("Very Difficult") to 7 ("Very Easy"). Some teams use a 5-point scale.

Method 1: Average score

CES = Sum of all scores ÷ Number of responses.

Method 2: Percentage of "easy" responses

CES (%) = (Responses of 5, 6, or 7 on a 7-point scale) ÷ (Total responses) × 100.
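Both methods can be sketched in a few lines. This is a minimal illustration assuming responses are collected as plain integers on the 7-point scale; the function names are illustrative, not part of any survey tool's API.

```python
def ces_average(scores):
    """Method 1: sum of all scores divided by number of responses."""
    return sum(scores) / len(scores)

def ces_percent_easy(scores, easy_threshold=5):
    """Method 2: percentage of responses in the 'easy' band (5-7 on a 7-point scale)."""
    easy = sum(1 for s in scores if s >= easy_threshold)
    return 100 * easy / len(scores)

responses = [7, 6, 5, 4, 7, 3, 6, 6, 2, 7]
print(ces_average(responses))       # 5.3
print(ces_percent_easy(responses))  # 70.0
```

Whichever method you choose, apply it consistently: a 5.3 average and a 70% "easy" rate describe the same responses but are not interchangeable numbers.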

What's a Good CES Score?

CES benchmarks depend on the scale you use, and unlike CSAT, there are no widely published cross-industry averages to compare against. Use the ranges below as internal performance guidance rather than competitive benchmarks.

| Scale    | Strong | Acceptable | Needs Improvement |
|----------|--------|------------|-------------------|
| 7-point  | 6.0+   | 5.0–5.9    | Below 5.0         |
| 5-point  | 4.0+   | 3.5–3.9    | Below 3.5         |
| % format | 90%+   | 70–90%     | Below 70%         |
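Applied in code, the 7-point bands above might look like the following small helper. The thresholds are the internal guidance from the table, not industry standards, and the function name is illustrative.

```python
def ces_band_7pt(avg_score):
    """Map a 7-point average CES to an internal performance band."""
    if avg_score >= 6.0:
        return "Strong"
    if avg_score >= 5.0:
        return "Acceptable"
    return "Needs Improvement"

print(ces_band_7pt(5.3))  # Acceptable
```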

CES is best used as an internal improvement metric — track your own trend over time rather than comparing to an industry number.

Why CES Is a Strong Predictor of Loyalty

Of the three major CX metrics — CSAT, NPS, and CES — CES has the strongest correlation with actual customer behavior. The CEB/Gartner research behind CES produced some of the most compelling loyalty data in customer service:

96% of customers who experience high-effort interactions become more disloyal — even if they were eventually satisfied.

94% of customers who experience low-effort interactions intend to repurchase; only 4% of high-effort customers do.

81% of high-effort customers plan to share negative word-of-mouth.

88% of low-effort customers say they will increase their spending.

Low-effort service interactions cost 37% less than high-effort ones.

The mechanism is straightforward: effort is exhausting. When customers have to work hard to get help — repeat themselves, call multiple times, navigate confusing escalation paths — it erodes the relationship even when the issue is eventually resolved.

CES vs. CSAT vs. NPS

All three metrics measure different dimensions of the customer experience. Understanding how they differ helps you deploy each at the right moment — and interpret what the data is actually telling you.

|               | CSAT                             | NPS                        | CES                           |
|---------------|----------------------------------|----------------------------|-------------------------------|
| Core question | "How satisfied were you?"        | "Would you recommend us?"  | "How easy was it?"            |
| Measures      | Satisfaction with an interaction | Overall loyalty & advocacy | Friction / effort             |
| Scope         | Transactional                    | Relational                 | Transactional                 |
| Timing        | Post-interaction                 | Periodic (quarterly)       | Post-interaction / post-task  |

What Creates High Customer Effort?

Not all friction is obvious. The original CEB research identified seven recurring patterns that consistently generate high-effort experiences:

Channel switching: Forcing customers from self-service to phone when their original channel can't resolve the issue.

Repeating information: Asking customers to re-explain their issue to a second or third agent.

Generic responses: Replies that don't acknowledge what the customer actually asked.

Transfers and escalations: Being passed between teams or agents.

Unclear next steps: Customer doesn't know what will happen after the interaction ends.

Follow-up contacts required: Issue isn't resolved in one touch; customer has to call back.

IVR / navigation friction: Complex phone trees, unhelpful bots, dead-end self-service.

Most of these are architectural problems, not agent performance problems. Fixing them requires systemic changes — unified customer data, intelligent routing, better self-service — not just coaching.

CES in an AI-First Support Model

Deployed well, AI dramatically reduces effort: instant responses, 24/7 availability, no hold time, no transfers for tier-1 issues. Deployed poorly, AI multiplies effort: loops that can't resolve anything, bots that don't escalate, humans who receive no context from the AI handoff.

Before deploying any AI-driven workflow, ask: does this reduce the work the customer has to do, or does it shift that work around?

CES Best Practices

CES data is only as useful as the action it drives. The following practices cover how to measure well, interpret accurately, and connect CES to the operational and revenue decisions that actually reduce customer effort.

1. Measure immediately after the specific task, not the relationship.

CES is a transactional metric. Survey the customer right after the support interaction, not at the end of their contract year. The question should reference the specific interaction: "How easy was it to resolve your issue today?" — not a vague overall assessment.

2. Always ask why.

A score of 3 out of 7 tells you the experience was hard. It doesn't tell you whether the customer repeated themselves three times, got transferred twice, or couldn't find the right knowledge base article. Add a mandatory follow-up: "What made this difficult?" or "What could we have done to make this easier?"

3. Identify your highest-effort interaction types.

Segment CES by issue type, channel, and contact reason. Billing disputes typically generate more friction than order status checks. Knowing which interaction categories produce the most high-effort scores tells you exactly where to invest in process improvement first.
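As a hypothetical sketch of this segmentation, the snippet below groups survey responses by contact reason and ranks the highest-effort (lowest-scoring) categories first. The field names and sample data are illustrative assumptions, not a real export format.

```python
from collections import defaultdict

# Illustrative survey export: one record per CES response
surveys = [
    {"reason": "billing dispute", "score": 3},
    {"reason": "billing dispute", "score": 4},
    {"reason": "order status",    "score": 7},
    {"reason": "order status",    "score": 6},
    {"reason": "returns",         "score": 5},
]

# Group scores by contact reason
scores_by_reason = defaultdict(list)
for s in surveys:
    scores_by_reason[s["reason"]].append(s["score"])

# Average CES per reason, highest-effort (lowest score) first
by_reason = sorted(
    ((reason, sum(v) / len(v)) for reason, v in scores_by_reason.items()),
    key=lambda pair: pair[1],
)
for reason, avg in by_reason:
    print(f"{reason}: {avg:.1f}")
```

The same grouping can be repeated by channel or issue type; the categories that float to the top of the list are where process investment pays off first.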

4. Use CES to evaluate self-service, not just agent interactions.

A customer who couldn't resolve their issue via your knowledge base and had to contact support has already experienced effort before the agent even picks up. Survey after self-service attempts too — it surfaces gaps in your knowledge base and bot capabilities that never show up in agent-only metrics.

5. Correlate high-effort interactions with churn signals.

Layer CES data against renewal and churn data in your CRM. The goal is to identify whether customers who experienced high-effort interactions in the 60–90 days before renewal churned at higher rates. This turns CES from a service quality metric into a revenue risk signal.
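The comparison described above can be sketched as follows. This is a hypothetical example assuming each customer record carries their lowest CES score from the pre-renewal window and a churn flag; the data shape and the "high effort" threshold of 4 are assumptions, not any CRM's schema.

```python
# Illustrative joined dataset: CES history layered against churn outcomes
customers = [
    {"id": "a", "min_ces_pre_renewal": 2, "churned": True},
    {"id": "b", "min_ces_pre_renewal": 6, "churned": False},
    {"id": "c", "min_ces_pre_renewal": 3, "churned": True},
    {"id": "d", "min_ces_pre_renewal": 7, "churned": False},
    {"id": "e", "min_ces_pre_renewal": 4, "churned": False},
]

def churn_rate(group):
    """Share of customers in the group who churned."""
    return sum(c["churned"] for c in group) / len(group) if group else 0.0

high_effort = [c for c in customers if c["min_ces_pre_renewal"] <= 4]
low_effort = [c for c in customers if c["min_ces_pre_renewal"] > 4]

print(f"high-effort churn: {churn_rate(high_effort):.0%}")
print(f"low-effort churn: {churn_rate(low_effort):.0%}")
```

A meaningful gap between the two cohorts is what turns CES from a service quality metric into a revenue risk signal.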

6. Fix handoffs (the most common source of effort).

The most impactful single change most teams can make is ensuring that when a customer is transferred, the receiving agent already has the full context. Customers who don't have to repeat themselves score dramatically higher on CES. Build this into your escalation design before any other intervention.

Related Terms

  • CSAT (Customer Satisfaction Score)

    A metric that measures how satisfied customers are with a specific interaction, typically collected via a post-contact survey asking customers to rate their experience.

  • First Response Time (FRT)

    The time between a customer submitting a support request and receiving the first substantive reply from a human agent or AI — one of the most closely watched speed metrics in customer service.

  • Average Handle Time (AHT)

    The average total time a support agent spends on a customer interaction, including talk time, hold time, and after-call work — a key contact center efficiency metric.

  • Cost Per Contact

    The total cost of running customer support divided by the number of contacts handled — the primary financial efficiency metric for contact centers.
