CSAT (Customer Satisfaction Score)

A metric that measures how satisfied customers are with a specific interaction, typically collected via a post-contact survey asking customers to rate their experience.

CSAT (Customer Satisfaction Score) measures how satisfied customers are with a specific support interaction, typically collected immediately after a contact closes. It is one of the most widely used customer experience metrics and a direct proxy for service quality at the interaction level.

What Is CSAT?

CSAT, or Customer Satisfaction Score, is a metric used to measure how satisfied a customer was with a specific interaction — a support ticket, a chat conversation, a return, or any other defined touchpoint. It is collected immediately after the interaction, while the experience is still fresh.

Unlike NPS (Net Promoter Score), which measures overall brand loyalty on a periodic basis, or Customer Effort Score (CES), which focuses on ease of resolution, CSAT is more immediate: it tells you how a customer felt about this interaction, right now.

How Is CSAT Calculated?

CSAT surveys typically ask one question: "How satisfied were you with your experience today?" Customers respond on a 1–5 scale, where 1 is "Very Dissatisfied" and 5 is "Very Satisfied."

Formula: CSAT (%) = (Number of responses rating 4 or 5 out of 5) ÷ (Total responses) × 100

Only the top two scores count toward the positive score. A 3 out of 5 is not a win.
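The top-two-box formula above is straightforward to compute. A minimal sketch in Python (the response list is illustrative, not real survey data):

```python
def csat_score(ratings):
    """Top-two-box CSAT: percentage of respondents who rated 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(satisfied / len(ratings) * 100, 1)

# Example: 10 survey responses. Seven are a 4 or 5, so CSAT is 70.0%.
# Note the 3s count as zero, exactly as the formula specifies.
responses = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(csat_score(responses))  # 70.0
```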

CSAT Benchmarks by Industry (2025)

CSAT scores vary significantly by industry, shaped by the complexity of customer interactions, the nature of the product, and customer expectations coming in. The table below shows average scores across major sectors, based on ACSI data.

| Industry | Average CSAT Score |
| --- | --- |
| Consulting | ~84% |
| Full-Service Restaurants | ~84% |
| Hospitality / Hotels | ~82% |
| E-Commerce / Retail | ~80–82% |
| Healthcare | ~80–81% |
| Banking / Financial Services | ~79–80% |
| Software / SaaS | ~78% |
| Online Travel | ~76% |
| ISPs / Internet Providers | ~68% |

Source: American Customer Satisfaction Index (ACSI) 2024–2025

The cross-industry average sits around 78%. A score above 80% is generally considered strong; below 70% warrants immediate attention.

CSAT vs. NPS vs. CES

CSAT is one of three metrics most commonly used to measure customer experience. Each captures something different, and all three are most effective when used together rather than in isolation.

| Metric | Core Question | Scope | Best Used For |
| --- | --- | --- | --- |
| CSAT | "How satisfied were you?" | Single interaction | Diagnosing specific touchpoints |
| NPS | "Would you recommend us?" | Overall relationship | Brand health, loyalty tracking |
| CES | "How easy was it?" | Single interaction | Identifying process friction |

Per the original CEB research published in Harvard Business Review, reducing customer effort is a stronger predictor of loyalty than increasing satisfaction — making CES and CSAT complementary rather than redundant. The recommended approach: run CSAT and CES post-interaction, NPS quarterly.

Why CSAT Matters

CSAT earns its place in a support stack because it connects day-to-day interaction quality to outcomes that actually matter to the business. Here is why it is worth tracking carefully.

It flags problems before they become churn.

A customer who rates an interaction 2 out of 5 is likely already shopping for alternatives. CSAT gives you a structured signal — channel it into a recovery workflow before the relationship breaks.

It's a coaching tool.

Segmenting CSAT by agent, queue, or channel reveals exactly where quality is breaking down. Rather than guessing, managers can coach based on real interaction data.

It connects to revenue.

88% of customers say good service makes them more likely to purchase again. Customers who report consistently high satisfaction have measurably higher lifetime value and lower churn rates.

It benchmarks across your organization.

CSAT creates a shared, measurable definition of "good" across teams, shifts, and support tiers.

What Drives CSAT Up (and Down)

CSAT isn't moved by a single factor — it's the product of several compounding variables. Understanding the main drivers on both sides helps teams prioritize where to focus improvement effort.

Drives CSAT up:

Fast first response time (FRT) — speed is the single most-cited driver of satisfaction

First contact resolution (FCR) — issues resolved in one interaction score significantly higher

Personalized responses that demonstrate knowledge of the customer's history

Seamless channel switching without the customer having to repeat themselves

Drives CSAT down:

Long wait times and slow responses

Being transferred multiple times

Agents who lack context and ask customers to repeat information

Interactions that require follow-up calls or emails to fully resolve

CSAT and AI Customer Service

AI-powered support has a complex relationship with CSAT. Badly deployed AI — bots that can't resolve issues and don't escalate gracefully — tanks satisfaction scores. But AI that handles routine inquiries instantly, surfaces full customer context for human agents, and routes intelligently to the right resource can improve CSAT while simultaneously reducing cost per contact.

The key is keeping a clear human-in-the-loop — knowing when an AI agent should hand off to a human, and making that handoff seamless.

CSAT Best Practices

Tracking CSAT is table stakes. What separates high-performing teams is how they act on it — how quickly they close the loop on bad scores, how granularly they analyze the data, and how they connect satisfaction metrics to operational and revenue outcomes. Here's what that looks like in practice.

1. Survey immediately, not later.

Send CSAT surveys within minutes of interaction close, not hours. Delayed surveys capture memory of an experience, not the experience itself. Response rates and accuracy both drop sharply after 24 hours.

2. Always include an open-ended follow-up.

A score tells you that something went wrong. A comment tells you why. Even a simple "What could we have done better?" field turns CSAT from a lagging indicator into an actionable coaching signal.

3. Route low scores immediately.

Any CSAT score of 1 or 2 should trigger an automatic recovery workflow — not a queue. Assign it to a senior agent within the hour. Customers who receive proactive follow-up on a bad experience have measurably higher retention than those who don't hear back at all.
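The trigger logic can live in a simple dispatcher. A hypothetical sketch — `SurveyResponse`, `assign_to_senior_agent`, and `log_score` are illustrative names, not a real API:

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    ticket_id: str
    score: int        # 1-5 CSAT rating
    comment: str = ""

def route_response(response, assign_to_senior_agent, log_score):
    """Hypothetical recovery dispatcher: scores of 1-2 trigger an
    immediate escalation instead of sitting in a review queue."""
    if response.score <= 2:
        # Escalate for follow-up within the hour
        assign_to_senior_agent(response.ticket_id)
    # Every response is still recorded for trend analysis
    log_score(response)
```

The point of the sketch is the branch: low scores get an action, not just a row in a report.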

4. Segment before you act.

A company-wide CSAT average hides more than it reveals. Segment by channel, agent, queue, issue type, and customer segment before drawing conclusions. A 78% average might mask a 60% score on billing interactions and a 92% score on shipping — two very different problems requiring very different responses.
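Segmentation is just a group-by over scored responses. A minimal sketch using the billing-vs-shipping example above (the data is illustrative):

```python
from collections import defaultdict

def csat_by_segment(responses):
    """Group (segment, score) pairs and compute top-two-box CSAT per segment."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    return {
        seg: round(sum(s >= 4 for s in scores) / len(scores) * 100, 1)
        for seg, scores in buckets.items()
    }

data = [("billing", 2), ("billing", 3), ("billing", 5),
        ("shipping", 5), ("shipping", 4), ("shipping", 5)]
# Billing comes out around 33%, shipping at 100% - the blended
# average would hide that billing is the problem.
print(csat_by_segment(data))
```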

5. Track trends, not just snapshots.

A 78% CSAT that's been declining for 8 weeks is more alarming than a 72% that's improving. Set a regular cadence for trend review — weekly for operational monitoring, monthly for strategic reporting — and flag directional changes before they become crises.
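One simple way to flag direction is to compare the mean of the most recent weeks against the mean of the weeks before them. A sketch under that assumption (the four-week window and the weekly scores are illustrative choices, not a standard):

```python
def weekly_trend(scores, window=4):
    """Compare the mean of the last `window` weeks against the mean of
    the `window` weeks before that, and report the direction."""
    if len(scores) < 2 * window:
        return "insufficient data"
    recent = sum(scores[-window:]) / window
    prior = sum(scores[-2 * window:-window]) / window
    if recent < prior:
        return "declining"
    if recent > prior:
        return "improving"
    return "flat"

# Eight weeks of CSAT: every individual number looks acceptable,
# but the direction is steadily down.
weeks = [80, 79, 79, 78, 77, 77, 76, 75]
print(weekly_trend(weeks))  # declining
```

A real dashboard would likely use a smoothed rolling average, but even this crude comparison catches the "78% and falling" case before it becomes a crisis.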

6. Pair CSAT with CES for a complete picture.

CSAT tells you if the customer was happy with the outcome. CES tells you if the process to get there felt easy. Teams that track both identify a class of problem CSAT alone misses: interactions where the customer was ultimately satisfied but the friction to get there is quietly eroding loyalty.

7. Never optimize CSAT in isolation from FCR.

High CSAT on a contact that required three follow-up calls is not a clean win — it's an expensive one. Always review CSAT alongside first contact resolution rate. The goal is high satisfaction and efficient resolution, not one at the cost of the other.

Related Terms

  • First Response Time (FRT)

    The time between a customer submitting a support request and receiving the first substantive reply from a human agent or AI — one of the most closely watched speed metrics in customer service.

  • Average Handle Time (AHT)

    The average total time a support agent spends on a customer interaction, including talk time, hold time, and after-call work — a key contact center efficiency metric.

  • Customer Effort Score (CES)

    A metric that measures how much effort a customer had to exert to get their issue resolved — a stronger predictor of loyalty than satisfaction alone.

  • Cost Per Contact

    The total cost of running customer support divided by the number of contacts handled — the primary financial efficiency metric for contact centers.

See these concepts in action with Kustomer.

Request a Demo