Everything CX Leaders Need to Know about Customer Satisfaction Metrics

Customer service leaders have a lot of metrics to track and interpret, and customer satisfaction data is among the most important (and often underutilized). Satisfaction metrics aren’t just for evaluating the efficacy of your support agents; they also correlate strongly with customer lifetime value and loyalty, and can provide valuable insight to teams throughout your organization.

We’ve compiled some of the most frequently asked questions our CX team receives about customer satisfaction metrics. Use this guide as a quick reference point for CSAT, CES, NPS, and sentiment analysis.

What are the most common customer satisfaction metrics?

There are four core ways that customer service leaders track satisfaction:

  1. NPS
  2. CSAT
  3. CES
  4. Sentiment

Here’s a quick (and simplistic) way to think of them: NPS is a measure of loyalty, CES is a measure of effort, CSAT is a measure of satisfaction, and sentiment is a measure of emotion. Finding the right metrics for your customer service operation requires setting a clear purpose for the reporting. The metrics, questions, and frequency you select should align with your high-level goals (e.g., do you primarily want to track brand loyalty, improve resolution time, provide product feedback, or monitor agent effectiveness?).

Quick Guide to CSAT (Customer Satisfaction Score)

Customer Satisfaction Score (CSAT) is most often used to measure a customer’s feelings about a specific interaction with your support team. It measures the agent interaction itself, rather than how difficult it was to accomplish something or how the customer feels about the brand overall. For that reason, it’s typically sent immediately following an interaction with an agent.

“One of the benefits of CSAT surveys is that you can gather feedback from customers immediately after an interaction with your team. This helps you better understand customers’ experiences in real time, and can segment the results by agent, team and most importantly channel,” notes Kustomer’s Senior Product Manager John Merse. “In a true omnichannel environment it’s important to understand that each channel is unique and requires a specific communication style. For example, while you may have a 90%+ satisfaction via email, if you are not tracking chat or SMS, you might find that your communication is not as effective and your overall customer satisfaction not as high as you think.”

Relying on one overall CSAT calculation for an entire customer support operation often isn’t granular enough for an enterprise organization. It’s considered best in class to also run segmentations to identify any outlier activity. For example, are you segmenting your CSAT scores by demographic or product? And how are you combining CSAT with other metrics more indicative of customer value or loyalty? Read on for more info about how these tools can be used together.
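As a sketch of what that segmentation could look like in practice, the snippet below computes a CSAT score per segment from raw survey responses. The field names (`channel`, `rating`) and the 1-5 scale with 4 or above counted as satisfied are illustrative assumptions, not a prescribed format:

```python
from collections import defaultdict

def csat_by_segment(responses, key):
    """CSAT per segment: percent of ratings of 4 or 5 on a 1-5 scale."""
    totals = defaultdict(int)
    satisfied = defaultdict(int)
    for response in responses:
        segment = response[key]
        totals[segment] += 1
        if response["rating"] >= 4:
            satisfied[segment] += 1
    return {seg: round(100 * satisfied[seg] / totals[seg], 1) for seg in totals}

surveys = [
    {"channel": "email", "rating": 5},
    {"channel": "email", "rating": 4},
    {"channel": "chat", "rating": 2},
    {"channel": "chat", "rating": 5},
]
print(csat_by_segment(surveys, "channel"))  # {'email': 100.0, 'chat': 50.0}
```

The same function works for any segmenting field you record on a response, such as agent, team, or product line.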

What is a good CSAT score?

The average CSAT rating is 8.4, according to survey provider Nicereply, which benchmarks a strong CSAT as an average rating of 8 or higher. The ACSI also offers customer satisfaction benchmarks segmented by industry.

Quick guide to CES (Customer Effort Score)

CES is a newer metric that focuses on experiences with support, typically rating the amount of effort a customer had to go through to resolve their issue. “You can essentially think of CES as tracking the effort a customer puts into using your product or service. The more effort that is needed over time, the more it will likely erode their loyalty,” summarizes Merse. A CES survey, for example, might ask to what extent a customer agrees with the statement “X brand made it easy for me to handle my issue.” This score helps measure the overall effectiveness of support, as opposed to specific agent interactions.

Why should CX leaders focus on customer effort? “If you can only measure one thing, it should be effort,” says Sarah Dibble, executive advisor at Gartner (formerly CEB, which created the metric). “Our research finds that effort is the strongest driver to customer loyalty.” Monitoring CES can help support team leaders uncover high-effort pain points in customer interactions — for example, a common trend is lower CES scores when support is available only on limited channels or time periods.

When are CES surveys typically sent? Like CSAT, CES surveys are typically sent immediately following an interaction with the support team, although the timing should be customized to meet the objectives of your team.

What is a good CES score?

Your CES scores will obviously vary depending on the question asked and the scale used (e.g., 1-5 vs. smiles/frowns). According to provider Nicereply, look for a bell curve with most responses around a 5 or 6. If your goal is a best-in-class operation, making support frictionless should be a top priority.
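One way to check for that bell curve is to tally the average and distribution of your raw effort ratings. A minimal sketch, assuming a 1-7 agreement scale (the scale and the sample scores are illustrative):

```python
from collections import Counter

def ces_summary(scores, scale_max=7):
    """Average effort score plus a count of responses at each scale point."""
    counts = Counter(scores)
    average = sum(scores) / len(scores)
    distribution = {point: counts.get(point, 0) for point in range(1, scale_max + 1)}
    return average, distribution

scores = [5, 6, 6, 7, 5, 4, 6, 5, 7, 6]
average, distribution = ces_summary(scores)
print(average)       # 5.7
print(distribution)  # {1: 0, 2: 0, 3: 0, 4: 1, 5: 3, 6: 4, 7: 2}
```

A long tail of low scores in the distribution is the signal to dig into which interactions are generating high-effort experiences.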

Can I use CES in combination with CSAT or NPS?

Yes, many companies find that combining CES and CSAT or CES and NPS gives them a more complete understanding of the customer support experience. Although a CES score tells you effort level, it doesn’t get to the why of the customer’s response or how they feel overall about your brand.

Quick guide to NPS (Net Promoter Score)

NPS is calculated as the percentage of a company’s true advocates (“9” and “10” on a 0-10 recommendation scale) minus its detractors (“0” through “6” on the same scale). Based on research by Bain & Co, an NPS survey will always look the same: a scale from 0 to 10. The question itself can vary slightly, but most often reads as: “How likely are you to recommend X Brand to a friend or colleague?”

NPS is often used as a way to identify strong brand enthusiasts and also reach out to detractors. If a customer leaves a negative score, it’s considered a best practice to reach out for more information or to improve the situation with an offer or proactive support.

NPS as a metric also has its detractors (pun intended). In its calculation, a score of six is essentially equal to a zero, meaning that improving a customer’s selection from a zero to a six makes no difference in the overall NPS score. While that is true in the aggregate, improving individual customers’ NPS scores has great value. Armed with knowledge about why a customer gave a certain rating, customer service agents can directly address those issues and work with the customer to improve the situation. Companies can even compare CSAT and NPS scores to see how their support teams are helping to improve individual and overall trends over time.

Quick guide to sentiment scoring for customer service

Sentiment analysis, also known as opinion mining, is the process of determining whether language reflects positive, negative, or neutral sentiment. For customer service, sentiment analysis looks at the emotion behind customer communications. Using natural language processing capabilities, customer experience agents and supervisors can gain automated insights into the emotions behind customer interactions.

Sentiment scores (which assign a value to the message, conversation, and customer) can be used in combination with tools like NPS to get a multi-dimensional picture of customer satisfaction. Generating reports based on sentiment changes or the themes of positive or negative sentiment (like a specific product or experience) can help you better understand your customers.
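Production sentiment analysis relies on trained natural language processing models, but the basic idea of assigning a positive, negative, or neutral value to a message can be illustrated with a toy lexicon-based scorer (the word lists and scoring rule here are purely illustrative):

```python
# Tiny illustrative word lists; real systems use trained NLP models.
POSITIVE = {"great", "love", "easy", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "frustrating", "difficult", "waiting"}

def sentiment_score(message):
    """Score a message from -1.0 (negative) to 1.0 (positive); 0.0 is neutral."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos == neg:
        return 0.0
    return (pos - neg) / (pos + neg)

print(sentiment_score("The agent was helpful and the fix was fast!"))  # 1.0
print(sentiment_score("Still waiting, this is frustrating."))          # -1.0
```

Aggregating scores like these at the message, conversation, and customer level is what makes the trend reports described above possible.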

Why is customer satisfaction important?

There is a range of data points supporting the value of satisfied customers. The core reasons to care about customer satisfaction are obvious: customer loyalty, customer lifetime value, and word of mouth. However, there are also less obvious reasons. Customer satisfaction also correlates with agent happiness (ASAT, or agent satisfaction); no one wants to make people unhappy all day, and there’s a lot of research showing that when one goes up, so does the other. Higher agent happiness, in turn, correlates with retention and lower recruiting costs.

Here are some additional stats about why investing in customer satisfaction delivers ROI:

  • A 5 percent increase in customer retention can increase profits by 25 to 95 percent, according to research from Bain & Company.
  • When service reps can provide better experiences to customers, they feel better about their jobs, and their intent to stay increases by up to 17%, according to Gartner.
  • According to CEB Global, 96% of consumers who reported having difficulty solving a problem were more disloyal.

What are some strategies for improving customer satisfaction?

There’s obviously a wealth of strategies and improvements CS and CX leaders can make to improve customer satisfaction. Here are a few of the focus areas that can have huge payoff:

  • If you don’t already have it, build executive buy-in and consensus for customer service as a brand differentiator. Sharing examples from leading people-first brands and category disruptors can help drive internal conversations about change. Many enterprise CX organizations are reinventing the names, skillsets, and trainings of their support teams because of the importance of the support experience to customer value.
  • Consider proactive support as a means to divert and avoid negative customer experiences. This can mean everything from pushing notifications about shipping delays to getting ahead of negative reviews with an offer or product exchange.
  • Evaluate whether your customer service technology is empowering your agents to quickly and efficiently resolve customer issues and deliver exceptional quality. Have high expectations for your technology partners to enable best-in-class solutions that have a unified omnichannel experience.
  • Segment your satisfaction scores by demographics, product, support channel, and more to see if there are any underlying problems in specific areas.
  • Invest in self-service content that’s easy to find and navigate. A strong Knowledge Base or FAQ section can be the foundation for a more efficient customer support function, allowing customers to resolve their own questions without needing to contact support.

Got more questions about measuring and interpreting customer satisfaction metrics? Reach out to connect with a CX expert from Kustomer.


Tell Me How You Really Feel: The Best Metric For Finding What Your Customers Need

Brandon McFadden is Kustomer’s Customer Success & Support Manager; you can follow him on Twitter at @brandontonio. Read his post on using CES to help your product and service teams work better together here. The following was adapted from a workshop delivered at Support Driven Expo in Portland, OR.

After recently writing a piece about using CES to help your product teams, I received some questions asking, among other things, what CES even is. So I wanted to go over that here.

Customer Effort Scoring is one of the most effective ways to understand how your audience feels about their experience, and it has some distinct advantages over methods like CSAT and NPS. The principle is simple: you’re asking your customers how difficult it was to solve their issue or complete a transaction. Like NPS or CSAT, it only takes one question to get the information you need: typically, a request to rate on a numbered scale how easy it was to get the issue resolved.

So what makes a Customer Effort Score such a useful metric? The answer is rooted in human nature, specifically feelings. 96% of customers don’t complain when they’re unhappy; however, they’re four times as likely to defect to a competitor if they have a problem. So while finding out if your customers enjoy their experience is critical, it doesn’t always tell the whole story. Here’s the kicker: 70% of buying experiences are based on how the customer feels they are being treated. So even if your service is best-in-class for your industry, if your customers have unknown, higher expectations and your service feels lacking, they’re going to retain that feeling going forward. So the real question for the data-driven team is: how do you quantify feelings?

That’s why CES is so useful—it can tell you how your customers really feel, where other methods focus on intent and how your customers see themselves instead of addressing the feelings that drive their actions. While your clients may give a high CSAT score, what they’re saying is “I really liked talking to your team, they are AMAZING!” (and who doesn’t want to hear that?) but what they might also be thinking (feeling) is, “Why did I even have to call in the first place?” Most people don’t want to speak badly about or hurt the career of an agent, especially when they solved the problem, but they will hold a negative experience against your brand as a whole when their expectation was that the fix should have been easier—or if they never expected to have this problem to start with. To make matters worse, this usually only manifests itself when it is time to recommend your service/product. Lesson? Your agents might be doing great work (of course they are, you hire great people), but that doesn’t always lead to more referrals and repeat customers.

Typically this is where NPS seems like it should provide the other half of the picture you’re missing from CSAT. If customers are satisfied but not willing to recommend you, then something in your experience is lacking, right? There’s nothing wrong with that assumption, but NPS also has pitfalls of its own, once again sabotaged by feelings. Often, customers will say they would recommend you to their friends, but in practice, they don’t. Interestingly, the problem is found in the NPS question itself: “How likely are you to recommend this product to a friend?” When we think of our friends, we think of people just like us: the same aptitude, the same patience, the same willingness to put up with the “why did I even have to call about this” issues. But in reality, when it comes time to make the actual recommendation, they balk. They think “oh, they aren’t as technical as me” or “they likely don’t have the same patience with that issue that I did.” So while maybe they would recommend your product in general, on a one-to-one basis they might have lingering doubts about a difficult experience and not feel their personal friends would have the patience to deal with your service.

What NPS and CSAT don’t do well is make it easy to identify your customers’ hidden frustrations and reluctance to advocate for you in the real world. Neither helps you pinpoint the parts of your product or process that cause the most frustration, rather than simply the highest volume. This is why 82% of US companies report that they are “customer-centric,” while only 18% of US customers agree. Clearly, there’s a disconnect between how companies see themselves and how customers see them. But if their NPS and CSAT scores are high, why should they think otherwise?

Ultimately, this is because customers are thinking: “If you really cared about me, then why are you making it so hard to do something I think should be so easy?” It’s probably a question you’ve asked yourself when you’ve been on the phone with customer support. Fortunately, with CES, these feelings can be captured and quantified.

Let’s look at an example of the Customer Expectation Gap in action. I recently had two experiences where my expectations and the reality were way off, giving me two very different opinions of the organizations I was dealing with after the fact. Those organizations were Amazon and the DMV—about as different as you can get. One is “tech” and optimized to solve your problems, and the other is the DMV.

I’m pretty sure that if I offered you the choice of getting a new license at the DMV or requesting a refund from Amazon, you would choose Amazon every time (and for good reason; their support is fantastic). While I didn’t have to choose in the moment, I did have to get a refund for a Netflix gift card purchased through Amazon (silly me, I didn’t coordinate with my brother). Given their renowned and very streamlined buying experience, I thought the process would be just as easy. In a way, you could say that they trained me to think this would be just as easy as buying. This, frankly, is the blessing/curse of tech. We spend endless time making things easier, automating, reducing effort, which means it hurts that much more when this doesn’t happen with support resolutions. Conversely, around the same time, I needed to replace my license at the New York City DMV, a much-maligned experience and a staple of ’90s stand-up (albeit for good reasons). I expected this to be an all-day ordeal (ok, maybe half a day), because it had been before, in multiple states, over the past 20 years. I had been trained to expect the worst.

However, getting my refund from Amazon was the real bureaucratic nightmare, stretching across four calls and two 15-minute chat sessions, and taking over two days to resolve. The DMV, on the other hand, was a breeze. I booked ahead online, found an “express office”, checked in at a screen, followed an express lane to an automated machine, and was done in less than 30 minutes. Now I’ve been bragging about the NYC DMV to my friends (who think I’m crazy), and I certainly haven’t recommended ever getting a gift card from Amazon. The funny thing is that if I had called up Amazon expecting a hassle, I wouldn’t have remarked on it, and if I had known that the DMV had become so cutting-edge (kind of), then maybe I wouldn’t have been wowed. So as you can see, it really is the combination of how I felt about the experience, my expectations, and the relative effort I had to expend that determined whether or not I became an advocate.

To be clear, none of this is to say that you shouldn’t measure NPS and CSAT. You absolutely should, and they are crucial metrics for understanding your business. But if you want to know how your customers really feel about your experience, they leave too many gaps. With CES, you can fill those gaps and get the context you need to identify where your experience is weak and how you can improve it. So maybe start by adding a second CES question to your post-issue CSAT survey; you may just be surprised by the results. Remember, it’s not what your customers say but how they feel that creates impact at the moment of referral, when they make repeat purchases, and when they decide to churn. If you would like to learn more about how you can act on this information, feel free to check out the companion piece: How CES Can Help Your CX and Product Teams Work Better Together.

To learn more about how Kustomer can help you better understand your customers, request a demo below!

Conversations with Kustomer Podcast: How can Marketing and Customer Support Create a Consistent Experience? Featuring Sue Duris

As Customer Experience overtakes product and price as the key differentiator for many brands, it’s increasingly important that all parts of the organization work together to deliver seamless communications and service.

Our Director of Marketing Chen Barnea sat down with Sue Duris, Director of Marketing and Customer Experience for M4 Communications and a leading CX strategist, to discuss the evolution and importance of CX for B2B and B2C companies across verticals. While their chat covered a lot of ground, we’ve highlighted some of the key points below.

Investing in CX pays off. This is especially true if you’re a leader. According to a Temkin report, CX Leaders see a 17% compound average growth rate, versus 3% for laggards. Customers that receive a great experience are likely to purchase again, and 11 times more likely to recommend a product or brand.

Consistency is key, especially for retail. But it’s also very important for B2B organizations too, especially those with a long sales cycle. Both kinds of organizations need to have a C-suite that is championing that vision of the customer experience and explaining why it’s so important to rally behind it, and how everyone fits in. Without that commitment, alignment, ownership, Customer Experience initiatives just won’t work.

CX is not a shiny new toy. You need to have a strategy and purpose for tackling CX. It can’t be done piecemeal, either, with the Contact Center pioneering an initiative, but then the experience dropping off once a customer contacts Sales or Marketing. Inconsistency is one of your greatest enemies to a great experience.

Don’t neglect the employee experience. Engaging your employees and communicating what your experience should look and feel like is crucial. They’re the ones who are making that experience a reality. It takes more than just surveys. You need to speak to your employees in person and get qualitative insight, backed up by hard metrics. Once you can take those insights, build them back into your experience, optimize your CX, then look for insights again, you can create a closed loop of constantly improving experience.

There are three kinds of metrics. Metrics based on perception, description, and outcome. Perception-based metrics are about your experience and how your customer understands it. They include metrics such as NPS, CES, and satisfaction. Description metrics are based on observable events, like FCR and AHT, and ensure you’re being efficient and effective. And outcome metrics are things like how many customers renewed their contracts or upgraded their package. Bottom line: you need all kinds of metrics to cover the entire scope of experience.

Experience is a mindset. It’s more than just a strategy or process. It’s who you are as a company, and as individuals. Customer centricity needs to start before a prospect even knows about you—it’s in your bones, your culture, and it’s how you truly create consistency. Maximizing Customer Lifetime Value is the goal of any CX effort, and the only way to do that is to have a mindset where you’re putting your customers first.

Start small. If you haven’t invested in CX at all, you can always begin by sending out an NPS survey and segmenting customers based on that score. From there, you can work in more complex layers of metrics and build up your understanding.

This is just a taste of the wide-ranging discussion on the podcast, so if this sounds relevant to your needs, be sure to have a listen.

To learn more about how Kustomer can help you deliver a more consistent and effective experience, request a demo with the form below!

How CES Can Help Your CX and Product Teams Work Better Together

Brandon McFadden is Kustomer’s Customer Success & Support Manager; you can follow him on Twitter at @brandontonio.

This post was adapted from a workshop delivered at Support Driven Expo in Portland. We had a blast sharing and learning with the Support Driven audience. Check out their recap here, as well as one from Jeremy Watkin at FCR that discusses our presentation!

While they may not always understand each other, your Customer Experience (CX) and Product teams actually do want the same things. However, they speak two different languages. With the right metrics, specifically using Customer Effort Scores, you can make informed, data-backed decisions from customer feelings that will ensure you’re making the right choice.

Product goals typically focus on adding new features, achieving parity with competitors, or fixing issues that are affecting adoption, ease of use, or the ability to wow your customers. Their job is to anticipate what the customer will want next.

On the other hand, CX is usually focused on what customers say they want now—because they hear from them every day, all day. CX wants faster handle times, lower email volumes, reduced complexity, and the power to wow your customers.

When these two teams work in sync, amazing things can happen. CX has especially deep insight into customers’ wants and needs based on thousands of firsthand interactions, while Product has the full scope of your company’s technological capabilities, business goals, and product roadmap, and is great at coming up with new innovations before customers even know what they want. However, there’s often a recurring problem in the Product/CX dynamic. When Product has the window of time to ask CX for input on what “problems to tackle next”, the two sides can disagree. Looking at where customers spend the most time using the platform and where they’re having the most difficulties, CX will advocate for smoothing out a more complex problem that affects fewer users. Product will often lean toward reducing the highest-volume issue (because that represents a larger base of users and a more frequent touchpoint), so that a greater number of users will have an even faster experience.

While seemingly different, there is one key ingredient: Both teams want to wow customers! Finally, common ground!

Another common language we all speak are shared company goals. The aim of all these features and fixes are the same: more renewals, more referrals, more repeat customers, and faster resolutions. Making decisions about how to get there can be tricky. This is because it is hard to measure the feelings of your customers, yet feelings are how humans make decisions.

At this point most teams will look to NPS or CSAT to help give direction on which issues to focus on fixing, but those traditional metrics can often be very misleading. A customer who gives you an NPS score of “10” may only actually recommend you when they find someone who they feel is just like them (as smart, and with the patience to put up with the complex support issues they faced). Most of the time, when the moment comes to make the recommendation their NPS score said they would, they don’t do it. Likewise, CSAT may provide a very high 9/10 rating of your amazing agents, but what the customer is left feeling is “why did I even have to call in the first place?” Feelings are the gateway to actions. So while customers like spending time with your agents, it doesn’t mean they will feel comfortable continuing to deal with these issues (churn) or suggesting you to a friend. This is all because of the expectation, or effort, gap.

So, how do you get to the root of this disagreement in expectations AND quantify feelings? It seems like the correct course should be obvious. Surely Product is in the right on this one: the fix that affects the most users (in this example, improving refund requests) should be completed first. Why would the CX team think otherwise?

This is where CES shines. As CX pros, we see a different side to the story in this data. The problem that is only affecting a minority of users (plan correction, in this case) is where you’re letting customers down the most. Sure, it’s lower in quantity/volume than the other issues, but those customers are having a far worse experience based on their expectations, and taking up just as much of CX’s attention and time as the other issues. CX hears their complaints, and their frustration is visceral. From your customers’ perspective, it seems like making their experience way better would only require you to “just change a bit of code” (cut to thousands of engineers slamming their heads against their desks). AHT is important but only tells part of this story; CES makes it much clearer.

Measuring CES puts the severity of the problem in stark relief, and puts a hard number next to what your CX team has been feeling all along. Now it’s easy to see that these customers are doing more than spending extra time on the phone: they’re actively struggling to deal with your company, and you’re probably losing them as a result. This issue is even greater if you’re a startup designed to “save time” or “simplify” people’s lives; you’re literally training your customers to expect everything (including service) to be smarter, faster, and effortless. It’s even worse if you are in an industry where external factors can slow down resolutions (medical, financial, insurance, etc.). Improving the other issues on the list shouldn’t be neglected, but prioritize the unhappiest customers first. Most won’t notice if their attempt to get a refund was 15 seconds faster (a 25% efficiency gain!), but they will definitely appreciate it when a more complex issue becomes a breeze while the “industry norm” is so much worse, and that will likely save your CX team more time in the long run.

There’s even a school of thought that says you shouldn’t fix those simple problems that your team is great at handling and consistently turning into a wow experience, because each one is another chance to exceed expectations. Every interaction is a chance to build a deeper relationship with your customers, and if you’re delighting thousands of them with a simple call or email, you’re deepening each one of those connections in the process, despite there having been a problem in the first place. Remember, you are often judged more on your resolution than on the problem itself. Of course, you want every experience to be as smooth as possible and for customers to never have a problem, but by not eliminating these home-run issues entirely, you keep easy opportunities to impress and excite your customers. It’s certainly something to consider when making the case not to simply fix the highest-volume issues. And with CES, you’ll always know if those issues are beginning to wear your audience’s patience thin.

In my experience, Product and CX are on the same page 95% of the time, but they may not always be speaking the same language. So when there is a disconnect, it always comes down to looking at the data to clear up the disagreement. Ultimately, CX deals with feelings more directly than any other team, and is therefore tasked with quantifying the qualitative. For that reason, having a platform that measures CES can drive CX and Product teams to make your customers’ experience exceed their expectations.

The Why, How, and What of Measuring Customer Service Quality

This is a guest post by Jakub Slámka, CMO at Nicereply

As customer service professionals, we’re in the business of making sure our customers get the highest quality support. We strive to help them succeed with the highest caliber guidance we can provide, and to solve their problems with excellent solutions and service. When we do that, it feels good.

To create long-term relationships with your customers, you need to understand how and why they act the way they do. There are three surveys that work really well for this: Customer Satisfaction Score, Net Promoter Score, and Customer Effort Score. All of them involve surveying customers to get their opinion, but they ask different questions to find out different things.

Let’s break them down the Simon Sinek way so you know exactly Why, How, and What to measure when it comes to customer service quality.

Customer Satisfaction Score (CSAT)

Customer Satisfaction Score (CSAT) is most often used to measure a customer’s feelings about a specific interaction with your support team. It can also refer to how happy a customer is generally, though in the customer service industry it usually refers to an agent or a customer support team.

WHY should you measure CSAT?

Measuring customer satisfaction means having a better idea of what works to keep customers satisfied – and what leaves them unhappy. This way you’ll know what to keep up and what to fix. You’ll also be able to gauge performance of not just support generally, but specific teams and individuals as well.

HOW do you measure CSAT?

Customers receive a survey asking if they were happy or satisfied with the service they received, which they can respond to positively or negatively. The customer chooses their response on a scale from bad (or not satisfied) to good (or satisfied). To calculate the CSAT score, subtract the percentage of customers who were unhappy from 100%.
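That arithmetic is simple enough to sketch directly. Assuming responses are recorded as "satisfied" or "unsatisfied" (the labels here are illustrative):

```python
def csat_score(responses):
    """CSAT = 100% minus the percentage of unhappy customers."""
    unhappy = sum(1 for r in responses if r == "unsatisfied")
    return 100 - 100 * unhappy / len(responses)

# Three happy customers out of four yields a CSAT of 75.
print(csat_score(["satisfied", "satisfied", "unsatisfied", "satisfied"]))  # 75.0
```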

WHAT does a CSAT survey look like?

You can set up your CSAT survey in one of two ways. Either you can send out an email with a survey after a ticket is closed, or you can measure it in every email interaction with your customers in the form of “instant ratings”. The survey itself can take many forms: a Nicereply CSAT survey usually looks like three smileys portraying different emotions, two hands with thumbs pointing up or down, or a scale of 10 stars.

Net Promoter Score (NPS)

Net Promoter Score (NPS) was specifically developed to measure loyalty and to provide you with feedback about how well your products are received. This metric tells you how likely your customers are to recommend your services or products.

WHY should you measure NPS?

NPS offers a simple way to find out who your loyal customers are and to transform unhappy clients into satisfied promoters. You can use NPS to enhance your customer service, but it can also be used by your marketing department to gauge your customers’ feelings toward your product.

HOW do you measure NPS?

NPS is usually measured via a recurring survey (bi-monthly, yearly, etc.). In this survey, customers are asked the question “How likely are you to recommend *|COMPANY|* to a friend or colleague?” and respond on a scale from 0 (very unlikely) to 10 (very likely).

If a customer answers 6 or lower, they are a detractor. If they respond 9 or higher, they are a promoter. Customers responding 7 or 8 are passives.

NPS is calculated by subtracting the % of customers who replied as detractors from the % of customers who answered as promoters. NPS scores are not a percentage and range from -100 (very bad) to +100 (very good).
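The bucketing and subtraction above can be sketched as follows. This is an illustrative snippet, not a reference implementation; it assumes ratings arrive as plain integers on the 0–10 scale:

```python
def nps(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6).

    `ratings` is a list of integer survey answers on a 0-10 scale.
    Passives (7-8) count toward the total but neither bucket,
    which is why the result ranges from -100 to +100.
    """
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100.0

# 5 promoters, 3 detractors, 2 passives out of 10 -> NPS of 20.0
score = nps([10, 9, 9, 8, 6, 2, 10, 7, 9, 5])
```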

WHAT does an NPS survey look like?

Because it is based on research by Bain & Company, an NPS survey will always look the same: a scale from 0 to 10. The question itself can vary slightly. One example is “On a scale from 0-10, how likely are you to recommend this organization as a good place to work?” — this variant is also known as the Employee Net Promoter Score.

Customer Effort Score (CES)

Customer Effort Score is a highly specific measure of how much work your customer felt they had to do to solve an issue. Support teams using CES are able to find and eliminate friction points that cause high-effort experiences.

WHY should you measure CES?

Imagine having a problem you need to solve. Now imagine you have to jump through several hoops and switch multiple channels to get hold of someone willing to help you fix it. Even though this support agent might be “super nice”, there’s a big chance you won’t ever want to go through the same experience again.

The idea behind CES is that customers prefer doing business with companies that are easy to work with. In other words, CES measures the amount of effort customers experienced with your company as a whole.

HOW do you measure CES?

CES is typically measured as part of a post-service survey, sent after the resolution of a customer service conversation (usually 24 hours after a ticket is closed).

Similar to NPS, customers are asked to rate one simple statement: “*|COMPANY|* made it easy for me to handle my issue.” They indicate how strongly they agree or disagree on a standard scale from 1 (strongly disagree) to 7 (strongly agree).

Your CES is then the average of these ratings, although we recommend looking not just at your average score, but at the distribution as well. After all, if your scores are a mix of 7s and 1s, your experience is still confusing a lot of people.
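A short sketch shows why the distribution matters alongside the average. This assumes ratings are plain integers on the 1–7 scale described above:

```python
from collections import Counter

def ces_summary(ratings):
    """Return the average CES (1-7 scale) plus the rating distribution.

    A bimodal spread of 1s and 7s can hide behind a middling
    average, so both values are returned together.
    """
    average = sum(ratings) / len(ratings)
    return average, Counter(ratings)

# The average here is 4.0, but the distribution reveals that
# every customer scored either a 1 or a 7 - a polarized experience.
average, distribution = ces_summary([7, 7, 1, 1, 7, 1])
```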

WHAT does a CES survey look like?

Because it is based on a research paper by CEB, a CES survey will always ask the same question. The original CES used a scale of 5 answers, while the updated CES 2.0 uses a scale of 7.

Measure, Manage, and Improve

As the old saying goes, “Whatever gets measured gets managed.” Measuring quality and using what you learn to better meet customer expectations is what will propel your efforts to truly serve your customers and drive your business forward.

Try Nicereply for Kustomer for free and measure any and all of these metrics to get more feedback out of your customer interactions.

Jakub Slámka is CMO at Nicereply.

Schedule a demo