
CSAT is Not Enough—8 Metrics to Better Understand Your Customers

This post originally appeared on Business 2 Community.

There are plenty of metrics, scores, and benchmarks to track your customer experience, but some are better than others. From tracking efficiency and keeping down costs, to measuring the effectiveness of your experience and the happiness of your customers, it’s important to track a medley of different metrics so you can have a complete understanding and advance all of your goals. Here are the ones I think are absolutely crucial for delivering memorable, meaningful CX.

  1. Average Handle Time (AHT): Customers want their issues resolved—fast. Every extra minute they’re on the phone detracts from their experience and your bottom line. Some issues require more time to solve than others, but you need to measure AHT to have a working benchmark to figure out the sweet spot for your customers and agents.
  2. First Contact Resolution (FCR): If your customers don’t like waiting, they definitely don’t like having to call back multiple times to resolve an issue. FCR works well for determining the effectiveness of your service and the efficiency of your agents at solving customer issues—bad FCR is a sign that other things are going wrong.
  3. Zero Contact Resolution (ZCR): Even better than resolving your customers’ issues with a single touch is resolving them completely before they even have to contact you. Even good FCR may be misleading, as some customers have probably already tried to get an answer from your site or product manual, meaning that first call or email isn’t really the first touch. Proactive outreach that delights customers represents an even higher quality of experience, and is well worth tracking.
  4. Customer Satisfaction (CSAT): CSAT is extremely important for figuring out how happy your customers are with your service—it’s a crucial piece of information that any customer-centric company needs to have. However, keep in mind that customers are usually just responding to the interaction they just had, not to what they think of your brand as a whole. They may be very satisfied with an agent’s service, but still feel like they shouldn’t have had to call in the first place.
  5. Net Promoter Score (NPS): NPS will tell you whether your customers would recommend your business to someone else. This is another great way to learn how happy your customers are with your service, and whether you’re generating positive word-of-mouth. However, like CSAT, it can be misleading. Customers may give you an NPS score of “10”, but only actually recommend you to people they feel are smart and patient enough to put up with the complex support issues they faced. Also, only a minority of customers will take NPS or CSAT surveys, often only after a highly positive or negative interaction—skewing results even further.
  6. Customer Effort Score (CES): CES shines at helping prioritize the sticking points in your experience and where you can smooth out problems to exponentially improve your customer interactions. Since it asks your customers to rate an experience as a whole and not point fingers at individual agents, the results are much more impartial. You can see how your customers really feel, and make decisions about your CX accordingly.
  7. Sentiment: Using natural language processing to track sentiment across all text-driven channels can give you large-scale, unbiased insight. New analytics technologies make analyzing almost everything your customers say about your brand a reality. You still won’t know what your customers aren’t saying, but this level of insight is becoming easier to achieve and more accurate as technology continues to improve.
  8. Lifetime Value (LTV): Lifetime Value is the “Holy Grail” of customer metrics. Once you know how your customers are adding to your bottom line, and how experience plays into that, you can truly orient your business around your customer experience. A 5% increase in customer retention can increase a company’s profitability by 75%, according to Bain & Company—meaning that LTV should really become the crown jewel of your analytics suite.

By analyzing a spectrum of metrics, from granular performance-based ones to big-picture insights into the health of your CX as a whole, you can figure out where and how your experience needs to improve, and how it can lead your business towards explosive growth.

Tell Me How You Really Feel: The Best Metric For Finding What Your Customers Need

Brandon McFadden is Kustomer’s Customer Success & Support Manager; you can follow him on Twitter at @brandontonio. Read his post on using CES to help your product and service teams work better together here. The following was adapted from a workshop delivered at Support Driven Expo in Portland, OR.

After recently writing a piece about using CES to help your product teams, I received some questions asking, among other things, what CES even is. So I wanted to go over that here.

Customer Effort Scoring is one of the most effective ways to understand how your audience feels about their experience, and has some distinct advantages over methods like CSAT and NPS. The principle is simple: you’re asking your customers how difficult it was to solve their issue or complete a transaction. Like NPS or CSAT, it only takes one question to get the information you need. Below we can see two examples of CES survey questions:

So what makes a Customer Effort Score such a useful metric? The answer is rooted in human nature, specifically feelings. 96% of customers don’t complain when they’re unhappy; however, they’re four times as likely to defect to a competitor if they have a problem. So while finding out if your customers enjoy their experience is critical, it doesn’t always tell the whole story. Here’s the kicker: 70% of buying experiences are based on how the customer feels they are being treated. So even if your service is best-in-class for your industry, if your customers have unknown, higher expectations and your service feels lacking, they’re going to retain that feeling going forward. So the real question for the data-driven team is: how do you quantify feelings?

That’s why CES is so useful—it can tell you how your customers really feel, whereas other methods focus on intent and on how your customers see themselves, instead of addressing the feelings that drive their actions. While your clients may give a high CSAT score, what they’re saying is “I really liked talking to your team, they are AMAZING!” (and who doesn’t want to hear that?), but what they might also be thinking (feeling) is, “Why did I even have to call in the first place?” Most people don’t want to speak badly about or hurt the career of an agent, especially when that agent solved the problem, but they will hold a negative experience against your brand as a whole when their expectation was that the fix should have been easier—or if they never expected to have the problem to start with. To make matters worse, this usually only manifests itself when it is time to recommend your service or product. The lesson? Your agents might be doing great work (of course they are, you hire great people), but that doesn’t always lead to more referrals and repeat customers.

Typically this is where NPS seems like it should provide the other half of the picture you’re missing from CSAT. If customers are satisfied but not willing to recommend you, then something in your experience is lacking, right? There’s nothing wrong with that assumption, but NPS also has pitfalls of its own, once again sabotaged by feelings. Often, customers will say they would recommend you to their friends, but in practice, they don’t. Interestingly, the problem is found in the NPS question itself: “How likely are you to recommend this product to a friend?” When we think of our friends, we think of people just like us: same aptitude, same patience, same willingness to put up with the “why did I even have to call about this” issues. But in reality, when it comes time to make the actual recommendation, they balk. They think “oh, they aren’t as technical as me” or “they probably don’t have the patience I had with that issue”. So while they might recommend your product in general, on a one-to-one basis they have lingering doubts about a difficult experience and don’t feel their personal friends would have the patience to deal with your service.

What NPS and CSAT don’t do well is make it easy to identify your customers’ hidden frustrations and their reluctance to advocate for you in the real world. Neither helps you pinpoint the parts of your product or process that cause the most frustration, as opposed to simply generating the most volume. This is why 82% of US companies report that they are “customer-centric”, while only 18% of US customers agree. Clearly, there’s a disconnect between how companies see themselves and how customers see them. But if their NPS and CSAT scores are high, why should they think otherwise?

Ultimately, this is because customers are thinking: “If you really cared about me, then why are you making it so hard to do something I think should be so easy?” It’s probably a question you’ve asked yourself when you’ve been on the phone with customer support. Fortunately, with CES, these feelings can be captured and quantified.

Let’s look at an example of the Customer Expectation Gap in action. I recently had two experiences where my expectations and the reality were way off, giving me two very different opinions of the organizations I was dealing with after the fact. Those organizations were Amazon and the DMV—about as different as you can get. One is “tech” and optimized to solve your problems, and the other is the DMV.

I’m pretty sure that if I offered you the choice of getting a new license at the DMV or requesting a refund from Amazon, you would choose Amazon every time (and for good reason, their support is fantastic). While I didn’t have to choose in the moment, I did have to get a refund for a Netflix gift card purchased through Amazon (silly me, I didn’t coordinate with my brother). Given their renowned and very streamlined buying experience, I thought the process would be just as easy. In a way, you could say that they trained me to think this would be just as easy as buying. This, frankly, is the blessing and curse of tech. We spend endless time making things easier, automating, reducing effort—meaning it hurts that much more when this doesn’t happen with support resolutions. Inversely, around the same time, I needed to replace my license at the New York City DMV—a much-maligned experience and a staple of ’90s stand-up, albeit for good reasons. I expected this to be an all-day ordeal (ok, maybe half a day), because that’s what it had been in multiple states over the past 20 years. I had been trained to expect the worst.

However, getting my refund from Amazon was the real bureaucratic nightmare, stretching across four calls and two 15-minute chat sessions, and taking over two days to resolve. On the other hand, the DMV was a breeze. I booked ahead online, found an “express office”, checked in on a screen, followed an express lane to an automated machine, and was done in less than 30 minutes. Now I’ve been bragging about the NYC DMV to my friends (who think I’m crazy), and certainly haven’t recommended ever getting a gift card from Amazon. The funny thing is that if I had called up Amazon expecting a hassle, I wouldn’t have remarked on it, and if I had known that the DMV had become so cutting-edge (kind of), then maybe I wouldn’t have been wowed. So as you can see, it really was the combination of how I felt about the experience, my expectations, and the relative effort I had to expend that determined whether or not I became an advocate.

To be clear, none of this is to say that you shouldn’t measure NPS and CSAT. You absolutely should, and they are crucial metrics for understanding your business. But if you want to know how your customers really feel about your experience, they leave too many gaps. With CES, you can fill those gaps and get the context you need to identify where your experience is weak and how you can improve it. So maybe start by adding a second CES question to your post-issue CSAT survey; you may just be surprised by the results. Remember, it’s not about what your customers say—it’s how they feel that determines whether they refer you, make repeat purchases, or decide to churn. If you would like to learn more about how you can act on this information, feel free to check out the companion piece: How CES Can Help Your CX and Product Teams Work Better Together.

To learn more about how Kustomer can help you better understand your customers, request a demo below!

Conversations with Kustomer Podcast: How can Marketing and Customer Support Create a Consistent Experience? Featuring Sue Duris

As Customer Experience overtakes product and price as the key differentiator for many brands, it’s increasingly important that all parts of the organization work together to deliver seamless communications and service.

Our Director of Marketing Chen Barnea sat down with Sue Duris, Director of Marketing and Customer Experience for M4 Communications and a leading CX strategist, to discuss the evolution and importance of CX for B2B and B2C companies across verticals. While their chat covered a lot of ground, we’ve highlighted some of the key points below.

Investing in CX pays off. This is especially true if you’re a leader. According to a Temkin report, CX Leaders see a 17% compound average growth rate, versus 3% for laggards. Customers that receive a great experience are likely to purchase again, and 11 times more likely to recommend a product or brand.

Consistency is key, especially for retail. But it’s very important for B2B organizations too, especially those with a long sales cycle. Both kinds of organizations need a C-suite that champions that vision of the customer experience and explains why it’s so important to rally behind it, and how everyone fits in. Without that commitment, alignment, and ownership, Customer Experience initiatives just won’t work.

CX is not a shiny new toy. You need to have a strategy and purpose for tackling CX. It can’t be done piecemeal, either, with the Contact Center pioneering an initiative, but then the experience dropping off once a customer contacts Sales or Marketing. Inconsistency is one of your greatest enemies to a great experience.

Don’t neglect the employee experience. Engaging your employees and communicating what your experience should look and feel like is crucial. They’re the ones who are making that experience a reality. It takes more than just surveys. You need to speak to your employees in person and get qualitative insight, backed up by hard metrics. Once you can take those insights, build them back into your experience, optimize your CX, then look for insights again, you can create a closed loop of constantly improving experience.

There are three kinds of metrics. Metrics based on perception, description, and outcome. Perception-based metrics are about your experience and how your customer understands it. They include metrics such as NPS, CES, and satisfaction. Description metrics are based on observable events, like FCR and AHT, and ensure you’re being efficient and effective. And outcome metrics are things like how many customers renewed their contracts or upgraded their package. Bottom line: you need all kinds of metrics to cover the entire scope of experience.

Experience is a mindset. It’s more than just a strategy or process. It’s who you are as a company, and as individuals. Customer centricity needs to start before a prospect even knows about you—it’s in your bones, your culture, and it’s how you truly create consistency. Maximizing Customer Lifetime Value is the goal of any CX effort, and the only way to do that is to have a mindset where you’re putting your customers first.

Start small. If you haven’t invested in CX at all, you can always begin by sending out an NPS survey and segmenting customers based on that score. From there, you can work in more complex layers of metrics and build up your understanding.

This is just a taste of the wide-ranging discussion on the podcast, so if this sounds relevant to your needs, be sure to have a listen.

To learn more about how Kustomer can help you deliver a more consistent and effective experience, request a demo with the form below!

How CES Can Help Your CX and Product Teams Work Better Together

Brandon McFadden is Kustomer’s Customer Success & Support Manager; you can follow him on Twitter at @brandontonio.

This post was adapted from a workshop delivered at Support Driven Expo in Portland. We had a blast sharing and learning with the Support Driven audience. Check out their recap here, as well as one from Jeremy Watkin at FCR that discusses our presentation.

While they may not always understand each other, your Customer Experience (CX) and Product teams actually do want the same things. However, they speak two different languages. With the right metrics, specifically using Customer Effort Scores, you can make informed, data-backed decisions from customer feelings that will ensure you’re making the right choice.

Product goals typically focus on adding new features, achieving parity with competitors, or fixing issues that are affecting adoption, ease of use, or the ability to wow your customers. Their job is to anticipate what the customer will want next.

On the other hand, CX is usually focused on what customers say they want now—because they hear from them every day, all day. CX wants faster handle times, lower email volumes, reduced complexity, and the power to wow your customers.

When these two teams work in sync, amazing things can happen. CX has especially deep insight into customers’ wants and needs based on thousands of firsthand interactions, while Product has the full scope of your company’s technological capabilities, business goals, and product roadmap, and is great at coming up with new innovations before customers even know what they want. However, there’s often a recurring problem in the Product/CX dynamic. When Product has the window of time to ask CX for input on what “problems to tackle next”, the two sides can disagree. When looking at where customers spend the most time using the platform, and where they’re having the most difficulties, CX will advocate for smoothing out a more complex problem that affects fewer users. Product will often lean towards reducing the highest-volume issue (because it represents a larger base of users and a more frequent touchpoint), so that a greater number of users will have an even faster experience.

While seemingly different, there is one key ingredient: Both teams want to wow customers! Finally, common ground!

Another common language we all speak are shared company goals. The aim of all these features and fixes are the same: more renewals, more referrals, more repeat customers, and faster resolutions. Making decisions about how to get there can be tricky. This is because it is hard to measure the feelings of your customers, yet feelings are how humans make decisions.

At this point most teams will look to NPS or CSAT to help point toward the issues to focus on fixing, but those traditional metrics can often be very misleading. A customer who gives you an NPS score of “10” may only actually recommend you when they find someone who they feel is just like them (as smart, and with the patience to put up with the complex support issues they faced). Most of the time, when the moment comes to make the recommendation their NPS score said they would, they don’t do it. Likewise, CSAT may show a very high 9/10 rating for your amazing agents, but what the customer is left feeling is “why did I even have to call in the first place?”. Feelings are the gateway to actions. So while customers like spending time with your agents, it doesn’t mean they will feel comfortable continuing to deal with these issues (churn) or suggesting you to a friend. This is all because of the expectation, or effort, gap.

So, how do you get to the root of this disagreement in expectations AND quantify feelings? It seems like the correct course should be obvious. Surely Product is in the right on this one: the fix that affects the most users (in this example, improving refund requests) should be completed first. Why would the CX team think otherwise?

This is where CES shines. As CX pros, we see a different side to the story in this chart. The problem that is only affecting a minority of users (plan correction, in this case) is where you’re letting customers down the most. Sure, it’s lower in quantity and volume than the other issues, but those customers are having a far worse experience relative to their expectations, and taking up just as much of CX’s attention and time as the other issues. CX hears their complaints, and their frustration is visceral. From your customers’ perspective, it seems like making their experience way better would only require you to “just change a bit of code” (cut to thousands of engineers slamming their heads against their desks). AHT is important but only tells part of this story; CES makes it much clearer.

Measuring CES puts the severity of the problem in stark relief, and puts a hard number next to what your CX team has been feeling all along. Now it’s easy to see that these customers are doing more than spending extra time on the phone—they’re actively struggling to deal with your company, and you’re probably losing them as a result. This issue is even greater if you’re a startup designed to “save time” or “simplify” your customers’ lives: you’re literally training your customers to expect everything (including service) to be smarter, faster, and effortless. It’s worse still if you are in an industry where external factors can slow down resolutions (medical, financial, insurance, etc.). Improving the other issues on the list shouldn’t be neglected, but prioritize the customers who are unhappy first. Most won’t notice if their attempt to get a refund was 15 seconds faster (a 25% efficiency gain!), but they will definitely appreciate it when a more complex issue becomes a breeze while the “industry norm” is so much worse—and that will likely save your CX team more time in the long run.

There’s even a school of thought that says you shouldn’t fix those simple problems that your team is great at handling and consistently turning into a wow experience, because each one is another chance to exceed expectations. Every interaction is a chance to build a deeper relationship with your customers, and if you’re delighting thousands of them with a simple call or email, you’re deepening each one of those connections in the process, despite there having been a problem in the first place. Remember, you are often judged more on your resolution than on the problem itself. Of course, you want every experience to be as smooth as possible and for customers to never have a problem, but by not eliminating these home-run issues entirely you get easy opportunities to impress and excite your customers. It’s certainly something to consider when making the case to not always simply fix the highest-volume issues. And with CES, you’ll always know if those issues are beginning to wear your audience’s patience thin.

In my experience, Product and CX are on the same page 95% of the time, but they may not always be speaking the same language. So when there is a disconnect, it always comes down to looking at the data to clear up those disagreements. Ultimately, CX deals with feelings more directly than any other team, and is therefore tasked with quantifying the qualitative. For that reason, having a platform that measures CES can drive CX and Product teams to make your customers’ experience exceed their expectations.

The Why, How, and What of Measuring Customer Service Quality

This is a guest post by Jakub Slámka, CMO at Nicereply

As customer service professionals, we’re in the business of making sure our customers get the highest quality support. We strive to help them succeed with the highest caliber guidance we can provide, and to solve their problems with excellent solutions and service. When we do that, it feels good.

To create long-term relationships with your customers, you need to understand how and why they act the way they do. There are three surveys that work really well for this: Customer Satisfaction, Net Promoter Score, and Customer Effort Score. All of them involve surveying customers to get their opinion, but they ask different questions to find out different things.

Let’s break them down the Simon Sinek way so you know exactly Why, How, and What to measure when it comes to customer service quality.

Customer Satisfaction Score (CSAT)

Customer Satisfaction Score (CSAT) is most often used to measure customers’ feelings about a specific interaction with your support team. It can also refer to how happy a customer is generally, though in the customer service industry it usually refers to an agent or a customer support team.

WHY should you measure CSAT?

Measuring customer satisfaction means having a better idea of what works to keep customers satisfied – and what leaves them unhappy. This way you’ll know what to keep up and what to fix. You’ll also be able to gauge the performance not just of support generally, but of specific teams and individuals as well.

HOW do you measure CSAT?

Customers receive a survey asking if they were happy or satisfied with the service they received, and choose their response on a scale from bad (or not satisfied) to good (or satisfied). To calculate the CSAT score, subtract the percentage of customers who were unhappy from 100%.
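That formula can be sketched in a few lines of Python. This is purely illustrative; the function name and response labels are hypothetical, not taken from any survey tool:

```python
def csat_score(responses):
    """CSAT per the formula above: 100% minus the share of unhappy customers.

    `responses` is a list of survey answers, each labeled
    "satisfied" or "unsatisfied" (labels are illustrative).
    """
    unhappy = sum(1 for r in responses if r == "unsatisfied")
    return 100 - (unhappy / len(responses)) * 100
```

For example, 2 unhappy customers out of 10 respondents yields a CSAT of 80%.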

WHAT does a CSAT survey look like?

You can set up your CSAT survey in one of two ways. Either you can send out an email with a survey after a ticket is closed, or you can measure it in every email interaction with your customers in the form of “instant ratings”. The survey itself can take many forms. A Nicereply CSAT survey usually looks like three smileys portraying different emotions, two hands with thumbs pointing up or down, or a scale of 10 stars.

Net Promoter Score (NPS)

Net Promoter Score (NPS) was specifically developed to measure loyalty and to provide you with feedback about how well your products are received. This metric tells you how likely your customers are to recommend your services or products.

WHY should you measure NPS?

NPS brings a simple solution to finding out who your loyal customers are and transforming unhappy clients into satisfied promoters. You can use NPS to enhance your customer service, but it can also be used by your marketing department to gauge your customers’ feelings toward your product.

HOW do you measure NPS?

NPS is usually measured via a regular survey (bi-monthly, yearly, etc.). In this survey, customers are asked, “How likely are you to recommend *|COMPANY|* to a friend or colleague?” and they respond on a scale from 0 (very unlikely) to 10 (very likely).

If a customer answers 6 or lower, they are a detractor. If they respond 9 or higher, they are a promoter. Customers responding 7-8 are passives.

NPS is calculated by subtracting the % of customers who replied as detractors from the % of customers who answered as promoters. NPS scores are not a percentage and range from -100 (very bad) to +100 (very good).
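The promoter/detractor arithmetic above amounts to a one-line subtraction. Here is a minimal Python sketch (the function name is hypothetical, not from any survey tool):

```python
def nps(scores):
    """NPS per the rules above: % promoters (9-10) minus % detractors (0-6).

    `scores` is a list of integer responses on the 0-10 scale.
    The result is not a percentage; it ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)
```

With 5 promoters, 2 passives, and 3 detractors out of 10 responses, the score is 50 - 30 = 20.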

WHAT does an NPS survey look like?

Because it is based on research by Bain & Company, an NPS survey will always look the same—a scale from 0 to 10. The question itself can vary slightly. One example is the question “On a scale from 0-10, how likely are you to recommend this organization as a good place to work?”—this variant is known as Employee Net Promoter Score.

Customer Effort Score (CES)

Customer Effort Score is a highly specific measure of how much work your customer felt they had to do to solve an issue. Support teams using CES are able to find and eliminate friction points that cause high-effort experiences.

WHY should you measure CES?

Imagine having a problem you need to solve. Now imagine you have to jump through several hoops and switch between multiple channels to get hold of someone willing to help you fix it. Even though that support agent might be “super nice”, there’s a big chance you won’t ever want to go through the same experience again.

The idea behind CES is that customers enjoy doing business with companies that are easy to work with. That means CES measures the amount of effort customers experience with your company as a whole.

HOW do you measure CES?

CES is often sent as part of a post-service survey and it’s measured by surveying customers after the resolution of their customer service conversation (usually 24 hours after a ticket is closed).

Similar to NPS, customers are asked to rate one simple statement: “*|COMPANY|* made it easy for me to handle my issue.” They indicate how much they agree or disagree on a standard 1 (low) to 7 (high) scale.

Your CES is then the average of these ratings, although we recommend looking not just at your average score, but at the distribution as well. After all, if your scores are a bunch of 7s and 1s, your experience is still confusing a lot of people.
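That recommendation can be sketched in Python: report both the mean and the distribution, so a polarized spread of 7s and 1s can't hide behind a middling average. The helper name is illustrative, not from any particular tool:

```python
from collections import Counter

def ces_summary(ratings):
    """Average CES on the 1-7 scale, plus the score distribution.

    Returning the Counter alongside the mean makes a polarized
    spread (lots of 7s AND lots of 1s) visible at a glance.
    """
    average = sum(ratings) / len(ratings)
    return average, Counter(ratings)
```

For instance, ratings of [7, 7, 1, 1] average out to a middling 4.0, but the distribution shows two delighted customers and two very frustrated ones.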

WHAT does a CES survey look like?

Because it is based on a research paper by CEB, a CES survey will always ask the same question. The original CES used a 5-point scale, while the updated CES 2.0 uses a 7-point scale.

Measure, Manage, and Improve

As the old saying goes, “Whatever gets measured gets managed.” Measuring quality and using what you learn to better meet customer expectations is what will propel your efforts to truly serve your customers and drive your business forward.

Try Nicereply for Kustomer for free and measure any and all of these metrics to get more feedback out of your customer interactions.

Jakub Slámka is CMO at Nicereply.

Schedule a demo.