Customer Experience (CX) Metrics in the Contact Center

Giorgina Gottlieb | February 15, 2019


As CX evolves over time, so too does the language used to describe certain aspects of it. For example, the traditional call center is now generally referred to as a contact center, which reflects the substantial changes that have occurred within the past 15 years in how customers communicate with businesses. While communication once took place almost exclusively by phone, the list of channels has now expanded to include email, text messaging, chat, social media, and more.

“Forward-thinking companies are recognizing that if they are to deliver the interactive, multi-faceted customer-driven experience that their customers expect, they have to rethink the channels through which they connect with customers,” explains North Highland in its aptly titled white paper Shifting from Call Center to Customer Relationship Center.

Despite the change in name and growth in the number of communication channels, the purpose of a contact center remains the same: provide support to customers who reach out with a question or concern, achieving an accurate and satisfactory resolution as quickly as possible. But how can front-line agents — and their bosses — know if they are working efficiently or resolving an issue to a customer’s satisfaction? The answers can all be found in one word: metrics.

Metrics 101

Metrics might be a single word, but its meaning can feel complex, especially for those who are less mathematically inclined. So, let’s use the super-simple definition provided by Google as the foundation for the remainder of our discussion:

Metrics is a method of measuring something, or the results obtained from this.

What is the purpose of conducting such measurements? Well, as the famous management consultant Peter Drucker once said, “If you can’t measure it, you can’t manage it.” Implicit in this quotation is the notion that if you can and do measure it, then you can manage it. In other words, metrics provide a way to quantify various aspects of a business, enabling employees and executives to evaluate what’s working and what’s not so they can double down on the successes and make changes to the areas in need of improvement.

The most commonly used metrics for CX are Net Promoter Score (NPS), Customer Satisfaction (CSAT), and Customer Effort Score (CES). But what about for contact centers specifically? Jeremy Watkin argues that CX metrics and contact center metrics are virtually one and the same. “The modern contact center understands that many, if not all, metrics are customer experience metrics — or at least they should be viewed through the lens of the customer experience,” writes Watkin, Director of Customer Experience at FCR (a provider of outsourced call center solutions).

We agree with his sentiment. That said, there are some metrics that can be applied across all aspects of customer experience, while others are best suited to the role played by contact centers. We’ll take a closer look at each category, beginning with the “big three” CX metrics.

Most Common CX Metrics: NPS, CSAT, CES

If you ask someone, “How do I measure customer experience?” they will most likely respond with one or more of the following metrics: NPS, CSAT, and CES.

Net Promoter Score (NPS)

NPS asks, “How likely are you to recommend the product or service to a friend or colleague?” Customers respond by selecting a number from 0 to 10, where 0 is “not likely at all” and 10 is “extremely likely.” Those who give a score of 0-6 are considered “detractors,” 7 or 8 are labeled “passives,” and 9 or 10 are “promoters.”

To calculate the company’s overall NPS, you must first determine the percentage of customers who fall into each of the three categories. Then, subtract the percentage of “detractors” from the percentage of “promoters.” For example, let’s say you surveyed 50 customers and received 10 scores in the 0-6 range, 15 in the 7-8 range, and 25 in the 9-10 range. You would have 20 percent detractors, 30 percent passives, and 50 percent promoters. So, your NPS would equal 30 (50 percent – 20 percent).
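The NPS calculation above can be sketched in a few lines of Python. The function name and the sample scores are illustrative, not from the original survey:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses.

    Promoters (9-10) minus detractors (0-6), each as a percentage
    of all respondents; passives (7-8) only affect the denominator.
    """
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / total)

# The article's example: 10 detractors, 15 passives, 25 promoters
scores = [3] * 10 + [7] * 15 + [10] * 25
print(nps(scores))  # 30
```

Note that the passives drop out of the numerator entirely, which is why NPS can stay flat even as the share of merely “passive” customers grows.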

Customer Satisfaction (CSAT)

Rather than ask about the likelihood of recommending the product or service to others, CSAT focuses directly on a customer’s satisfaction. Specifically, the one-question survey typically asks, “How would you rate your experience with X?” where X is a product that the customer has purchased or a service they just received.

A customer typically responds on a scale ranging from “Very dissatisfied” to “Very satisfied” with less enthusiastic options in between. To determine the overall CSAT, Qualtrics recommends focusing only on the number of “satisfied” responses. So, for example, if you used a 5-point scale that included “Very dissatisfied” as 1, “Dissatisfied” as 2, “Neutral” as 3, “Satisfied” as 4, and “Very satisfied” as 5, you would only tally the number of respondents who answered with a 4 or 5. Divide that sum by the total number of survey responses and then multiply by 100 to get the percentage of satisfied customers.
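As a quick sketch of that calculation, the snippet below assumes a 5-point scale where 4 and 5 count as “satisfied”; the function name and sample responses are hypothetical:

```python
def csat(responses, satisfied_threshold=4):
    """Percentage of responses at or above the 'Satisfied' point.

    Only responses of 4 or 5 on a 5-point scale count toward the
    score; everything from 'Neutral' down is excluded.
    """
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return 100 * satisfied / len(responses)

responses = [5, 4, 4, 3, 2, 5, 1, 4]  # hypothetical survey data
print(csat(responses))  # 62.5 -> five of eight respondents chose 4 or 5
```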

Customer Effort Score (CES)

In contrast to NPS and CSAT, CES focuses on how much effort a customer had to expend in order to get their needs met by a company. This metric generally involves posing the following statement to customers: “The company made it easy for me to resolve my issue.” Their choice of response is similar to that used for CSAT, in this case ranging from “Strongly disagree” to “Strongly agree.”

Some companies choose a 5-point scale, while others prefer a 7-point range. In either case, the middle score corresponds to a “neutral” rating while low scores represent varying degrees of disagreement and high scores represent varying degrees of agreement with the statement. To calculate CES, add the number of respondents who selected the scores above neutral (i.e., 4 and 5 on a 5-point scale and 5-7 on a 7-point scale), divide by the total number of customers who responded to the survey, and multiply by 100 to express the result as a percentage.
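That approach generalizes to either scale, since the neutral midpoint is 3 on a 5-point scale and 4 on a 7-point scale. A minimal sketch, with hypothetical data and a percentage result:

```python
def ces(responses, scale_points=5):
    """Share of respondents scoring above the neutral midpoint.

    Works for any odd-numbered scale: neutral is 3 on a 5-point
    scale and 4 on a 7-point scale.
    """
    neutral = (scale_points + 1) // 2
    agree = sum(1 for r in responses if r > neutral)
    return round(100 * agree / len(responses), 1)

# Hypothetical 7-point survey: four of six respondents agree
print(ces([7, 6, 4, 2, 5, 7], scale_points=7))  # 66.7
```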

Pros and Cons

Just like pretty much anything else in life, each of these three metrics comes with its own pros and cons. Here’s how CX Accelerator co-founder Nate Brown broke it down in a recent webinar:

NPS

  • Pros: Great way to create a baseline; universal across touchpoints and industries
  • Cons: Low correlation to customer loyalty; question is losing relevance with modern customers

CSAT

  • Pros: Universal in nature (can be used for any journey point); great baseline information available
  • Cons: Limited correlation to loyalty; results may not be actionable

CES

  • Pros: High correlation to customer loyalty; transactional and specific in nature
  • Cons: Can be overly transactional and specific in nature; hard to find baseline information

“No metric is inherently bad,” Brown concludes. “The trick is not selling out to any one metric because then we blind ourselves to the bigger picture.”

Blue Ocean’s Director of Sales and Marketing Amy Bennet agrees, writing “NPS, CSAT, and other scores like Customer Effort Score (CES) are still only a fraction of the big picture when it comes to improving the end-to-end customer experience. These metrics are most valuable when you can segment and filter them by type of customer (especially by tenure and lifetime value), type of interaction (related to the channel they use or teams they interact with), and the customer’s score over time.”

Contact Center Metrics

In order to see the “big picture” that Brown and Bennet reference, contact centers must expand beyond NPS, CSAT, and CES and consider metrics that specifically quantify the work and impact of front-line agents. Below we’ll examine the top 10 of these metrics, which we’ve divided into two categories for ease of discussion.

Connecting with an Agent

This category includes contact center metrics that measure a customer’s ability to reach a support agent and the length of time it takes to do so:

  • Average Call Abandonment Rate — Percentage of customers who initiate a support call but hang up before being connected with an agent
  • Percentage of Calls Blocked — Percentage of customers who initiate a support call but receive a busy tone in response that prevents them from connecting with an agent
  • Average Speed of Answer (ASA) — Amount of time it takes for an agent to answer a customer’s call. To calculate the average, add up the total time it took an agent to answer all calls and divide by the number of calls.
  • Average Time in Queue — Amount of time that customers spend waiting from the moment they initiate a support interaction until they are connected with an agent. To calculate the average, take the total time that all customers spent waiting and divide by the number of customers.
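The four connection metrics above all fall out of the same per-call data. The sketch below assumes a simple record of wait times and abandonment flags (the field names and values are hypothetical):

```python
# Hypothetical per-call records: wait time in seconds and whether
# the caller hung up before reaching an agent
calls = [
    {"wait": 45, "abandoned": False},
    {"wait": 120, "abandoned": True},
    {"wait": 30, "abandoned": False},
    {"wait": 75, "abandoned": False},
]

# Average Speed of Answer: mean wait across answered calls only
answered = [c for c in calls if not c["abandoned"]]
avg_speed_of_answer = sum(c["wait"] for c in answered) / len(answered)

# Average Call Abandonment Rate: share of calls that never connected
abandonment_rate = 100 * sum(c["abandoned"] for c in calls) / len(calls)

print(avg_speed_of_answer)  # 50.0 seconds
print(abandonment_rate)     # 25.0 percent
```

Whether abandoned calls belong in the ASA denominator is a design choice; excluding them, as here, measures the experience of customers who actually reached an agent.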

Why do these metrics matter for customer experience? “Shorter waiting time keeps the customers happy and ends up making them a lot more satisfied with the services,” explains Deepanshu Gahlaut, a digital marketer and technical writer for Call Center Hosting.

FCR’s Watkin recommends that contact centers “use the native queue callback feature in your phone system or a third-party system to improve the wait experience for customers and even out spikes in call volume.”

Issue Resolution

This category includes contact center metrics that measure whether a customer’s issue was resolved and how long (both in terms of time and number of contacts or channels) it took to do so:

  • Average Handle Time (AHT) — Amount of time from the moment the agent first connects with the customer until the interaction ends. To calculate the average, add up the total time an agent spent interacting with customers and divide by the number of customers.
  • Average After-Call Work Time — Amount of time it takes an agent to do follow up work on a customer case once the interaction itself has ended (such as submitting notes to a manager or filing a bug fix with the development team). To find the average, add up the time an agent spent on this post-interaction work and divide by the number of cases.
  • Time to Resolve — Time to Resolve is similar to AHT and Average After-Call Work Time, but it includes all time from the moment a customer initiates a service request until the issue is fully resolved.
  • First Call Resolution (FCR) — Percentage of cases that are resolved by a single interaction between a customer and agent.
  • Channel Switching — CX Accelerator’s Brown defines this metric as the “ability to resolve the issue in the channel by which help was requested.”
  • Self-Service Deflection — Percentage of customers who are able to resolve their issue through an online knowledge base, community forum, or other self-serve resources, thereby eliminating the need to create a case and interact with an agent at all.
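Several of these resolution metrics can likewise be computed from one case log. A minimal sketch, assuming hypothetical per-case fields for handle time, after-call work, and contact count:

```python
# Hypothetical case log: handle time and after-call work in minutes,
# plus the number of contacts it took to resolve the case
cases = [
    {"handle_min": 6.0, "acw_min": 1.5, "contacts": 1},
    {"handle_min": 12.0, "acw_min": 3.0, "contacts": 3},
    {"handle_min": 4.5, "acw_min": 1.0, "contacts": 1},
    {"handle_min": 9.5, "acw_min": 2.5, "contacts": 2},
]

# Average Handle Time: mean minutes of agent-customer interaction
aht = sum(c["handle_min"] for c in cases) / len(cases)

# Average After-Call Work Time: mean minutes of post-interaction work
avg_acw = sum(c["acw_min"] for c in cases) / len(cases)

# First Call Resolution: share of cases closed in a single contact
fcr = 100 * sum(1 for c in cases if c["contacts"] == 1) / len(cases)

print(aht)      # 8.0 minutes
print(avg_acw)  # 2.0 minutes
print(fcr)      # 50.0 percent
```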

Of the six metrics above, AHT and FCR are the most commonly used and most frequently commented upon by industry thought leaders.

For example, in an article for PlayVox, digital marketing specialist Jade Longelin makes an important point about AHT, writing “Average handle time is a tricky metric, because it needs to be squarely within the range you set. When your agent’s handle time is too long, it may mean that they’re struggling with customer requests. Yet, if the agent’s average handle time is too short, it may mean that they aren’t offering any real assistance.”

Longelin recommends the use of quality assurance software in order to monitor agents’ interactions with customers and ensure they’re striking the right balance. Chris Woodard of Tenfold adds, “Another great way to help lower AHT is … [software with a] user-friendly interface that supplies agents with useful customer information they need, when they need it.”

Meanwhile, a poll published by Call Centre Helper found that more than 60% of contact centers track FCR. As Pointillist’s Swati Sahai explains, “FCR has gained a lot of importance among customer experience professionals as a high FCR typically indicates high customer satisfaction.” In fact, SQM found that FCR can have tangible impacts on customer experience and the business’s bottom line in five key areas:

  • Reducing operating costs — For every 1% improvement in FCR, you reduce your operating costs by 1%.
  • Improving customer satisfaction — For every 1% improvement in FCR, there is a 1% improvement in CSAT.
  • Improving employee satisfaction — For every 1% improvement in FCR, there can be a 1-5% improvement in ESAT.
  • Increasing opportunities to sell — When a customer’s call is resolved, the customer cross-selling acceptance rate increases by up to 20%.
  • Reducing customers at risk — 98% of customers will continue to do business with the organization as a result of achieving FCR.

Word of Caution

Now that you know more about the most common and useful CX metrics in the contact center, hopefully you are excited to put what you’ve learned to the test. Once you start setting up your new measurements and analyzing the results, however, just remember our previous warning about not losing sight of the big picture.

As ForSee Regional Director Mike Redmond cautions, “Using these one number metrics as standalone contact center metrics won’t provide you with what every organization is seeking — the ability to measure customer experience with certainty, and to enable you to receive credible, reliable and accurate insights in order to pinpoint and predict what your customers are going to do next.”