With data becoming a commodity, there is little to differentiate many operators’ pricing and standard data packages. Operators must therefore stand apart from their competitors on service, and as they sell more and more digital services, that service is made up of increasingly diverse elements. To get an overall view of customer satisfaction, operators need a holistic view of every service element a customer uses, the quality of experience (QoE), and the resulting customer activity (e.g. increased usage, decreased usage, impact on usage within social circles).
NPS (Net Promoter Score) has long been an established method of measuring customer experience in the telecoms world. How does NPS work? After a customer interacts with an operator, the customer is sent a question, usually along the lines of “How likely are you to recommend our company to a friend or colleague?”. The customer replies with a score from 0 to 10, with 10 being the most likely to recommend.
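The article doesn’t spell out how the scores are turned into a single number, but the standard NPS calculation groups respondents into promoters (9–10), passives (7–8) and detractors (0–6), and subtracts the percentage of detractors from the percentage of promoters. A minimal sketch:

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward the
    total but neither add nor subtract. The result ranges from -100 to 100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # → 30
```

Note that the 7–8 band counts for nothing in the formula, which is exactly the concern raised below: an equally happy customer who scores a 7 or 8 simply disappears from the result.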
Many operators are happy with NPS and feel it is the best way to measure their customer experience activities. However, following recent discussions with operators, some are starting to challenge its effectiveness. Their questions include:
- Is one question enough to capture the views of a wide range of people and cultures on a service?
- One person who is happy may score a 9/10, while another who is equally happy may only give a 7 or 8.
- How actionable is this information?
- Does likelihood to recommend equal likelihood to buy?
It also boils down to this: is a customer of two days or so really in the best position to recommend your product or service? Chances are they do not yet know enough about it to recommend it. That does not mean they won’t in the future, nor does it mean they are unhappy with the product.
Operators need to ask what they did to earn that score. Because the results are based on a single question, there are multiple follow-up questions an operator could ask of the findings, including:
- Was the customer unhappy with the agent they dealt with?
- Was the pricing accurate?
- Was the customer sold the right product or service for their needs?
- Did the score relate to the actual service? Did the customer use the service during peak times? Was there service degradation or outage to explain a poor score?
You can see NPS working in the hospitality industry, because you have used the service fully by the time you receive the survey (e.g. as you are checking out of a hotel), so you are well placed to recommend it. In the telecoms industry, with the likes of 24-month contracts, it doesn’t translate: a customer’s lifespan with the product is long, so they may answer the question very differently at the beginning, middle or end of their journey. And what can the operator actually do with the information, apart from reaching out to each respondent? That could prove costly and time-consuming, which somewhat defeats the purpose of the survey.
The uptake of offers, and their impact on usage and revenues, could well be a better indicator of customer satisfaction than a single-question survey.