Investigating the Financial Value of Customer Satisfaction

A few years ago, one leader at my company was really into customer satisfaction. We had our own internal measure for customer satisfaction, and for the sake of simplicity, let’s say it’s a scale of 1–5, with 1 being the least satisfied and 5 being the most satisfied. Although we knew what the trends looked like historically, we never knew what the financial value of each score was. We also didn’t know the financial impact of a customer moving from one score to another. That’s what this leader wanted to know.

To do this analysis, I first needed to get the data. I worked with the person who sends out these customer satisfaction surveys to get responses from the tens of thousands of customers we had surveyed over the years. After a little bit of cleaning, I had each customer’s ID, the date they were surveyed, and the score they gave us on the 1–5 scale.
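If you’re curious what that cleaning step might look like, here’s a rough pandas sketch. The file and column names (csat_survey_responses.csv, customer_id, survey_date, csat_score) are made up for illustration; the real export was messier, and keeping only each customer’s latest response is a simplifying assumption.

```python
import pandas as pd

# Hypothetical file and column names -- the real survey export looked different.
surveys = pd.read_csv("csat_survey_responses.csv")

# Keep only the fields needed for the analysis and drop rows with missing values.
surveys = (
    surveys[["customer_id", "survey_date", "csat_score"]]
    .dropna()
    .assign(survey_date=lambda d: pd.to_datetime(d["survey_date"]))
)

# Keep valid 1-5 scores and, as a simplifying assumption, each customer's latest response.
surveys = surveys[surveys["csat_score"].between(1, 5)]
surveys = (
    surveys.sort_values("survey_date")
           .drop_duplicates("customer_id", keep="last")
           .reset_index(drop=True)
)
```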

The next step was to profile these customers by score to figure out if anything stood out. I looked at different demographic data points, engagement, financial data, and a few other things. The one thing that stood out right away was the financial data. My hypothesis was that customers who were more satisfied with us would be willing to pay (and would actually pay) us more money on a monthly basis, so I expected average revenue per customer to increase as the customer satisfaction score increased. However, I saw the complete opposite: average revenue decreased as customer satisfaction increased. In some cases that is perfectly logical, for example if customers are very price sensitive. However, we had learned from previous surveys that price isn’t a primary concern for our customer base. The next step was to dig into the financial data further to figure out what was happening.
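As a rough illustration of the revenue part of that profiling, something like the following sketch gets at the average-revenue-by-score view. The monthly_revenue.csv file and its columns are hypothetical stand-ins for the real financial data.

```python
# Hypothetical monthly revenue table: one row per customer per month.
revenue = pd.read_csv("monthly_revenue.csv")  # columns: customer_id, month, revenue

avg_monthly_revenue = (
    revenue.groupby("customer_id", as_index=False)["revenue"]
           .mean()
           .rename(columns={"revenue": "avg_monthly_revenue"})
)

profile = surveys.merge(avg_monthly_revenue, on="customer_id", how="inner")

# If higher satisfaction meant higher willingness to pay, this should rise with score.
print(profile.groupby("csat_score")["avg_monthly_revenue"].mean())
```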

At the time of this analysis, the money customers paid us depended on two things: 1) how many products they used, and 2) how many of their contacts they were communicating with through our products. I first looked at the average number of products and the product mix by satisfaction score. Those were pretty similar across scores, so I ruled out that the issue was related to a specific product. I then looked at average revenue by score at three different points in time: 1) the customer’s first month with us, 2) the month right before the customer was surveyed for their satisfaction score, and 3) the most recent month at the time of the analysis. Graphing revenue by score at these three points in time revealed what was happening.
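Here’s a hedged sketch of how that three-points-in-time view could be built on top of the data above. It reuses the hypothetical monthly revenue table from earlier and takes a loose interpretation of “month before survey” and “latest month.”

```python
def avg_by_score_at_timepoints(monthly, surveys, metric):
    """Average `metric` per satisfaction score at three points in time:
    the customer's first month, the month before the survey, and the latest month."""
    monthly = monthly.copy()
    monthly["month"] = pd.to_datetime(monthly["month"]).dt.to_period("M")

    df = monthly.merge(surveys, on="customer_id")
    df["survey_month"] = df["survey_date"].dt.to_period("M")

    first_month = df.groupby("customer_id")["month"].transform("min")
    latest_month = df["month"].max()

    points = {
        "first_month": df["month"] == first_month,
        "month_before_survey": df["month"] == df["survey_month"] - 1,
        "latest_month": df["month"] == latest_month,
    }
    return pd.DataFrame(
        {name: df[mask].groupby("csat_score")[metric].mean() for name, mask in points.items()}
    )

revenue_by_score = avg_by_score_at_timepoints(revenue, surveys, "revenue")
revenue_by_score.plot(kind="bar")  # roughly the chart described below
```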

When looking at first-month average revenue by customer satisfaction score, the decline in revenue across scores was much flatter. The big decreases in revenue only occurred when looking at the month before people were surveyed and at the most recent month before the analysis was done. Something was going wrong early in the customer experience.

Like I mentioned before, part of what determines how much revenue we get from our customers is how many of their contacts they communicate with through our products. I made the same graph as before, but replaced average revenue with the number of contacts. The graph looked exactly the same: the decline across scores in customers’ first month wasn’t that steep, but it was very steep by the time customers were surveyed.
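Since the helper above takes the metric as a parameter, the contacts version of the view is essentially a one-liner. The file and column names (monthly_contacts.csv, num_contacts) are again hypothetical.

```python
# Same view, but with contact volume instead of revenue.
contacts = pd.read_csv("monthly_contacts.csv")  # columns: customer_id, month, num_contacts

contacts_by_score = avg_by_score_at_timepoints(contacts, surveys, "num_contacts")
contacts_by_score.plot(kind="bar")
```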

The next step was figuring out why some customers seemed to have a hard time bringing a high volume of contacts into our product. I talked to our marketing and product teams to see if they had anything specific related to this. I thought that maybe we had a poor product flow, or confusing or missing content on how to add contacts. Those teams didn’t have anything that would explain the story, but they pointed me to our compliance team. Without going into too much detail, these customers were being flagged on suspicion of spam, and their accounts were frozen. In many cases, this action was legitimate and the right thing to do. However, there were cases where customers were mistakenly flagged, which left a really bad taste in their mouths. Now that we had a better understanding of how financials were related to customer satisfaction, the next step was to forecast what the impact could be if the compliance team tightened up their flagging process.

One of the other data points I had looked at early on in the analysis was average customer tenure by score. As I expected, customers with higher satisfaction scores stayed with us much longer. For a basic forecast, the math here is pretty simple. Assuming we could move some of the customers giving us low scores (1 and 2) to higher scores (4 and 5) by changing the flagging process, their monthly revenue would stay the same, but they might stay with us longer. The financial impact calculation is…

Average monthly revenue of a low-scoring customer × (average tenure in months of a high-scoring customer − average tenure in months of a low-scoring customer)

If a customer with a low score was paying us $50 per month on average and would now stay three months longer, we would get an incremental $150 from that customer over their lifetime.
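As a sketch, the forecast boils down to a few lines on top of the profile data from earlier. It assumes a hypothetical tenure_months column holding each customer’s total tenure, which wasn’t shown in the snippets above.

```python
# Back-of-the-envelope forecast. Assumes `profile` also carries a hypothetical
# `tenure_months` column with each customer's tenure in months.
tenure_by_score = profile.groupby("csat_score")["tenure_months"].mean()

low_scores = profile["csat_score"].isin([1, 2])
avg_low_revenue = profile.loc[low_scores, "avg_monthly_revenue"].mean()

# Extra months of tenure if a low-scoring customer behaved like a high-scoring one.
extra_months = tenure_by_score.loc[[4, 5]].mean() - tenure_by_score.loc[[1, 2]].mean()

incremental_per_customer = avg_low_revenue * extra_months
total_upside = incremental_per_customer * low_scores.sum()

# e.g. $50/month * 3 extra months = $150 of incremental lifetime revenue per customer
print(round(incremental_per_customer, 2), round(total_upside, 2))
```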

After presenting the forecast data to some of my team, the head of compliance, and the leader who was originally asking about customer satisfaction, we ended up not making the change to the compliance process. It turns out, the volume of customers giving us extremely low scores wasn’t high enough to make an impact. Even though we would have seen a financial benefit, the cost to change the process would have outweighed the benefit. Although it’s unfortunate, analyst recommendations don’t always get carried out. In this case, the reasoning made sense as to why. I would have liked to follow up on this analysis more recently, but priorities for the company have since changed.