Actionable insights are a key part of support-driven growth. Read our full guide on the topic here.
--
Insights without action are a waste.
As customer support leaders, it's our job to make sure other department leaders can confidently act on the insights we hand them. Otherwise, we end up in a sticky situation: we believe we have the purest form of customer feedback out there, but no one is listening to us.
To make them listen, the information we hand our colleagues must be insightful, timely, granular, unbiased and not based on a tiny-weeny sample size. If we provide managers with actionable insights, we empower the contact centre (which now directly contributes to continuous improvement) and the entire company. It creates energy: encouraging agile ways of working and making customer-centricity easy.
“It drives the right focus on product direction and digital services. Most importantly, it allows leaders to pivot customer service from being a low-value cost centre to a higher value, commercially focused, business-aligned unit.” - Richard Jeffreys, Support-Driven Growth Ebook
Six features of actionable insight
To make the data you uncover actionable, it needs to be insightful and reliable:
Contextualised insight
Without context on the priority and impact of a customer pain point, it's hard to know how important it is to act on.
Insight should clearly show data around its priority. Some insights cause more customer friction than others, and some cause a little friction but for many more people. After surveying your customers, a driver analysis will tell you how much each topic or touchpoint contributed to those customers' satisfaction levels.
For example, knowing a customer's satisfaction rating is 'unsatisfied' is not actionable on its own. But knowing the drivers of dissatisfaction (say, 'late delivery' or 'damaged item'), and then which of those drivers is most painful for the customer (you could analyse correlations between the drivers and customer churn to see which drivers really impact the customer), makes it far easier to prioritise and take action.
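As an illustration, here's a minimal sketch of that kind of driver-to-churn analysis, assuming you have customer-level data with one-hot columns for each dissatisfaction driver and a churn flag (the column names and figures are hypothetical):

```python
import pandas as pd

# Hypothetical data: one row per customer, one-hot columns for each
# dissatisfaction driver, plus a churn flag (1 = customer churned).
df = pd.DataFrame({
    "late_delivery": [1, 0, 1, 1, 0, 0, 1, 0],
    "damaged_item":  [0, 1, 0, 1, 0, 1, 0, 0],
    "billing_error": [0, 0, 1, 0, 1, 0, 0, 1],
    "churned":       [1, 0, 1, 1, 0, 1, 0, 0],
})

# Correlate each driver with churn. Higher correlations flag the
# drivers most worth prioritising (a signal, not proof of causation).
drivers = df.drop(columns="churned")
impact = drivers.corrwith(df["churned"]).sort_values(ascending=False)
print(impact)
```

A ranking like this turns 'customers are unsatisfied' into 'late delivery is the driver most associated with churn', which is something a department leader can act on.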
Related content: Read our customer journey mapping guide here.
Insightful vs. non-insightful insight
New is everything. Insightful means uncovering something people weren't yet aware of. For example, knowing that CSAT is low is non-insightful. But knowing that CSAT is low among customers aged 65 and over, and that they frequently complain about a confusing checkout process, is insightful. That data can be used to make the checkout process easier for this group of customers, which in turn increases conversion rates and drives real business value.
Insight timeliness
The fresher the data, the better. Organisations frequently run voice-of-the-customer programmes for a few months, then take a few more months to sift through the data, only to deliver customer insight six months later. The speed at which consumer needs change makes this insight largely useless: by then the damage to customer satisfaction is already done.
Customer support conversations are high frequency, so by nature they're always fresh. Using ticket tagging or artificial-intelligence software, you can now uncover customer insights from that data in near real-time, so the collection and analysis of the data are near-instantaneous.
Sharing these insights with department leaders across your organisation is transformative. They'll quickly see what's causing customer churn, and your fast speed-to-insight will make solutions more proactive.
Granular insights—the devil is in the detail
While timely insights are important, a granular level of detail is the shortcut to a solution. When receiving customer insights, leaders across your organisation want to understand the root cause behind them so they can quickly address it.
Take the example of an eCommerce furniture retailer: a non-granular insight looks like 'customer couldn't check out', whereas a granular insight looks like 'Paypal isn't working'. This is a real example from a SentiSum customer, who was then able to tackle the problem head-on before other customers were affected.
To uncover granular insights, we suggest using multi-level data tagging, for example, "Checkout problem" → "Payment Issue" → "Paypal Not Working". Hierarchical tagging of customer tickets, feedback and reviews allows any user to start at a high level and then dig deeper into the root cause of the problem.
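To make the idea concrete, here's a minimal sketch of hierarchical tag counting, assuming each ticket carries a tag path from a broad category down to the root cause (the tickets and tag names are made up):

```python
from collections import Counter

# Hypothetical tickets, each tagged with a hierarchical path:
# level 1 -> level 2 -> level 3 (root cause).
tickets = [
    ("Checkout problem", "Payment issue", "Paypal not working"),
    ("Checkout problem", "Payment issue", "Card declined"),
    ("Checkout problem", "Page error", "Timeout on submit"),
    ("Delivery problem", "Late delivery", "Courier delay"),
]

# Count at every level of the hierarchy so a user can start broad
# ("Checkout problem") and drill down to the specific root cause.
counts = Counter()
for path in tickets:
    for depth in range(1, len(path) + 1):
        counts[" > ".join(path[:depth])] += 1

for tag, n in counts.most_common():
    print(f"{n:3d}  {tag}")
```

Counting at every level lets a leader start at 'Checkout problem', see that it's the biggest bucket, and drill straight down to 'Paypal not working'.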
Statistically significant insight
Qualitative feedback (live chat logs, reviews and free-text survey fields) is the most valuable for product improvement. It’s easy to get hung up on quantitative measures like CSAT or NPS, but they’re likely non-insightful without understanding the drivers of the measure.
“Finding ways to get qualitative feedback and surfacing qualitative trends, in my experience, has been the most valuable for actually making changes to a product or an operational process to improve the experience.” — Megan Bowen.
However, qualitative feedback suffers from the 'small sample' problem, which tends to create biased and untrustworthy insight. As a customer support leader, you can turn your qualitative customer feedback into something statistically significant, whether that's tagging the topic of each support ticket at scale as it comes in or using software that does so. Knowing that 1,000 customers face the same issue with a touchpoint is ten times more valuable than one or two survey answers.
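For a sense of why volume matters, here's a minimal sketch that puts a 95% confidence interval around a tag's share of tickets, using hypothetical counts and the normal approximation for a proportion:

```python
import math

# Hypothetical tag counts from support tickets over one week.
total_tickets = 5000
paypal_issue = 1000  # tickets tagged "Paypal not working"

# 95% confidence interval for the true share of customers affected,
# using the normal approximation for a proportion.
p = paypal_issue / total_tickets
se = math.sqrt(p * (1 - p) / total_tickets)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{p:.1%} of tickets (95% CI: {low:.1%} to {high:.1%})")
```

With 1,000 of 5,000 tickets tagged, the interval is tight (roughly 19% to 21% of all tickets), a claim you can defend to other department leaders; one or two survey answers give you nothing of the sort.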
Unbiased insight
Survey bias is a well-researched topic. In the context of customer feedback surveys, bias is a “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” That “encouragement” towards a specific outcome is what leads to survey bias, where you may only be getting one type of customer's perspective.
Any bias that has crept into your customer feedback analysis should make you wary of reporting the results. There are two main buckets of customer survey bias to avoid so that you don't fall into the trap of basing business decisions on skewed survey results:
Selection bias, where the results are skewed a certain way because you've only captured feedback from a certain segment of your audience. We see this frequently with customer surveys: when, on average, just 1% of customers fill out your survey, we must question who that 1% is. If they share similar characteristics, your results are no longer representative (the short simulation below makes this concrete).
Response bias, where there’s something about how the actual survey questionnaire is constructed that encourages a certain type of answer, leading to measurement error. One example is when a questionnaire is so long that your customers answer later questions inaccurately, just to finish the survey quickly and claim their reward.
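To make selection bias concrete, here's a toy simulation, assuming (purely for illustration) that unhappy customers are several times more likely to answer a survey than happy ones:

```python
import random

random.seed(0)

# Toy population: 10,000 customers with a true average CSAT of 4.0 / 5.
population = [random.choice([3, 4, 4, 4, 5]) for _ in range(10_000)]

# Selection bias: unhappy customers (score 3) respond far more often.
def responds(score: int) -> bool:
    return random.random() < (0.03 if score == 3 else 0.005)

sample = [s for s in population if responds(s)]

true_avg = sum(population) / len(population)
sample_avg = sum(sample) / len(sample)
print(f"True CSAT: {true_avg:.2f} | Survey CSAT: {sample_avg:.2f} "
      f"from {len(sample)} responses ({len(sample)/len(population):.1%})")
```

With a roughly 1% response rate dominated by unhappy customers, the surveyed CSAT comes out noticeably below the true average of 4.0, purely because of who chose to respond.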
Following these six rules will improve the quality of the customer feedback you and your team report. By consistently delivering actionable customer feedback, the contact centre will quickly contribute to business improvement and begin to cement itself as a critical revenue driver.
How to make your customer feedback surveys actionable
If you insist on running customer feedback surveys (those of you who follow SentiSum will know we advocate strongly against them), then we need to tackle actionability.
Fortunately for us, there has been a significant amount of academic research on the subject. Almost every research project, whether pharmaceutical or political, involves a degree of surveying, so tackling bias is well studied.
- Open- and closed-ended questions: Choose wisely
How you ask a question affects how people answer it. For customer insight, open-ended questions likely create the most insightful responses: respondents can answer in their own words and go 'off-script', often revealing unexpected insight. Closed-ended questions, by contrast, limit the customer's response to the question-maker's worldview.
People respond very differently depending on whether a question is open or closed. For example, after the 2008 US presidential election, voters were asked two versions of this question: "What one issue mattered most to you in deciding how you voted for president?"
One version offered a list of answers; the other had a free-text field that voters could fill out as they chose. 58% of respondents chose 'the economy' when it was offered as an option, but just 35% volunteered 'the economy' in the open-ended version.
The difficulty with open-ended questions comes in the post-survey analysis. Unstructured text is harder to comprehend at a level that meets the six characteristics of actionable customer feedback mentioned above. However, done correctly, you remove the 'encouragement' toward a particular answer that may stem from your own bias. We suggest collecting unstructured text from places where your customers weren't asked any question, like support tickets, and using automated methods to uncover insights at scale.
- Question-wording
Research shows that the words you choose in your questions considerably impact the outcomes. Before people answer a question, they try to understand the intent behind it so they can give the best response, which means small changes to the wording can change how different groups of respondents interpret the question.
One example comes from a Pew Research Center Survey on the Iraq war. When people were asked if they favoured 'taking military action in Iraq to end Saddam Hussein’s rule', 68% were in favour. However, when asked if they favoured 'taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties', that number dropped to 43%.
You can see from this example why it's so hard to trust survey results from particular sources. Luckily, when collecting actionable customer insights to inform business decision-making, it's in your interest to remove this bias.
- Question order and survey length
Beyond question wording, the way you frame and order your questions also impacts the response.
"Order effects" is a proven concept that indicates that the earlier questions can impact the response to later questions. For example, if you offer closed-ended questions asking the respondent, 'what matters to you most when shopping online: website load speed, product quality, product variety, or price?', they may choose product quality. If the following question asks an open-ended question like, 'what did you like most about your last order from us?', the respondent may be inclined to answer 'product quality' to remain consistent or because it's most fresh in their memory. Even if, in reality, another answer was what they liked most, seriously skewing your results.
Survey length falls under this category because it's structural. Long surveys create survey fatigue, which means question number 35 is answered differently than it would have been as question number one. People get bored and start clicking just to reach the end.
Actionable customer survey questions
To make your customer survey questions more actionable, follow these rules.
- Keep the survey short, perhaps even one question.
- Ensure the survey is timely: Send the survey as soon after the event as possible (you could use an in-product survey pop-up with one question about that particular page).
- Ask one question at a time: Remove double-barrelled questions like, 'what could be improved about this web page and your experience?'
- Remove confusing negative questions like 'should this service not be free?'
- Use simple language that matches the education level of your audience (and no abbreviations or jargon).
- Closed-ended questions should be exhaustive: don't give five options if there are ten potential options. Even with an 'other' box, you are biasing answers toward the five options easily available to the respondent.
- Purpose: Know why you're asking a question and have a plan to take action on the results.
- Reduce emotionally provocative or controversial terminology that creates a negative or positive bias in the answer.
- No restrictive questions: If you offer a multiple choice of 'agree' and 'disagree', you are leaving a whole scale of agreement on the table. Include more detail in your range like 'slightly agree' and 'slightly disagree'.
- No leading questions like "By how much do you think prices will increase?", which frames the response within a world where prices are already increasing.
- No loaded questions: A question like 'what is your favourite alcoholic drink?' assumes the respondent drinks alcohol, which not everyone does.
- Remove absolute questions: A question like 'do you use SentiSum for customer insights?' would provide better insight if asked, 'how often do you use SentiSum for customer insights?'
Without these attributes, you'll likely tire the respondent before they can give you the detail you need, or your results will be entirely useless.