“Alexa, play 'Digger, Digger'.” An innocent song request from a toddler to Amazon’s Alexa was met with an unexpectedly racy response, and a panicked reaction from the parents. It’s one of the funnier examples of AI failing. But unfortunately, there are also less innocent instances of AI gone wrong. Suffice it to say, it’s a good idea to keep humans in the loop when automating your CX.
If you’ve been following our 10 Commandments series - and taking the commandments to heart - chances are your customer experience is already enriched with data. Once you’re delivering data-driven CX, automation is a logical next step to scale up. However, algorithms are not perfect, and should always be supervised by humans. Here’s why.
Human-in-the-loop machine learning
To understand why humans are still necessary in virtually any automated process, we need to talk about how algorithms generally work. Although many people think machine learning models are completely self-taught, this is rarely the case. Human-in-the-loop (HITL) machine learning is the most common way to train an algorithm.
There are three requirements for a machine to learn:
- the ability to make a prediction
- a way to measure whether the prediction is correct
- the ability to improve the predictions
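The three requirements above can be sketched as a minimal learning loop. This is an illustrative toy, not any particular library's API: the names `predict`, `measure` and `improve`, the single-weight "model" and the learning rate are all made up for the example.

```python
# Toy version of the learning loop: predict, measure, improve.
# The model learns that the correct answer is roughly "input times 2".

def predict(model, example):
    # 1. The ability to make a prediction
    return model["weight"] * example

def measure(prediction, truth):
    # 2. A way to measure whether the prediction is correct
    return truth - prediction  # signed error

def improve(model, error, example, learning_rate=0.1):
    # 3. The ability to improve the predictions
    model["weight"] += learning_rate * error * example
    return model

model = {"weight": 0.0}
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer)

for _ in range(50):
    for x, y in data:
        error = measure(predict(model, x), y)
        model = improve(model, error, x)

print(round(model["weight"], 2))  # converges toward 2.0
```

Note that step 2 is exactly where the "validated data or human validator" question comes in: the loop can only improve if something trustworthy supplies the correct answer.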
Automating customer experience
The second step in the learning loop - validating a model's predictions - can be done in two ways: the model either checks its predictions against validated data (an already tagged dataset), or a human validates them. The latter is commonly used when no tagged dataset is available.
Long story short: unless you have an extensive dataset that is tagged and validated, any automation tool needs a human to validate whether its predictions are correct. Let’s say you’re using a chatbot to interact with customers. For standard requests like “my package wasn’t delivered”, the dataset will be quite accurate. The chatbot will understand the request and respond appropriately.
But what if the customer has a question that is not yet included in the dataset? Best case scenario: the chatbot will admit it can’t help. But in the worst case, it will make an educated guess. Sometimes the results are straight-up hilarious, but there are also examples of chatbots turning to hate speech or inappropriate comments. In either case, the customer experience will suffer, which in turn reflects poorly on your company.
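One common way to get the "best case scenario" by design is a confidence threshold: when the chatbot isn't sure, it hands over to a human instead of guessing. The sketch below is a deliberately simplified illustration - the intents, responses, threshold value and the exact-match "classifier" are all stand-ins for a real NLU model.

```python
# Human-in-the-loop fallback: escalate low-confidence requests
# to a human agent rather than letting the bot guess.

KNOWN_INTENTS = {
    "my package wasn't delivered": "We're sorry! Let's track your parcel.",
    "where is my order": "Here is your order status.",
}

CONFIDENCE_THRESHOLD = 0.75  # illustrative value

def classify(message):
    # Toy classifier: an exact match scores 1.0, anything else scores low.
    # A real system would use an NLU model here.
    if message in KNOWN_INTENTS:
        return message, 1.0
    return None, 0.2

def respond(message):
    intent, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return KNOWN_INTENTS[intent]
    # Low confidence: don't guess. Hand over to a human, and log the
    # message so it can be tagged and added to the training set later.
    return "Let me connect you with a colleague who can help."

print(respond("my package wasn't delivered"))
print(respond("can you write me a poem?"))
```

The design choice is the key point: a wrong confident answer is far more damaging to CX than an honest handover.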
Why you need a human in the feedback loop
As we know, the improvement of customer experience is never quite ‘finished’. In any machine learning context, humans play an important role in the feedback loop. When automating, there are always exceptions that a human needs to assess, bypassing the automation and then fine-tuning the model. Excluding human intervention from this feedback loop is a recipe for disaster.
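That feedback loop can be sketched as follows. Again, the function names, labels and threshold are hypothetical, chosen only to show the mechanics: confident predictions flow straight through, uncertain ones are queued for a human, and the human's corrections become new validated training data.

```python
# Sketch of a human feedback loop: humans review what the automation
# can't handle, and their corrections feed back into the training set.

training_data = []   # grows as predictions are validated
review_queue = []    # predictions a human still needs to check

def handle_prediction(text, predicted_label, confidence, threshold=0.8):
    if confidence >= threshold:
        training_data.append((text, predicted_label))
        return predicted_label
    # Bypass the automation: queue this case for human review.
    review_queue.append((text, predicted_label))
    return None

def human_review(text, predicted_label, correct_label):
    # The human's correction becomes new validated training data,
    # which is later used to fine-tune the model.
    training_data.append((text, correct_label))

handle_prediction("track my order", "order_status", 0.95)
handle_prediction("alexa play digger digger", "music_request", 0.40)

for text, guess in review_queue:
    human_review(text, guess, "out_of_scope")

print(len(training_data))  # both examples end up validated
```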
In a famous example of automation gone wrong, a supermarket had automated the analysis of its customer profiles. After a purchase of baby products, it sent an automated message congratulating the customer on their pregnancy. But this message was sent to a family with only one computer. The pregnancy turned out to be the daughter’s, and the parents had no clue. Sensitive issues like these could likely have been prevented by human intervention.
Keeping automation in check with your business goals
Moreover, humans are also needed to assess whether the business goals are being reached. An automated feedback loop is always based on a set of rules. Machines can help reach a specific and well-defined goal. But as your business develops and your clients change, the goals can also change.
So by definition, the process of automating CX should include humans to ensure these changes are carried through and the automation keeps serving those ever-evolving business objectives. Keeping humans in the loop is not only beneficial to the effectiveness of your automation model; it’s also a safeguard against disasters that could ruin your customer experience altogether.