Customer service is a crucial part of many businesses, so it comes as no surprise that digital technologies are increasingly being applied to this field. By its very nature, a customer service system is about people, and often stores and processes highly personal information. As a result, the new generation of advanced, computer-aided customer service systems raises important issues about privacy.
For example, Smooch, recently acquired by Zendesk, aims to connect a business’s systems “to any messaging channel, unifying all customer interactions into a persistent omnichannel conversation.” As Warren Levitan, co-founder and CEO of Smooch, wrote on The Next Web:
imagine a business was able to access everything a customer has ever said to them. Their emails to customer service, live chats with sales, phone calls with the order help line, SMS messages, interactions with the Facebook bot, social posts with brand mentions, etc. Imagine they could all be organized in a unified conversation timeline, combined with the transactional and behavioral data and formatted in a way that was easily processed by a company’s NLP or AI assets.
The use of NLP (natural language processing) or AI is one of the key features of these new customer service systems. The problem is that in bringing together many disparate sources of information, and applying AI and other advanced techniques, it may be possible to glean extra, possibly highly personal details that the customer had no intention of revealing. Inferences may be made, and decisions suggested, all without the customer knowing, or being able to challenge them.
Another feature that a unified pool of messaging data makes possible is “sentiment analysis”, to provide human agents with the context they need to personalize future customer interactions. That’s the focus of another company in this new field, Behavioral Signals, which applies advanced analysis to the voices of customers:
Having analyzed tens of thousands of calls associated with specific outcomes, (e.g., customer promised and in fact made payment), we apply our award-winning technology to pick up on subtle voice patterns in real-time from the audio signal of a call. Using a state-of-the-art deep learning approach, we have built powerful predictive models linking the rich set of behavioral properties tracked during each interaction, with the customer’s underlying propensity to pay and an agent’s overall performance.
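The pipeline the quote describes can be sketched in outline: extract behavioral features from the audio of a call, then feed them to a trained model that outputs a prediction such as likelihood of payment. The feature names, weights, and scoring below are invented for illustration; Behavioral Signals publishes no implementation details, and a real system would learn its weights from thousands of labeled calls rather than hand-code them.

```python
# Toy illustration of voice-based outcome prediction.
# All feature names and weights here are hypothetical, not
# Behavioral Signals' actual system.

from dataclasses import dataclass

@dataclass
class CallFeatures:
    speech_rate: float       # words per second
    pitch_variance: float    # variability in tone
    interruption_count: int  # times the caller talked over the agent

def propensity_to_pay(f: CallFeatures) -> float:
    """Stand-in for a trained classifier: a hand-weighted score
    in [0, 1]. A production model would learn these weights from
    calls labeled with real outcomes (e.g. payment made)."""
    score = 0.5
    score += 0.1 * min(f.speech_rate / 3.0, 1.0)
    score += 0.1 * min(f.pitch_variance, 1.0)
    score -= 0.05 * f.interruption_count
    return max(0.0, min(1.0, score))

calm_caller = CallFeatures(speech_rate=2.0, pitch_variance=0.3,
                           interruption_count=0)
print(round(propensity_to_pay(calm_caller), 2))
```

Even this toy version makes the privacy point concrete: the inputs are not what the customer *said*, but how they said it, and the output is a judgment about them that they never see.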
Clearly, this approach requires companies to carry out constant monitoring of what customers say in their calls. The industry-standard practice of recording calls for “security and training” is one thing; recording them and then applying advanced AI techniques to extract information about what a customer is supposedly feeling and thinking, is quite another. Many will find this kind of fine-grained poring over not just every word, but every inflection, slightly disturbing.
The results of that opaque analysis will have real-world consequences for people in terms of how customer service representatives view and treat them. None of those details are revealed by company representatives when they speak to customers. During calls to a company using these kinds of technologies, the public is dealing with two protagonists: the obvious one they can hear on the phone, and another, lurking invisibly in the digital background. Highly personal information may be used against people without them even being aware of that fact.
The crucial nature of the relationship between the caller and a company representative is the driving force behind another AI-based company in this sector, Afiniti. When a customer calls a service center, Afiniti analyzes the sales records and personal information that it holds on that person, using algorithms designed to identify what factors made previous interactions succeed or fail. Afiniti's system then seeks out a service representative whose own characteristics are, according to this analysis, most likely to result in a successful interaction with the customer, and best able to handle problems. Afiniti says this pairing is done at the time of the call:
We cannot know when a customer will call in, and equally we cannot know which agents will be available. This requires us to run the algorithm in real-time whenever a customer contacts our clients, in order to determine which of the available pairs of customers and reps is most likely to lead to a successful outcome. Afiniti is able to execute this process in under 200 milliseconds making us imperceptible to clients and customers alike.
Having determined the optimal pairing, Afiniti routes the call accordingly.
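In outline, that kind of real-time matching amounts to scoring every available caller/agent combination against a predictive model and routing to the best one. The sketch below is a minimal, hypothetical version; the profile format and scoring function are invented, since Afiniti does not disclose how its algorithm actually works.

```python
# Hypothetical sketch of real-time caller/agent pairing.
# The trait-overlap scoring is a stand-in for a trained model;
# nothing here reflects Afiniti's actual implementation.

def predicted_success(customer_traits, agent_traits):
    """Toy score: fraction of the caller's known traits that this
    agent has handled successfully before."""
    shared = set(customer_traits) & set(agent_traits)
    return len(shared) / max(len(customer_traits), 1)

def pick_agent(customer_traits, available_agents):
    """Score every available agent against the caller and route
    the call to the highest-scoring one."""
    return max(available_agents,
               key=lambda a: predicted_success(customer_traits, a["traits"]))

agents = [
    {"name": "A", "traits": {"patient", "technical"}},
    {"name": "B", "traits": {"upsell", "fast"}},
]
best = pick_agent({"technical", "first-call"}, agents)
print(best["name"])  # prints "A": the agent whose traits best match
```

The caller, of course, sees none of this: only that a particular voice answered the phone.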
Once more, the interaction is recorded so that at the end of each day the Afiniti algorithm seeking best matches between customers and company representatives can be optimized further.
Such systems have certain commonalities. They all gather personal data from multiple sources in order to build up a profile of a customer that is as detailed as possible. Various kinds of AI approaches are then used to extract actionable information from that data. This is fed to the company representatives who are handling the customer call, but without the caller being aware of the analysis or its implications, tilting the conversation in favor of the agent. This aggregation of personal data is problematic in itself, because it may expose implicit information that a customer would not normally want to reveal. It also creates the possibility that it will be combined with the aggregated data from other companies – something that is already happening on a massive scale, as this blog has reported. This will allow even more detailed profiles to be created, which may be used in ways that people would find unacceptable if they knew about them.
There’s an interesting possibility arising from this approach. A year ago, Google announced Google Duplex, a technology for conducting natural conversations to carry out “real world” tasks over the phone. Although Google Duplex can only work in very circumscribed domains, where it would not need to deal with complex or irrelevant input, it is likely that future systems will become more adept at handling general, everyday situations.
All the kinds of algorithmic analysis of personal data and voice patterns described above could be fed not to a human representative, but to a computer-based one along the lines of Google Duplex. Its voice, “character”, and approach could be exactly tailored to each individual, even to their current mood, as indicated by continuous AI analysis of their voice patterns during a call. This kind of real-time personalization is likely to reinforce the well-known ELIZA Effect, whereby people tend to ascribe human-like capabilities to computers. The fact that such an AI-based system would seem strangely sympathetic, and would appear to know everything about us – even our deepest fears and unspoken secrets – can only strengthen that impression. As a result, some people may unconsciously trust such an interlocutor, and discuss their wants and needs more openly, without reserve. Views about whether that is a good thing are likely to vary…
Featured image by CurrencyFair.