AI avatars will be the new customer service reps

Jackie C.K. Cheung

October 12, 2023

Jackie C.K. Cheung is an associate professor of computer science at McGill University and Canada CIFAR AI Chair at Mila.

(This illustration was created by Maclean’s art director Anna Minzhulina using the generative AI image program Imagine. Minzhulina spent weeks feeding prompts into the program, inspired by the essay.)

As far as I can tell, most AI systems that currently interact with the general public—in banks, in travel bookings, in retail—are based on templates. For example, if I log into my online bank and ask a question, the bank’s AI will identify patterns and keywords in what I ask and produce some predefined response based on what it detects. It’s not responding on the fly. In the next 10 years, this will change. The general public will encounter avatars integrated with generative AI in most service interactions—when they want to get something done, or when they want to find information. You’ll be able to chat with them and ask them questions, and they will generate original and hopefully correct answers. These models are trained on massive amounts of pre-existing data to learn which words appear near each other, then further trained to operate in a particular scenario, like banking. That lets them produce new outputs that are appropriate to a particular context.
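
To make the contrast concrete, here is a minimal sketch of the template-based approach described above: match a keyword, return a canned string. The keywords and responses are hypothetical, not any real bank’s.

```python
# A hypothetical template-based responder: it matches keywords and returns
# predefined text, never composing anything new.

TEMPLATES = {
    "balance": "Your current balance is shown on the Accounts page.",
    "transfer": "To transfer funds, go to Move Money and follow the steps.",
    "card": "To report a lost card, call the number on your statement.",
}

def template_reply(message: str) -> str:
    """Return the first predefined response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in TEMPLATES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand. Please contact an agent."

# A generative avatar, by contrast, would pass the message (plus context about
# the bank's products and policies) to a trained language model and return
# newly composed text instead of a canned string.

print(template_reply("How do I transfer money to my sister?"))
```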

Some avatars might help you navigate a particular product and its features. Others might help you manage your day, scheduling meetings and giving you reminders. A few weeks ago, I double-booked something for a Monday and didn’t notice until the day of. An AI agent could catch that conflict and ask, “Are you sure you want to do this?” At a higher level of involvement, we could trust AI to do even more, like independently arranging your schedule for the next quarter. That’s an application that becomes possible if we choose to give AI that power.
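
The double-booking check itself is simple enough to sketch. Below is a hypothetical version of the conflict test a scheduling agent might run before confirming a new event; the event structure and dates are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    title: str
    start: datetime
    end: datetime

def overlaps(a: Event, b: Event) -> bool:
    """Two events conflict if each starts before the other ends."""
    return a.start < b.end and b.start < a.end

def check_new_event(calendar: list[Event], new_event: Event) -> str:
    """Warn before booking anything that collides with an existing event."""
    conflicts = [e for e in calendar if overlaps(e, new_event)]
    if conflicts:
        titles = ", ".join(e.title for e in conflicts)
        return f"This overlaps with {titles}. Are you sure you want to do this?"
    return "Booked."

calendar = [Event("Team meeting", datetime(2023, 10, 16, 10), datetime(2023, 10, 16, 11))]
print(check_new_event(calendar, Event("Dentist", datetime(2023, 10, 16, 10, 30),
                                      datetime(2023, 10, 16, 11, 30))))
```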

MORE: The future of AI—and Canada’s place in it

To me, the most beneficial way companies can use these avatars is to create more personalized interactions with consumers. Suppose you’re looking for a new lamp for your home. An avatar would ask which room the lamp would be in, and you’d give your answer. Then it would ask what style of lamp you’re looking for, what the colour scheme is in that room, whether you want integrated LED lights and how much space you have. You, as a consumer, may not even have thought about these things. But the avatar has been trained to.
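
This kind of guided questioning is often implemented as slot filling: the system keeps a list of attributes it needs and asks about each one it hasn’t filled yet. Here is a minimal sketch; a deployed avatar would generate the questions with a language model rather than read them from a fixed table.

```python
# Hypothetical slots for the lamp conversation described above.
SLOTS = {
    "room": "Which room will the lamp be in?",
    "style": "What style of lamp are you looking for?",
    "colour_scheme": "What is the colour scheme in that room?",
    "integrated_led": "Do you want integrated LED lights?",
    "available_space": "How much space do you have?",
}

def run_interview() -> dict:
    """Ask about each unfilled slot in turn, then return the completed profile."""
    answers = {}
    for slot, question in SLOTS.items():
        answers[slot] = input(question + " ")
    return answers

if __name__ == "__main__":
    profile = run_interview()
    print("Searching the catalogue for:", profile)
```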

Larger corporations will be the first to use these generative-AI avatars, maybe in the next five years. These companies will have the resources to hire people to make sure that generative AI adapts to the company’s products, offerings and policies. They’ll also have the bandwidth to generate training data: the AI system needs to know what products are offered, their characteristics, and how those products should be recommended, all of which takes training. In the short term, the technology will likely be out of reach for small businesses until we develop systems that need less training data.
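
What might that training data look like? One plausible form, sketched below, is pairs of customer queries and desired catalogue-grounded responses, stored one JSON record per line. The record layout is hypothetical, loosely modelled on common chat fine-tuning formats; real schemas vary by vendor.

```python
import json

# Hypothetical company-specific training examples: a customer query paired
# with the response the company wants, grounded in its actual catalogue.
examples = [
    {
        "messages": [
            {"role": "user",
             "content": "I need a reading lamp for a small desk."},
            {"role": "assistant",
             "content": "The Lumo Mini fits desks under 60 cm and has an "
                        "integrated LED. Would you like a warm or cool tone?"},
        ]
    },
]

# Write one JSON object per line, the layout many fine-tuning pipelines expect.
with open("finetune_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```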

In this initial wave of deployment, companies may use AI systems as a cost-saving measure, so they don’t need to employ as many human customer service representatives. I think that would be a mistake. Although these systems are very powerful and can generate very fluent-sounding responses, those responses can often be incorrect. This is sometimes called the problem of hallucination. For example, an AI avatar might suggest products that don’t exist. Or, if a customer has a complaint, it might generate a solution that’s too strict or too generous—perhaps it would promise to reimburse the consumer even when that doesn’t follow company policy. For the moment, at least, hallucination means you cannot trust AI systems to make decisions.
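
One common mitigation is to keep the model out of the final decision: have it propose a structured answer, then validate that answer against ground truth before a customer ever sees it. Here is a minimal sketch under that assumption; the catalogue and reply format are invented.

```python
# A hypothetical catalogue the avatar's recommendations must be checked against.
CATALOGUE = {
    "Lumo Mini": {"price": 49.99},
    "Lumo Floor": {"price": 129.99},
}

def validate_recommendation(model_output: dict) -> str:
    """Refuse to surface a recommendation for a product that doesn't exist."""
    product = model_output.get("product")
    if product not in CATALOGUE:
        # The model hallucinated a product; fall back instead of passing it on.
        return "Let me connect you with a representative for recommendations."
    return f"I'd suggest the {product} (${CATALOGUE[product]['price']})."

# A hallucinated item is caught before it reaches the customer:
print(validate_recommendation({"product": "Lumo Galaxy Pro"}))
print(validate_recommendation({"product": "Lumo Mini"}))
```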

Right now, these systems also have an incomplete understanding of language variation, so minor differences in what customers say—even if they mean the same thing—can produce vastly different outputs. The tech sometimes even generates the exact opposite of what it’s supposed to, because negation can be expressed in many ways and a model may fail to register the negation marker. A colleague of mine asked a system what to do if you have pain in your chest, and the generated answer included a bunch of recommendations. But the system had missed the negation: it turned out these were all things you should not do.
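
Robustness to this kind of variation can at least be tested. The sketch below probes a deliberately brittle keyword-based responder (a stand-in for whatever system is being evaluated) with paraphrases of the same request, including a negated one, and exhibits exactly the failure modes described.

```python
def answer(query: str) -> str:
    # Deliberately brittle: keys on "pain" and "chest", ignores negation.
    if "pain" in query and "chest" in query:
        return "Seek medical attention."
    return "No advice available."

paraphrases = [
    "What should I do if I have pain in my chest?",
    "My chest hurts. What should I do?",          # same meaning, missed keyword
    "What should I avoid doing for chest pain?",  # negated intent, same output
]

for q in paraphrases:
    print(f"{q!r} -> {answer(q)}")
# The first and third get identical answers despite opposite intent,
# and the second is missed entirely: the failure modes described above.
```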

MORE: We’ll develop new drugs in months, not decades

I worry that this technology will be the next self-checkout counter: it will save companies labour costs while downloading the work to shoppers, forcing them to interact with AI before they can reach a human. This is already happening, and it’s happening with clunky, hard-to-use systems. In the coming years, we will need to make sure that systems are more intuitive to use, and we need to address issues of safety, correctness and hallucination. The power of consumer protection bureaus will need to expand to help deal with AI-related issues, and there might be additional regulations about how AI interacts with consumers. If the AI makes a promise, for example, does the company have to honour it? 

We’ll also need careful human auditing to make sure these systems treat everybody equally. These systems are trained on human-generated data, which contain biases and inequities. In some instances, AI systems trained on general text make stereotypical associations, such as between gender and certain occupations. AI might also generate different responses for different demographics based on their speech patterns. For example, English spoken by second-language speakers can have pronunciation, grammatical and lexical differences from Standard English. An AI trained on biased data could make assumptions based on these nuances.
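
A basic audit along these lines can be automated: pose the same request in standard and second-language phrasings and flag responses that diverge. The sketch below uses a deliberately brittle placeholder model so the audit has something to catch; everything in it is hypothetical.

```python
def avatar_reply(message: str) -> str:
    # Placeholder for the deployed system under audit, made deliberately
    # brittle: it treats a common second-language phrasing differently.
    if "would like to" in message:
        return "Of course! Your refund has been approved."
    return "Returns require a receipt and manager approval."

# Pairs of requests with the same meaning in different phrasings.
variant_pairs = [
    ("I would like to return this lamp, please.",
     "I want return this lamp, please."),  # common L2 grammatical variation
]

for standard, variant in variant_pairs:
    a, b = avatar_reply(standard), avatar_reply(variant)
    status = "OK" if a == b else "DIVERGES"
    print(f"[{status}] {standard!r} vs. {variant!r}")
```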

In the next few decades, we must develop technology we’re confident in—transparent AI that works well, that treats users fairly, that is tested for safety by third parties. Those systems’ performance will improve as they adapt to how we use them. By the same token, we’ll get much better at interacting with the technology, just like we’re better at querying search engines and detecting spam than we were two decades ago. Eventually, we’ll know when to trust the systems and when not to. 

We reached out to Canada’s top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction—but they’re coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean’s.
