As generative AI becomes prevalent in enterprise software, CIOs must give thought to the data it’s trained on.
ServiceNow is making generative AI accessible from more areas of its low-code development platform, putting it front and center in the chatbots enterprises are starting to use to interact with their ServiceNow applications.
But as software vendors like ServiceNow, Salesforce, and SAP offer new ways to take advantage of generative AI capabilities, such as summarizing text or generating new text or images from a simple prompt, there are risks CIOs need to consider before giving the technology free rein with their data.
Only last month ServiceNow rolled out its first generative AI tools: the ServiceNow Generative AI Controller for connecting large language models (LLMs) to its software automation platform, and Now Assist for Search, which uses those LLMs and an enterprise’s own data to generate natural language responses to queries made in a virtual agent.
Now Assist for Virtual Agent
The latest addition, Now Assist for Virtual Agent, builds on that foundation to make it easier for enterprises to employ generative AI more broadly in designing and running business processes.
Like Salesforce with its Einstein GPT product, ServiceNow has chosen to adopt generative AI in a modular way, allowing CIOs to choose which LLM provider they integrate with.
In ServiceNow’s case, the choice is initially somewhat limited to either OpenAI, the creator of GPT and other publicly available models, or Microsoft Azure, which also uses OpenAI technology. However, the company recently partnered with Nvidia to help enterprises develop custom LLMs trained on their own data, and has also worked with Hugging Face on an open-access LLM that enterprises will be able to use to build private models to match their own needs.
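In practice, that modular approach amounts to putting a thin abstraction between the platform and whichever model provider a customer selects, so the choice can be changed through configuration rather than re-engineering. The sketch below illustrates the idea in Python against the public OpenAI and Azure OpenAI REST interfaces; the class names, configuration keys, and defaults are illustrative assumptions, not ServiceNow's implementation.

```python
# Illustrative only: a provider-agnostic wrapper so an application can swap
# the underlying LLM service via configuration. The endpoints and payloads
# follow the public OpenAI and Azure OpenAI REST APIs as of mid-2023; the
# class and function names are invented for this example, not ServiceNow's.
import os
from abc import ABC, abstractmethod

import requests


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    def __init__(self, model: str = "gpt-4"):
        self.model = model
        self.api_key = os.environ["OPENAI_API_KEY"]

    def complete(self, prompt: str) -> str:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


class AzureOpenAIProvider(LLMProvider):
    def __init__(self, resource: str, deployment: str):
        # Azure routes requests to a named deployment of an OpenAI model.
        self.url = (f"https://{resource}.openai.azure.com/openai/deployments/"
                    f"{deployment}/chat/completions?api-version=2023-05-15")
        self.api_key = os.environ["AZURE_OPENAI_KEY"]

    def complete(self, prompt: str) -> str:
        resp = requests.post(
            self.url,
            headers={"api-key": self.api_key},
            json={"messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


def provider_from_config(cfg: dict) -> LLMProvider:
    # A single configuration key decides which vendor answers the prompt.
    if cfg["provider"] == "azure":
        return AzureOpenAIProvider(cfg["resource"], cfg["deployment"])
    return OpenAIProvider(cfg.get("model", "gpt-4"))
```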
That public-private distinction is crucial, said Neil Ward-Dutton, a vice president at analyst firm IDC covering AI and intelligent process automation.
“We see a lot of confusion between the public foundation models promoted by the likes of OpenAI—GPT-4 and so on—and the generative AI models that we see ultimately delivering value to corporates, which will not necessarily be public,” he said.
Many uses of generative AI will only become attractive to enterprises when they can access specialized models, protected from public access, that are trained and tuned for their industry or even just for their organization alone. Other applications, with no need for company-specific data or high levels of accuracy, can be built on public models.
“Vendors like Salesforce, ServiceNow, and others don’t always do a good job of clearly distinguishing between these two approaches,” he said. “They’re all hedging their bets, partnering with the likes of OpenAI, Google, or Anthropic for access to public models, but also partnering with Nvidia, Hugging Face, and Cohere to help them implement specialized models for customers.”
ServiceNow runs shared services internally on its Now Platform, and recently began piloting the use of generative AI in virtual agent conversations, according to the company’s CIO, Chris Bedi. It’s being used by go-to-market teams to access knowledge bases about policies and processes to facilitate contract renewals, he said.
The idea is that, rather than the virtual agent producing links to a stack of knowledge base articles that workers have to read for themselves, “We’re saying ‘here’s the bite-sized pieces of content that can help you’ at each different point in this conversation, which should boost productivity and speed,” he said.
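Under the hood, that pattern is typically retrieval plus summarization: fetch the passages most relevant to the current turn of the conversation, then ask a model to condense them into a short, grounded answer. The following sketch uses a deliberately naive keyword-overlap retriever and invented field names to show roughly how such a step could be wired up; it is not ServiceNow's code.

```python
# Sketch of retrieval plus summarization for one virtual agent turn. The
# keyword-overlap retriever, article fields, and prompt wording are invented
# for illustration; a real deployment would use proper semantic search.
from typing import Callable


def top_passages(question: str, articles: list[dict], k: int = 3) -> list[dict]:
    """Rank knowledge-base passages by crude keyword overlap with the question."""
    q_terms = set(question.lower().split())
    return sorted(
        articles,
        key=lambda a: len(q_terms & set(a["text"].lower().split())),
        reverse=True,
    )[:k]


def answer_turn(question: str, articles: list[dict],
                complete: Callable[[str], str]) -> str:
    """Build a grounded prompt from the best passages and ask the model."""
    context = "\n\n".join(
        f"[{a['title']}]\n{a['text']}" for a in top_passages(question, articles)
    )
    prompt = (
        "Using only the excerpts below, answer the employee's question in two "
        "or three sentences. If the excerpts don't cover it, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return complete(prompt)
```

A production retriever would rely on embeddings rather than keyword overlap, but the shape of the flow, fetching a little relevant context and summarizing it in place, matches the "bite-sized pieces" Bedi describes.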
Garbage in, garbage out
Bedi, too, sees a risk in giving generative AI tools access to the wrong data—but it’s not a new one.
“It’s the garbage in, garbage out problem, which IT people have faced forever,” he said—but with a twist. With traditional search tools, “If you had bad data, it would get surfaced,” he said. “But generative AI is shining a much brighter light on it because it makes information so findable and digestible for humans. You’ve really got to make sure what you’re indexing in the large language models is up to date.”
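In practice, "keeping the index up to date" can be as simple as a hygiene filter applied before any knowledge-base content is handed to the retrieval layer. The snippet below is a minimal illustration of that idea; the field names, status values, and 365-day cutoff are assumptions for the example, not ServiceNow's schema.

```python
# A minimal hygiene filter applied before knowledge-base articles are indexed
# for retrieval: drop anything unpublished or stale so the model can't surface
# it. Field names, status values, and the cutoff are illustrative assumptions.
from datetime import datetime, timedelta


def indexable(articles: list[dict], max_age_days: int = 365) -> list[dict]:
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    return [
        a for a in articles
        if a.get("status") == "published"
        and datetime.fromisoformat(a["updated"]) >= cutoff
    ]
```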
The risk of bad data showing up—which is not unique to ServiceNow’s implementation—is why IDC’s Ward-Dutton recommends CIOs question their software suppliers about the origin of any generative AI elements they include, and the data they’re trained on.
Enterprises will want to know whether the underlying model is public, or will be private to their organization, on what data it was pretrained, and how they can protect against bias in training data, he said.
Some software vendors are starting to add layers to their generative AI platforms to make the information they surface more trustworthy.
In time, said Bedi, even that could be handled by large language models. “You can have models looking at the models,” he said. “That technology is being built up very rapidly.”
How it’s done makes a difference, said Ward-Dutton, who advised CIOs to ask how a vendor’s trust layer, if there is one, actually ensures data quality. “Is it through managing how models are trained in the first place,” he said, “or in attempting to correct or minimize problems with content that the models create after the fact?”
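As a rough illustration of the second, after-the-fact approach, and of Bedi's "models looking at the models," a verifier step can ask a second model whether a generated answer is actually supported by the passages it was built from, and fall back to a safe response when it isn't. The prompt wording and fallback message below are illustrative assumptions, not any vendor's trust layer.

```python
# An after-the-fact check in the spirit of "models looking at the models":
# ask a second model whether the draft answer is supported by its sources,
# and fall back to a safe response if not. Prompt wording is illustrative.
from typing import Callable


def verified_answer(answer: str, sources: list[str],
                    complete: Callable[[str], str]) -> str:
    check = (
        "Do the source excerpts below fully support the answer? "
        "Reply with only YES or NO.\n\n"
        f"Answer: {answer}\n\nSources:\n" + "\n---\n".join(sources)
    )
    if complete(check).strip().upper().startswith("YES"):
        return answer
    return ("I couldn't verify that answer against the knowledge base; "
            "please check with the policy owner.")
```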
He advised CIOs to set up lab environments where they can test generative AI technologies safely, exploring use cases and examining the claims of vendors.
That’s what Bedi at ServiceNow did with his own technology, to see how it performed. CIOs looking to do the same could get better results by following his lead: running pilots with a mix of tenured employees and new starters.
“We thought that was important because people who joined recently will ask questions tenured people will not because they just know it through tribal knowledge,” he said. “And tenured people will spot things that feel or look funny a lot better than people who have just joined.”
That enabled him to get started without worrying about whether he had a perfect content repository for the virtual agent to draw on.
A limited set of ServiceNow customers already has access to Now Assist for Search and, now, Now Assist for Virtual Agent. For the others, the company plans to make the features generally available as part of its Vancouver platform release in September 2023.