Salesforce’s Einstein 1 platform to get new prompt-engineering features

The two new features, still in the research phase, are expected to help developers do prompt engineering faster, thereby speeding up the development of generative AI applications.

Salesforce is working on two new prompt-engineering features for its Einstein 1 platform, aimed at speeding up the development of generative AI applications in the enterprise, a top executive of the company said.

The two new features, a testing center and prompt-engineering suggestions, are the fruit of significant investment in the company’s AI engineering team, said Claire Cheng, vice president of machine learning and AI engineering at Salesforce.

The features are expected to be released in the next few days, Cheng said, without giving an exact date.  

Salesforce’s Einstein 1 platform, released in September last year, is an open platform that the company developed to enable enterprises to unify their data before developing generative AI-based applications and use cases via a low-code and no-code interface.

The platform also brings in large language model (LLM) providers such as OpenAI, Google, Cohere, and Hugging Face, along with independent software vendors.

In essence, the platform is a combination of the Salesforce Data Cloud, its Einstein Copilot, and the Einstein Trust Layer, earlier released as part of the Salesforce AI Cloud. While the Data Cloud enables enterprises to bring in various data types and datasets, the Trust Layer serves to keep customer data within Salesforce by masking it from external LLMs, warning users of potentially toxic prompts or responses, and keeping an audit trail.
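
Salesforce has not published the Trust Layer’s internals, but the mask-screen-audit pattern it describes is a familiar one. A minimal, illustrative sketch in Python, with every name, pattern, and term list hypothetical, might look like this:

```python
import re
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("trust_layer_audit")

# Hypothetical patterns; a production system would use a real PII detector
# and a trained toxicity classifier rather than keyword lists.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}
TOXIC_TERMS = {"hate", "slur"}  # placeholder list, for illustration only

def mask_pii(prompt: str) -> str:
    """Replace customer data with placeholders before it leaves the platform."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}_MASKED>", prompt)
    return prompt

def guard_prompt(prompt: str) -> str:
    """Mask PII, warn on potentially toxic content, and keep an audit trail."""
    masked = mask_pii(prompt)
    if any(term in masked.lower() for term in TOXIC_TERMS):
        audit_log.warning("Potentially toxic prompt flagged")
    audit_log.info("%s | prompt forwarded to external LLM: %r",
                   datetime.now(timezone.utc).isoformat(), masked)
    return masked  # only the masked text is sent to the LLM provider
```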

How do Einstein 1’s new prompt-engineering features work?

Einstein 1 needs the new prompt-engineering features because it currently lacks an “automatic reinforcement learning” module, according to Cheng.

This means that if an enterprise developer or admin wants to tweak the prompts or change the data underlying them, it has to be done manually, which is feasible but time-consuming.

“What the research team is actually working on for quite a while is how we learn from the feedback and then ultimately recommend the advancement of prompt engineering,” Cheng said.

The first of the two features, the testing center tool, will provide an alternative to that manual process of refining prompts, the top executive said.

The interface, which is likely to be low-code in nature, will enable enterprise developers to see how different iterations of prompt engineering perform from the end-user perspective, making manual checks unnecessary, Cheng explained. Additionally, the tool can pick the best option from the different iterations.
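
Salesforce has not detailed how the testing center will score prompt variants. As a rough sketch of the general idea, assuming a hypothetical `run_llm` client call and a user-supplied scoring function, evaluating iterations against a shared test set and picking the winner could look like this:

```python
from statistics import mean
from typing import Callable

def pick_best_prompt(
    variants: list[str],                 # candidate prompt templates with an "{input}" slot
    test_cases: list[dict],              # each: {"input": ..., "expected": ...}
    run_llm: Callable[[str], str],       # hypothetical call into the LLM provider
    score: Callable[[str, str], float],  # e.g. exact match or semantic similarity
) -> tuple[str, float]:
    """Score every prompt variant on a shared test set and return the winner."""
    best_variant, best_score = variants[0], float("-inf")
    for variant in variants:
        scores = [
            score(run_llm(variant.format(input=case["input"])), case["expected"])
            for case in test_cases
        ]
        avg = mean(scores)
        if avg > best_score:
            best_variant, best_score = variant, avg
    return best_variant, best_score
```

A real testing center would add human review and richer metrics, but the selection step ultimately reduces to a comparison like this one.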

The second feature, currently referred to as prompt engineering suggestions, will recommend to developers what it identifies as the most effective ways to refine their prompts.

Both features, according to Cheng, are based on analytics from deployments that are already live, where the company collected feedback on the answers or queries generated by the underlying LLMs for a specific application or use case.
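
Salesforce has not described how that feedback is analyzed. One common, much-simplified approach, sketched here with a hypothetical feedback-record shape, is to aggregate per-prompt approval rates and flag the versions that fall below a threshold:

```python
from collections import defaultdict

def suggest_refinements(feedback: list[dict], threshold: float = 0.7) -> list[str]:
    """Flag prompt versions whose approval rate falls below a threshold.

    Each feedback row is assumed to look like {"prompt_id": str, "helpful": bool},
    a hypothetical shape for per-response user ratings.
    """
    tallies: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [helpful, total]
    for row in feedback:
        tallies[row["prompt_id"]][1] += 1
        if row["helpful"]:
            tallies[row["prompt_id"]][0] += 1
    return [
        f"Prompt {pid}: approval {helpful / total:.0%}, consider rewording"
        for pid, (helpful, total) in tallies.items()
        if helpful / total < threshold
    ]
```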

The upcoming features, according to Amalgam Insights’ principal analyst Hyoun Park, are necessary as there is “no one standardized way to determine the quality of output associated with AI models at this point.”

“For AI to become broadly used throughout the enterprise, businesses need to have a better idea of how to configure and customize models to their specific processes, users, and needs,” Park said. By creating a more consistent model and testing process, he added, developers can focus on creating optimal models from a development and operationalization perspective, rather than having to play guessing games with their key users and process owners.

These features, according to Keith Kirkpatrick, research director at The Futurum Group, will give Salesforce an edge with its enterprise customers.

“Providing low-code tools to refine and contextualize generative AI experiences helps Salesforce ensure that its solutions can be tailored for each individual customer efficiently, while leveraging the power of a generalized platform,” Kirkpatrick said.

However, Constellation Research’s principal analyst Holger Mueller argues that the new features are geared more toward serving the company’s marketing around “trustable AI” and don’t help it much against the competition.

“While the testing center, for example, is good on delivering on the marketing message, it does not help customers as they need to move their Salesforce instances to the public cloud to make AI happen,” Mueller said.

When your AI is an upgrade or migration away, a quality-assurance capability for prompt engineering is nice to have but not immediately useful, the analyst added.
