The standards entity’s ARIA program attempts to establish guidelines on large language model (LLM) risks — a ‘delicate and challenging concept,’ according to industry experts.
The National Institute of Standards and Technology (NIST) on Tuesday announced an extensive effort to try to test large language models (LLMs) “to help improve understanding of artificial intelligence’s capabilities and impacts.”
NIST’s new Assessing Risks and Impacts of AI (ARIA) program will “assess the societal risks and impacts of artificial intelligence systems,” the NIST statement said, including ascertaining “what happens when people interact with AI regularly in realistic settings.”
The NIST effort will include a “testing, evaluation, validation and verification (TEVV) program intended to help improve understanding of artificial intelligence’s capabilities and impacts.”
In an email sent to CIO.com in response to questions, a NIST representative said that three LLM capabilities will be initially explored.
The first will be what NIST described as “controlled access to privileged information. Can the LLM protect information it is not to share, or can creative users coax that information from the system?”
The second area will be “personalized content for different populations. Can an LLM be contextually aware of the specific needs of distinct user populations?”
The third area will be “synthesized factual content. [Can the LLM be] free of fabrications?”
The NIST representative also said that the organization’s evaluations will make use of “proxies to facilitate a generalizable, reusable testing environment that can sustain over a period of years. ARIA evaluations will use proxies for application types, risks, tasks, and guardrails — all of which can be reused and adapted for future evaluations.”
Industry impacts
HackerOne co-founder Michiel Prins said one good thing about the NIST move is that its end goal is to deliver guidance — not regulation.
“NIST’s new ARIA program focuses on assessing the societal risks and impacts of AI systems in order to offer guidance to the broader industry. Guidance, over regulation, is a great approach to managing the safety of a new technology,” Prins said. “When you don’t know the full extent of the risks and opportunities of something like AI, it’s impossible to know what regulation will be most effective, but guidance can help build consensus around general — but flexible — security and safety best practices.”
Prins added that NIST’s efforts are not significantly different than what many in the industry are already doing.
“In some ways, [NIST’s] goals for the industry at scale are very similar to the outcomes many aim for at the individual organizational level when engaging AI red-teaming services. Both focus on evaluating AI’s real-world functionality and safety,” Prins said. NIST’s “methodologies focus on testing AI within societal contexts, while red teaming focuses on identifying vulnerabilities and biases within a particular system. Both help ensure AI systems’ security and trustworthiness.”
The Biden Administration “is focused on keeping up with constantly evolving technology,” which is something that many administrations have struggled with, arguably unsuccessfully, said Brian Levine, a managing director at Ernst & Young. Indeed, Levine said that he sees some current efforts — especially with generative AI — potentially going in the opposite direction, with US and global regulators digging in “too early, while the technology is still very much in flux.”
In this instance, though, Levine said that he saw the NIST efforts as promising, given NIST’s long and illustrious history of accurately conducting a wide range of technology testing. One of NIST’s first decisions, he said, will be to figure out “the type of AI code that is the best to test here.” Some of that may be influenced by which organizations volunteer to have their code examined, he said.
Some AI officials said that it would be difficult to analyze LLMs in a vacuum, given that risks are dictated by their use. Still, Prins said that evaluating just the code is valuable.
“In security, a workforce needs to be trained on security best practices, but that doesn’t negate the value of anti-phishing software. The same logic applies to AI safety and security: These issues are big and need to be addressed from a lot of different angles,” Prins said. “How people abuse AI is a problem and that should be addressed in other ways, but this is still a technology — any improvements we can make to simplify how we use safe and secure systems is beneficial in the long run.”
Crystal Morin, a cybersecurity strategist at Sysdig and a former analyst at Booz Allen Hamilton, said she saw the NIST effort as “a very delicate and challenging concept to undertake.”
“It requires evaluators to remain completely unbiased as they evaluate applications in context with data input and output as they search for a means to accurately report findings. I have no doubt that getting this right will be time-consuming,” she said, adding that although the NIST findings may prove helpful to IT leaders, the results will have to evaluated and retested in the enterprise’s environment with their own systems.
“AI security is not a one-size-fits-all endeavor. What is considered a societal risk will vary depending on the region, sector, and size of a given business,” Morin said. “The [NIST] program is a wonderful stepping stone for organizations looking to implement AI applications, but AI risk and impact assessments must also be done internally to meet individual risk and mitigation needs.”
The NIST statement also quoted some government officials stressing the importance of getting a handle on LLM risks and benefits.
“In order to fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios — and that’s exactly what we’re doing with this program,” said US Commerce Secretary Gina Raimondo. “With the ARIA program, and other efforts to support Commerce’s responsibilities under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are pulling every lever when it comes to mitigating the risks and maximizing the benefits of AI.”
Reva Schwartz, NIST Information Technology Lab’s ARIA program lead, spoke of the criticality of testing AI functions in controlled laboratory settings and applying real-world factors.
“Measuring impacts is about more than how well a model functions in a laboratory setting,” Schwartz said. “ARIA will consider AI beyond the model and assess systems in context, including what happens when people interact with AI technology in realistic settings under regular use. This gives a broader, more holistic view of the net effects of these technologies.”
SUBSCRIBE TO OUR NEWSLETTER
From our editors straight to your inbox
Get started by entering your email address below.
>>> Read full article>>>
Copyright for syndicated content belongs to the linked Source : CIO – https://www.cio.com/article/2130426/nist-launches-ambitious-effort-to-assess-llm-risks.html