News
Feb 07, 2024 | 4 mins
Artificial Intelligence | Generative AI | Government
Many questions still loom around the UK government’s evaluation of models voluntarily submitted for scrutiny by Microsoft, OpenAI, Google, and Meta.
Key AI companies have urged the UK government to speed up its safety testing of their systems, raising questions about future government initiatives that may likewise hinge on technology providers opening up generative AI models to testing before new releases reach the public.
OpenAI, Google DeepMind, Microsoft, and Meta are among the companies that have agreed to let the UK's new AI Safety Institute (AISI) evaluate their models, but they aren't happy with the current pace or transparency of the evaluation, according to a report in the Financial Times, which cited sources close to the companies.
Despite their willingness to amend the models if the AISI finds flaws in their technology, the companies are under no obligation to change or delay releases of the technology based on the test outcomes, the sources said.
The companies' pushback on the AISI evaluation includes requests for more detail on the tests being conducted, how long they will take, and how the feedback process will work, according to the report. It is also unclear whether a model will need to be resubmitted for testing every time it receives even a slight update, a requirement AI developers may find too onerous to accept.
Murky process, murky outcomes
The AI vendors' reservations appear valid, given how murky the details are on how the evaluation actually works. And with other governments considering similar AI safety evaluations, any confusion with the UK process will only grow as additional government bodies make the same (for now, voluntary) demands on AI developers.
The UK government said that testing of the AI models has already begun through collaboration with their respective developers, according to the Financial Times. The testing is based on pre-deployment access to capable AI models, even unreleased ones such as Google's Gemini Ultra, which was one of the key commitments companies signed up to at the UK's AI Safety Summit in November, according to the report.
Sources told the Financial Times that testing has focused on the risks associated with the misuse of AI, including cybersecurity and jailbreaking — the formulation of prompts to coax AI chatbots into bypassing their guardrails. Reverse-engineering automation may also be among the testing criteria, based on recently published UK government contracts, according to the report.
Neither the AI companies nor the AISI could be reached for immediate comment on Wednesday.
Other governments eye AI oversight
The outcome of that November summit was the Bletchley Declaration on AI Safety (named for Bletchley Park, the summit's location), under which 28 countries from across the globe agreed to understand and collectively manage potential AI risks by ensuring the technology is developed in a safe, responsible way.
So far, various governments around the world have launched initiatives and agencies to monitor the development of AI, amid growing concern about the pace of that development and about leaving it in the hands of technology businesses, which may be more focused on the bottom line and innovation than on global safety.
In the US, there is the US Artificial Intelligence Safety Institute, which, according to its website, is aimed at helping “equip and empower the collaborative establishment of a new measurement science” to identify “proven, scalable and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.” As this testing framework has not yet been developed, the institute is currently seeking collaborators for its mission.
Australia, too, said it soon will establish an expert advisory group to evaluate and develop options for “mandatory guardrails” on AI research and development. It is also working with the industry to develop a voluntary AI Safety Standard and options for voluntary labeling and watermarking of AI-generated materials to ensure greater transparency.
A bit further along than those nascent initiatives in the US and Australia, the EU became the first region to introduce a comprehensive set of laws to ensure that AI is being used for the economic and social benefit of the people.
Then there is the mission of UK Prime Minister Rishi Sunak to make his country a leader in taking on the existential risks of the rapid rise of AI, as the Financial Times reports. That ambition will likely inform the country's current testing of AI models, though it remains to be seen how it will affect their future development.
Copyright for syndicated content belongs to the linked Source : CIO – https://www.cio.com/article/1306424/report-ai-giants-grow-impatient-with-uk-safety-tests.html