US and UK sign agreement to test the safety of AI models

The US and the UK have signed an agreement to test the safety of large language models (LLMs) that underpin AI systems.

The agreement, or memorandum of understanding (MoU), signed in Washington on Monday by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, commits both countries to aligning their scientific approaches and working closely together to develop suites of evaluations for AI models, systems, and agents.

Work on developing frameworks to test the safety of LLMs, such as those built by OpenAI and Google, will be taken up immediately by the UK's new AI Safety Institute (AISI) and its US counterpart, Raimondo said in a statement.

The agreement comes just months after the UK government hosted the global AI Safety Summit in November last year, where several countries, including China, the US, the EU, India, Germany, and France, agreed to work together on AI safety.

Those countries signed an agreement, dubbed the Bletchley Declaration, to establish a common line of thinking on overseeing the evolution of AI and ensuring that the technology advances safely.

That declaration followed an open letter, signed in May last year by hundreds of tech industry leaders, academics, and other public figures, warning that the evolution of AI could lead to an extinction event.

The US has also taken steps to regulate AI systems and the LLMs behind them. In October last year, the Biden administration issued a long-awaited executive order that hammered out clear rules and oversight measures to ensure that AI is kept in check while also providing paths for it to grow.

Earlier this year, the US government created an AI safety advisory group that includes AI creators, users, and academics, with the goal of putting guardrails on AI use and development.

The advisory group, named the US AI Safety Institute Consortium (AISIC) and housed within the National Institute of Standards and Technology (NIST), was tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.

Several major technology firms, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, joined the consortium to ensure the safe development of AI.

Similarly, in the UK, firms such as OpenAI, Meta, and Microsoft have signed voluntary agreements to open up their latest generative AI models for review by the country’s AISI, which was set up at the UK AI Safety Summit.

The EU has also made strides in the regulation of AI systems. Last month, the European Parliament approved the world’s first comprehensive law to govern AI. According to the final text, the regulation aims to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights, and environmental protection against harmful effects of artificial intelligence systems.”
