The European Union and the US have agreed to increase co-operation in the development of technologies based on artificial intelligence (AI), placing a particular emphasis on safety and governance.
The announcement came at the end of a meeting of the EU-US Trade and Technology Council in Leuven, Belgium, on Friday, and followed this week’s broadly similar pact between the US and UK on AI safety.
The EU and US want to foster scientific information exchange between AI experts on either side of the Atlantic in areas such as developing benchmarks and assessing potential risks. The emphasis is on developing “safe, secure, and trustworthy” AI technologies.
Developing compatible regulatory environments
The two parties agreed to minimise divergence in their respective emerging AI governance and regulatory systems.
In a statement, the EU and US sketched out areas of existing collaboration on AI applications: “Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction.”
Working well together requires agreement on the meaning of terms, and to that end the two parties released an updated edition of their EU-US Terminology and Taxonomy for Artificial Intelligence, now available for download.
The European Union is seeking to regulate the development of artificial intelligence in the region with a recently approved AI Act.
Despite calls for AI regulation in the US from industry heavyweights such as Google, Microsoft, and OpenAI, partisan splits in Congress make it unlikely that agreement will be reached before fresh Congressional elections in November.
The US government has, however, taken steps to put its own house in order by developing a strategy on the use of AI for federal agencies.
AI guardrails
Experts quizzed by CIO.com broadly welcomed the agreement between the EU and US as a positive development for the fast-moving field of artificial intelligence technologies.
Gaurav Pal, CEO and founder of stackArmor, an IT security consulting company and a member of the US AI Safety Institute Consortium, told CIO.com, “This is an important step in helping develop a common set of AI guardrails and frameworks between the EU and the US.”
Pal continued: “This will hopefully avoid creating silos and friction in conducting business between the US and EU for US AI companies.”
Business leaders should keep abreast of the rapidly emerging regulatory framework around AI because it is likely to impact business operations across multiple sectors, perhaps akin to how GDPR has impacted US firms conducting business in the EU.
The desire to steer away from clashing AI regulatory regimes on either side of the Atlantic is therefore welcome, according to Pal.
“The co-operation agreement is very important as it seeks to develop a common set of regulatory standards and frameworks thereby reducing the cost and complexity of compliance,” Pal explained.
Researchers gave the development of US-EU coordination on AI a cautious welcome, while looking for more detail on the specifics.
“AI regulation necessitates joint efforts from the international community and governments to agree a set of regulatory processes and agencies,” Angelo Cangelosi, professor of machine learning and robotics at the University of Manchester in England, told CIO.com.
“The latest UK-US agreement is a good step in this direction, though details on the practical steps are not fully clear at this stage, but we hope that this will continue at a wider international level, for example with integration with the EU AI agencies, as well as in the wider UN framework,” he added.
Risks of AI misuse
Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, argued that focusing on the regulation of commercial AI offerings loses sight of the real and growing threat: the misuse of artificial intelligence by criminals to develop deep fakes and more convincing phishing scams.
“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats as they mostly focus on commercial AI offerings that criminals do not use,” Carlsson said. “As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”
“At this stage in the development of AI, investment in testing and safety is far more effective than regulation,” Carlsson argued.
Research on how to effectively test AI models, mitigate their risks and ensure their safety, carried out through new AI Safety Institutes, represents an “excellent public investment” in ensuring safety whilst fostering the competitiveness of AI developers, Carlsson said.
Legal challenges
Many mainstream companies are using AI to analyze, transform, and even produce data – developments that are already raising legal challenges on myriad fronts.
Ben Travers, a partner at law firm Knights who specializes in AI, IP and IT issues, explained: “Businesses should have an AI policy, which dovetails with other relevant policies, such as those relating to data protection, IP and IT procurement. The policy should set out the rules on which employees can (or cannot) engage with AI.”
Recent instances have raised awareness of the risks to employers when employees upload otherwise protected or confidential information to AI tools, while the technology also poses issues in areas such as copyright infringement.
“Businesses need to decide how they are going to address these risks, reflect these in relevant policies and communicate these policies to their teams,” Travers concluded.
Copyright for syndicated content belongs to the linked Source : CIO – https://www.cio.com/article/2083973/eu-and-us-agree-to-chart-common-course-on-ai-regulation.html