House of Lords Report Urges UK Government to Reevaluate AI Safety Focus

A report released by the House of Lords Communications and Digital Committee suggests that the UK government needs to broaden its perspective on AI safety to avoid falling behind in the rapidly evolving AI landscape.

The report, which followed extensive evidence gathering from stakeholders including big tech companies, academia, venture capitalists, the media and government, highlighted the need for the government to focus on the more immediate security and societal risks posed by large language models (LLMs).

The committee’s chair, Baroness Stowell, asserted that the rapid development of large language models is comparable to the introduction of the internet, stressing the importance of the government adopting a balanced approach.

Stowell said: “We need to address risks in order to be able to take advantage of the opportunities — but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI gold rush.”

The report’s findings align with concerns raised by the National Cyber Security Centre (NCSC), a division of GCHQ, which released its own report indicating that AI is poised to escalate the global ransomware threat over the next two years.

The NCSC noted that AI is already being exploited in malicious cyber activity and predicted that it will amplify cyberattacks, particularly ransomware.

Responding to the heightened threat, the government, in collaboration with the private sector, agreed the Bletchley Declaration at the AI Safety Summit held at Bletchley Park in November.

This initiative aimed to manage the risks of frontier AI and ensure its safe and responsible development. The NCSC urged organisations and individuals to follow its ransomware and cybersecurity advice to strengthen defences against cyberattacks.

According to the National Crime Agency (NCA), cybercriminals are already developing criminal Generative AI (GenAI) and offering ‘GenAI-as-a-service,’ making improved capability accessible to those willing to pay.

A key aspect of the House of Lords’ Communications and Digital Committee’s report revolves around the debate over regulating AI technology, with Meta’s chief AI scientist Yann LeCun and others advocating for openness in AI development.

The report addresses concerns about “closed” versus “open” ecosystems, highlighting the competition dynamics that will shape the AI and LLM market and influence regulatory oversight. The committee stressed the need for an explicit AI policy objective to prevent regulatory capture by current industry incumbents.

The report recognised the nuanced nature of the debate, acknowledging the perspectives of major tech players such as Microsoft and Google.

While expressing support for “open access” technologies, these companies underscored the significant security risks associated with openly available large language models.

Microsoft, for example, contends that not all actors using AI have benign intentions and stresses the importance of guarding against intentional misuse and unintentional harm.

The tension between openness and security is further illustrated by contrasting views on the accessibility of open LLMs.

While open models facilitate transparency and broader access, concerns have been raised about potential misuse and the need for robust safeguards.

Irene Solaiman, global policy director at AI platform Hugging Face, emphasises the importance of disclosure and transparency in assessing risks.

Ian Hogarth, chair of the UK government’s AI Safety Institute, raises concerns about the current situation where private companies define the frontier of LLMs and generative AI, leading to a potential conflict of interest.

As such, the committee recommends “enhanced governance measures in DSIT [Department for Science, Innovation and Technology] and regulators to mitigate the risks of inadvertent regulatory capture and groupthink”.

A recurring theme in the report is the assertion that the AI safety debate has been overly focused on catastrophic risks, diverting attention from more immediate issues. While advocating for mandatory safety tests for high-risk models, the report dismisses concerns about existential risks as exaggerated distractions. It suggests that the government should prioritise pressing issues, such as the ease with which misinformation and disinformation can be generated using large language models.

The report stressed the need for prompt action on certain fronts, including the use of copyrighted material to train LLMs. It asserted that while LLMs require massive datasets to function, using copyrighted material without permission or compensation is a concern the government can address swiftly.

Despite recognising credible security risks associated with powerful AI models, the report rejected the idea of an outright ban, deeming it disproportionate and likely ineffective. Instead, it recommended empowering the government’s AI Safety Institute to develop new ways to identify and track models deployed in real-world scenarios, striking a balance between monitoring and mitigating potential impacts.

“It is almost certain existential risks will not manifest within three years, and highly likely not within the next decade,” the report concluded.

“As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities — but this must not distract it from capitalising on opportunities and addressing more limited immediate risks.”

“The National Cyber Security Centre assesses that large language models will ‘almost certainly be used to generate fabricated content… and that deep fake campaigns are likely to become more advanced in the run-up to the next nationwide vote, scheduled to take place by January 2025’,” it said.
