The government wants the UK to lead AI regulation, but technology is developing at a breakneck pace
By Cliff Saran, Managing Editor
Published: 07 Jul 2023 15:45
The House of Lords has put out a call for evidence as it begins an inquiry into the seismic changes brought about by generative AI (artificial intelligence) and large language models.
The speed of development and the lack of understanding about these models’ capabilities have led some experts to warn of a credible and growing risk of harm. For instance, the Center for AI Safety has issued a statement, with several tech leaders as signatories, urging those involved in AI development and policy to prioritise mitigating the risk of extinction from AI. But others, such as former Microsoft CEO Bill Gates, believe the rise of AI will free people to do work that software can never do, such as teaching, caring for patients and supporting the elderly.
According to figures quoted in a report by Goldman Sachs, generative AI could add roughly £5.5tn to the global economy over 10 years. The investment bank’s report estimated that 300 million jobs could be exposed to automation, but other roles could also be created in the process.
Large models can generate contradictory or fictitious answers, meaning their use in some industries could be dangerous without proper safeguards. Training datasets can contain biased or harmful content, and intellectual property rights over the use of training data are uncertain. The ‘black box’ nature of machine learning algorithms makes it difficult to understand why a model follows a course of action, what data were used to generate an output, and what the model might be able to do next, or do without supervision.
Baroness Stowell of Beeston, chair of the committee, said: “The latest large language models present enormous and unprecedented opportunities. But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.”
Among the areas on which the committee is seeking information and evidence are how large language models are expected to develop over the next three years, the opportunities and risks they present, and whether the UK’s regulators have sufficient expertise and resources to respond to them.
“This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally we must not jump to conclusions amid the hype,” Stowell said.
“Our inquiry will therefore take a sober look at the evidence across the UK and around the world, and set out proposals to the government and regulators to help ensure the UK can be a leading player in AI development and governance.”