Lord Chris Holmes explains why he has introduced a Private Member’s Bill on AI regulation for discussion in Parliament – and the important precedents he hopes it will set
By Lord Chris Holmes
Published: 28 Nov 2023
It was a privilege to introduce my Private Member’s Bill on artificial intelligence (AI) into the House of Lords this month. In the Bill I have tried to incorporate much of what I believe we need to sort, in short order, when it comes to AI.
Every King’s Speech offers members of both Houses of Parliament the opportunity to put a potential law forward for parliamentary consideration. It’s a ballot – so Lady (or indeed Lord) Luck needs to be on your side. If you come in the top 25 or so, then your Bill has a good chance of getting a full hearing at “second reading” and from there a chance – slim but still a chance – of making it onto the statute book.
This year I was one of the lucky ones and my proposed Private Member’s Bill – the Artificial Intelligence (Regulation) Bill – was drawn near the top and has been introduced, with second reading to be scheduled in the new year.
I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it.
The AI Authority
The first section sets out the requirements for an AI Authority. In no sense do I see this as the creation of an outsized, do-it-all regulator. Rather, the role is one of coordination, ensuring that all the relevant existing regulators address their obligations in relation to AI.
Setting up AI regulation in this horizontal rather than vertical fashion should give a better chance of alignment, ensuring a consistent approach across industries and applications rather than the potentially piecemeal approach likely if regulation is left only to individual regulators. This horizontal view should also allow gaps in coverage to be more clearly identified and addressed.
The proposed AI Authority should also undertake a review of all relevant existing legislation, such as consumer protection and product safety, to assess its suitability to address the challenges and opportunities presented by AI.
It is critical that the AI Authority is both agile and adaptable, and very much forward-facing. To enable this, it must conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends.
Building on the UK’s sound foundation of principles-based regulation, AI regulation should deliver safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Transparency and testing
Turning to business, any business which develops, deploys, or uses AI should be transparent about it; test it thoroughly and transparently; and comply with applicable laws, including in relation to data protection, privacy, and intellectual property.
To assist in this endeavour, the concept of sandboxes can be brought positively to bear. We have seen the success of the fintech regulatory sandbox, replicated in well over 50 jurisdictions around the world. I believe a similar approach can be deployed in relation to AI developments and, if we get it right, it could become an export in its own right.
Building on amendments I put forward to the recently passed Financial Services and Markets Act 2023, the AI Bill also proposes a general responsibility on every business developing, deploying, or using AI to have a designated AI officer.
The AI officer will be required to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business and to ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased.
Turning to intellectual property (IP), I suggest that any person involved in training AI must supply to the AI Authority a record of all third-party data and IP used in that training, and assure the AI Authority that all such data and IP is used with informed consent. We need to ensure, just as in the non-AI world, that all those who create and come up with ideas can be confident that their creations, their IP and their copyright are fully protected in this AI landscape.
Public engagement
Finally, none of this amounts to anything without effective, meaningful, and ongoing public engagement. No matter how good the algorithm, the product or the solution, if no one is “buying it”, then none of it gets us anywhere.
In 2020 our Lords Select Committee on Democracy and Digital Technologies warned that the proliferation of misinformation and disinformation would “result in the collapse of public trust, and without trust democracy as we know it will simply decline into irrelevance.” The mainstreaming of AI tools may well only accelerate this process, but we have the opportunity here to use the same technology to engage the public in a way that builds trust.
To this end, it is essential that the Authority implements a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI; and consults the general public as to the most effective frameworks for this engagement, having regard to international comparators.
As everyone has now descended safely from the government’s AI Safety Summit, perhaps we are left to ponder what emerged. For me, as important as anything is the fact that it shone a light on something truly worthy of our national pride.
Two generations ago, a diverse team at Bletchley Park gathered at one of the darkest hours in our history. Together, they developed and deployed the leading-edge technology of their time to defeat one of the greatest threats humanity has ever faced. We sadly face similar challenges today. If we get it right, human-led AI can once again defeat the darkness and enable so much light. I hope that, through mass support, my Bill can play its small part in enabling the opportunities while staring down the risks.
Lord Chris Holmes of Richmond is a member of the House of Lords, where he sits on the Select Committee on Science and Technology. He is also a passionate advocate for the potential of technology and the benefits of diversity and inclusion and is co-chair of parliamentary groups on fintech, artificial intelligence, blockchain, assistive technology and the 4th Industrial Revolution. An ex-Paralympic swimmer, he won nine gold, five silver and one bronze medal across four Games, including a record haul of six golds at Barcelona 1992.