The dominance of big tech firms, a focus on speculative risks over real-world harms, and the exclusion of affected workers mean the AI Safety Summit is a wasted opportunity, say civil society groups
By Sebastian Klovig Skelton, Senior reporter
Published: 30 Oct 2023 13:55
The UK government has excluded the communities and workers most affected by artificial intelligence (AI) from its upcoming AI Safety Summit, which will be a closed shop dominated by big tech firms, say more than 100 civil society organisations in an open letter branding the event “a missed opportunity”.
Released ahead of the official AI Summit at Bletchley Park on 1 and 2 November, the letter to prime minister Rishi Sunak – signed by a variety of human rights organisations, civil society groups, unions, academics and other prominent voices from within the tech community – also highlights the summit’s narrow focus on the “future, apocalyptic risks” of AI at the expense of everyday harms already occurring, and ultimately calls into question how effective the forum will be in making the technology truly “safe and beneficial”.
It said that, despite the government acknowledging that AI “will fundamentally alter the way we live, work and relate to one another”, there was no representation of communities or workers affected by AI at the summit, while the involvement of civil society groups has been selective and limited.
“This is a missed opportunity. As it stands, the summit is a closed-door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules,” it said.
“For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now. This is about being fired from your job by algorithm or unfairly profiled for a loan based on your identity or postcode.
“People are being subject to authoritarian biometric surveillance, or to discredited predictive policing. Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.”
It added that, for the summit itself and the subsequent AI safety work to be successful, those most exposed to the harms of AI must have a seat at the table and meaningful input into the decision-making process.
“The inclusion of these voices will ensure that the public and policymakers get the full picture. In this way, we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world,” it said.
In a speech delivered at the Royal Society on 26 October ahead of the summit, Sunak noted that while the only people currently testing the safety of the technology are the very organisations developing it, the UK would not rush to regulate it.
“This is a point of principle – we believe in innovation, it’s a hallmark of the British economy, so we will always have a presumption to encourage it, not stifle it. And in any case, how can we write laws that make sense for something we don’t yet fully understand?” he said. “Instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government. To do that, we’ve already invested £100m in a new taskforce, more funding for AI safety than any other country in the world.”
He also said that while the existential risks of AI were “not a risk that people need to be losing sleep over right now… the consequences would be incredibly serious” if they did manifest themselves, hence the focus on such catastrophic outcomes at the summit.
Sunak added that it would be a priority of the summit to “agree the first ever international statement about the nature of these risks”, so that a shared understanding could be used as a basis for future action.
Signatories’ further comments
Notable signatories include Connected by Data; the Trade Union Congress (TUC); and the Open Rights Group (ORG) – the three of which led on coordinating the letter – as well as Mozilla; Amnesty International; Eticas Tech; the Tim Berners-Lee-founded Open Data Institute; Liberty; Big Brother Watch; Worker Info Exchange; Privacy International; Tabitha Goldstaub, former chair of the UK’s AI Council; and Neil Lawrence, a professor of machine learning at the University of Cambridge, who was previously interim chair of the Centre for Data Ethics and Innovation’s (CDEI) advisory board before it was quietly disbanded by the government in early September 2023.
Union-wise, the letter was signed by the National Education Union, the National Union of Journalists, United Tech and Allied Workers, Unite, Unison, Prospect Union and the Transport Salaried Staffs Association (TSSA), among others.
Union federations representing hundreds of millions of workers from across the globe also signed, including the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), which represents 60 unions and 12.5 million American workers; the European Trade Union Confederation (ETUC), which represents 45 million members from 93 trade union organisations in 41 European countries; and the International Trade Union Confederation, which represents 191 million trade union members in 167 countries and territories.
Adam Cantwell-Corn, a senior campaigns and policy officer at Connected by Data, said the summit’s domination by “narrow interests” was unacceptable, and that the technology must be shaped by a range of expertise, perspectives and communities that have an equal seat at the table. “The summit demonstrates a failure to do this,” he added.
Cantwell-Corn said AI policymaking in general was in need of a rethink, both domestically and internationally, “to steer these transformative technologies in a democratic and socially useful direction”.
Kate Bell, assistant general secretary at the TUC, added that it was “hugely disappointing” to see unions and wider civil society excluded from the summit, especially in the face of the technology already being used to make “life-changing decisions” about people.
“This event was an opportunity to bring together a wide range of voices to discuss how we deal with immediate threats and make sure AI benefits all,” she said. “It shouldn’t just be tech bros and politicians who get to shape the future of AI.”
Abby Burke, a policy manager for data rights and privacy at ORG, said the summit’s limited scope and attendees meant the government had “bungled what could have been an opportunity for real global AI leadership”.
She added: “The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale.
“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its summit, allowing businesses who create and profit from AI systems to set the UK’s agenda.”