News
Apr 29, 2024 | 3 mins
Compliance, Data Privacy, Generative AI
When the LLM hallucinates incorrect personal information, there’s no way to get it fixed, they say.
European privacy rights group noyb filed a complaint against OpenAI with the Austrian Data Protection Authority on Monday, accusing the company of breaching the European Union’s General Data Protection Regulation (GDPR).
The EU’s strict privacy rules require companies to give individuals access to the personal information held about them and to ensure that such data is accurate. That demands an audit trail for every piece of information stored about European citizens. When it comes to AI-generated content, such trails often go cold.
With regard to information generated by ChatGPT, noyb alleges there is no legal redress for so-called “hallucinations,” or wrong answers provided by artificial intelligence (AI) large language models, when it comes to personal information. In some cases, these errors can be dramatic, as when journalist Tony Polanco learned from ChatGPT that he was dead. (A year after Polanco’s discovery, ChatGPT is still repeating the error, and he is still filing stories.)
The impact of this on enterprises using the tool could be huge. While the case filed by noyb is specifically against OpenAI for the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals, the implications are wide-ranging.
OpenAI also offers a platform allowing enterprises to create their own customized chatbots, tools similar to ChatGPT but “tuned” with their own data or style guides. Using personal data to do this in the EU could create a minefield of problems if specific details about individuals cannot be corrected. And penalties for breaching the GDPR can reach up to 4% of an enterprise’s global annual revenue.
Noyb has filed its case against OpenAI on behalf of an individual complainant, but the non-profit and its founder Max Schrems have a strong track record of winning cases. Together, they have filed some of the most influential privacy complaints in the EU, with successes against Facebook, Spotify, Criteo, and Google, among others.
And Schrems shows no sign of letting up. Asked about the possible knock-on effects for business use of AI in general, he told CIO.com it wasn’t about targeting businesses specifically. “We think it’s not about stopping AI but ensuring compliance with basic rights of users. If you can’t comply you may need to limit your product to not process data about individuals,” he said.
“When it comes to generating data about individuals the GDPR applies — so data must be accurate or the company generating false information is liable for it. A company must explain the sources of information, but also ensure that false information can be corrected,” said Schrems.
This is not the first time OpenAI has faced a major challenge in the EU. It was forced to take ChatGPT offline in Italy briefly last year and make changes after an order from Italy’s data protection authority.
Given Schrems’ history of slaying Goliaths, businesses in all sectors will be watching the outcome and weighing up how much it will cost to ensure compliance versus the hoped-for productivity gains when using generative AI tools.
Source: CIO – https://www.cio.com/article/2096414/data-protection-activists-accuse-chatgpt-of-gdpr-breach.html