What Will It Take for Us to Trust Artificial Intelligence?

With comedian Sarah Silverman suing OpenAI and Meta, the public discourse on AI is well underway, shedding light on the technology’s many issues while ensuring it doesn’t slip out of our control too soon

New tech always ruffles some feathers. Heck, any invention does. Our first reaction to discovering fire was probably full of skepticism too. Call it basic human instinct or generational trauma, but trust issues seem to be at an all-time high as Artificial Intelligence (AI) makes inroads into newer industries. Add business leaders’ apprehensions about the responsible development of AI to Hollywood’s obsession with AI-driven dystopias, and it seems it will take a long time before humans can completely trust such a sophisticated technology.

Sarah Silverman sues Meta, OpenAI for copyright infringement https://t.co/E1cY1EhBtp pic.twitter.com/ijEIUtFAxn

— Reuters (@Reuters) July 10, 2023

To add to these concerns, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey recently filed copyright infringement lawsuits against OpenAI and Meta in a US District Court, alleging that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally obtained datasets containing their works. These datasets were allegedly acquired from “shadow library” websites like Bibliotik, Library Genesis, and Z-Library, available in bulk through torrent systems. On the same day, the law firm Clarkson filed a class-action lawsuit against OpenAI. It alleges that the creators of ChatGPT violated copyrights and privacy by using scraped internet data to train their technology. However, this is just the tip of the iceberg. 

Blurred Lines 

The lawsuits add to the growing list of legal challenges facing AI companies. Last November, for instance, OpenAI and Microsoft were sued for using computer code from GitHub to train AI tools. GitHub’s then-CEO Nat Friedman argued that Copilot, an AI coding assistant that generates code for basic software functions, can use any open-source code under “fair use” provisions, a claim other AI companies and researchers have echoed.

Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot https://t.co/2dJIxcqDIN pic.twitter.com/6nJ6IGlC9I

— Reuters (@Reuters) June 29, 2023

Programmer Matthew Butterick and the Joseph Saveri law firm disagree. They filed a class-action lawsuit against Microsoft, GitHub, and OpenAI, claiming the companies profit from open-source programmers’ work while violating open-source licences. In simpler terms, they accuse Microsoft and its partners of software piracy for using code without permission. Butterick warns of more significant intellectual property theft in the future, extending beyond code to images, writing, and data; he believes Microsoft and OpenAI aim to train on any data, without consent or limitations. Case in point: AI image generators like DALL-E 2, which are already trained on web-sourced images.

Long, Lawsuit-filled Road Ahead?

Copyright infringement aside, every new lawsuit sheds light on the myriad issues plaguing the technology. In February, Getty Images sued Stability AI for illegally using its photos to train an image-generating bot. In June, OpenAI faced a defamation lawsuit over ChatGPT-generated text that falsely accused a radio host of fraud. Meanwhile, Universal Music Group asked Apple and Spotify in April to block scrapers, Reddit restricted data access after years of scraping by Big Tech, and the content-viewing limit Twitter recently imposed on unpaid users is another measure aimed at curbing AI scrapers.

The new class-action lawsuit against OpenAI also alleges a lack of transparency with users. It claims OpenAI fails to inform users that their input data may be used to train new, profitable products such as Plugins, and accuses the company of not adequately preventing children under 13 from using its tools, an accusation also levelled at Facebook and YouTube.

OpenAI is one of many companies, alongside Google, Facebook, Microsoft, and a growing number of others, that use data scraped from the open internet to train their AI models. Clarkson, however, is targeting OpenAI because, the firm argues, it was the first to use this method at scale and spurred its rivals to do the same.

In a statement, Ryan Clarkson, managing partner of the firm, called the lawsuit a “landmark” case that warns of the dangers of AI, saying that OpenAI and Microsoft do not fully understand the technology they are using and have released it into the world anyway.

But that is where the beauty of open-source tech lies: it levels the playing field, being as accessible to an individual in a tier-3 city in India as it is to a billion-dollar corporation in a first-world country. And while we may not completely trust the technology just yet, we get to be part of the discourse around AI, and thus help shape its future. It is still too early to predict whether it will lead to a dystopia where the tech enslaves us or a utopia where it helps us maximise our potential, but cheap thrills like hearing The Weeknd belt out Pasoori or Eminem rap in Punjabi are sure to keep us hooked.

Copyright for syndicated content belongs to the linked Source : MansWorldIndia – https://www.mansworldindia.com/tech/artificial-intelligence-open-ai-chatgpt-meta-copilot-copyright-lawsuit-sarah-silverman/
