In November 2022, the tech world was upended when OpenAI released ChatGPT, an AI chatbot with capabilities that seemed almost unbelievable. For years, AI technology had been developing on the periphery, with early versions able to generate only gibberish text or barely coherent photos. The whole thing seemed cool, but not inherently useful. By comparison, ChatGPT, and later Microsoft’s Bing, could converse in near-perfect English and seemed able to research and look up information in ways we’d never seen before. It was a massive leap forward, and the world took note: within a mere two months, ChatGPT had reached 100 million users, and it has been shaking up the world ever since.
But with that massive rise to fame came an equal and opposite backlash, as fears grew about what this new AI age would mean. Would AI replace us at our jobs? Would AI go rogue, à la “The Matrix” or “The Terminator”? And would we ever again be able to tell whether something was written by a human?
As it turns out, some of the worst fears about AI haven’t come to pass yet. However, that doesn’t mean ChatGPT or its competitors are totally safe. There are plenty of realistic concerns surrounding the technology behind them, known as large language models (LLMs), ranging from data privacy to the spread of dangerous misinformation and even the misuse of AI by malicious actors. So, here are the true dangers of AI chatbots like ChatGPT, so you know what you’re getting into when you use them.
What is ChatGPT and how does it work?
Before we can understand the dangers and safety concerns posed by ChatGPT and similar chatbots, we must understand how they work at a basic level. ChatGPT is a type of AI called an LLM, which means it was trained on massive amounts of text, ranging from books to academic papers, internet content, and everything in between. It analyzes these sources for patterns, and when you type a prompt, it creates a response based on statistical probabilities derived from its training data.
For example, if you ask, “What is 2+2?” ChatGPT does not know math in the way most humans do. Instead, it relies on the patterns it’s seen during training. Since it has frequently encountered equations like “2+2=4” in its data, it is highly likely to respond with “4.” But because it is matching patterns rather than doing arithmetic, it will sometimes tell you that 2+2 is equal to the state of Utah, or say something similarly inscrutable or downright creepy.
Far from being capable of human reasoning, ChatGPT and similar chatbots simply write one word at a time, each chosen based on its statistical likelihood given the words that came before. This usually works well enough, but it means LLMs will still get things wrong regularly, and the AI won’t know the difference.
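To make that concrete, here is a deliberately tiny sketch of the idea in Python. It is nothing like ChatGPT’s actual neural network, which weighs in at billions of parameters, but this toy bigram model shows how “knowing” that 2+2=4 can amount to nothing more than counting which word tends to follow which:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the mountains of text an LLM trains on.
corpus = (
    "2 + 2 = 4 . 2 + 2 = 4 . 2 + 2 = 4 . "
    "2 + 2 = 5 ."  # noisy data: the model has no way to know this is wrong
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- with no
    understanding of whether it is actually true."""
    counts = next_word_counts[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, prob = predict_next("=")
print(f"After '=', the model predicts '{word}' ({prob:.0%} of training data)")
# -> After '=', the model predicts '4' (75% of training data)
```

Notice that the model answers “4” only because that continuation dominates its training text; if the noisy “5” line appeared more often, it would answer “5” with exactly the same confidence.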
There’s, of course, a lot more to how AI works, but hopefully this overview helps you understand the basics.
Cloud-based AI products carry privacy concerns
Before even considering the societal ramifications of LLMs themselves, it’s important to remember that many of them operate over the internet.
Services like ChatGPT are typically run out of massive data centers rather than on our devices. That means every time you type a prompt into ChatGPT, Bing Chat, Google Gemini, or any other LLM, your words are transported over the internet to the respective company’s computer systems.
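For a sense of what that round trip looks like under the hood, here is a rough Python sketch of the kind of request a ChatGPT-style client sends, based on OpenAI’s public chat completions API (the model name is just an example, and the details may change over time):

```python
import os
import requests

# Everything in this JSON payload -- including your prompt -- leaves
# your machine and is processed on OpenAI's servers.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",  # example model name; offerings change
        "messages": [
            # Anything typed here becomes data on someone else's computer.
            {"role": "user", "content": "What is 2+2?"}
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

Nothing in that exchange happens on your device: the prompt, and whatever personal details it contains, is handled entirely server-side.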
This may be a good time to ask what ChatGPT does with your data. OpenAI’s privacy policy makes clear that it intends to use your data for training its models further, and that it “may provide your personal information to third parties without further notice to you, unless required by law.” The company’s employees also look at user data to tune responses, flag misuse, and more.
But it’s not just OpenAI that has your information. Its partners, which include Microsoft and cloud storage company Snowflake, have access too. Of course, the same goes for many competing AIs as well. Google Gemini currently displays a message that reads, “Your conversations are processed by human reviewers… Don’t enter anything you wouldn’t want reviewed or used.”
These privacy concerns are not theoretical. In July 2023, The Washington Post reported that the US Federal Trade Commission had begun investigating OpenAI over a data leak and the technology’s general inaccuracy. Meanwhile, a privacy researcher in the EU filed a complaint alleging that OpenAI violates Europe’s privacy rules.
ChatGPT and other chatbots are still prone to outputting false information
Aside from privacy concerns, another danger of LLMs such as ChatGPT is that they’re prone to creating and spreading misinformation. Incorrect responses are often referred to as “hallucinations,” but that’s a misnomer: LLMs do not understand what they’re saying. As Marco Ramponi at AssemblyAI put it, “their objective function is a probability distribution over word sequences (or token sequences) that allows them to predict what the next word is in a sequence.” In other words, when those statistics point the wrong way, the AI will state falsehoods just as confidently as it states facts.
What that means for you is that if you ask, “How long can milk be left out of the fridge?” the bot will likely answer that two hours is the longest you should leave milk out, because mentions of milk left on the counter sit statistically close to phrases like “no longer than two hours” in its training data. But what about when you query something less well-known or settled? When asked about hiking trails at Red Rocks in Colorado, ChatGPT got things mostly right, but also named trails that don’t exist.
Despite the many safeguards companies like OpenAI are building into their products to prevent AI from giving users dangerous advice or misinformation, it is possible to circumvent those guardrails, or for the AI to simply forget about them. Reddit’s r/PromptEngineering forum is filled with people constantly finding new ways to trick the bots into ignoring their programming.
ChatGPT could get you in trouble at your school or job
When it comes to misuse, we’ve seen a rash of panic around ChatGPT. Neil Clarke, editor-in-chief of the respected science-fiction and fantasy magazine Clarkesworld, reported being inundated with AI-generated submissions. There has also been a string of incidents in which lawyers cited nonexistent case law fabricated by ChatGPT, seemingly unaware that the tool can hallucinate.
Some people are fighting back against this rapid shift. A professor of agriculture at Texas A&M University attempted to fail an entire class after asking ChatGPT whether his students’ papers were written by an AI. This may sound like a reasonable response until you learn that reliably detecting AI-generated text is hard. In fact, when we presented ChatGPT with the previous, human-written paragraph in this very article, the AI claimed to have written it.
These issues haven’t stopped some workplaces from attempting to replace writers with ChatGPT, as reported by The Washington Post. The result, of course, is bad writing that’s often inaccurate, yet sites like CNET and BuzzFeed have still experimented with the technology.
Bias and misuse are nascent dangers of LLMs
Thus far, we have focused on the dangers ChatGPT can present to you and your work. But let’s take a moment to explore the safety concerns for society. After all, how safe are you in a society where LLMs are quickly becoming mainstream?
To be clear, the robots don’t appear capable of rising up against humanity yet. But humans have misused, and will continue to misuse, the technology in frightening ways, and because they’re trained on human data, LLMs pick up some of our worst flaws while enabling others.
Companies have attempted to solve some of these issues. Keep in mind that AI is trained on basically everything on the internet, including mounds of racism, sexism, and other harmful biases, so companies have tried to program their AIs to actively avoid those problems in their responses. That sounds good in theory, but in February 2024, Google’s Gemini LLM created images depicting Nazis as people of color, causing an uproar across the internet.
LLMs are being used to read job applications, too, which is a problem when research has already found human hiring managers exhibiting biases based on name, race, gender, and other factors. In fact, when given resumes from equally qualified applicants, researchers found LLMs making biased decisions as well. It’s a good reminder that if we adopt current-generation AIs too quickly, nightmare scenarios could start to crop up.