By
Alex Blake
September 20, 2023 5:25AM
Most American adults do not trust artificial intelligence (AI) tools like ChatGPT and worry about their potential misuse, a new survey has found. It suggests that the frequent scandals surrounding AI-created malware and disinformation are taking their toll and that the public might be increasingly receptive to ideas of AI regulation.
The survey from the MITRE Corporation and the Harris Poll found that just 39% of the 2,063 U.S. adults polled believe today’s AI technology is “safe and secure,” down nine percentage points from the two firms’ previous survey in November 2022.
When it came to specific concerns, 82% of people were worried about deepfakes and “other artificially engineered content,” while 80% feared how this technology might be used in malware attacks. A majority of respondents also worried about AI’s use in identity theft, the harvesting of personal data, the replacement of humans in the workplace, and more.
In fact, the survey indicates that wariness of AI’s impact spans demographic groups: 90% of boomers say they are worried about the impact of deepfakes, as do 72% of Gen Z respondents.
Although younger people are less suspicious of AI — and are more likely to use it in their everyday lives — concerns remain high in a number of areas, including whether the industry should do more to protect the public and whether AI should be regulated.
Strong support for regulation
The declining support for AI tools has likely been prompted by months of negative stories in the news concerning generative AI tools and the controversies facing ChatGPT, Bing Chat, and other products. As tales of misinformation, data breaches, and malware mount, it seems that the public is becoming less amenable to the looming AI future.
When asked in the MITRE-Harris poll whether the government should step in to regulate AI, 85% of respondents were in favor of the idea — up three percentage points from the previous survey. The same 85% agreed with the statement that “Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia,” while 72% felt that “The federal government should focus more time and funding on AI security research and development.”
The widespread anxiety over AI being used to improve malware attacks is interesting. We recently spoke to a group of cybersecurity experts on this very topic, and the consensus seemed to be that while AI could be used in malware, it is not a particularly strong tool at the moment. Some experts felt that its ability to write effective malware code was poor, while others explained that hackers were likely to find better exploits in public repositories than by asking AI for help.
Still, the growing skepticism of all things AI could end up shaping the industry’s efforts and might prompt companies like OpenAI to invest more money in safeguarding the public from the products they release. And with such overwhelming support for regulation, don’t be surprised if governments start enacting it sooner rather than later.
Copyright for syndicated content belongs to the linked Source : Digital Trends – https://www.digitaltrends.com/computing/most-adults-distrust-ai-new-survey/