By Marty Swant • February 12, 2024 • 4 min read
AI providers and government entities announced a flurry of efforts aimed at bolstering the internet’s defenses against AI-generated misinformation.
Last week, major AI players announced new transparency and detection tools for AI content. Hours after Meta detailed plans for labeling AI images from outside platforms, OpenAI said it will start including metadata in images generated by ChatGPT and its DALL-E API. Days later, Google announced it will join the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), a key group setting standards for various types of AI content. Google will also start supporting Content Credentials (CC) — a sort of “nutrition label” for AI content created by C2PA and the Content Authenticity Initiative (CAI). Adobe, which founded CAI in 2019, debuted a major update for CC in October.
The updates were especially noteworthy for bringing major distribution platforms into the standardization process. Platform-level participation could help drive mainstream adoption of AI standards and help people better understand whether content is real or fake. Andy Parsons, senior director of CAI, said giants like Google help with the “snowball effect” needed to improve the internet’s information ecosystem. That effort also requires alignment across companies, researchers and various government entities.
The fact that major AI model providers are designing and using C2PA standards also helps drive uniform adoption across both content creation and distribution platforms. Parsons noted that Adobe’s own Firefly platform was already C2PA compliant when it launched last year.
“Model providers want to disclose what model was used and ensure that in cases where they need to determine whether their model produced something — whether it’s newsworthy or a celebrity or something else — they want to be able to do that,” Parsons told Digiday.
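For a concrete sense of what that disclosure looks like under the hood: C2PA’s Content Credentials are embedded in a file as a manifest store inside a JUMBF box labeled “c2pa” (typically carried in APP11 segments for JPEGs). The short Python sketch below is an illustration only, using a hypothetical file name; it does a rough byte-level check for that label to flag whether an image appears to carry Content Credentials. Actually verifying the credentials requires a C2PA-aware tool that can parse the manifest and validate its cryptographic signatures.

```python
# Rough check for an embedded C2PA (Content Credentials) manifest.
# Assumption: the manifest store lives in a JUMBF box labeled "c2pa"
# (carried in APP11 segments in JPEGs). This only detects the marker's
# presence; it does not parse the manifest or verify its signatures.
from pathlib import Path


def has_content_credentials(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    # Look for the JUMBF label used by the C2PA manifest store.
    return b"c2pa" in data


if __name__ == "__main__":
    # "generated.jpg" is a hypothetical file name for illustration.
    print(has_content_credentials("generated.jpg"))
```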
Government agencies are also looking for ways to prevent AI-generated misinformation. Last week, the Federal Communications Commission banned AI-generated voices in robocalls — making them illegal under the Telephone Consumer Protection Act — following recent AI deepfake robocalls mimicking President Joe Biden’s voice. Meanwhile, the White House announced that more than 200 participants have joined a new AI consortium, including numerous universities, companies and other organizations. The European Commission is also gathering views for its Digital Services Act (DSA) guidelines on election integrity.
AI-powered political micro-targeting is a real concern. State legislatures have passed new laws addressing AI in political advertising. Members of Congress have also introduced legislation, but so far nothing has gained traction. According to a recent study reported by Tech Policy Press, large language models can be used to develop easy and effective micro-targeted political ad campaigns on platforms like Facebook. Last week, even Meta’s own semi-independent oversight board urged the company to “quickly reconsider” its manipulated media policies for content made with AI — and even without AI.
Authenticating AI content helps to foster trust and transparency, but experts say it’s even more important to block bad actors from distributing misinformation across social and search. However, accurately detecting AI deepfakes and text-based scams isn’t easy.
Curbing the distribution of AI misinformation is critical, said Josh Lawson, director of the Aspen Institute’s AI and Democracy initiative. While AI content creation standards are “great hygiene” for major platforms, he said they don’t stop bad actors from creating problematic AI content with open-source and jailbroken AI models. He likened misinformation supply and demand to an hourglass.
“We see generative AI as a force that will balloon supply, but it still has to make its way down to people,” Lawson said. “If you can’t get it to people, it won’t be able to affect the elections.”
Worries about AI might also distract from ongoing concerns about online privacy. In a post on X last week, Meredith Whittaker, president of the encrypted messaging app Signal, said the focus on election-year deepfakes is “a distraction, conveniently ignoring the documented role of surveillance ads.” She also noted that companies like Meta and Google — which have in recent years also rolled back political ad restrictions — could benefit from the distraction.
“Put another way, a deep fake is neither here nor there unless you have a platform + tools to disseminate it strategically,” Whittaker wrote.
Prompts and products: AI news and announcements
Google rebranded its Bard chatbot as Gemini as part of a major expansion for its flagship large language model. It also announced new AI capabilities across various Google products and services.
Tech companies used Super Bowl LVIII’s mainstream audience to market new AI features, while non-tech advertisers used generative AI to create campaigns for the Big Game in an attempt to stand out. Super Bowl advertisers with commercials marketing AI features included Microsoft, Google, Samsung, CrowdStrike and Etsy.
Mixed reality apps powered by generative AI are already arriving on the Apple Vision Pro headset. Early examples include ChatGPT, Wayfair and Adobe Firefly.
A new report from William Blair examines the impact of generative AI on the enterprise.
Advertising, social and cloud providers have continued touting generative AI in various earnings results and investor calls. Last week’s examples include Omnicom, IPG, Snap Inc., Pinterest, Cognizant and Confluent. However, Amazon’s own cloud CEO warned that generative AI hype could reach dot-com bubble proportions.