With AI-generated content spreading across social media, Meta yesterday announced plans to add new policies and detection tools to improve transparency and prevent harmful content. However, some question whether the efforts will arrive soon enough, or work well enough, to prevent harm.
Facebook and Instagram’s parent company said it will start labeling content generated by other companies’ AI platforms. Along with requiring people to disclose when content includes generative AI features, Meta will also use its own AI technology to identify generative AI content and enforce its policies. In the “coming months,” Meta plans to label images from companies including Google, Adobe, Microsoft, OpenAI, Midjourney and Shutterstock.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Nick Clegg, Meta’s president of global affairs, wrote in a blog post. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.”
Meta’s own AI content tools already automatically add visible watermarks with the text “Imagined with AI,” along with invisible watermarks and embedded metadata. However, as Clegg noted, there’s still work to be done to ensure watermarks can’t be removed or altered. Meta also plans to put its weight behind new industry standards for identifying AI-generated images, video and audio, working with forums like the Partnership on AI, the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council.
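For context on what “embedded metadata” looks like in practice, here is a minimal sketch using the open-source Pillow library, not any tool Meta has described, of how format-level metadata and EXIF tags can be read from an image file. The file name is hypothetical, and verifying an actual C2PA manifest requires dedicated tooling beyond this.

```python
# A minimal sketch of inspecting an image for embedded metadata.
# Assumes Pillow is installed (pip install Pillow); the file name is
# hypothetical, and this does not reflect Meta's actual detection pipeline.
from PIL import Image

img = Image.open("generated.png")

# Format-level metadata (e.g., PNG text chunks) is exposed via img.info.
for key, value in img.info.items():
    print(f"info: {key} = {value!r}")

# EXIF tags, more common in JPEGs, are exposed via getexif().
for tag_id, value in img.getexif().items():
    print(f"exif: tag {tag_id} = {value!r}")
```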
Hours after Meta’s news, OpenAI announced plans to start including metadata that follows C2PA’s specifications in images generated by ChatGPT and the API serving its DALL-E model. OpenAI also acknowledged that metadata is “not a silver bullet” for addressing content authenticity and can be “easily removed either accidentally or intentionally.”
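OpenAI’s caveat is easy to demonstrate. As a hedged illustration, not a depiction of any vendor’s pipeline, a plain open-and-resave round trip with Pillow drops metadata that isn’t explicitly carried forward:

```python
# Illustration of how fragile embedded metadata is: Pillow does not copy
# EXIF data or text chunks to the output unless they are passed explicitly,
# so a simple re-save strips whatever provenance data the file carried.
from PIL import Image

img = Image.open("generated.png")   # hypothetical file with embedded metadata
img.save("stripped.png")            # no metadata arguments -> chunks dropped

print(Image.open("stripped.png").info)  # typically empty or near-empty
```

A screenshot strips the metadata just as effectively, which is part of why Meta pairs metadata with invisible watermarks rather than relying on either signal alone.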
Meta’s updates come amid increased concern about how AI-generated misinformation could affect politics in the U.S. and around the world. Just last month, robocalls in New Hampshire used AI deepfake audio resembling the voice of U.S. President Joe Biden to urge residents not to vote in the state primary.
On Monday, Meta’s semi-independent Oversight Board urged the company to “quickly reconsider” its Manipulated Media policy, which covers content made both with and without AI. The comments came in an opinion about a video of Biden that wasn’t edited with AI but was still edited in misleading ways. The board also stressed the importance of improving the policy ahead of the many elections taking place in 2024.
“The Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes),” according to the Board.
While Meta’s efforts are starting with images, Clegg said the goal is to eventually cover video and audio as other AI platforms begin labeling those formats. For now, however, Meta is relying on voluntary disclosure when labeling AI content beyond images. According to Clegg, users who don’t properly label their content could prompt Meta to “apply penalties.”
“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Clegg wrote.
In a 2023 consumer survey conducted by Gartner, 89% of respondents said they’d struggle to identify AI content. The flood of generative AI content, combined with consumers not knowing what’s real and what isn’t, makes transparency even more important, said Gartner analyst Nicole Greene. She also noted that three-fourths of respondents said it’s “very important” or of “utmost importance” for brands that use generative AI content to properly label it, up from two-thirds of respondents in a previous survey.
“We’re facing a challenging environment for trust as we head into an upcoming election cycle and Olympics year where influencers, celebrities and brands will likely be facing the threat of deepfakes at an unprecedented scale,” she said. “Figuring out what’s authentic is going to be even more important as it’s harder for people to know due to the sophistication of the tech to make things look so real.”
This isn’t the first time Meta has announced policy changes related to generative AI content. In November, the company said it would start requiring political advertisers to disclose content created or edited with generative AI tools. However, researchers are already finding evidence of harmful generative AI content, made with Meta’s own tools, slipping through. One new report showed examples of using those tools to create ads targeting teens with harmful content promoting drugs, alcohol, vaping, eating disorders and gambling. The report, released by the Tech Transparency Project, part of the nonpartisan watchdog Campaign for Accountability, also showed examples of Meta approving generative AI ads that violate the platform’s policies against violence and hate speech.
According to Katie Paul, TTP’s director, the ads in question were approved in less than five minutes. That’s far faster than the hour it took for TTP’s non-AI ads to be approved when it conducted similar research in 2021. Given Meta’s past problems using AI for content moderation and fact-checking, Paul also questioned whether there’s enough evidence yet to know if AI detection of generative AI content will be effective across the board. She said TTP’s researchers have already found examples of AI-created political ads in Facebook’s Ads Library that aren’t properly labeled as using AI.
“If we can’t trust what they’ve been using all of these years to address these critical issues, how can we trust the claim from companies like Meta when it comes to forward-looking AI and generative AI?” Paul said. “How are they going to make their platforms safer using that kind of labeling for their content?”