There’s an old ad-tech adage that bad actors follow the flow of ad dollars, but will the same soon be true for generative AI?
As the popularity of large language models leads to AI producing large volumes of text, images and video, attention is increasingly turning to whether advertisers will end up funding low-quality content, even unintentionally.
One new report shows just how quickly questionable websites are publishing AI-generated content and monetizing it. Earlier this week, researchers at the news reliability rating service NewsGuard released an in-depth look at how hundreds of programmatic ads paid for by blue-chip brands were served across a growing number of AI-generated websites that are churning out hundreds of articles a day.
Over the last two months, the team found nearly 400 ads for 141 major brands across more than 50 websites while browsing the internet in Germany, France, Italy and the U.S. But unlike other recent NewsGuard reports about new types of AI content, the websites in the latest findings weren’t necessarily publishing misinformation. Instead, the researchers found low-quality content ranging from plagiarized versions of real news articles published elsewhere to click-bait headlines promoting unproven or potentially harmful remedies for allergies, ADHD and even cancer. NewsGuard’s list of “unreliable” AI-generated websites also seems to be growing quickly, jumping to more than 200 in June from just a few dozen in May.
“The creation of unreliable AI-generated news sites is being incentivized by big ad-tech companies, who are monetizing these sites en masse,” NewsGuard enterprise editor Jack Brewster told Digiday. “And [they] don’t appear to be checking if they have human oversight or are checked for accuracy.”
Because the brands likely weren’t aware their ads were running on the AI-generated websites, NewsGuard chose not to disclose the advertisers by name. However, examples ranged from major banks and streaming services to tech and auto giants to sports apparel and pet suppliers. Of the ads identified by NewsGuard, more than 90% were served via Google Ads.
“It’s not like these companies are directly saying, ‘Hey can I advertise on this AI-generated news site?’” Brewster said. “They just tell Google or another third party to advertise to people like you and me and that creates other problems.”
As companies look for new ways to create safeguards, advertisers’ AI-related brand safety concerns are already creating new business for companies like DoubleVerify. Last month, the company said AI content farms drove a 56% increase in the use of its brand safety tech in the first quarter of 2023 compared with a year earlier.
Although AI-generated content isn’t entirely distinct from other brand safety concerns, DoubleVerify CEO Mark Zagorski said it is creating new challenges because of the scale at which it can be produced, along with new issues such as copyright infringement concerns. As a result, more advertisers are adding AI-generated websites to their block lists. Other advertisers are less worried about whether content is AI-generated and more concerned with what the content actually says. DoubleVerify is also investing more in its own AI tools: The company’s first-quarter 2023 results showed product development costs increased to $28.5 million from $21.5 million a year earlier. (Zagorski said the upgrades will help develop new ways of detecting content across more languages and more content formats, including video.)
“The interesting thing is whether or not this is created by generative AI is less of a factor than what the content is itself,” Zagorski told Digiday. “That’s why we want to use a scalpel rather than a cleaver.”
Generative AI is also adding new challenges to the programmatic ad ecosystem while compounding existing weaknesses, notes Evelyn Mitchell-Wolf, a senior digital advertising and media analyst at eMarketer. The challenges are also creating an “existential crisis” for traditional publishers, which are torn between using generative AI tools, investing in human-created content and deciding whether to give AI models API access to quality content for use as training data. She added that exclusion lists don’t guarantee advertisers will be able to avoid all risky content.
“Generative AI is increasing the surface area exponentially where that low-quality content can live,” Mitchell-Wolf said. “It’s a snowball of an issue.”
When asked for comment about NewsGuard’s report, Google spokesperson Michael Aciman said the company reviewed the AI-generated websites mentioned in the report and removed ads from many of them “due to pervasive policy violations.” On several other sites cited by NewsGuard, Google demonetized individual pages that violated its policies. Aciman also noted that websites don’t necessarily violate Google policies simply for having AI-generated content, but added that the company realizes “bad actors are always shifting their approach.”
“We have strict policies that govern the type of content that can monetize on our platform,” Aciman said. “For example, we don’t allow ads to run alongside harmful content, spammy or low-value content, or content that’s been solely copied from other sites. When enforcing these policies, we focus on the quality of the content rather than how it was created, and we block or remove ads from serving if we detect violations.”
The challenges come as other parts of the programmatic advertising ecosystem also come under the spotlight. In a new study of the programmatic media supply chain, “made for advertising” (MFA) websites accounted for 21% of impressions and 15% of total ad spend. The report, published this month by the Association of National Advertisers, also found that MFA websites accounted for 19% of open marketplace media buys and even 14% of private marketplace deals.
MFA websites encompass more than just sites with AI-generated content, but the findings show advertisers aren’t always in control of where their ads run. The report also illustrates how much room for improvement remains when it comes to helping advertisers fund quality content rather than click-bait, whether it comes from humans or bots.
Because AI makes it easier to spin up websites much faster, brand suitability becomes more challenging and “bad actors” can make more money, said Keri Bruce, an attorney at Reed Smith, the law firm that developed the ANA’s report. All of that leads to a bigger game of “legal whack-a-mole,” she said, adding that advertisers should keep track of how many websites their ads are running on while also focusing more on inclusion lists rather than just exclusion lists.
“I can’t name 44,000 websites I go to, and I don’t think a single consumer can,” she said. “That’s the challenge with programmatic: It can put your ads on thousands and thousands of websites, but do you really need to be on thousands and thousands of websites?”