With the 2024 U.S. election just seven months away, U.S. officials warn that the AI era poses a far bigger threat to elections than the past two decades of social media did. Few have a more personal connection to the topic than Hillary Clinton.
At an event about AI and global elections hosted last week by the Aspen Institute and Columbia University, the former secretary of state and presidential candidate said AI poses a "totally different level of threat," one that makes foreign actors' past efforts on Facebook and Twitter look "primitive" compared with today's deepfakes and other AI-generated content.
“They had all kinds of videos of people looking like me, but weren’t me,” Clinton told the audience. “So I am worried because having defamatory videos about you is no fun, I can tell you that. But having them in a way that you really can’t make the distinction…You have no idea whether it’s true or not.”
The event convened leaders from U.S., European and state governments as well as top experts from the worlds of AI and media. Along with highlighting various problems, some of the speakers also suggested possible solutions. According to Michigan Secretary of State Jocelyn Benson, tech companies and governments should create new guardrails while also teaching people how to avoid being duped. (Her state recently passed new laws related to AI and election-related misinformation, which will ban the use of deceptive information and require disclosures on anything generated by AI.)
“Therein lies both our opportunity but also the real challenge,” Benson said. “…We need to equip that citizen, when they receive a text, to be fully-aware as a critical consumer of information, as to what to do, where to go, how to validate it, where [to find] the trusted voices.”
Companies and governments are more prepared for online misinformation than in 2016, according to Anna Makanju, a former Obama administration national security expert and OpenAI’s vp of global affairs.
“We are not dealing with the same kinds of issues at AI companies,” Makanju said. “We are responsible for generating, or what we do is generate AI content rather than distribute it. But we need to be working across that chain.”
Some speakers — including Clinton, former Google CEO Eric Schmidt and Rappler CEO Maria Ressa — also called on Congress to reform Section 230 of the Communications Decency Act. Ressa, a journalist who won the Nobel Peace Prize in 2021, also noted it’s hard for people to know what it’s like to be a victim of online harassment or misinformation until they’ve been attacked.
“The biggest problem we have is there is impunity,” Ressa said. “Stop the impunity. Tech companies will say they will self-regulate. [But a good example] comes from news organizations — we were not only just self-regulating, there were legal boundaries [that] if we lie, you file a suit. Right now, there’s absolute impunity and America hasn’t passed anything. I joked that the EU won the race of the turtles in filing legislation that will help us. It’s too slow for the lightning-fast pace of tech. The people who pay the price are us.”
During the same conversation, Clinton said of Section 230 reform: "shame on us that we are still sitting around talking about it."
“We need a different system, under which tech companies — and we’re obviously mostly talking about the social media platforms — operate,” Clinton said. “I think they will continue to make an enormous amount of money if they change their algorithms to prevent the kind of harm that is caused by sending people to the lowest common denominator every time they log on. You’ve got to stop this reward for this kind of negative, virulent content.”
Here’s a snapshot of what some top speakers said during the half-day event:
Michael Chertoff, former U.S. Secretary of Homeland Security: “In this day and age, we have to regard the internet and information as a domain of conflict … How do you distinguish and teach people to distinguish deepfakes from real things? And the idea being, we don’t want them to be misled by the deepfakes. But I worry about the reverse. In a world in which people have been told about deepfakes, do they think everything is deepfakes? That really gives a license to autocrats and corrupt government leaders to do whatever they want.”
Eric Schmidt, former CEO of Google: “Information, and the information space we live in, you can’t ignore it. I used to give a speech, you know how we solve this problem? Turn your phone off, get off the internet, eat dinner with your family and have a normal life. Unfortunately, my industry made it impossible for you to escape all of this. As a normal human being, you’re exposed to all of this terrible filth and so on. That’s going to ultimately get fixed by the industry through collaboration or by regulation. A good example here, let’s think about TikTok, is that certain content is being spread more than others. We can debate that. TikTok isn’t really social media. TikTok is really television. And when you and I were younger, there was a huge [debate] on how to regulate television. There was something called an equal time rule where it was a rough balance where we said, it’s okay if you present one side as long as you present the other side in a roughly equal way. That’s how society solves these information problems. It’s going to get worse unless we do something like that.”
David Agranovich, director of global threat disruption at Meta: “These are increasingly cross-platform, cross-internet operations…The responsibility is more diffuse. Platform companies have the responsibility to share information, not amongst the different platforms that are affected, but with groups that can take meaningful action. The second big trend is that these operations are increasingly commercial. They do coordinated inauthentic behavior. The commercialization of these tools democratizes access and it conceals the people that pay for them. It makes it a lot harder to hold the threat accountable.”
Federal Election Commissioner Dara Lindenbaum: “Despite the name, the Federal Election Commission really only regulates campaign finance laws and federal elections — money in, money out and transparency there … We are in the petition process right now to determine if we should amend our regulations, if we can amend our regulations, and if there is a role for the FEC in this space. Our language is pretty clear and very narrow. Even if we can regulate here, it’s really only candidate-on-candidate bad action…Congress could expand our limited jurisdiction. If you asked me years ago if there was any chance Congress would regulate in the campaign space and really come to a bipartisan agreement, I would have laughed. But it’s pretty incredible to watch the widespread fear over what can happen here. We had an oversight hearing recently, where members on both sides of the aisle were expressing real concern and while I don’t think anything’s going to happen ahead of November, I see changes coming.”
Prompts & Products: AI News and Announcements
Amazon announced it’s investing another $2.75 billion into AI startup Anthropic, bringing the e-commerce giant’s total investment in the OpenAI competitor to $4 billion. The investment comes two months after the Federal Trade Commission opened an inquiry into Anthropic and OpenAI to explore the startups’ relationships with the tech giants funding them.
IBM debuted a new AI-focused campaign called “Trust What You Create,” which highlights both the potential risks of AI and how to prevent running into them. The company also announced updates to help marketers use generative AI in their content supply chains.
The World Federation of Advertisers announced a new “AI Community” to help advertisers navigate generative AI. Members of the steering committee include executives from a range of brands including Ikea, Diageo, Kraft Heinz, the Lego Group, Mars and Teva Pharmaceuticals.
The Brandtech Group announced it has raised a $115 million Series C investment round to help power the marketing holding company’s generative AI efforts. In 2023 it purchased AI content generator Pencil.
In Google’s 2023 Ads Safety Report, the company highlighted the impact of generative AI including details about new risks, Google’s updated policies and how it’s using generative AI tools in its brand safety efforts. The company also included information about the types of harmful content it took action against in 2023.
For those wondering what European think tanks are thinking about AI, the European Parliament released a briefing that highlights reports and research from various organizations.
Adobe debuted a new platform for marketers called GenStudio, which aims to help both large and small companies build new enterprise applications for generative AI. It also announced new and expanded AI partnerships with Accenture, Microsoft, NBCUniversal and Pfizer, which is already using generative AI to enhance the pharma giant’s content supply chain.
The BBC said it would stop using AI in its marketing for “Doctor Who,” a reversal from a few weeks ago, following complaints related to its use of AI for emails and mobile notifications.
OpenAI announced a new AI text-to-speech tool called Voice Engine, which it says can clone a human voice based on a 15-second audio sample. The startup also acknowledged that creating AI-generated voice deepfakes poses “serious risks, which are especially top of mind in an election year.”
Quotes from Humans: Q&A with Fiverr CMO Matti Yahav
With freelancers and their clients increasing their interest in AI, freelance marketplaces are also finding ways to ride the wave through new AI tools, categories and advertising efforts.
Tasked with marketing Fiverr’s platform is Matti Yahav, who joined the company as CMO in November after spending years as CMO of SodaStream. In a recent interview, Yahav spoke with Digiday about his approach to marketing the Israeli company, and how he’s seeing the platform navigate the growth of AI. Here is an abbreviated and edited version of the conversation:
How is your approach to marketing a platform like Fiverr different from your approach to marketing a physical product like SodaStream?
Yahav: I would say there’s a lot of similarities — how you build the brand, how you try to create demand. On the other hand … I would spend a lot of my time thinking on how is the point of sale looking and how’s the packaging looking and things like that. In consumer goods, those are like specific marketing domains that are less relevant when you talk about marketplaces or software. There’s so many similarities, but there’s also obviously a learning curve, which is exciting for me.
Fiverr added a number of new categories over the past year to accommodate the supply of and demand for various AI tools. How are you marketing those to freelancers and to potential clients? What kinds of trends are you seeing?
Freelancers are building AI applications to help businesses integrate AI into their activities like chatbots, of course. Other examples might be expert coders offering to clean up code generated by AI. Artists are working as prompt engineers to generate AI art. We also have a lot of web development freelancers offering services to create your own AI blog-writing tool using ChatGPT or using GPT-3, or consultation on what you can do with AI for small businesses. Maybe a last but super interesting [example] is fact-checking. We’ve seen on our platform that a lot of people are searching for services like fact-checking, because AI creates so much data. And you never know what’s a hallucination, what’s wrong or what’s right.
Are you running any paid media in generative AI chat or search platforms such as Copilot or Google’s Search Generative Experience?
Are we experimenting with them? You bet. Are we implementing some of them? It’s a process. We’re trying to make sure that we don’t use AI for the sake of saying we use AI, like many marketeers I hear. It’s trying to find the right use cases for us and to find how we can really leverage it to the best of our advantage.