Google, Microsoft Offer Measly $10 Million to Protect the World from AI

October 27, 2023 at 7:01 am

A few of the biggest companies pushing AI, including the legacy Silicon Valley giants Google and Microsoft alongside new blood OpenAI and Anthropic, are coming together to position their own industry body as the main bastion of AI safety. These multi-billion-dollar companies’ first contribution to the fledgling AI forum is a new director and a paltry $US10 million to somehow steward the entire world toward ethical AI.

These AI companies first revealed the Frontier Model Forum back in July with the stated purpose of advancing AI safety research and promoting responsible AI development. In practice, the industry group’s objective is to promote just how great, yet safely limited, AI currently is.

The companies shared on Wednesday that they’ve found a new lead to spearhead the industry group. Chris Meserole, who most recently worked as a director for the Brookings Institution’s AI research group, will come in as the Frontier Model Forum’s executive director. The self-described expert on AI safety said in the release that, when it comes to the most powerful AI models, “to realize their potential we need to better understand how to safely develop and evaluate them.”

Alongside the new leadership is the organization’s first funding round, totaling a little more than $US10 million. The money isn’t coming just from the companies, but also from a few philanthropic organizations that promote AI. The companies claim the fund’s main goal is to find novel methods to red team (AKA scrutinize and test) the latest, largest AI models. The fund won’t have its own lab space, per se, but will make use of the partner companies’ existing teams.

To put that $US10 million in perspective, Microsoft has already poured billions of dollars into OpenAI to expand its access to more language models. Amazon recently put $US4 billion into Anthropic to develop more AI models for Amazon’s services.

It’s still early days, but the fund is already looking very much like your average industry-promoting organization. The fund still has to put together an advisory board with “a range of perspectives and expertise,” though it isn’t clear how many genuine AI skeptics will be involved who could offer a counterpoint to the fund’s cautious yet glowing attitude toward new AI development.

Both Google and Microsoft have been trying to emphasize how they’re already working to minimize AI risks, all while pushing generative AI as far as it can go by implementing it in practically all of their new end-user products. The companies have also promoted the fact that they signed on to a White House pledge to develop AI ethically. The two tech giants, alongside VC darlings OpenAI and Anthropic, claimed they would steward “trustworthy AI,” but it’s hard to take the pledge seriously when major military contractors like Palantir are also among the signatories.

The fund will have its work cut out for it. The UK-based Internet Watch Foundation (IWF) filed a report Wednesday noting it had discovered more than 20,000 AI-generated images posted to a dark web forum dedicated to proliferating child sexual abuse material. Of those, analysts found that more than half were expressly illegal under UK law. The IWF said that users on these forums were actively sharing tips and tricks for creating the images using AI art generators.

“We are extremely concerned that this will lower the barrier to entry for offenders and has the potential to slow the response of the international community to this abhorrent crime,” IWF CEO Susie Hargreaves wrote in the report. The watchdog group requested that the government allow it to scrutinize different companies’ notoriously opaque AI training data.

That watchdog group is one of many concerned about these kinds of AI-generated images. The state of Louisiana already passed a law banning sexual deepfakes of minors back in July. Last month, a South Korean court sentenced a man to two and a half years in prison for creating hundreds of AI-generated CSAM images.

It’s not like these AI-developing companies are against regulation. On the surface, the companies’ top execs are all for it, though only the kind that won’t impede them from developing bigger AI models. Even though inter-industry cooperation is a good thing for building more ethical AI, the fund looks like little more than a way for these companies to keep doing what they’re already doing.
