More OpenAI chaos puts Sam Altman on the back foot

The company has been on the defensive following the departure of key safety researchers, reports that strict NDAs are silencing former employees, and backlash against a new version of ChatGPT.

The dramatic exits of Jan Leike and Ilya Sutskever last week even forced OpenAI’s leaders, including CEO Sam Altman, to make public statements defending their efforts to control AI risk.

When a Vox report about OpenAI’s restrictive offboarding agreements emerged the following day, Altman responded by saying it was one of the “few times” he’d ever “been genuinely embarrassed” running OpenAI. He added he wasn’t aware the clauses were being imposed on departing employees and said the company was working to rectify the agreements.

It’s a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI’s ongoing chaos. A failed coup to remove him last year ultimately bolstered the CEO’s reputation, but it seems OpenAI’s cracks are starting to show once more.

Safety team implosion

OpenAI has been in full damage control mode following the exit of key employees working on AI safety.

Leike and Sutskever, who led the team responsible for ensuring AGI doesn’t go rogue and harm humanity, both resigned last week.

Leike followed his blunt resignation with a lengthy post on X, accusing his former employer of putting “shiny products” ahead of safety. He said the safety team was left “struggling for compute, and it was getting harder and harder to get this crucial research done.”

Quick to play the role of crisis manager, Altman shared Leike’s post, saying, “He’s right, we have a lot more to do; we are committed to doing it.”

The high-profile resignations follow several other recent exits.

According to a report by The Information, two safety researchers, Leopold Aschenbrenner and Pavel Izmailov, were recently fired over claims they were leaking information.

Safety and governance researchers Daniel Kokotajlo and William Saunders also both recently left the company, while Cullen O’Keefe, a research lead on policy frontiers, left in April, according to his LinkedIn profile.

Kokotajlo told Vox he’d “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI.”

The Superalignment team led by Leike and Sutskever, which had about 20 members last year, has now been dissolved. A spokesperson for OpenAI told The Information that it had combined the remaining staffers with its broader research team to meet its superalignment goals.

OpenAI has another team focused on safety called Preparedness, but the high-profile resignations and departures aren’t a good look for a company at the forefront of advanced AI development.

Silenced employees

The implosion of the safety team is a blow for Altman, who has been keen to show he’s safety-conscious when it comes to developing super-intelligent AI.

He told Joe Rogan’s podcast last year: “Many of us were super worried, and still are, about safety and alignment. In terms of the ‘not destroy humanity’ version of it, we have a lot of work to do, but I think we finally have more ideas about what can work.”

Leike’s claims have raised eyebrows across the industry, and some think they erode Altman’s authority on the subject.

Neel Nanda, who runs Google DeepMind’s mechanistic interpretability team tasked with “reducing existential risk from AI,” responded to Leike’s thread: “Pretty concerning stories of what’s happening inside OpenAI.”

On Friday, Vox reported that strict offboarding agreements effectively silenced former OpenAI employees.

They reportedly included non-disclosure and non-disparagement clauses that could strip former employees of their vested equity if they criticized the company, or even acknowledged that an NDA existed.

Altman addressed the report in an X post: “This is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.”

He added: “The team was already in the process of fixing the standard exit paperwork over the past month or so.”

“Her” voice paused

Despite OpenAI’s efforts to contain the chaos, the scrutiny doesn’t appear to be over.

On Monday, the company said it was pausing ChatGPT’s “Sky” voice, which has recently been likened to Scarlett Johansson’s.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company said in a post.

The voice, a key part of the company’s GPT-4o demo, was widely compared to Johansson’s virtual assistant character in the film “Her.” Altman even appeared to acknowledge the similarities, simply posting “her” on X during the demo.

Some users complained about the chatbot’s new voice, calling it overly sexual and too flirty in demo videos circulating online.

Seemingly oblivious to the criticism, OpenAI appeared triumphant following the launch. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.

“I try not to think about competitors too much, but I cannot stop thinking about the aesthetic difference between openai and google,” Altman wrote on X, accompanied by images of the rival demos.

OpenAI didn’t immediately respond to a request for comment from Business Insider, made outside normal working hours.
