A new Safety and Security Committee has been formed that will be responsible for making safety recommendations to the company’s board on all OpenAI products.
The committee will be led by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman.
The announcement of the new committee comes just weeks after the previous safety team was dissolved following disagreements between OpenAI leadership and team leads Ilya Sutskever and Jan Leike.
In a blog post on Monday, OpenAI announced the formation of a new Safety and Security Committee, just weeks after the old AI safety team was dissolved.
The new committee, led by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman, will be responsible for making safety and security recommendations to the board on all OpenAI projects.
The committee will also consult other security and technical experts, such as former cybersecurity officials Rob Joyce and John Carlin.
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days.” – OpenAI
At the end of those 90 days, the committee will share its recommendations with the full board. Once the board approves them, the adopted recommendations will be made public.
Controversy Surrounding the New Team
The news about the new committee comes on the heels of multiple high-profile exits from the company’s safety and security department, including co-founder and chief scientist Ilya Sutskever and researcher Jan Leike. The remaining members of the team were moved to other research departments.
Sutskever and Leike co-led the company’s “Superalignment” team, which was responsible for ensuring that OpenAI’s AI development put human needs and safety first.
Leike revealed that he had been disagreeing with company leadership for quite some time, particularly over the company’s core priorities.
He was also sharply critical of the way the company handled safety, saying that it often prioritized “new and shiny products” over safety.
Conveniently, the resignations coincided with OpenAI’s launch of GPT-4o, which may have helped keep media outlets from digging deeper into the departures.
However, the new committee comes at a crucial time, given that OpenAI is preparing its “next frontier model,” which will succeed the model currently powering ChatGPT.
Altman believes this new model will bring the company a step closer to making AGI (artificial general intelligence) a reality. AGI is the same technology that many experts have warned against, and it may well be the reason behind the recent resignations.
In a nutshell, Altman dissolved a team whose members disagreed with his ideas, then formed a new team with himself at the helm so that no one could get in the way of what he wants to do. All of this sounds super fishy and dangerous to me.
Plus, the committee’s structure doesn’t sound very independent: it consists of people who already sit on the OpenAI board. The same people responsible for ensuring maximum profitability will now also be responsible for safety recommendations. That’s a massive conflict of interest; you can connect the dots.
Ex-OpenAI Board Member Reveals Shocking Details
As reported by Reuters, Helen Toner, a former OpenAI board member, revealed some shocking details on a recent episode of the podcast “The TED AI Show.”
She claims that Sam Altman was removed as CEO of OpenAI last November after two OpenAI executives reported serious instances of “psychological abuse” to the board, backing their accounts with documentation and screenshots.
She also claimed that the employee support Altman received at the time was driven by fear: employees were told the company would collapse if Altman didn’t return, and they feared retaliation from him if they didn’t stand behind him.
More shockingly, Toner revealed that the OpenAI board only learned about ChatGPT’s launch when they saw it on X (then Twitter).
Altman’s Charity
Sam Altman and his husband have decided to give away most of their wealth through the Giving Pledge, a campaign that encourages the world’s wealthiest people to donate the bulk of their fortunes to philanthropic causes. Altman is currently worth about $2 billion. Perhaps this is Altman’s way of repenting?
It’s important to note that the Giving Pledge is not a legal contract but a moral commitment encouraging the ultra-wealthy to donate the majority of their fortunes to good causes. The initiative currently counts more than 245 couples and individuals from over 30 countries among its signatories.