In an effort to strengthen its defenses against potential threats from artificial intelligence, OpenAI has put a series of advanced safety protocols in place.
The company is introducing a “safety advisory group” that will sit above the technical teams and make recommendations to leadership, while the board of directors has been granted the power to veto those decisions. The move signals OpenAI’s intent to take a proactive stance on risk management.
OpenAI has recently seen significant leadership changes amid an ongoing debate about the risks of deploying advanced AI, which prompted the company to re-evaluate its safety processes.
In an updated version of its “Preparedness Framework,” published in a blog post, OpenAI outlined a systematic approach to identifying and addressing catastrophic risks in new AI models.
“By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals; this includes, but is not limited to, existential risk,” the OpenAI update states.
An Insight into OpenAI’s New “Preparedness Framework”
OpenAI’s new Preparedness Framework divides its models into three phases of governance, each overseen by a different team. In-production models are handled by the “safety systems” team, frontier models still in development fall to the “preparedness” team, which identifies and quantifies risks before release, and the “superalignment” team works on theoretical guardrails for future superintelligent models.
OpenAI’s “cross-functional Safety Advisory Group” is tasked with reviewing the technical teams’ reports independently, to ensure the objectivity of the process.
Risk evaluation assesses models across four categories: cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), persuasion, and model autonomy. Known mitigations are taken into account, but a model rated as a ‘high’ risk cannot be deployed, and a model with any ‘critical’ risk cannot be developed further.
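To make these gating rules concrete, the sketch below shows one way the logic could be expressed in code. It is a minimal illustration, not OpenAI’s implementation: the category names and the “deploy only at medium or below, develop further only at high or below” thresholds follow the description above, while the function names, data structures, and the assumption that the overall rating equals the highest category score are hypothetical.

```python
# Illustrative sketch only: the deploy/develop thresholds and category names
# follow the public description of the Preparedness Framework; everything
# else (names, structures, the "highest score wins" rule) is assumed here.

from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


TRACKED_CATEGORIES = ["cybersecurity", "cbrn", "persuasion", "model_autonomy"]


def overall_risk(post_mitigation_scores: dict[str, RiskLevel]) -> RiskLevel:
    """Assume the overall rating is the highest score across tracked categories."""
    return max(post_mitigation_scores[c] for c in TRACKED_CATEGORIES)


def can_deploy(post_mitigation_scores: dict[str, RiskLevel]) -> bool:
    """A model rated 'high' or 'critical' in any category cannot be deployed."""
    return overall_risk(post_mitigation_scores) <= RiskLevel.MEDIUM


def can_continue_development(post_mitigation_scores: dict[str, RiskLevel]) -> bool:
    """A model with any 'critical' risk cannot be developed further."""
    return overall_risk(post_mitigation_scores) <= RiskLevel.HIGH


# Example: a model that is high-risk for cybersecurity but lower-risk elsewhere.
scores = {
    "cybersecurity": RiskLevel.HIGH,
    "cbrn": RiskLevel.LOW,
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
}
print(can_deploy(scores))                # False: 'high' risk blocks deployment
print(can_continue_development(scores))  # True: only 'critical' blocks further work
```

In practice, the framework’s scorecards are produced and reviewed by people, so a sketch like this captures only the final gating rule, not the evaluation process that feeds it.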
OpenAI’s commitment to transparency is evident in how the framework spells out each risk level, which makes the evaluation process clear and standardized.
In the cybersecurity category, for instance, the risk level is determined by the potential effect of a model’s capabilities, ranging from modest gains in operator productivity at the lower end to identifying novel cyberattacks and executing defense strategies at the higher end.
The experts who make up this group will send their recommendations to leadership as well as the board. With this two-stage review mechanism in place, OpenAI aims to prevent scenarios where high-risk products are developed without adequate scrutiny.
OpenAI’s CEO and CTO to Make Ultimate Decisions
OpenAI’s CEO and CTO, Sam Altman and Mira Murati, retain the authority to make the final call on the development of advanced models. However, questions remain about how effective the board’s veto power will be in practice and how transparent the decision-making process will be.
OpenAI has promised to have its systems audited by independent third parties, and the effectiveness of its safety measures will depend heavily on expert recommendations. As OpenAI takes this bold step to strengthen its safety framework, the tech community is watching closely.
Copyright for syndicated content belongs to the linked source: TechReport – https://techreport.com/news/openai-board-granted-veto-power-over-risky-ai-models/