OpenAI Forms Safety and Security Committee to Enhance Generative AI Oversight
OpenAI has announced the formation of a Safety and Security Committee, a move aimed at bolstering the oversight of its artificial intelligence projects and operations. If this sounds familiar, it’s because the company created a Safety Advisory Group last December to recommend ways of developing generative AI safely and granted the board of directors veto power over decisions related to AI safety.
The newly established committee is led by OpenAI board chair Bret Taylor and includes OpenAI CEO Sam Altman; Adam D’Angelo, CEO of Quora; and Nicole Seligman, a corporate lawyer. The committee’s primary responsibility is to ensure the safety and security of OpenAI’s projects, particularly as the company begins developing its next-generation AI model. That model is expected to push the boundaries of artificial intelligence, bringing the company closer to achieving artificial general intelligence (AGI).
The new Safety and Security Committee will first review OpenAI’s existing processes and safeguards and work on ways to enhance them. This review period is set for 90 days, after which the committee will present its findings and recommendations to the full board. Following that review, OpenAI plans to publicly share an update on the recommendations it adopts.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” OpenAI explained in a blog post. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
Safety concerns were frequently floated as the (still unconfirmed) reason behind OpenAI CEO Sam Altman’s abrupt ousting before his eventual return. Regardless of whether that was the core of the board’s argument for the firing, the evolving discussion around AI risks puts considerable attention on OpenAI’s approach to safety. The company’s “Preparedness Framework” claims to establish a clear methodology for identifying and addressing the most significant risks associated with the generative AI models under development. OpenAI has also set up safety mechanisms organized by a model’s development stage: a safety systems team for in-production models like those powering ChatGPT, and a preparedness team for frontier models still in development, tasked with spotting and measuring risks.
Despite OpenAI’s positive framing, the new committee is necessary in part because the company dissolved its “superalignment” team, which was dedicated to managing superintelligent AI models, following the departure of OpenAI co-founder and chief scientist Ilya Sutskever and the move of safety team leader Jan Leike to OpenAI rival Anthropic.