Microsoft-Backed OpenAI Forms New Safety Committee as It Trains Next AI Model—Here’s Why
Key Takeaways
- Microsoft-backed OpenAI announced Tuesday that it has formed a new safety committee and begun training its next artificial intelligence (AI) model.
- The Safety and Security Committee, led by OpenAI CEO Sam Altman and other board members, will make safety and security recommendations for all OpenAI projects.
- The new committee could help ease worries about the safety of AI’s quickly evolving capabilities, as well as concerns about internal upheaval at OpenAI following the departures of several senior scientists and the dissolution of its “Superalignment” team.
- Concerns around AI safety could weigh on OpenAI, and in turn Microsoft, as the technology evolves.
Microsoft-backed (MSFT) OpenAI announced Tuesday it formed a new safety committee and has started training its next artificial intelligence (AI) model, amid safety concerns surrounding the emerging tech’s quickly evolving capabilities.
The new committee could help boost perceptions of OpenAI leadership by easing concerns about AI safety, as well as worries about internal upheaval following the departures of several senior scientists and the dissolution of OpenAI’s former “Superalignment” team.
New Committee Comes as OpenAI Works on Next Model
OpenAI said its board created the Safety and Security Committee, which is directed by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman.
The committee “is responsible for making recommendations on critical safety and security decisions for all OpenAI projects,” with its first recommendations due within 90 days, the company said in a release.
The company also reported that it has “recently begun training its next frontier model” and “anticipate[s] the resulting systems to bring [OpenAI] to the next level of capabilities.”
Moves Could Ease Concerns About Internal Upheaval, AI Safety
The formation of the new committee could help ease AI safety concerns following reports that OpenAI dissolved its “Superalignment” team, which worked on keeping increasingly capable AI systems aligned with human goals to ensure safety.
OpenAI introduced the Superalignment team in July 2023, saying it would dedicate 20% of its computing power to managing risks and addressing superintelligence alignment problems.
The team was co-led by OpenAI co-founder and chief scientist Ilya Sutskever and alignment researcher Jan Leike, who both announced earlier this month they were leaving OpenAI.
Leike announced his resignation on Elon Musk’s X (formerly Twitter), saying that “OpenAI must become a safety-first AGI company” as “over the past years, safety culture and processes have taken a backseat to shiny products.”
Following his OpenAI departure, Leike said he is joining Amazon-backed (AMZN) Anthropic to “work on scalable oversight, weak-to-strong generalization, and automated alignment research.” His quick move from OpenAI to Anthropic also underscored the shortage of AI talent facing the emerging tech industry.
Perceptions of OpenAI Could Also Affect Microsoft
The recent resignations fueled speculation about internal upheaval at OpenAI and concerns around AI safety that threaten to undermine the company’s position as an early AI leader.
Perceptions of the ChatGPT maker could also affect its largest investor, Microsoft, which has staked its claim in the AI era through its investment in OpenAI even as it pursues its own AI initiatives.
The latest news comes about six months after OpenAI’s board ousted, then quickly rehired, CEO Sam Altman, an episode that raised concerns about the company’s governance and Microsoft’s involvement.