OpenAI Forms New ‘Safety and Security Committee’ to Oversee Responsible Decision-Making on Artificial Intelligence
The billion-dollar startup OpenAI is no stranger to dramatic turns of events: several members of the company have departed, feeling that the firm behind ChatGPT was no longer being careful about the proliferation of artificial intelligence and the impact it could have without proper accountability. This is likely one reason why OpenAI has announced the formation of a ‘Safety and Security Committee’ intended to bring a more responsible approach going forward.
The new Safety and Security Committee comprises OpenAI members, including CEO Sam Altman
The company’s announcement post states that several OpenAI board members, including Bret Taylor, Adam D’Angelo, and Nicole Seligman, sit on the committee and will be responsible for evaluating the startup’s safety processes over the next three months. The committee will then share its findings with the full OpenAI board of directors for review, after which OpenAI will publicly share an update on any adopted recommendations.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.
A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
OpenAI has seen several departures in a short span of time, particularly from the safety side of the organization, with those resigning citing a loss of confidence that the company would behave responsibly as artificial intelligence becomes increasingly capable. One notable individual was Ilya Sutskever, a co-founder of OpenAI, who left in May of this year, reportedly over Altman’s focus on launching AI-powered products at the expense of due diligence.
That said, OpenAI has voiced support for AI regulation and even hired an in-house lobbyist. Additionally, the U.S. Department of Homeland Security announced that Sam Altman would be among the members of its newly formed Artificial Intelligence Safety and Security Board, which may reassure those who are torn about the company’s vision. What remains to be seen is how seriously the new ‘Safety and Security Committee’ will take its job. It looks like we will find out after the 90-day window closes.
News Source: OpenAI