OpenAI creates committee to monitor safety of its artificial intelligence models
OpenAI has created a board committee to evaluate the safety and security of its artificial intelligence models, a governance change made weeks after its top safety executive resigned and the company effectively disbanded his internal team.
The new committee will spend 90 days evaluating the safeguards in OpenAI’s technology before delivering a report to the full board. “Following the full board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” the company said in a blog post on Tuesday.
OpenAI also said that it has recently begun training its next AI model.
The private firm’s recent rapid advances in AI have raised concerns about how it manages the technology’s potential dangers. Those worries intensified last fall when chief executive Sam Altman was briefly ousted in a boardroom coup after clashing with co-founder and chief scientist Ilya Sutskever over how quickly to develop AI products and the steps to limit harms.
Those concerns resurfaced this month after Sutskever and a key deputy, Jan Leike, left the company. The two scientists ran OpenAI’s so-called superalignment team, which focused on long-term threats from superhuman AI. After resigning, Leike wrote that his division had been “struggling” for computing resources within OpenAI. Other departing employees echoed his criticism.
Following Sutskever’s departure, OpenAI dissolved his team. The company said on Tuesday that the work would continue under its research unit and John Schulman, a co-founder who takes the new title of Head of Alignment Science.
The start-up has at times struggled to manage staff departures. Last week, OpenAI scrapped a policy that cancelled former staffers’ equity if they spoke out against the company. A spokesperson said OpenAI was aware of criticism from ex-employees and anticipated more, adding that the company was working to address their concerns.
OpenAI’s new safety committee will consist of three board members – Chairman Bret Taylor, Quora CEO Adam D’Angelo and ex-Sony Entertainment executive Nicole Seligman – along with six employees, including Schulman and Altman. The company said it would continue to consult outside experts, naming two of them: Rob Joyce, a Homeland Security adviser to Donald Trump, and John Carlin, a former Justice Department official under President Joe Biden. – Bloomberg