OpenAI Forms Another Safety Committee After Dismantling Prior Team

OpenAI is forming a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman.

The committee will make recommendations to the full board on safety and security decisions for OpenAI’s projects and operations.

In its announcement of the committee, OpenAI noted that it has begun training the next iteration of the large language model that underpins ChatGPT, and that it “welcomes a robust debate at this important moment” on AI safety.

The committee’s first task is to evaluate and further develop the company’s processes and safeguards over the next 90 days, after which it will share its recommendations with the full board for review before they are released publicly.

The committee’s formation follows the resignation of Jan Leike, a former OpenAI safety executive who cited underinvestment in safety work and tensions with leadership. It also follows the disbanding of the company’s “superalignment” safety oversight team, whose members were reassigned elsewhere.

Ilia Kolochenko, a cybersecurity expert and entrepreneur, expressed skepticism about how much the change will ultimately benefit society at large.

“While this move is certainly welcome, its eventual benefit for society is largely unclear. Making AI models safe, for instance, to prevent their misuse or dangerous hallucinations, is obviously essential,” Kolochenko said in an emailed statement. “However, safety is just one of many facets of risks that GenAI vendors have to address. … Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative — the absolutely crucial characteristics of GenAI solutions.”
