
Ex-OpenAI researcher Jan Leike joins Anthropic amid AI safety concerns


Sutskever has yet to announce his next move, but his alignment with Anthropic’s values makes it a possible destination.

Leike’s background aligns with Anthropic’s mission. At OpenAI, he belonged to the “Superalignment” team, which focused on ensuring AI systems remained aligned with human values. He criticized OpenAI for not allocating sufficient resources to this goal.

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike said in a follow-up post elaborating on his resignation. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

Anthropic, on the other hand, positions itself as a responsible AI company that integrates ethical principles into AI development.

“Our research teams investigate the safety, inner workings, and societal impact of AI models — so that artificial intelligence has a positive impact on society as it becomes increasingly advanced and capable,” says the company’s mission statement.

OpenAI finally takes note

Following these high-profile exits, the OpenAI board appears to have taken notice. In a strategic move aligned with the direction advocated by Sutskever and Leike, the ChatGPT creator is establishing a safety and security committee, which will make recommendations on critical safety and security decisions for all OpenAI projects.


