OpenAI Co-Founder Ilya Sutskever Starts New AI Firm
A co-founder and former chief scientist of OpenAI is launching his own artificial intelligence (AI) company.
Ilya Sutskever, who stepped down from the ChatGPT maker last month, announced the launch of Safe Superintelligence (SSI) in a post on X Wednesday (June 19).
I am starting a new company: https://t.co/BG3K3SI3A1
— Ilya Sutskever (@ilyasut) June 19, 2024
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company said in its own social media post.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
— SSI Inc. (@ssi) June 19, 2024
“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace,” the company wrote. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
SSI has offices in both Tel Aviv, Israel, and Palo Alto, California. Sutskever’s co-founders are Daniel Levy, a former OpenAI researcher, and Daniel Gross, a former AI lead at Apple.
Sutskever left OpenAI in May after playing a role in CEO Sam Altman’s ouster last fall. When Altman retook the helm days later, Sutskever was removed from the board.
However, the two were — at least publicly — on good terms when Sutskever announced his departure from the company.
“The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI [artificial general intelligence] that is both safe and beneficial under the leadership” of Altman, President Greg Brockman and Chief Technology Officer Mira Murati, he wrote.
Sutskever had co-led the company’s superalignment team, which was focused on AI safety.
Jan Leike, who co-led the team with him, resigned soon after, saying the company had lost its focus on safety; he later took a job with rival AI company Anthropic.
The launch of Sutskever’s new company comes on the heels of a pledge last month by AI companies to implement a “kill switch” policy, potentially halting the development of their most advanced AI models if certain risk thresholds are crossed.
As PYMNTS wrote at the time, some experts are questioning the practicality, effectiveness and consequences of such a policy on innovation, competition and the world economy.
“The term ‘kill switch’ is odd here because it sounds like the organizations agreed to stop research and development on certain models if they cross lines associated with risks to humanity. This isn’t a kill switch, and it’s just a soft pact to abide by some ethical standards in model development,” Camden Swita, head of AI and machine learning innovation at AI firm New Relic, told PYMNTS.
“Tech companies have made these kinds of agreements before (related to AI and other things like social media), so this feels like nothing new,” Swita added.