Safe Superintelligence, Co-founded By Ilya Sutskever, Raises $1 Billion For AI Safety Development

SSI's valuation is estimated to be around $5 billion, a testament to the faith investors have placed in Sutskever and his co-founders despite the high-risk nature of AI startups.

Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion to develop artificial intelligence systems that surpass human capabilities while remaining safe. Founded in June 2024, the startup says its sole focus is building safe superintelligence. SSI has attracted investment from major venture capital firms including Andreessen Horowitz, Sequoia Capital, and DST Global, and is currently valued at $5 billion.

The startup plans to use the funds to acquire computing power and hire top talent, with a team split between Palo Alto, California, and Tel Aviv, Israel. SSI's leadership includes Sutskever as chief scientist, Daniel Gross as CEO, and Daniel Levy, another former OpenAI researcher, as principal scientist.

SSI’s focus on AI safety comes amid growing concern about the potential dangers of rogue AI systems. The issue has divided the AI community, with some advocating safety regulations while others, including OpenAI and Google, have opposed them.

Sutskever, who played a key role in OpenAI’s early success, left the company after the turmoil surrounding the brief ouster and subsequent reinstatement of CEO Sam Altman. With SSI, he aims to take a new approach to scaling AI technology, building on his OpenAI experience but with a fresh perspective on developing AI systems safely and responsibly.