Safe Superintelligence (SSI)
Safe Superintelligence Inc., usually referred to by its initials SSI, is an American artificial intelligence research company founded in June 2024 by Ilya Sutskever, Daniel Gross, and Daniel Levy. The company is headquartered in Palo Alto, California, with operations in Tel Aviv. SSI has stated a single mission and a single product, the development of safe superintelligence, and as of April 2026 it remains pre-product with a private valuation of $32 billion. The company's website consists of a single page of plain text, and it has released no models, demonstrations, or research papers.
At a glance
- Founded: June 19, 2024 in Palo Alto, California
- Status: Private. Pre-product. Approximately 20 employees as of mid-2025.
- Funding: Cumulative $3 billion across two reported rounds. $1 billion at a $5 billion valuation in September 2024; $2 billion at a $32 billion valuation in April 2025, led by Greenoaks Capital.
- CEO: Ilya Sutskever (co-founder)
- Co-founders: Daniel Gross (former Apple AI lead and Y Combinator partner; departed SSI in 2025) and Daniel Levy (former OpenAI researcher).
- Open weights: None. No models released.
- Flagship models: None publicly released.
Origins
SSI was founded on June 19, 2024, about a month after Ilya Sutskever announced his departure from OpenAI, where he had been a co-founder, Chief Scientist, and a member of the board. Sutskever had been a central figure in the November 2023 board crisis at OpenAI in which Sam Altman was briefly fired and reinstated. Sutskever left the OpenAI board in the aftermath of the crisis and formally departed the company in May 2024.
The founding team included Daniel Gross, who had been the head of machine learning at Apple from 2013 to 2017 and a partner at Y Combinator before becoming a prominent AI investor, and Daniel Levy, an AI researcher who had been on the OpenAI optimization team. The launch announcement on June 19, 2024 stated the company's mission in unusually specific terms: a single goal and a single product, both being safe superintelligence.
The founding announcement also stated that SSI would be a "straight-shot SSI lab" focused entirely on the technical problem of safe superintelligence, with no commercial pressure from interim products or services. The framing positioned SSI as a deliberate alternative to the product-and-model release cadence of OpenAI, Anthropic, and Google DeepMind, which Sutskever had publicly characterized as constraints on long-horizon safety research.
In September 2024, SSI raised $1 billion at a $5 billion valuation in its first reported funding round, with backing from a16z, Sequoia Capital, DST Global, and SV Angel. In March and April 2025, SSI raised an additional $2 billion at a $32 billion valuation, a 6.4-fold valuation increase in approximately seven months. The April 2025 round was led by Greenoaks Capital, which committed approximately $500 million, alongside Andreessen Horowitz, Lightspeed Venture Partners, DST Global, and strategic investors including NVIDIA and Alphabet.
In April 2025, Google Cloud announced a partnership to provide TPUs for SSI research, marking the first publicly disclosed compute relationship for SSI. Reports in the first half of 2025 indicated that Meta had attempted to acquire SSI as part of its Meta Superintelligence Labs build, with Sutskever declining the approach. Daniel Gross subsequently departed SSI in mid-2025 to join the Meta Superintelligence Labs leadership team, with Sutskever assuming the CEO role.
Mission and strategy
SSI's stated mission is the development of safe superintelligence. The company's public communication consists of a single page of text on its website, which states the goal plainly: "Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."
The strategic premise is that the conventional AI industry approach of iterative product releases on the way to AGI is incompatible with the kind of fundamental research SSI believes is necessary for genuine safety guarantees. By avoiding interim products, SSI argues it can pursue research directions that would not survive the commercial-pressure environment of OpenAI or Anthropic.
The company has not published technical papers, demonstrations, or research previews of any kind. It has stated that its team, investors, and business model are aligned around the single goal of safe superintelligence, with no commercial milestones along the way. The funding model, which has secured $3 billion at a $32 billion valuation without a public product, is unusual in the AI industry and depends entirely on Sutskever's reputation and the team's research credentials.
Models and products
SSI has released no models, products, demonstrations, or research papers as of April 2026. The company has stated publicly that it does not plan interim releases on the path to safe superintelligence.
Benchmarks and standing
There are no publicly available benchmark evaluations of SSI work, as the company has not released models or research artifacts that could be evaluated. The benchmark profile is, definitionally, empty.
The company's standing in the industry rests on the credentials of the founding team and the size of the funding rounds, neither of which constitutes evidence of capability. SSI is widely watched as a potential future Frontier entrant if and when the company releases its first model.
Leadership
As of April 2026, SSI's senior leadership includes:
- Ilya Sutskever, Chief Executive Officer and co-founder. Co-founder, former Chief Scientist, and former board member of OpenAI. Co-author of the AlexNet paper that catalyzed the deep-learning era and of foundational papers on sequence-to-sequence learning and large-scale language modeling. Public face of SSI and the central reason for the company's funding scale.
- Daniel Gross, co-founder. Former head of machine learning at Apple (2013 to 2017), former Y Combinator partner, prominent AI angel investor. Departed SSI in mid-2025 to join the Meta Superintelligence Labs leadership team, with Sutskever taking over as CEO.
- Daniel Levy, co-founder. Former OpenAI optimization researcher.
The team is reported to be approximately 20 employees as of mid-2025, hired largely through the founders' professional networks. The hiring pace is deliberately slow relative to comparably funded AI startups, consistent with the stated focus on small-team research over rapid scale.
Funding and backers
SSI's funding history through April 2026 includes the September 2024 $1 billion round at a $5 billion valuation and the March and April 2025 $2 billion round at a $32 billion valuation, for a cumulative $3 billion across approximately seven months. The $32 billion valuation places SSI among the most valuable private AI companies despite the absence of a public product or revenue.
The investor base combines venture-capital firms (Greenoaks Capital as lead, Andreessen Horowitz, Sequoia Capital, Lightspeed Venture Partners, DST Global, SV Angel) with strategic-corporate investors (NVIDIA, Alphabet) and notable individual investors. The Greenoaks-led April 2025 round was reported to be substantially oversubscribed, with several firms seeking larger allocations than they ultimately received.
The Google Cloud TPU partnership in April 2025 is the first publicly disclosed compute relationship and indicates that SSI has chosen Google's custom AI silicon over an NVIDIA-only training stack, at least for its initial training infrastructure, a notable choice given NVIDIA's position as a strategic investor. The compute commitment terms have not been disclosed publicly.
The 2025 reported acquisition attempt by Meta is the most public signal of SSI's strategic value to potential acquirers. The reported terms have not been disclosed; Sutskever publicly declined the approach.
Industry position
SSI occupies the most distinctive position among the Insurgent labs and arguably among all AI companies. The combination of a $32 billion private valuation, no public product, no published research, no website beyond a single page, approximately 20 employees, and a stated commitment to releasing nothing until the company achieves safe superintelligence produces a company shape that has no analog in the AI industry or in venture-backed technology more broadly.
The strategic risks are substantial. The company has no revenue model, no commercial milestones, no interim products to validate the research direction publicly, and no benchmark evidence of capability. Investors are betting entirely on Sutskever's reputation, the team's research credentials, and the long-shot proposition that SSI can produce a single capability breakthrough that justifies the $32 billion valuation and any subsequent rounds.
The strategic strength is the absence of the constraints that bind product-shipping AI labs. SSI has no enterprise sales motion to manage, no consumer product to update, no benchmark cycle to optimize against, no API customers to support, and no public roadmap to defend. The thesis depends entirely on whether this research environment produces capability breakthroughs that the public-facing labs cannot match.
The company's research direction is not publicly known. Sutskever has historically published on optimization, sequence modeling, attention mechanisms, and the scaling-laws literature, but no SSI-specific research direction has been disclosed.
Competitive landscape
SSI competes with several Frontier and Insurgent labs:
- OpenAI. Sutskever's previous lab. The publicly stated SSI strategy is a deliberate counter to OpenAI's product-and-model release cadence.
- Anthropic. Most ideologically adjacent: both labs emphasize safety, both have substantial founder credentials from OpenAI, both have raised significant capital. Anthropic ships products and SSI does not, which is the principal contrast.
- Google DeepMind. Research-deep peer with similar long-horizon framing, though DeepMind is a public-company subsidiary and SSI is private.
- Thinking Machines Lab. Closest peer Insurgent: both were founded by senior OpenAI departures in 2024, though Thinking Machines began shipping research tooling in late 2025 while SSI remains release-free.
- AMI (Advanced Machine Intelligence). The other 2025-founded Insurgent at the most senior end. Yann LeCun's lab pursues a different technical thesis (V-JEPA) but occupies similar fundraising and credentialing space.
- Other Insurgent labs. The 2024 to 2025 founder cohort more broadly competes for senior AI talent, GPU allocations, and investor attention.
Outlook
Several open questions affect SSI's trajectory in 2026 and 2027:
- The first public release. Whether SSI will produce any artifact (paper, model, demonstration, capability claim) and on what timeline is the central unknown.
- The next funding round. SSI has reportedly attracted continued investor interest after the $32 billion valuation; the structure and valuation of any 2026 or 2027 round will indicate market confidence in the no-product strategy.
- Personnel growth. SSI has hired slowly relative to peer-funded AI labs; whether the team scales materially in 2026 will signal whether the research direction requires additional capacity.
- Leadership after Daniel Gross's departure. Gross left SSI for Meta Superintelligence Labs in mid-2025, with Sutskever assuming the CEO role; whether SSI adds senior operational leadership to replace him remains open.
- Compute infrastructure. The Google Cloud TPU partnership terms, scale, and any subsequent compute commitments will indicate the magnitude of the training program SSI has under way.
- Acquisition activity. Meta's reported 2025 approach was rebuffed, but additional acquisition interest from Frontier or Incumbent labs is plausible if SSI demonstrates capability progress.
Sources
- TechCrunch: OpenAI co-founder Ilya Sutskever's Safe Superintelligence reportedly valued at $32B. April 2025 valuation context.
- SiliconANGLE: Ilya Sutskever's SSI reportedly raising new funding at $20B+ valuation. Earlier round reporting.
- CTech: Safe Superintelligence raises $2B at $32B valuation with no product yet. Investor and round details.
- Safe Superintelligence Inc. official site. The single-page company website containing the founding mission statement.
- Wikipedia: Safe Superintelligence Inc. Reference material.
- Wikipedia: Ilya Sutskever. Founder background.