Ilya Sutskever
Ilya Sutskever is an Israeli-Canadian computer scientist, born in 1986 in Nizhny Novgorod (then Gorky) in the Soviet Union. He is co-founder and chief executive officer of Safe Superintelligence, the AI research company he started in June 2024 with Daniel Gross and Daniel Levy. He is a co-author of the 2012 AlexNet paper that catalyzed the deep-learning era, and the former co-founder and chief scientist of OpenAI, where he led research direction from December 2015 through May 2024. As of May 2026 he leads Safe Superintelligence, which raised a $1 billion seed round at a $5 billion valuation in September 2024 and a $2 billion round at a $32 billion valuation in April 2025. He was named to the TIME 100 AI list in both 2023 and 2024.
At a glance
- Education: Bachelor of Science in mathematics, University of Toronto (2005); Master of Science in computer science, University of Toronto (2007); PhD in computer science, University of Toronto (2013), advised by Geoffrey Hinton; brief postdoctoral stint at Stanford University with Andrew Ng (2012 to 2013); honorary Doctor of Science, University of Toronto (June 2025).
- Current role: Co-founder of Safe Superintelligence since June 2024; Chief Executive Officer since July 2025, when co-founder Daniel Gross left for Meta.
- Key contributions: co-author of AlexNet (2012); first author of the Sequence to Sequence Learning paper (2014); co-founder and Chief Scientist of OpenAI (2015 to 2024); research direction for the GPT family, DALL-E, CLIP, Codex, and ChatGPT; co-lead with Jan Leike of the OpenAI Superalignment program (2023 to 2024).
- X / Twitter: @ilyasut
- Personal site: cs.toronto.edu/~ilya
- Wikipedia: Ilya Sutskever
Origins
Sutskever was born in 1986 in Nizhny Novgorod (then Gorky), in the Soviet Union, into a Jewish family. The family emigrated to Israel in 1991 when he was five, and Sutskever spent his school years in Jerusalem before the family relocated to Toronto, Canada, in 2002. He holds Israeli and Canadian citizenship.
He enrolled at the University of Toronto and completed a Bachelor of Science in mathematics in 2005, a Master of Science in computer science in 2007, and a PhD in computer science in 2013 under Geoffrey Hinton, with a thesis on training recurrent neural networks. The Hinton group at Toronto was the institutional setting for the doctoral work and the AlexNet collaboration that followed.
Career
Sutskever's path runs from the AlexNet collaboration at Toronto through Google Brain, the OpenAI co-founding, the Superalignment program, the November 2023 board crisis, and the 2024 founding of Safe Superintelligence. In 2012, while completing his PhD, he co-authored the AlexNet paper with fellow Hinton student Alex Krizhevsky and Hinton himself. The paper won the 2012 ImageNet Large Scale Visual Recognition Challenge and established deep convolutional networks as the dominant approach in computer vision. The three authors incorporated DNNresearch Inc. to commercialize the work; Google acquired the startup in March 2013 and Sutskever joined Google Brain.
He spent a brief postdoctoral term at Stanford with Andrew Ng between 2012 and 2013. At Google Brain from 2013 he worked on TensorFlow and sequence modeling, and was first author on the 2014 Sequence to Sequence Learning paper with Oriol Vinyals and Quoc Le, which introduced the encoder-decoder architecture that became the foundation of neural machine translation.
In December 2015 Sutskever joined the founding cohort of OpenAI as Chief Scientist and a co-founder, alongside Sam Altman, Greg Brockman, Andrej Karpathy, Wojciech Zaremba, John Schulman, and Elon Musk. He led research direction through GPT-1 (2018), GPT-2 (2019), GPT-3 (2020), DALL-E and CLIP (2021), Codex (2021), ChatGPT (November 2022), and GPT-4 (March 2023), and held a seat on the OpenAI nonprofit board.
In July 2023 Sutskever and Jan Leike announced OpenAI's Superalignment team, with a four-year goal of aligning superhuman AI systems and a 20 percent compute commitment. On November 17, 2023 Sutskever was the board member who delivered the news to Altman that the board had voted to remove him. By November 20 he had reversed position publicly, posting that "I deeply regret my participation in the board's actions. I never intended to harm OpenAI." Altman was reinstated on November 22, and Sutskever stepped back from the board in the subsequent restructuring while continuing as Chief Scientist.
On May 14, 2024 Sutskever announced his departure from OpenAI after almost a decade, with Jan Leike resigning the following day and the Superalignment team disbanding within weeks. On June 19, 2024, Sutskever, Daniel Gross, and Daniel Levy launched Safe Superintelligence with offices in Palo Alto and Tel Aviv and a stated single mission: "build safe superintelligence" with no interim products. The company raised $1 billion in September 2024 at a $5 billion valuation, then $2 billion in April 2025 at a $32 billion valuation led by Greenoaks Capital. On July 3, 2025, Gross departed SSI for Meta Superintelligence Labs, Sutskever assumed the chief executive role, and Levy became president.
Affiliations
- Google Brain: Research Scientist, 2013 to 2015 (joined via Google's March 2013 acquisition of DNNresearch Inc.).
- OpenAI: Co-founder and Chief Scientist, December 2015 to May 2024.
- OpenAI: Member of the Board, 2015 to November 2023.
- Safe Superintelligence: Co-founder, June 19, 2024 to present (Chief Scientist and chair through July 2025; Chief Executive Officer since July 2025).
Notable contributions
Sutskever's public work spans foundational machine-learning research, research leadership at one of the central frontier AI labs, and the founding of a research-only company built around the long-horizon safety problem.
- AlexNet (2012). Co-author with Alex Krizhevsky and Geoffrey Hinton of the ImageNet-winning convolutional neural network paper. Widely cited as the result that established deep learning as the dominant paradigm in computer vision and the practical starting point of the modern deep-learning era.
- Sequence to Sequence Learning (2014). First author with Oriol Vinyals and Quoc Le. The encoder-decoder architecture became the foundation of neural machine translation and a precursor to the attention-and-transformer line in language modeling.
- OpenAI research direction (2015 to 2024). Chief Scientist through the GPT series, DALL-E, CLIP, Codex, ChatGPT, and GPT-4. Industry coverage has credited Sutskever with much of the conceptual scaffolding behind the lab's scaling-and-pretraining approach.
- ChatGPT (November 2022) and GPT-4 (March 2023). The consumer launch and the multimodal follow-on that defined the 2022 to 2023 generative-AI boom; Sutskever was senior research lead on both releases.
- OpenAI Superalignment program (July 2023 to May 2024). Co-led with Jan Leike. Stated goal of solving superhuman alignment in four years with a 20 percent compute commitment. The program disbanded within weeks of Sutskever's and Leike's May 2024 departures.
- Safe Superintelligence (June 2024 to present). Co-founder and Chief Executive Officer of the pre-product research company. Approximately $3 billion raised across two rounds, the latest valuing the company at $32 billion, despite no public model, paper, or product.
- Awards and honors. MIT Technology Review "35 Innovators Under 35" (2015); Fellow of the Royal Society (FRS), elected May 2022; TIME 100 AI list in 2023 and 2024; honorary Doctor of Science from the University of Toronto (June 2025); National Academy of Sciences Award for Industrial Application of Science (2026).
Investments and boards
- Safe Superintelligence (AI): Co-founder and Chief Executive Officer, 2024 to present. Privately held. Approximately $3 billion cumulative funding through the April 2025 $2 billion round at a $32 billion valuation led by Greenoaks Capital.
No public personal angel-investing activity is on record in AI, semiconductors, datacenters, software, or energy as of May 2026. Sutskever's footprint is concentrated in his founding and operating role at Safe Superintelligence rather than a parallel investing program.
Network
Sutskever's foundational professional relationship is with Geoffrey Hinton, his PhD advisor at the University of Toronto and co-author of the AlexNet paper. The Hinton-Krizhevsky-Sutskever collaboration that produced AlexNet and DNNresearch Inc. in 2012 was the entry point into both Google Brain and the broader senior deep-learning research community.
His OpenAI co-leadership cohort, with whom he worked from December 2015 through May 2024, includes Sam Altman, the chief executive he voted to remove in November 2023 and whose reinstatement five days later coincided with Sutskever's own withdrawal from the board; Greg Brockman, president and fellow co-founder; Andrej Karpathy, founding research scientist; Mira Murati, the Chief Technology Officer who briefly served as interim chief executive during the November 2023 episode; and John Schulman, co-founder and reinforcement-learning lead. Jan Leike, his Superalignment co-lead, departed OpenAI alongside him in May 2024.
At Safe Superintelligence, Daniel Gross was co-founder and founding chief executive until his July 2025 departure for Meta Superintelligence Labs. Daniel Levy, formerly an OpenAI optimization researcher, is co-founder and current president. Outside SSI, Sutskever shares the OpenAI alumni-and-rivals universe with Dario Amodei and Daniela Amodei of Anthropic, Demis Hassabis of Google DeepMind, and Yann LeCun and Fei-Fei Li on the broader AI-research circuit.
Position in the field
As of May 2026, Sutskever is one of a small number of frontier AI researchers who have moved from research-scientist roles into chief-executive positions at independent labs, alongside Demis Hassabis at Google DeepMind, Dario Amodei at Anthropic, and Mira Murati at Thinking Machines Lab. He is distinctive within that group in having served as chief scientist of one frontier lab before founding a second.
Safe Superintelligence's positioning is structurally distinctive among AI companies. The company has secured approximately $3 billion at a $32 billion valuation without releasing a model, a paper, a demonstration, or a website beyond a single page of plain text. The thesis underwriting the funding rests on Sutskever's research credentials, the team's pedigree, and the proposition that an explicit refusal to ship interim products will yield capability breakthroughs that product-shipping labs cannot match.
The November 2023 OpenAI board episode is a defining event in his public record. Reporting at the time placed Sutskever at the center of the board action that removed Sam Altman, with the November 20 "I deeply regret my participation" tweet representing a public reversal that preceded Altman's reinstatement on November 22. The episode anchors a substantial fraction of mainstream press coverage of his subsequent moves, including the May 2024 OpenAI departure and the June 2024 SSI launch.
Outlook
Open questions over the next 6 to 18 months:
- First Safe Superintelligence release. Whether SSI produces any public artifact, paper, or demonstration, and whether the no-interim-products posture holds through 2026 and 2027.
- Next funding round. Whether SSI raises again at a higher valuation, holds at $32 billion, or moves toward a different financing structure as the no-product runway extends.
- Senior-talent recruitment. Continued movement of senior researchers from OpenAI, Anthropic, Meta, and Google DeepMind into SSI, and any reciprocal departures.
- Compute infrastructure. Execution of the April 2025 Google Cloud TPU partnership and any further compute commitments.
- Acquisition or partnership activity. SSI declined Meta's reported 2025 acquisition approach; whether further interest emerges from other frontier or incumbent labs.
- Public-policy and research posture. Sutskever's profile in industry safety discussions and any future commentary on alignment, scaling, or the November 2023 OpenAI episode.
Sources
- Ilya Sutskever. Wikipedia biographical entry covering education, career, the OpenAI period, and the Safe Superintelligence founding.
- Safe Superintelligence Inc. Company site with the founding mission statement and recruitment messaging.
- ImageNet Classification with Deep Convolutional Neural Networks. The 2012 AlexNet paper by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton at NIPS 2012.
- Sequence to Sequence Learning with Neural Networks. The 2014 seq2seq paper by Sutskever, Vinyals, and Le.
- Introducing Superalignment. OpenAI's July 2023 Superalignment program announcement co-authored by Sutskever and Leike, with the four-year goal and 20 percent compute commitment.
- Ilya Sutskever on X: "I deeply regret my participation in the board's actions". The November 20, 2023 tweet during the OpenAI board crisis.
- Ilya Sutskever on X: OpenAI departure announcement. The May 14, 2024 tweet announcing the OpenAI departure after almost a decade.
- Ilya Sutskever: Deep Learning | Lex Fridman Podcast #94. The May 2020 long-form interview covering deep learning, language models, and AI safety.
- Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs. TechCrunch coverage of the May 2024 OpenAI departure.
- Ilya Sutskever will lead Safe Superintelligence following his CEO's exit. TechCrunch coverage of the July 2025 leadership transition at SSI.
- Ilya Sutskever: The 100 Most Influential People in AI 2024. TIME 100 AI 2024 entry.
- Dr Ilya Sutskever FRS. Royal Society fellowship page following the May 2022 election.
- Ilya Sutskever, a leader in AI and its responsible development, receives U of T honorary degree. University of Toronto Faculty of Arts and Science announcement of the June 2025 honorary Doctor of Science.
- Photo: File:Ilya Sutskever and Sam Altman in TAU (cropped).jpg on Wikimedia Commons, CC-BY-SA 4.0 Eladkarmel (5 June 2023, Tel Aviv University).