Daniel Levy

Daniel Levy is co-founder and president of Safe Superintelligence Inc., the pre-product AI research company started in June 2024 with Ilya Sutskever, and the former lead of the optimization team at OpenAI.
Daniel Levy is a French-trained AI researcher and the co-founder and president of Safe Superintelligence, the pre-product AI research company he started in June 2024 with Ilya Sutskever and Daniel Gross. Before SSI he was a member of technical staff at OpenAI from March 2022 to June 2024; his Stanford homepage describes him as leader of the optimization team, and OpenAI's GPT-4 contributors page credits him as optimization lead and overall vision co-lead. As of May 2026, Levy is president of SSI following the July 3, 2025 transition in which Sutskever assumed the chief executive role and co-founder Daniel Gross departed for Meta Superintelligence Labs.

Origins

Levy was educated in the French system. From September 2010 to June 2012 he attended the preparatory program at Lycée Louis-le-Grand in Paris on the mathematics, physics, and computer science track, the standard route into the entrance exams for the French Grandes Écoles. He ranked 13th nationally on the École Polytechnique entrance exam and completed the Diplôme d'ingénieur there from September 2012 to July 2015. During the program he served as a full-time teaching assistant at a Priority Action Zone school in Aulnay-sous-Bois (September 2012 to April 2013), tutoring high-school students in the sciences.

In 2015 Levy moved to the United States for the Master of Science in computer science at Stanford University (September 2015 to June 2018), advised by Stefano Ermon on probabilistic models and reinforcement learning.

Career

Levy continued at Stanford for the PhD in computer science from September 2018 to December 2021, advised by John Duchi, with Percy Liang listed as second thesis advisor and Christopher Ré and Aaron Sidford on the committee. The dissertation, "Advancing optimization to address the challenges of modern machine learning," covers stochastic optimization, distributionally robust optimization, and differential privacy.

Over the course of his studies, Levy held summer research internships at Microsoft Paris (2014, Xbox Music analytics), Shift Technology in Paris (March to July 2015, bandit methods for fraud detection), and the Facebook Applied Machine Learning Group in Menlo Park (2016, bandits and reinforcement learning for text classification). He was selected for the Google Brain Residency Program in 2017 and worked with Jascha Sohl-Dickstein and Matt Hoffman on Markov-chain Monte Carlo methods, producing the ICLR 2018 paper "Generalizing Hamiltonian Monte Carlo with Neural Networks." A 2020 internship at Google Research New York with Ananda Theertha Suresh, Satyen Kale, and Mehryar Mohri produced the NeurIPS 2021 paper on user-level differential privacy. Levy was a teaching assistant for Stanford's CS229 Machine Learning in fall 2016 and EE364A Convex Optimization in winter 2021.

Levy joined OpenAI in March 2022 as a member of technical staff. His Stanford homepage lists him as leader of the optimization team. OpenAI's GPT-4 contributors page credits him as optimization lead and overall vision co-lead on the GPT-4 program, with additional roles in training-run babysitting and paper authorship. He is also listed on the GPT-4V(ision), GPT-4o, and o1 contributor pages.

On June 19, 2024, three weeks after Sutskever's May 14 announcement of his departure from OpenAI, Levy co-signed the founding statement of Safe Superintelligence with Sutskever and Daniel Gross. The announcement described SSI as "an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent" and stated the mission as "one goal and one product: a safe superintelligence." Coverage in CTech reported that Levy is based in the Tel Aviv office. On the same day, Levy posted on X: "Beyond excited to be starting this company with Ilya and DG. I can't imagine working on anything else at this point in human history."

SSI raised $1 billion in September 2024 at a $5 billion valuation, with backing from NFDG, Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. In April 2025 it raised $2 billion at a $32 billion valuation led by Greenoaks Capital, bringing cumulative funding to approximately $3 billion. On July 3, 2025, Sutskever announced that Daniel Gross had departed effective June 29 and that Sutskever had assumed the chief executive role with Levy as president; the technical team continued to report to Sutskever. Gross joined Meta Superintelligence Labs.

Affiliations

  • Microsoft Paris: Intern, summer 2014 (analytics on the Cosmos big-data platform).
  • Shift Technology: Intern, March 2015 to July 2015 (bandit methods for fraud detection).
  • Facebook Applied Machine Learning Group: Intern, summer 2016 (bandits and reinforcement learning for text classification).
  • Google Brain Residency Program: Research intern, summer 2017 (MCMC methods with Jascha Sohl-Dickstein and Matt Hoffman).
  • Google Research New York: Research intern, summer 2020 (differential privacy with Ananda Theertha Suresh, Satyen Kale, and Mehryar Mohri).
  • OpenAI: Member of technical staff and leader of the optimization team, 2022-03 to 2024-06.
  • Safe Superintelligence: Co-founder, 2024-06-19 to present; president since July 2025.

Notable contributions

Levy's body of public work falls in two phases: Stanford optimization, robustness, and privacy research from 2016 through 2021, and OpenAI flagship-model training from 2022 through 2024. SSI has released no models, papers, or demonstrations as of May 2026, so Levy's record at SSI consists, for now, of the founding signature rather than research output.

Investments and boards

Levy has no public personal angel-investing activity on record in AI, semiconductors, datacenters, software, or energy as of May 2026. His footprint is concentrated in his founding and operating role at Safe Superintelligence rather than a parallel investing program.

Network

Levy's longest-running professional relationships outside OpenAI are with his Stanford PhD advisor John Duchi, his Stanford master's advisor Stefano Ermon, and the broader Stanford optimization circle of Percy Liang, Christopher Ré, Aaron Sidford, and Yair Carmon, with whom he co-authored the 2020 distributionally robust optimization paper. The Google Brain Residency cohort connects him to Jascha Sohl-Dickstein and Matt Hoffman, his ICLR 2018 MCMC collaborators, and the 2020 Google Research internship connects him to the differential-privacy collaborators Mehryar Mohri, Satyen Kale, and Ananda Theertha Suresh.

His OpenAI cohort, with whom he worked from March 2022 through June 2024, includes Ilya Sutskever, the chief scientist who became his SSI co-founder; Sam Altman, the chief executive; Greg Brockman, president and co-founder; Mira Murati, the chief technology officer through her September 2024 departure; and John Schulman, the reinforcement-learning lead through his August 2024 departure. At Safe Superintelligence, the founding cohort with Sutskever and Daniel Gross was the central operating relationship through Gross's June 2025 departure for Meta Superintelligence Labs.

Position in the field

As of May 2026, Levy occupies an unusual position among senior AI researchers. The Stanford optimization research and the NeurIPS 2019 oral presentation establish him as a credentialed optimization theorist within the John Duchi school of robust and private machine-learning optimization. The OpenAI optimization-team leadership, particularly the credited optimization-lead role on GPT-4, places him in the small group of practitioners directly responsible for training-time decisions on the largest deployed language models in 2023 and 2024.

The SSI co-founder and president role situates him alongside Sutskever as the public face of the most distinctive funding-and-strategy posture among AI labs: a $32 billion valuation, no public product, no published research, no website beyond a single page of plain text, and a stated commitment to release nothing until the company achieves safe superintelligence. The thesis underwriting the $3 billion in funding rests on Sutskever's research credentials and on the credentials of co-founders including Levy.

Levy's public profile is concentrated on the Stanford homepage, the @daniellevy__ account on X, and a small number of conference talks tied to his Stanford research period. His role within SSI has not been publicly described in technical specifics, consistent with the company's no-disclosure posture.

Outlook

Open questions over the next 6 to 18 months:

  • First Safe Superintelligence release. Whether SSI produces any public artifact, paper, or demonstration, and Levy's specific role on a first release.
  • Public technical role at SSI. Whether Levy's specific research direction beyond the president title is publicly described in 2026 or 2027.
  • Next funding round. Whether SSI raises at a higher valuation or holds at $32 billion as the no-product runway extends.
  • Tel Aviv research presence. Whether the SSI Tel Aviv office grows materially in 2026 and whether Levy remains based there.
  • Public commentary cadence. Whether Levy increases his English-language conference, talk, or podcast presence beyond the founding announcement and the @daniellevy__ X account.

About the author
Nextomoro

AI Research Lab Intelligence

nextomoro tracks progress for AI research labs, models, and what's next.
