Tim Rocktäschel

Tim Rocktäschel is a German-British computer scientist and Professor of Artificial Intelligence at University College London. He is the founder and chief executive officer of Recursive Superintelligence, the AI research company he started in late 2025, and was previously a Director and Principal Scientist at Google DeepMind, where he led the Open-Endedness research group. Recursive Superintelligence emerged from stealth in April 2026 with a $500 million pre-Series A round at a $4 billion pre-money valuation, led by GV with Nvidia participating. As of May 2026 he runs the company while continuing his UCL professorship and the UCL DARK research lab in parallel.

At a glance

  • Education: Diploma (equivalent to MSc) in computer science, Humboldt University of Berlin (2012); PhD in computer science, University College London (2017), supervised by Sebastian Riedel, with a thesis on neural-symbolic reasoning. Microsoft Research PhD Scholarship (2013); Google PhD Fellowship in Natural Language Processing (2017).
  • Current roles: Founder and Chief Executive Officer of Recursive Superintelligence since late 2025; Professor of Artificial Intelligence at University College London since 2019; principal investigator of the UCL Deciding, Acting, and Reasoning with Knowledge (DARK) Lab.
  • Key contributions: principal investigator on the NetHack Learning Environment (2020) and the related MiniHack benchmark; co-author on Genie: Generative Interactive Environments and Debating with More Persuasive LLMs Leads to More Truthful Answers, the two ICML 2024 Best Paper Award recipients credited to his team; co-author of the ICML 2024 position paper Open-Endedness is Essential for Artificial Superhuman Intelligence; author of Artificial Intelligence: 10 Things You Should Know (Orion, 2024).
  • X / Twitter: @_rockt
  • LinkedIn: rockt
  • Personal site: rockt.ai

Origins

Rocktäschel was born in Germany. He completed his Diploma in computer science at Humboldt University of Berlin in 2012, the German pre-Bologna degree corresponding to a Master of Science. The diploma thesis introduced him to symbolic reasoning and machine-learning methods that would carry through his subsequent doctoral work.

He moved to London in 2013 for doctoral study at University College London under Sebastian Riedel in the Machine Reading group. The PhD examined the integration of symbolic and neural methods in automated reasoning, including end-to-end differentiable theorem provers that learn to compose first-order logic rules from data, and was supported by a Microsoft Research PhD Scholarship (2013) and a Google PhD Fellowship in Natural Language Processing (2017). He completed the PhD in 2017. He also held a research internship at DeepMind in summer 2015, his first contact with the lab he would later join as a director.

Career

From May 2017 to September 2018 Rocktäschel was a Postdoctoral Researcher in the Whiteson Research Lab at the University of Oxford under Shimon Whiteson, with parallel appointments as Junior Research Fellow at Jesus College and Stipendiary Lecturer at Hertford College. The postdoctoral period shifted his research focus from neural-symbolic reasoning toward reinforcement learning, the line that has dominated his work since.

In 2018 he returned to London to join Facebook AI Research, the lab now known as Meta AI, as a Research Scientist, eventually becoming a Manager and Reinforcement Learning Team Lead. He worked alongside Edward Grefenstette and others on reinforcement-learning evaluation environments, and the flagship public output of the FAIR period is the NetHack Learning Environment, a 2020 benchmark that exposes the roguelike game NetHack as a procedurally generated reinforcement-learning environment.

UCL appointed Rocktäschel to a permanent academic post in 2019, first as Lecturer and subsequently as Reader and then full Professor of Artificial Intelligence, and he has held the academic appointment alongside his industrial research roles ever since. At UCL he co-founded and continues to lead the DARK Lab, the Deciding, Acting, and Reasoning with Knowledge research group, whose graduate alumni hold senior reinforcement-learning research positions at the European frontier labs.

In May 2022 Rocktäschel left FAIR for Google DeepMind as Senior Staff Research Scientist and Open-Endedness Team Lead, and was promoted to Director and Principal Scientist in November 2024. The Open-Endedness program centers on systems that produce novel, increasingly complex behaviors without bounds set by human-curated objectives, a research line with ties to evolutionary computation and the work of his future co-founder Jeff Clune. The team won two Best Paper Awards at ICML 2024 for Genie: Generative Interactive Environments and Debating with More Persuasive LLMs Leads to More Truthful Answers.

Rocktäschel founded Recursive Superintelligence in late 2025, with the founding team finalized over December 2025 and January 2026, and departed DeepMind to take the chief-executive role. The company emerged from stealth in early April 2026 with the announcement of a $500 million pre-Series A round at a $4 billion pre-money valuation led by GV with Nvidia participating. The co-founders alongside Rocktäschel are Jeff Clune, the former head of OpenAI's Open-Endedness team, Josh Tobin and Tim Shi, both formerly of OpenAI, and Richard Socher, the former chief scientist of Salesforce and founder of you.com.

Affiliations

  • Humboldt University of Berlin: Diploma student in computer science, 2006 to 2012.
  • University College London: PhD candidate in computer science, 2013 to 2017 (advised by Sebastian Riedel, Machine Reading group).
  • DeepMind: Research Intern, summer 2015.
  • University of Oxford: Postdoctoral Researcher, Whiteson Research Lab; Junior Research Fellow, Jesus College; Stipendiary Lecturer, Hertford College, May 2017 to September 2018.
  • Meta AI (Facebook AI Research): Research Scientist, then Manager and Reinforcement Learning Team Lead, 2018 to 2022.
  • University College London: Lecturer, then Reader, then Professor of Artificial Intelligence, 2019 to present; principal investigator of the UCL DARK Lab.
  • Google DeepMind: Senior Staff Research Scientist and Open-Endedness Team Lead from May 2022; Director and Principal Scientist from November 2024; departed in late 2025.
  • Recursive Superintelligence: Founder and Chief Executive Officer, 2025 to present.
  • European Laboratory for Learning and Intelligent Systems (ELLIS): Fellow.

Notable contributions

Rocktäschel's published record runs from neural-symbolic reasoning at UCL through reinforcement-learning evaluation environments at FAIR to open-endedness research at DeepMind, and is unified by an interest in artificial general intelligence and self-improvement.

  • Neural-symbolic reasoning (2014 to 2017). The PhD-era line of work, including end-to-end differentiable theorem provers, Reasoning About Entailment with Neural Attention (ICLR 2016), and related papers on injecting first-order logic rules into neural representations.
  • NetHack Learning Environment (NeurIPS 2020). Co-author with Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, and Edward Grefenstette of the procedurally generated reinforcement-learning environment built on NetHack. The 2021 NeurIPS NetHack Challenge used it as the standard benchmark, and the game remains unsolved by reinforcement-learning agents. MiniHack, a related smaller benchmark, followed at NeurIPS 2021.
  • Genie: Generative Interactive Environments (ICML 2024, Best Paper). The DeepMind Open-Endedness team foundation model that generates playable two-dimensional environments from unlabelled internet video, lead-authored by Jake Bruce with Rocktäschel as a senior author.
  • Open-Endedness is Essential for Artificial Superhuman Intelligence (ICML 2024). Position paper formalizing open-endedness through novelty and learnability and arguing that open-ended systems built on foundation models are a path to artificial superhuman intelligence. Co-authored with Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal Behbahani, and Jeff Clune among others.
  • Debating with More Persuasive LLMs Leads to More Truthful Answers (ICML 2024, Best Paper). Empirical study of debate-based scalable oversight on language models.
  • UCL DARK Lab (2019 to present). Co-founder and principal investigator of the UCL Deciding, Acting, and Reasoning with Knowledge research group, whose graduate alumni have gone on to OpenAI, DeepMind, FAIR, Anthropic, and newer British labs.
  • Artificial Intelligence: 10 Things You Should Know (Orion, September 2024). Solo-authored 128-page popular-science book of ten short essays on artificial intelligence.
  • Awards. Two Best Paper Awards at ICML 2024; ICLR 2025 keynote speaker; Microsoft Research PhD Scholarship (2013); Google PhD Fellowship (2017); Fellow of the European Laboratory for Learning and Intelligent Systems.

Investments and boards

  • Recursive Superintelligence (AI): Founder and Chief Executive Officer, 2025 to present. Privately held. $500 million pre-Series A at a $4 billion pre-money valuation closed April 2026, led by GV with Nvidia participating.
  • University College London: Professor of Artificial Intelligence and principal investigator of the DARK Lab, 2019 to present. Academic appointment held continuously alongside industrial research roles.

Rocktäschel's UCL professorship is an academic appointment rather than a corporate-board seat. No public personal angel-investor activity in AI, semiconductors, datacenters, software, or energy is on record as of May 2026.

Network

Rocktäschel's longest-running professional relationship is with his UCL doctoral advisor Sebastian Riedel, the Machine Reading group lead who supervised the 2013 to 2017 PhD. His Oxford postdoctoral supervisor was Shimon Whiteson, the Oxford reinforcement-learning professor.

His Recursive Superintelligence co-founders are Jeff Clune, the former head of OpenAI's Open-Endedness team and a recurring research collaborator who is also co-author on the 2024 ICML Best Paper position piece on open-endedness, Josh Tobin, Tim Shi, and Richard Socher. At FAIR and subsequently as a UCL collaborator, Edward Grefenstette is the recurring senior co-author on the NetHack Learning Environment and on the General Intelligence Requires Rethinking Exploration paper.

His broader frontier-lab cohort includes Demis Hassabis, the chief executive of Google DeepMind during his tenure as Open-Endedness director, and Yann LeCun, the chief AI scientist of Meta during his FAIR period. He and David Silver of Ineffable Intelligence are the two UCL computer-science professors who departed DeepMind in late 2025 to found UK-based reinforcement-learning startups, and he shares the broader frontier-research-founder circuit with Ilya Sutskever of Safe Superintelligence, Dario Amodei of Anthropic, and Mira Murati of Thinking Machines Lab.

Position in the field

As of May 2026, Rocktäschel is one of a small number of senior frontier-AI researchers who have moved from a research-leadership role inside an established lab into the chief-executive role at an independent startup, alongside Ilya Sutskever at Safe Superintelligence, Mira Murati at Thinking Machines Lab, and David Silver at Ineffable Intelligence. The Recursive launch and the Ineffable launch occurred within weeks of each other, and the two UK-based companies are likely to compete directly for senior reinforcement-learning talent.

Open-Endedness is the research subfield Rocktäschel is most identified with. The term refers to AI systems capable of producing novel, increasingly complex behaviors without bounds set by human-curated objectives, with academic roots in evolutionary computation and quality-diversity methods. Where Ineffable Intelligence's Silver-led "no human data" thesis inverts the pretraining ratio, Recursive's thesis automates the iteration cycle that wraps around any frontier-model training run. Both positions are deliberate contrasts to the dominant LLM-pretraining-and-RLHF approach of OpenAI, Anthropic, and Google DeepMind. The recursive-self-improvement framing places Recursive's stated direction in tension with portions of the AI-safety research community; the company has not yet publicly addressed the safety-research considerations its approach raises.

Outlook

Open questions over the next 6 to 18 months:

  • First Recursive artifact. Whether the company ships any public-facing demonstration of the recursive-self-improvement approach before its next funding round, and what form such a demonstration takes given the absence of a conventional model-release roadmap.
  • Round expansion. Press coverage in early April 2026 referenced an oversubscribed round potentially expanding toward $1 billion in total capitalization. Whether the additional commitments close, and at what valuation, is the principal external signal in the absence of public technical artifacts.
  • Talent acquisition trajectory. Which senior researchers Recursive hires from DeepMind, OpenAI, Meta, and the academic open-endedness community, and how the team grows from its initial 20-person founding cohort.
  • Compute supply. Whether Nvidia's strategic investment translates into priority allocation of frontier-tier GPUs, and whether Recursive supplements with additional supplier relationships.
  • AI-safety posture. The recursive-self-improvement thesis sits in deliberate tension with portions of the AI-safety research community. Whether and how the company addresses safety considerations in its public communications and hiring will be a watchable signal.
  • UCL appointment continuation. Whether the UCL professorship and DARK Lab continue at full capacity as Recursive scales, given the parallel structure with David Silver's UCL position.
  • Open-Endedness research direction at DeepMind. Whether DeepMind's continuing investment in the Open-Endedness team that Rocktäschel previously led produces results that compete directly with Recursive's output.

About the author
Nextomoro

AI Research Lab Intelligence

nextomoro tracks progress for AI research labs, models, and what's next.
