Recursive Superintelligence
Recursive Superintelligence is an artificial intelligence research company founded in late 2025 by Tim Rocktäschel, the former director of Google DeepMind's Open-Endedness research group and a professor of artificial intelligence at University College London. The company's stated research direction is the construction of self-improving AI systems that continuously refine their own training, evaluation, and research-direction processes without human intervention. As of late April 2026, four months after founding, the company has raised approximately $500 million in a pre-Series A round led by GV (formerly Google Ventures) at a $4 billion pre-money valuation, with Nvidia among the participating investors. The round places it in the leading tier of recently founded reinforcement-learning-centered insurgent labs.
At a glance
- Founded: Late 2025 in the United Kingdom and United States by Tim Rocktäschel and four co-founders. Approximately four months in operation as of April 2026.
- Status: Private. Pre-Series A, capitalized through April 2026.
- Funding: Approximately $500 million raised at a $4 billion pre-money valuation. GV led; Nvidia participated. Press coverage in early April 2026 described an additional fundraising effort that could expand total capitalization toward $1 billion if subsequent tranches close.
- CEO: Tim Rocktäschel, co-founder, formerly director of Google DeepMind's Open-Endedness group. Continues as a professor of artificial intelligence at University College London.
- Other notable leadership: Josh Tobin, Jeff Clune, and Tim Shi, each formerly of OpenAI, and Richard Socher, formerly chief scientist of Salesforce and founder of you.com. All listed as co-founders.
- Open weights: Not declared. The company has not stated a posture on the open-versus-closed distribution question.
- Flagship outputs: None publicly disclosed as of April 2026. Stated research direction is automated self-improvement of AI training pipelines.
Origins
Recursive Superintelligence was founded in late 2025, with the founding team finalized over December 2025 and January 2026. Tim Rocktäschel led the formation, departing Google DeepMind after running the company's Open-Endedness research group. The Open-Endedness program at DeepMind focused on AI systems capable of producing novel, increasingly complex behaviors without bounds set by human-curated objectives, a research line with deep ties to evolutionary computation and to the work of co-founder Jeff Clune.
The four co-founders alongside Rocktäschel each bring a distinct frontier-research background. Josh Tobin was previously at OpenAI as a research scientist working on robotics and large-scale reinforcement learning, and subsequently founded the data-quality startup Gantry. Jeff Clune was the head of OpenAI's Open-Endedness team and a professor of computer science at the University of British Columbia, with research on AI-generating algorithms and on quality-diversity methods that align directly with Recursive's stated research thesis. Tim Shi was a research scientist at OpenAI working on agentic and code-generation systems. Richard Socher was the chief scientist of Salesforce from 2017 to 2020 and the founder of search-and-AI startup you.com, and brings commercial-scale AI deployment experience to the founding team.
The company emerged from stealth in early April 2026 with announcements of the $500 million pre-Series A round and the founding-team composition. Coverage characterized the launch as one of the fastest-scaled fundraises among recently founded AI labs, with the round closing approximately four months after founding at a valuation that placed Recursive Superintelligence in the same tier as several lab competitors with substantially longer track records.
Mission and strategy
Recursive Superintelligence has stated that its goal is to build artificial intelligence systems that improve themselves recursively, automating the entire frontier AI development pipeline including evaluation, data selection, training, post-training, and research direction. The strategic premise is that the rate of progress in frontier AI research is currently bounded by the number and quality of human researchers who can iterate on training pipelines, and that automating this iteration is the highest-leverage path to capability scaling.
The thesis draws directly from Open-Endedness research. The research line that Jeff Clune led at Uber AI Labs and then OpenAI, and that Rocktäschel led at DeepMind, produced work including POET (Paired Open-Ended Trailblazer), the AI-GAs (AI-Generating Algorithms) framework, and the evolutionary curriculum-generation methods that underpin the founders' shared technical approach. The premise is that an AI system capable of generating its own training environments, evaluating the resulting capabilities, and routing improvements back into the next training cycle can produce capability gains that compound faster than human-supervised research cycles permit.
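The generate–train–evaluate–reincorporate cycle described above can be sketched with a toy loop. This is purely illustrative: the environment, agent, and thresholds are hypothetical stand-ins, not Recursive Superintelligence's actual system, and real open-ended training operates on learned policies and synthesized environments rather than scalars.

```python
import random

def generate_environment(difficulty: float) -> dict:
    """Synthesize a toy training environment parameterized by difficulty."""
    return {"difficulty": difficulty, "seed": random.randint(0, 2**31)}

def train(agent: float, env: dict) -> float:
    """Toy 'training' step: the agent's skill moves toward the environment's difficulty."""
    return agent + 0.5 * (env["difficulty"] - agent)

def evaluate(agent: float, env: dict) -> float:
    """Toy evaluation: score how closely the agent's skill matches the environment."""
    return max(0.0, 1.0 - abs(agent - env["difficulty"]))

def self_improvement_loop(cycles: int = 10) -> float:
    """Run the generate -> train -> evaluate -> reincorporate cycle."""
    agent, difficulty = 0.0, 1.0
    for _ in range(cycles):
        env = generate_environment(difficulty)
        agent = train(agent, env)
        score = evaluate(agent, env)
        # Route the evaluation back into the next cycle: raise the difficulty
        # only once the agent has mastered the current level, producing an
        # automatically escalating curriculum with no human in the loop.
        if score > 0.8:
            difficulty += 1.0
    return agent

print(self_improvement_loop())
```

The design point the sketch captures is that the curriculum is a function of the system's own evaluations, so capability targets escalate without human-set objectives, which is the core of the open-endedness claim.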
Strategically, this places Recursive Superintelligence in deliberate contrast to the prevailing frontier-lab approach in two distinct ways. First, the focus is on automating the research process rather than producing a particular flagship model: Recursive's commercial output, if and when one ships, may be less a single model than a process or platform that produces a continuous sequence of progressively more capable systems. Second, the recursive-self-improvement framing places the company in tension with portions of the AI-safety community that have raised concerns about exactly this category of system. The company has not yet publicly addressed the safety-research considerations that its stated approach raises.
Industry coverage has characterized the company's research direction as the most direct attempt by a credible technical team to operationalize the recursive-self-improvement thesis that has long been central to discussions of advanced artificial intelligence. Whether this translates into a defensible commercial position, beyond the underlying research result, will depend on whether the resulting systems can be deployed in revenue-generating contexts.
Models and products
The company has no public products as of April 2026 and has not disclosed a model name, a target capability, or a public release date. Coverage of the launch indicates that the round will fund a multi-year capital-intensive research program focused on infrastructure, automated evaluation, and the synthesis of training environments rather than on near-term consumer or enterprise product deployment.
Distribution channels have not been disclosed. The closed-versus-open-weights posture has not been declared. Pricing structure, if applicable, has not been described.
Benchmarks and standing
Recursive Superintelligence has not yet shipped a public model and accordingly has no benchmark positions. The stated research direction (automated self-improvement of frontier training pipelines) suggests that early public artifacts, when they appear, are likely to be evaluated less on conventional model-capability benchmarks (Artificial Analysis Intelligence Index, LMArena, SWE-bench Verified) than on meta-level metrics: the rate at which the system produces new capabilities, the quality of evaluations the system synthesizes, and the diversity of the training-environment distribution the system generates. Whether the AI-research community develops standardized benchmarks for this category of system is itself an open question.
Leadership
- Tim Rocktäschel. Co-founder and CEO. Director of Google DeepMind's Open-Endedness research group from 2021 through 2025. Professor of artificial intelligence at University College London since 2019. Earlier research positions at Facebook AI Research and the University of Oxford. Author or co-author on Open-Endedness, multi-agent reinforcement learning, and evolutionary-AI research that aligns directly with the company's stated direction.
- Jeff Clune. Co-founder. Previously head of OpenAI's Open-Endedness team, professor of computer science at the University of British Columbia, and senior research manager at Uber AI Labs. Authored seminal research on AI-Generating Algorithms, novelty search, and quality-diversity methods.
- Josh Tobin. Co-founder. Previously a research scientist at OpenAI focused on robotics and large-scale reinforcement learning. Founder of Gantry, a data-quality platform for machine-learning systems, prior to joining Recursive Superintelligence.
- Tim Shi. Co-founder. Previously a research scientist at OpenAI working on agentic and code-generation systems.
- Richard Socher. Co-founder. Chief scientist at Salesforce from 2017 to 2020. Founder of you.com, an AI-first search startup, in 2020. Brings commercial-scale AI deployment and product experience to the founding team. Adjunct professor at Stanford and a frequent keynote speaker on AI commercialization.
The senior research and engineering staff beyond the founders has not been disclosed in detail. Coverage of the launch describes recruitment underway across both reinforcement-learning research and large-scale-training engineering disciplines, with a stated emphasis on hiring researchers with backgrounds in evolutionary computation, automated curriculum design, and large-scale environment synthesis.
Funding and backers
The pre-Series A round closed in April 2026, raising approximately $500 million at a $4 billion pre-money valuation. GV led the round; Nvidia participated. The full participating-investor list has not been comprehensively disclosed. Press coverage in early April 2026 referenced a possible expansion of the round toward $1 billion in total capitalization through additional tranches, though the additional commitments had not closed as of late April.
The investor composition signals strategic alignment on two fronts. GV's lead position reflects continued Alphabet engagement with research alumni who depart DeepMind for independent ventures, mirroring Google's seed participation in Ineffable Intelligence. Nvidia's participation positions Recursive Superintelligence within the small set of frontier-research labs likely to receive priority allocation of frontier-tier GPUs in a hardware-constrained environment.
The four-month timeline from founding to a $4 billion valuation places Recursive Superintelligence among the fastest-scaled financings in the 2025 to 2026 frontier-lab cohort. The pace reflects investor conviction in the founding team's research backgrounds rather than any pre-product technical demonstration, and is consistent with the broader pattern of insurgent-lab fundraising in this period.
Industry position
Recursive Superintelligence sits in the cluster of recently founded labs whose research thesis runs counter to the prevailing language-model-pretraining-and-RLHF approach that has produced the current generation of frontier closed-source models. The cluster includes Ineffable Intelligence, founded by David Silver in late 2025 and capitalized at $5.1 billion in April 2026; Safe Superintelligence, founded by Ilya Sutskever in 2024 and valued at $32 billion; and a smaller group of newer labs working on related theses. The thesis-level overlap between Recursive and Ineffable in particular is substantial, and the two companies are likely to compete directly for senior reinforcement-learning talent.
The Open-Endedness research line that the founding team is built around is academically credible but commercially unproven. The closed-domain successes of evolutionary computation and quality-diversity methods (in environments with well-defined reward signals) have not historically extended to open-domain tasks at frontier capability. Whether Recursive Superintelligence demonstrates that the approach scales is the central research question its early public artifacts will need to answer.
The company's UK and US dual-base structure, with Rocktäschel continuing his UCL professorship and several US-based co-founders, places it in the small group of frontier labs straddling both regulatory and talent ecosystems. This structure is positioned by the founding team as an asset for cross-Atlantic talent acquisition.
Competitive landscape
Direct competitors to Recursive Superintelligence's stated research direction:
- Ineffable Intelligence. Reinforcement-learning-centered, also pre-product, also founded by a senior DeepMind alumnus, capitalized at $5.1 billion at the seed stage. The two companies have substantially overlapping theses on reinforcement-learning-driven capability scaling, and will compete for senior reinforcement-learning research talent and for frontier-tier compute allocation.
- Google DeepMind. Rocktäschel's former employer continues to invest in Open-Endedness research alongside its language-model program, and retains the Open-Endedness team that Rocktäschel previously led. Whether DeepMind's continued investment in this research line produces results that compete directly with Recursive's output is one of the principal research questions over the coming 24 months.
- OpenAI. Three of Recursive's five co-founders are OpenAI alumni, and OpenAI's o-series of reasoning models is built on a reinforcement-learning post-training stage that overlaps with portions of Recursive's research direction. The competitive question is whether OpenAI's automated-research-pipeline investments scale faster within an existing flagship company than Recursive's from a clean-sheet foundation.
- Safe Superintelligence. Ilya Sutskever's lab, also pre-product and also operating in stealth at frontier capitalization. The two companies overlap on the "frontier-capability research without near-term product pressure" positioning and may compete for talent and for compute allocation.
- Thinking Machines Lab. Mira Murati's lab, with a $12 billion valuation as of April 2026 and the Tinker fine-tuning platform as its first public product. Less directly aligned with Recursive's research direction, but competes within the same talent pool of senior frontier-lab alumni.
Outlook
Open questions for Recursive Superintelligence over the next 6 to 18 months:
- Demonstration of recursive self-improvement. Whether the company produces a concrete public artifact showing that an AI-driven research-pipeline-improvement system produces capability gains faster or more cheaply than conventional human-supervised research cycles. This is the central research question and the principal validation point for the funded thesis.
- Talent acquisition and team composition. Which senior researchers Recursive hires from DeepMind, OpenAI, and academic Open-Endedness groups. The hiring pattern will signal which sub-fields of Open-Endedness research the company is prioritizing.
- Compute supply and scaling. Whether Nvidia's strategic investment translates to priority allocation of frontier-tier GPUs at the scale required for the stated research direction.
- The safety-research posture question. The recursive-self-improvement framing places the company's stated direction in tension with portions of the AI-safety research community. Whether and how the company addresses safety-research considerations in its public communications and in its hiring will be a watchable signal.
- Series A timing and structure. The timing, lead investor, and valuation of the next financing event, particularly given the possibility raised in early-April press coverage of expanding the current round to $1 billion total capitalization.
- Closed-domain to open-domain transition. Whether the company demonstrates an Open-Endedness result that extends beyond the historical closed-domain successes of the research line into open-domain reasoning at frontier-language-model scale.
Sources
- The Decoder: Self-improving AI startup Recursive Superintelligence pulls in $500 million just four months after founding. Funding details, founder roster, and research-thesis description.
- Implicator AI: Recursive Superintelligence Raises $500M at $4B Valuation. Round structure, GV lead, and Nvidia participation.
- Phemex News: Recursive Superintelligence Raises $500M in Pre-Series A Funding. Round-stage classification and valuation context.
- Dealroom: Recursive Superintelligence company information, funding & investors. Funding history and investor list.
- UCL News: UCL researchers lead two of Europe's largest-ever AI funding rounds. Institutional context for Rocktäschel's continued UCL professorship.
- CNBC: Meta, Google, OpenAI among Big Tech firms seeing top staff leaving to launch AI startups. Frontier-lab-exodus framing for Recursive in the broader 2025 to 2026 cohort.