Apollo Research
Apollo Research is a UK-based artificial intelligence safety research nonprofit headquartered in London, founded in May 2023 by Marius Hobbhahn (Co-Founder and Chief Executive Officer) and Jérémy Scheurer (Co-Founder), with a founding mandate explicitly oriented around dangerous-capability evaluations of frontier foundation models and deceptive-AI research. The organization's research output includes published evaluations of frontier model deceptive capability, situational-awareness research, and cooperation with OpenAI, Anthropic, Google DeepMind, and other frontier AI labs through pre-deployment evaluation engagements. As of April 2026, Apollo Research is one of the principal independent AI safety evaluation nonprofits, alongside METR, LMArena, and other peers.
At a glance
- Founded: May 2023 in London, UK, by Marius Hobbhahn and Jérémy Scheurer.
- Status: Independent UK nonprofit research organization.
- Funding: AI safety philanthropic backing including the Survival and Flourishing Fund, Open Philanthropy, and other AI-safety-focused funders.
- CEO: Marius Hobbhahn, Co-Founder and Chief Executive Officer. AI safety researcher.
- Other notable leadership: Jérémy Scheurer, Co-Founder.
- Open weights: N/A. Apollo Research conducts safety-research evaluation rather than producing foundation models.
- Flagship outputs: Published research output on dangerous-capability evaluations (deceptive-AI capability, situational-awareness), pre-deployment evaluation engagements with frontier AI labs, and contribution to UK AI Safety Institute and US AI Safety Institute (NIST) research-cooperation.
Origins
Apollo Research was founded in May 2023 by Marius Hobbhahn and Jérémy Scheurer with a AI safety research mandate explicitly oriented around dangerous-capability evaluations of frontier foundation models. The 2023 to 2024 period built dangerous-capability evaluation research infrastructure, with cooperation with OpenAI, Anthropic, and Google DeepMind through pre-deployment evaluation engagements.
The 2024 release of Apollo Research's deceptive-AI capability research, with industry attention as one of the principal published evaluations of frontier-model deceptive capability, anchored research-organization credibility. The 2024 to 2026 period has continued published research output and cooperation with frontier AI labs.
Mission and strategy
Apollo Research's mission is to advance dangerous-capability evaluations of frontier AI models and to advance deceptive-AI research. The strategy combines two threads. First, dangerous-capability evaluation research with published research output. Second, pre-deployment evaluation engagements with frontier AI labs.
Distribution channels include published research output through major academic venues and cooperation with frontier AI labs through pre-deployment evaluation engagements.
Models and products
- Published research output. On dangerous-capability evaluations including deceptive-AI capability, situational-awareness research, and other areas.
- Pre-deployment evaluation engagements. Cooperation with frontier AI labs.
- Cooperation with UK AI Safety Institute and US AI Safety Institute (NIST).
Distribution channels include published research output and cooperation with frontier AI labs.
Benchmarks and standing
Apollo Research's evaluation framework focuses on published dangerous-capability evaluations and cooperation with frontier AI labs. Industry coverage has consistently characterized Apollo Research as one of the principal independent AI safety evaluation nonprofits globally, alongside METR, LMArena, and other peers.
Leadership
As of April 2026, Apollo Research's senior leadership includes:
- Marius Hobbhahn, Co-Founder and Chief Executive Officer.
- Jérémy Scheurer, Co-Founder.
- Senior research staff across the dangerous-capability evaluation program.
Funding and backers
AI safety philanthropic backing including the Survival and Flourishing Fund, Open Philanthropy, and other AI-safety-focused funders.
Industry position
Apollo Research occupies a distinctive position as one of the principal independent AI safety evaluation nonprofits, with dangerous-capability evaluation research output, pre-deployment evaluation engagements with frontier AI labs, and cooperation with UK and US AI Safety Institutes.
Competitive landscape
- METR, LMArena, Transluce, Timaeus. Independent AI safety evaluation peer organizations.
- UK AI Safety Institute, US AI Safety Institute (NIST). Government AI safety institute peers with cooperation.
- Anthropic, OpenAI, Google DeepMind. Frontier AI lab partners with pre-deployment evaluation engagements.
- Center for AI Safety, MIRI, Conjecture. AI safety research nonprofit peers.
Outlook
- Continued dangerous-capability evaluation research output through 2026 to 2027.
- Continued pre-deployment evaluation engagements with frontier AI labs.
- Continued cooperation with UK and US AI Safety Institutes.
Sources
- Apollo Research official site. Organization reference.
- Apollo Research publications. Published research output.
- Marius Hobbhahn LinkedIn. Co-Founder reference.
- Open Philanthropy. Funder.
- UK AI Safety Institute. Cooperation partner.