Apollo Research

Apollo Research is a UK-based artificial intelligence safety research nonprofit headquartered in London. It was founded in May 2023 by Marius Hobbhahn (Co-Founder and Chief Executive Officer) and Jérémy Scheurer (Co-Founder) with a mandate centered on dangerous-capability evaluations of frontier foundation models and research on deceptive AI. Its output includes published evaluations of frontier models' deceptive capabilities, situational-awareness research, and pre-deployment evaluation engagements with OpenAI, Anthropic, Google DeepMind, and other frontier AI labs. As of April 2026, Apollo Research is one of the principal independent AI safety evaluation nonprofits, alongside METR and other peers.

At a glance

  • Founded: May 2023 in London, UK, by Marius Hobbhahn and Jérémy Scheurer.
  • Status: Independent UK nonprofit research organization.
  • Funding: Philanthropic backing from AI-safety-focused funders, including the Survival and Flourishing Fund and Open Philanthropy.
  • CEO: Marius Hobbhahn, Co-Founder, Chief Executive Officer, and AI safety researcher.
  • Other notable leadership: Jérémy Scheurer, Co-Founder.
  • Open weights: N/A. Apollo Research conducts safety evaluations rather than producing foundation models.
  • Flagship outputs: Published research on dangerous-capability evaluations (deceptive-AI capabilities, situational awareness), pre-deployment evaluation engagements with frontier AI labs, and research cooperation with the UK AI Safety Institute and the US AI Safety Institute (NIST).

Origins

Apollo Research was founded in May 2023 by Marius Hobbhahn and Jérémy Scheurer with an AI safety research mandate centered on dangerous-capability evaluations of frontier foundation models. From 2023 to 2024, the organization built its dangerous-capability evaluation infrastructure and began pre-deployment evaluation engagements with OpenAI, Anthropic, and Google DeepMind.

The 2024 release of Apollo Research's deceptive-AI capability research drew industry attention as one of the principal published evaluations of frontier-model deceptive capabilities and anchored the organization's research credibility. From 2024 through 2026, it has continued publishing research and cooperating with frontier AI labs.

Mission and strategy

Apollo Research's mission is to advance dangerous-capability evaluations of frontier AI models and research on deceptive AI. Its strategy combines two threads: first, dangerous-capability evaluation research with published output; second, pre-deployment evaluation engagements with frontier AI labs.

Distribution channels include publication through major academic venues and pre-deployment evaluation engagements with frontier AI labs.

Models and products

  • Published research on dangerous-capability evaluations, including deceptive-AI capabilities, situational awareness, and related areas.
  • Pre-deployment evaluation engagements with frontier AI labs.
  • Research cooperation with the UK AI Safety Institute and the US AI Safety Institute (NIST).


Benchmarks and standing

Apollo Research's evaluation work centers on published dangerous-capability evaluations and cooperation with frontier AI labs. Industry coverage has consistently characterized it as one of the principal independent AI safety evaluation nonprofits globally, alongside METR and other peers.

Leadership

As of April 2026, Apollo Research's senior leadership includes:

  • Marius Hobbhahn, Co-Founder and Chief Executive Officer.
  • Jérémy Scheurer, Co-Founder.
  • Senior research staff across the dangerous-capability evaluation program.

Funding and backers

Apollo Research is funded by AI-safety-focused philanthropic backers, including the Survival and Flourishing Fund and Open Philanthropy.

Industry position

Apollo Research occupies a distinctive position as one of the principal independent AI safety evaluation nonprofits, combining dangerous-capability evaluation research, pre-deployment evaluation engagements with frontier AI labs, and cooperation with the UK and US AI Safety Institutes.

Outlook

  • Continued dangerous-capability evaluation research output through 2026 and 2027.
  • Continued pre-deployment evaluation engagements with frontier AI labs.
  • Continued cooperation with the UK and US AI Safety Institutes.

About the author
Nextomoro

AI Research Lab Intelligence

Keep track of what's happening from cutting edge AI Research institutions.
