Timaeus
Timaeus is an artificial intelligence safety research nonprofit headquartered in Cambridge, Massachusetts and London, founded in 2023 by Jesse Hoogland (Co-Founder and Chief Executive Officer), Stan van Wingerden (Co-Founder), Daniel Murfet (Co-Founder), and other co-founders. The organization's research focuses on developmental interpretability and singular learning theory for foundation models, pursued in collaboration with the University of Melbourne, Princeton, MIT, and other academic institutions. As of April 2026, Timaeus is one of the principal independent AI safety nonprofits focused on theoretical interpretability research.
At a glance
- Founded: 2023 by Jesse Hoogland, Stan van Wingerden, Daniel Murfet, and other co-founders.
- Status: Independent nonprofit research organization.
- Funding: Philanthropic backing from Open Philanthropy, the Survival and Flourishing Fund, and other AI-safety-focused funders.
- CEO: Jesse Hoogland, Co-Founder and Chief Executive Officer.
- Other notable leadership: Stan van Wingerden, Co-Founder; Daniel Murfet, Co-Founder and University of Melbourne mathematician.
- Open source: Partial. Selected research code and outputs are released on GitHub.
- Flagship outputs: Published research on developmental interpretability and singular learning theory; academic research collaborations; contributions to the AI safety research community.
Origins
Timaeus was founded in 2023 by Jesse Hoogland, Stan van Wingerden, Daniel Murfet, and other co-founders to pursue AI safety research in developmental interpretability and singular learning theory. Murfet, a University of Melbourne mathematician with a research background in singular learning theory, anchored the organization's technical program.
Between 2023 and 2026, the organization built out its research program, publishing work on developmental interpretability, applications of singular learning theory to foundation models, and related areas. Collaborations with the University of Melbourne, Princeton, MIT, and other academic institutions have anchored the quality of that program.
Mission and strategy
Timaeus's mission is to advance developmental interpretability and singular learning theory research for foundation models. Its strategy combines two threads: theoretical interpretability research grounded in developmental interpretability and singular learning theory, and research collaborations with academic and industry partners.
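For context on the theory named here (standard background from the singular learning theory literature, not from Timaeus's own materials): Watanabe's asymptotic expansion of the Bayes free energy replaces the usual parameter-count penalty of the regular (BIC) case with a model-dependent learning coefficient $\lambda$,

```latex
% Asymptotic expansion of the Bayes free energy for a singular model
% (Watanabe). L_n is the empirical negative log-likelihood, w_0 an
% optimal parameter, and lambda the learning coefficient (the real
% log canonical threshold), which replaces d/2 from the regular case.
F_n = n L_n(w_0) + \lambda \log n + O_p(\log \log n)
```

Developmental interpretability work in this tradition tracks local estimates of $\lambda$ over the course of training to detect phase transitions in a model's development.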
Research is distributed through publications at major academic venues, cooperation with academic and industry research partners, and contributions to the AI safety research community.
Models and products
- Published research on developmental interpretability and on applications of singular learning theory to foundation models.
- Research collaborations with the University of Melbourne, Princeton, MIT, and other academic institutions.
- Contributions to the AI safety research community.
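To give a concrete flavor of this research area, the following toy sketch estimates a local learning coefficient for a one-parameter regression model using SGLD sampling, roughly following the SGLD-based estimator described in the singular learning theory literature. This is not Timaeus's actual code; the model, hyperparameters, and variable names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D linear regression y = 1.5 * x + noise
n = 500
x = rng.normal(size=n)
y = 1.5 * x + 0.1 * rng.normal(size=n)

def nll(w):
    # Mean squared loss as a stand-in for per-sample negative log-likelihood
    return 0.5 * np.mean((y - w * x) ** 2)

def grad_nll(w):
    return -np.mean((y - w * x) * x)

w_star = np.sum(x * y) / np.sum(x * x)  # empirical optimum (least squares)

# SGLD sampling localized around w_star at inverse temperature beta
beta = 1.0 / np.log(n)  # illustrative choice of inverse temperature
eps = 1e-4              # step size
gamma = 100.0           # localization strength, keeps samples near w_star
w = w_star
losses = []
for _ in range(5000):
    drift = n * beta * grad_nll(w) + gamma * (w - w_star)
    w = w - 0.5 * eps * drift + np.sqrt(eps) * rng.normal()
    losses.append(nll(w))

# Local learning coefficient estimate:
# lambda_hat = n * beta * (E[loss under SGLD] - loss at the optimum)
llc = n * beta * (np.mean(losses) - nll(w_star))
print(llc)
```

In a regular one-parameter model like this one the learning coefficient is d/2 = 0.5; the sampled estimate is biased downward by the localization term, so the printed value lands below that. The interesting cases are singular models (e.g. neural networks), where lambda can be much smaller than d/2 and can change abruptly during training.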
Benchmarks and standing
Timaeus's standing rests on its published theoretical interpretability research and its contributions to the AI safety research community. Industry coverage has consistently characterized Timaeus as one of the principal independent nonprofits conducting theoretical interpretability research for AI safety.
Leadership
As of April 2026, Timaeus's senior leadership includes:
- Jesse Hoogland, Co-Founder and Chief Executive Officer.
- Stan van Wingerden, Co-Founder.
- Daniel Murfet, Co-Founder and University of Melbourne mathematician.
- Senior research staff across the developmental interpretability program.
Funding and backers
Timaeus is funded by AI-safety-focused philanthropy, including Open Philanthropy, the Survival and Flourishing Fund, and other funders.
Industry position
Timaeus occupies a distinctive position as one of the principal independent nonprofits focused on theoretical interpretability research for AI safety, combining published work on developmental interpretability and singular learning theory with cross-institution academic collaborations.
Competitive landscape
- Apollo Research, METR, LMArena, Transluce. Independent AI safety evaluation peer organizations.
- Anthropic Interpretability Team. Frontier AI lab interpretability research peer.
- Center for AI Safety, MIRI, Conjecture. AI safety research nonprofit peers.
- University of Melbourne, Princeton, MIT. Academic research collaborators.
Outlook
- Continued publication of developmental interpretability research through 2026 and 2027.
- Continued research collaborations with academic institutions.
- Continued contributions to the AI safety theoretical-interpretability research community.