Transluce

Transluce is a US-based artificial intelligence safety research nonprofit headquartered in Berkeley, California, founded in 2024 by Jacob Steinhardt (UC Berkeley professor and AI safety researcher) and Sarah Schwettmann (former MIT CSAIL researcher). The organization builds open AI interpretability tools and conducts transparency research on frontier foundation models. As of April 2026, Transluce is one of the principal independent AI interpretability research nonprofits, with a record of published research and active cooperation with frontier AI labs through pre-deployment interpretability engagements.

At a glance

  • Founded: 2024 in Berkeley, California, by Jacob Steinhardt and Sarah Schwettmann.
  • Status: Independent US nonprofit research organization.
  • Funding: Philanthropic backing from Open Philanthropy and other AI-safety-focused funders.
  • CEO: Jacob Steinhardt, co-founder; UC Berkeley professor.
  • Other notable leadership: Sarah Schwettmann, co-founder; former MIT CSAIL researcher.
  • Open source: Partial; interpretability tools released through GitHub.
  • Flagship outputs: Transluce Investigator (open interpretability tool), published research on AI interpretability and transparency, and pre-deployment engagements with frontier AI labs.

Origins

Transluce was founded in 2024 by Jacob Steinhardt and Sarah Schwettmann to build open AI interpretability tools for safety research. Steinhardt, a UC Berkeley professor known for his AI safety work, and Schwettmann, a former MIT CSAIL researcher with a record of interpretability publications, gave the organization credibility on both the safety and technical fronts.

Between 2024 and 2026, the organization built its interpretability tooling, including Transluce Investigator, published research, and began cooperating with frontier AI labs through pre-deployment interpretability engagements.

Mission and strategy

Transluce's mission is to build open AI interpretability tools and to advance transparency research for frontier foundation models. The strategy combines two threads: open tool development, including Transluce Investigator, and published research on AI interpretability and transparency.

Distribution channels include open-source releases through GitHub, publication in major academic venues, and cooperation with frontier AI labs.

Models and products

  • Transluce Investigator: an open AI interpretability tool.
  • Published research on AI interpretability and transparency.
  • Pre-deployment interpretability engagements with frontier AI labs.


Benchmarks and standing

Transluce's standing rests on its published interpretability research and the adoption of its open tools. Industry coverage has consistently characterized it as one of the principal independent AI interpretability research nonprofits.

Leadership

As of April 2026, Transluce's senior leadership includes:

  • Jacob Steinhardt, Co-Founder. UC Berkeley professor.
  • Sarah Schwettmann, Co-Founder. Former MIT CSAIL researcher.
  • Senior research staff across the AI interpretability program.

Funding and backers

Transluce is funded through AI safety philanthropy, including Open Philanthropy and other AI-safety-focused funders.

Industry position

Transluce occupies a distinctive position as one of the principal independent AI interpretability research nonprofits, pairing the open Transluce Investigator tool with published research.

Outlook

  • Continued published interpretability research output through 2026 to 2027.
  • Continued cooperation with frontier AI labs.
  • Continued open AI interpretability tool development.

About the author
Nextomoro

AI Research Lab Intelligence

Keep track of what's happening from cutting edge AI Research institutions.
