Goodfire

Goodfire is an American artificial intelligence research company founded in 2024 by Eric Ho, Daniel Balsam, and Tom McGrath. The company is headquartered in San Francisco and develops Ember, a mechanistic-interpretability platform that decodes the internal computations of neural networks and provides direct programmable access to model behavior. As of April 2026, Goodfire has raised approximately $209 million across three rounds, with a Series B in February 2026 at a $1.25 billion valuation, and is the leading commercial venture in the AI interpretability research category.

At a glance

  • Founded: 2024 in San Francisco by Eric Ho, Daniel Balsam, and Tom McGrath.
  • Status: Private. Approximately 30 employees as of early 2026.
  • Funding: Approximately $209 million cumulative across three reported rounds. $7 million seed in August 2024 (Lightspeed Venture Partners lead). $50 million Series A in April 2025 (Menlo Ventures lead, with Anthropic as a notable participant). $150 million Series B in February 2026 at a $1.25 billion valuation (B Capital lead, with Salesforce, Eric Schmidt, and other backers).
  • CEO: Eric Ho (co-founder)
  • Other notable leadership: Daniel Balsam (co-founder and CTO), Tom McGrath (co-founder; formerly senior research scientist and mechanistic-interpretability team founder at Google DeepMind), Chris Olah (advisor; Anthropic co-founder and leading interpretability researcher).
  • Open weights: Mixed. Some research artifacts published openly; the Ember platform itself is closed.
  • Flagship products: Ember (mechanistic-interpretability platform).

Origins

Goodfire was founded in 2024 by Eric Ho, Daniel Balsam, and Tom McGrath. Ho had previously founded RippleMatch, an AI-driven recruiting startup that reached Series B, bringing an operator background to Goodfire's commercial focus on interpretability. Balsam joined as co-founder and Chief Technology Officer, with a background in engineering and ML systems. McGrath joined from Google DeepMind, where he had been a senior research scientist and had founded the mechanistic-interpretability team, anchoring Goodfire's research credibility.

The founding thesis combined a research direction (mechanistic interpretability of large neural networks) with a commercial direction (developer tools that expose internal model behavior to engineers building on top of foundation models). The thesis is that as foundation models become more deeply embedded in production systems, the inability to understand or control their internal behavior becomes a binding constraint on enterprise deployment, and that a focused company can develop tools that translate academic interpretability research into commercial offerings.

The seed round in August 2024 raised $7 million, led by Lightspeed Venture Partners. The Series A in April 2025 raised $50 million, led by Menlo Ventures, with Anthropic participating as a strategic investor. Anthropic's involvement is notable because Anthropic runs an extensive in-house interpretability research program, co-led by Chris Olah (a Goodfire advisor), and had been using Goodfire's tools as an enterprise customer prior to the investment.

In February 2026, Goodfire raised a $150 million Series B at a $1.25 billion valuation, led by B Capital with participation from Salesforce, Eric Schmidt, and additional backers. The valuation step from the Series A through the Series B reflects increased commercial validation of the interpretability-as-a-service positioning, as well as the broader market interest in AI safety and governance products.

Mission and strategy

Goodfire's stated mission is to "design models with interpretability," reframing interpretability from a research-academic activity into a commercial design and operations capability. The company has framed the work as "AI brain surgery": tools that let engineers operate on the internal mechanisms of neural networks rather than treating models as black boxes.

The strategy combines three threads. First, the Ember platform, which provides commercial-grade tools for inspecting, modifying, and steering foundation-model internals. Second, fundamental research on mechanistic interpretability, contributing to the academic field through publications and open-source research artifacts. Third, enterprise distribution targeting AI labs, model-deploying enterprises, and regulated industries where understanding model behavior is a deployment requirement.

The competitive premise is that interpretability is a separable layer in the AI stack, distinct from model training itself, and that a focused company can produce better tools than the in-house research teams at frontier labs, because its incentives align with commercial product quality rather than internal research-publication metrics. The Anthropic investment in the Series A publicly validates this positioning: even a lab with a strong in-house interpretability program values Goodfire's tooling as a commercial complement to its own research.

Models and products

  • Ember. Commercial mechanistic-interpretability platform. Decodes the internal computations of large neural networks (transformer-family LLMs especially) and provides programmable access to internal representations, attention patterns, and circuits. Used by AI labs and enterprise customers for model debugging, safety analysis, and behavioral steering.
  • Research artifacts. Goodfire has published research papers and released some open-source code components related to mechanistic interpretability, contributing to the broader academic field alongside the commercial Ember product.
  • Enterprise interpretability services. Beyond Ember, Goodfire offers research engagements with enterprise customers deploying foundation models in regulated or high-stakes environments.
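The "behavioral steering" capability described above builds on a general technique from the interpretability literature, often called activation steering: derive a direction in a model's hidden-activation space and add it during the forward pass to shift behavior. A minimal NumPy sketch of the idea follows; the toy network, names, and shapes are purely illustrative and do not represent Ember's actual API.

```python
import numpy as np

# Illustrative activation steering on a toy two-layer network
# standing in for one transformer block. Not Ember's API.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))

def forward(x, steer=None, alpha=0.0):
    """Run the toy network, optionally adding a steering vector
    to the hidden activation (the residual-stream analogue)."""
    h = np.tanh(x @ W1)          # internal representation
    if steer is not None:
        h = h + alpha * steer    # intervene on the internals
    return h @ W2

# Derive a steering direction as the difference of mean hidden
# activations between two groups of inputs (a common recipe,
# e.g. "formal" vs. "casual" prompts in a real LLM setting).
xs_a = rng.standard_normal((32, 8))
xs_b = rng.standard_normal((32, 8))
h_a = np.tanh(xs_a @ W1).mean(axis=0)
h_b = np.tanh(xs_b @ W1).mean(axis=0)
direction = (h_a - h_b) / np.linalg.norm(h_a - h_b)

x = rng.standard_normal(8)
base = forward(x)
steered = forward(x, steer=direction, alpha=3.0)
```

The design point is that the intervention happens on internal state rather than on prompts or outputs, which is what distinguishes mechanistic-interpretability tooling from prompt-level control.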

The product strategy has evolved from research-tooling into a broader interpretability-and-governance product line as the company has scaled, but Ember remains the principal commercial offering as of April 2026.

Benchmarks and standing

There are no standardized benchmarks for interpretability platforms, and Goodfire's products are not evaluated on the LLM-capability leaderboards that measure foundation models. The company's standing in the industry rests on the technical credibility of its research output, the quality of the Ember product as judged by users at AI labs and enterprises, and the depth of investor and customer relationships.

Customer adoption is the most useful proxy for product-quality assessment. Anthropic's investment in the Series A and reported use of Goodfire tooling, plus Salesforce's investment in the Series B, indicate enterprise validation. The valuation step to $1.25 billion in February 2026 reflects investor confidence in continued commercial traction.

Leadership

As of April 2026, Goodfire's senior leadership includes:

  • Eric Ho, Chief Executive Officer and co-founder. Previously founded RippleMatch, an AI recruiting startup that reached Series B; his operator background complements the research backgrounds of his co-founders.
  • Daniel Balsam, Chief Technology Officer and co-founder. Engineering and ML systems leadership.
  • Tom McGrath, co-founder. Formerly senior research scientist at Google DeepMind and founder of DeepMind's mechanistic-interpretability team. Brings academic-research credentials and the technical leadership for the interpretability research direction.
  • Chris Olah, advisor. Anthropic co-founder and among the most prominent researchers in mechanistic interpretability. The advisory role is unusual given Olah's concurrent senior position at Anthropic, but reflects the alignment between Anthropic's interpretability program and Goodfire's commercial positioning.

The team has hired aggressively from interpretability and ML-research programs at frontier labs and academic institutions. Specific senior-leadership additions beyond the named cohort have not been broadly profiled in industry coverage.

Funding and backers

Goodfire's funding history through April 2026 includes the August 2024 $7 million seed (Lightspeed Venture Partners lead), the April 2025 $50 million Series A (Menlo Ventures lead, with Anthropic participating), and the February 2026 $150 million Series B at a $1.25 billion valuation (B Capital lead, with Salesforce, Eric Schmidt, and additional backers). Cumulative funding is approximately $209 million.

The investor base reflects the unusual positioning. Lightspeed and Menlo Ventures provide standard venture-capital scaling. Anthropic's Series A participation is the most notable strategic signal, as it indicates the leading interpretability-research lab considers Goodfire a complementary commercial provider rather than a competitor. B Capital led the Series B with participation from Salesforce (an enterprise-software strategic investor with deep interest in AI deployment and governance) and Eric Schmidt (whose participation across multiple AI startups indicates broad strategic-investor interest).

The valuation trajectory from approximately $50 million implied at the seed to $1.25 billion at the Series B in eighteen months reflects rapid commercial validation as enterprise interest in interpretability has scaled.

Industry position

Goodfire occupies a structurally distinctive position among insurgent AI labs through its focused interpretability positioning. The combination of mechanistic-interpretability research depth, the Ember commercial platform, the strategic Anthropic relationship, the senior advisory presence of Chris Olah, and the rapid valuation acceleration produces a profile no other company has matched in the interpretability category.

Strategic risks include the relatively small commercial market for interpretability tooling compared to the foundation-model market itself, the competitive pressure as frontier labs expand their in-house interpretability programs, and the open question of whether interpretability remains a separable commercial layer or becomes commoditized into the foundation-model platforms themselves.

Strategic strengths include the academic-research credibility through McGrath and the Olah advisory relationship, the enterprise-customer validation through the Anthropic and Salesforce investor relationships, and the structural position as the principal commercial interpretability lab during a period when AI safety and model governance are increasing in commercial importance.

The April 2026 industry context, including the Llama 4 benchmark-disclosure episode at Meta, the enterprise focus of OpenAI's 2026 strategy, and the increasing prominence of regulated-industry AI deployment, supports the commercial premise underlying Goodfire's positioning.

Competitive landscape

Goodfire competes with several research and commercial organizations:

  • Anthropic's in-house interpretability program. The leading academic-and-engineering interpretability program globally, led by Chris Olah and others. Anthropic's investment in Goodfire suggests cooperation rather than head-on competition, though there is overlap.
  • Google DeepMind's mechanistic-interpretability team. Tom McGrath's prior employer. Continues research output but does not commercialize tools as Goodfire does.
  • OpenAI's safety and alignment research. Includes interpretability-adjacent work, but OpenAI's commercial focus does not include standalone interpretability products.
  • EleutherAI and academic labs. Open-source interpretability research that overlaps with Goodfire's research output but does not compete commercially.
  • Transluce, Conjecture, and similar interpretability-focused startups. Direct competitors in the small commercial-interpretability market.
  • AI governance and safety platforms more broadly. Commercial competition from broader AI safety and governance offerings, including IBM watsonx.governance and various AI risk-management products.

Outlook

Several open questions affect Goodfire's trajectory in 2026 and 2027:

  • Ember adoption metrics across AI labs and enterprise customers, which determine whether the interpretability-as-a-service market scales materially.
  • Continued strategic relationships with Anthropic and other frontier labs, including any deepening of the partnership beyond the Series A investment.
  • Research output and academic-publication trajectory, which sustains the technical-credibility positioning.
  • Possible expansion into adjacent product categories including AI governance, model debugging, and safety-evaluation tooling.
  • Frontier-lab competitive response. As Anthropic, OpenAI, and Google DeepMind expand their internal interpretability programs, the commercial space for an independent provider may compress.
  • Possible follow-on funding rounds or strategic-acquisition interest at the $1.25 billion-class valuation level.

About the author
Nex Tomoro

AI Research Lab Intelligence

Keep track of what's happening at cutting-edge AI research institutions.
