MIRI

MIRI (Machine Intelligence Research Institute) is an American artificial intelligence safety research nonprofit headquartered in Berkeley, California. It was founded in 2000 by Eliezer Yudkowsky as the Singularity Institute for Artificial Intelligence (SIAI), later shortened its name to the Singularity Institute, and was renamed the Machine Intelligence Research Institute in January 2013. MIRI has the longest operating history of the principal AI safety research organizations, predating the broader academic and industry AI safety field by approximately a decade. Its research mandate has been explicitly oriented around long-horizon AI alignment with an existential-risk framing, and its influence on the broader AI safety community runs largely through Yudkowsky's writings: the Sequences on LessWrong, "Harry Potter and the Methods of Rationality," and the September 2025 book "If Anyone Builds It, Everyone Dies," co-authored with Nate Soares. The organization's strategic positioning shifted substantially through 2024 and 2025: the historical emphasis on technical alignment research (decision theory, agent foundations, embedded agency) was scaled back, and the principal output became public communication and policy advocacy on existential AI risk. As of April 2026, MIRI remains one of the most consequential AI safety organizations despite its substantially reduced technical-research output, with its continued influence anchored in Yudkowsky's writing and public engagement.

At a glance

  • Founded: July 2000 in Atlanta as the Singularity Institute for Artificial Intelligence (SIAI). Renamed to the Singularity Institute in 2005. Renamed to MIRI in January 2013. Headquartered in Berkeley, California, since the 2010s.
  • Status: US 501(c)(3) nonprofit research organization.
  • Funding: Donor-funded. Historical donors include Peter Thiel, Jaan Tallinn (Skype co-founder), Vitalik Buterin, and adjacent technology philanthropists. Annual operating budget reported in industry coverage as in the multi-million-dollar range.
  • CEO / Lead: Malo Bourgon, Chief Executive Officer (since 2023; previously Chief Operating Officer), responsible for day-to-day operations. Eliezer Yudkowsky, Founder and Senior Research Fellow, remains the principal public voice and intellectual leader of the organization.
  • Other notable leadership: Nate Soares, President; MIRI's Executive Director from 2015 until the 2023 leadership restructuring, and co-author with Yudkowsky of "If Anyone Builds It, Everyone Dies" (September 2025). Supported by senior research staff and policy-engagement leads.
  • Open weights: N/A. MIRI does not produce foundation models; the organization's research output is alignment-research papers and policy-advocacy material.
  • Flagship outputs: a multi-decade research-publication record on AI alignment, decision theory, and agent foundations; Yudkowsky's writing, including the Sequences (2006 to 2009, on Overcoming Bias and later LessWrong), "Rationality: From AI to Zombies" (2015), and the broader rationalist-movement infrastructure that MIRI helped anchor; the September 2025 book "If Anyone Builds It, Everyone Dies" (Yudkowsky and Soares); and substantial public-engagement and policy-advocacy output from 2023 to 2026.

Origins

MIRI was founded in July 2000 in Atlanta as the Singularity Institute for Artificial Intelligence (SIAI) by Eliezer Yudkowsky, then a 20-year-old autodidact who had been writing about artificial general intelligence and recursive self-improvement on early transhumanist mailing lists and web forums. The founding mandate was explicitly oriented around the question of how to build AGI safely. The operating premises were that AGI development was plausibly decades away at most, that uncontrolled or misaligned AGI would constitute an existential risk to humanity, and that alignment research was structurally underserved by the broader academic and industry AI research field. The 2000 founding predated the broader AI safety research field by approximately a decade, as well as the post-2010 deep-learning revolution that subsequently elevated AI safety from a niche concern to a mainstream policy topic.

The 2000 to 2013 period built the institute's research-program foundation under the SIAI / Singularity Institute brand. Yudkowsky's writing during this period established him as one of the principal public voices on AI safety: the 2006 to 2009 Sequences (a several-hundred-essay series on rationality, cognitive bias, AI, and adjacent topics, published first on Overcoming Bias and then on LessWrong, which anchored what became the broader rationalist intellectual movement), the rationalist-fiction serial "Harry Potter and the Methods of Rationality" (2010 to 2015), and sustained engagement with the emerging AI safety community through the Singularity Summit conferences and adjacent venues. Substantial donor support during this period came from Peter Thiel and other technology philanthropists.

The January 2013 rename from the Singularity Institute to MIRI accompanied a strategic narrowing: the broader Singularity-themed activities (the Singularity Summit conference series and adjacent transhumanist programming) were divested to Singularity University, while the renamed MIRI focused exclusively on AI alignment research. The 2013 to 2022 period built MIRI's technical-research output under Nate Soares (Executive Director from 2015) and senior research staff including Eliezer Yudkowsky, Scott Garrabrant, Sam Eisenstat, Tsvi Benson-Tilsen, and Abram Demski. The principal research areas included decision theory (logical induction, updateless decision theory), agent foundations (embedded agency, Cartesian frames), and adjacent foundational alignment topics.

MIRI's strategic positioning shifted substantially in the 2022 to 2024 period. The post-ChatGPT generative-AI wave brought AI safety into mainstream attention in ways MIRI's prior decades of work had anticipated, but the institute's research program had not produced results commensurate with the moment. Through 2023 and 2024, MIRI's leadership concluded that the organization's historical alignment-research approach was unlikely to yield technical alignment solutions on the timeline frontier-AI capability development required, and that its comparative advantage had shifted toward public communication and policy advocacy on existential AI risk.

The September 2025 publication of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" (Yudkowsky and Soares, published by Little, Brown and Company) was the organization's principal public-facing output of the post-pivot period. The book argues in popular form that current frontier-AI development trajectories carry substantial existential risk and that an international moratorium on frontier AI development is required to prevent catastrophic outcomes. It was widely covered in international media at release and became a New York Times bestseller.

The 2024 to 2026 period has continued the public-communication and policy-advocacy posture alongside reduced technical-research output. The technical-research staff has been substantially reduced, and industry coverage has reported senior researcher departures, including Scott Garrabrant and others, to academic and frontier-AI-lab roles.

Mission and strategy

MIRI's stated mission is to reduce existential risk from advanced artificial intelligence. The strategy combines two threads in the post-pivot period. First, public communication on existential AI risk, with the September 2025 Yudkowsky-Soares book as the principal recent example alongside continued Yudkowsky writing, podcast engagement, and public-speaking activity. Second, policy advocacy for international AI development moratoria and adjacent regulatory interventions, with senior MIRI staff engaging with US, UK, EU, and international AI safety policy processes.

The competitive premise of the post-pivot positioning rests on three claims: that effective AI safety advocacy at the political level is structurally important; that the organization's accumulated credibility (its multi-decade history, Yudkowsky's influence as a writer, and an established donor base) provides a platform newer AI safety organizations cannot replicate; and that public-communication and policy work has higher marginal impact than continued technical alignment research, given the timeline and scale of contemporary frontier-AI development.

Models and products

  • Multi-decade research-publication output. Technical alignment research papers on AI alignment, decision theory, agent foundations, embedded agency, logical induction, and updateless decision theory across the 2000 to 2024 period.
  • Yudkowsky writing. The Sequences (2006 to 2009, on Overcoming Bias and LessWrong), "Rationality: From AI to Zombies" (the 2015 book edition of the Sequences), "Harry Potter and the Methods of Rationality" (2010 to 2015), and adjacent essays.
  • "If Anyone Builds It, Everyone Dies". September 2025 book co-authored with Nate Soares. Published by Little, Brown and Company. Bestseller in AI policy category.
  • Policy-advocacy submissions. Contributions to US, UK, EU, and international AI safety regulatory processes.
  • Public-engagement output. Yudkowsky's podcast and speaking appearances, Soares's public-policy engagements, and related organizational visibility.

Distribution is predominantly through public channels (LessWrong, podcasts, traditional media) and direct policy-advocacy engagement.

Benchmarks and standing

MIRI is not evaluated against horizontal AI benchmarks. The organization's standing is instead measured by the influence of its public-communication output, the scale of its policy engagement, and its broader effect on how AI safety is framed in public discourse.

Industry coverage has consistently characterized MIRI as one of the most consequential AI safety organizations despite its substantially reduced technical-research output, citing the multi-decade operating history, the influence of Yudkowsky's writing, and the September 2025 Yudkowsky-Soares book as principal validating data points. Skeptical coverage has questioned whether the public-communication-and-policy pivot has produced AI safety progress commensurate with the prior decade's investment in technical research.

Leadership

As of April 2026, MIRI's senior leadership includes:

  • Eliezer Yudkowsky, Founder and Senior Research Fellow. Principal public voice and intellectual leader.
  • Nate Soares, President. Co-author of "If Anyone Builds It, Everyone Dies".
  • Malo Bourgon, Chief Executive Officer (since 2023). Coordinates day-to-day operations.
  • Senior research and policy staff.

The senior leadership has been comparatively stable, with key-person dependence on Yudkowsky as the principal public voice.

Funding and backers

Donor-funded. Historical donors include Peter Thiel (a substantial founding-period donor), Jaan Tallinn (Skype co-founder), Vitalik Buterin (Ethereum co-founder), and other technology philanthropists. The Future of Life Institute, Open Philanthropy, and other AI-safety-focused funders have also provided support. Industry coverage reports the annual operating budget as in the multi-million-dollar range.

Industry position

MIRI occupies a distinctive position as the longest-operating AI safety research organization and the principal historical anchor of the broader AI safety and rationalist intellectual movements. Industry coverage has consistently characterized it as one of the most consequential AI safety organizations despite the post-2024 strategic pivot and reduced technical-research output. Yudkowsky's writing has been characterized as among the more consequential intellectual contributions to the AI safety field, with substantial influence on researchers at Anthropic, OpenAI, Google DeepMind, and other commercial and academic AI safety teams.

Two structural risks stand out. First, the post-pivot reduction in technical-research output limits the organization's ability to contribute to the technical alignment progress that the contemporary frontier-AI trajectory requires; its positioning now depends on public-communication and policy effectiveness rather than on research. Second, key-person dependence on Yudkowsky introduces continuity risk if his public engagement diminishes or if his policy positions diverge from the broader AI safety consensus.

Outlook

  • Continued public-communication and policy-advocacy output through 2026 to 2027.
  • The "If Anyone Builds It, Everyone Dies" book reception and its influence on broader AI safety policy discourse.
  • The continued strategic positioning of the post-pivot organization and any potential return to technical-research output.
  • Continued Yudkowsky public-engagement output and the visibility of MIRI within the broader AI safety policy environment.
  • The Yudkowsky succession question and the broader continuity of MIRI as an organization beyond key-person dependence.
