US AI Safety Institute (NIST)

The US AI Safety Institute is the United States government's AI safety research body, established within NIST under the Department of Commerce in February 2024 following the October 2023 Biden Executive Order. It cooperates bilaterally with the UK AISI and has operated in a shifting US policy context since January 2025.

The US AI Safety Institute (US AISI) is the United States government's artificial intelligence safety research body, established in February 2024 within the National Institute of Standards and Technology (NIST) under the US Department of Commerce. The institute operates with research staff distributed across NIST's headquarters in Gaithersburg, Maryland, and adjacent NIST locations, and was created following the October 2023 Biden Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. US AISI is the US counterpart to the UK AI Safety Institute, with bilateral cooperation including a formal Memorandum of Understanding signed in April 2024 covering joint AI safety evaluation and research. The institute coordinates the AI Safety Institute Consortium (AISIC), a cross-organization research and policy structure of more than 280 AI organizations and academic institutions launched in February 2024. As of April 2026, US AISI continues to operate in a substantially altered US AI policy context: the Trump administration revoked the Biden Executive Order in January 2025, and the institute's mandate, leadership, and resourcing have been subject to ongoing Congressional appropriations debate and administrative restructuring through 2025 and 2026.

At a glance

  • Founded: February 2024 by the US Government within NIST, following the October 2023 Biden Executive Order on AI. The October 2023 Executive Order had directed NIST to establish AI safety research and evaluation infrastructure; the February 2024 announcement formalized the institute as a directorate within NIST.
  • Status: US government research directorate within NIST under the Department of Commerce. Operated under Biden-administration AI policy framing through January 2025; subsequently operated under shifting Trump-administration policy direction through 2025 to 2026.
  • Funding: US government funding through NIST and the Department of Commerce. Initial fiscal-year-2024 allocation reported at approximately $10 million, with subsequent fiscal-year allocations subject to Congressional appropriations debate. Funding through 2025 and 2026 has been less stable than the UK AISI's parallel funding trajectory.
  • CEO / Lead: Elizabeth Kelly was the founding Director of US AISI from February 2024 through early 2025; Kelly had been a special assistant to Biden and a senior White House official before her US AISI appointment. Director-level leadership has rotated through 2025 to 2026 reflecting US government transitions.
  • Other notable leadership: Senior research and policy staff drawn from NIST, academic AI research, and industry. The AI Safety Institute Consortium (AISIC) of over 280 organizations provides additional research-and-policy coordination structure including frontier-AI labs, hyperscale-cloud providers, academic AI institutions, and adjacent AI-policy organizations.
  • Open weights: Limited. US AISI's research outputs are predominantly evaluation reports, AI safety research papers, and standards-setting contributions rather than foundation-model releases.
  • Flagship outputs: Pre-deployment evaluation reports for frontier AI models, including the December 2024 joint evaluation of OpenAI o1 (with the UK AI Safety Institute under bilateral cooperation arrangements); contributions to the NIST AI Risk Management Framework (the NIST AI RMF) and adjacent standards-setting work; the AI Safety Institute Consortium (AISIC) framework; AI safety research papers across dangerous-capability evaluation and adjacent areas.

Origins

The US AI Safety Institute's establishment was directed by the October 2023 Biden Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110), one of the most significant US government AI-policy actions of the Biden administration. The order directed NIST to establish an AI Safety Institute, develop AI safety standards and frameworks, conduct frontier-AI evaluation, and coordinate cross-organization AI safety research. The February 2024 announcement formalized US AISI as a NIST directorate, with Elizabeth Kelly as founding Director.

Early institute activities included signing the bilateral Memorandum of Understanding with the UK AI Safety Institute in April 2024 covering joint AI safety evaluation, establishing pre-deployment access agreements with frontier AI labs (OpenAI, Anthropic, Google DeepMind), and launching the AI Safety Institute Consortium (AISIC) of over 280 AI organizations and academic institutions. AISIC members include frontier-AI labs (OpenAI, Anthropic, Google DeepMind, Meta AI / FAIR, Microsoft AI), hyperscale-cloud providers (AWS, Google Cloud, Microsoft Azure), academic AI institutions (MIT CSAIL, Stanford HAI / CRFM, Berkeley BAIR, CMU SCS, Allen Institute for AI), and adjacent AI-policy organizations.

The 2024 to early 2025 period under Kelly's leadership produced the December 2024 joint US AISI / UK AISI evaluation of OpenAI o1, the institute's most internationally visible early evaluation report. The publication, released jointly with the UK AISI, evaluated o1's capabilities on biosecurity, cybersecurity, autonomy, and adjacent dangerous-capability evaluation areas, providing the first publicly disclosed pre-deployment evaluation report from the bilateral cooperation arrangements.

The January 2025 transition to the Trump administration brought substantial uncertainty about US AI safety policy direction. The Biden Executive Order on AI was revoked by one of the new administration's early Executive Orders. Subsequent US AI policy through 2025 and 2026 has emphasized different priorities from the Biden-era safety framing, with US AISI's continued mandate, leadership, and resourcing remaining subjects of ongoing Congressional appropriations debate and administrative restructuring. Elizabeth Kelly departed the Director role in early 2025; subsequent leadership rotations have been reported through 2025 and 2026.

The institute has continued to operate through this period and to publish AI safety research, including continued pre-deployment evaluation reports and the broader NIST AI Risk Management Framework standards-setting work. Industry coverage has characterized this continuity as reflecting NIST's position as a long-running standards-setting body with substantial bipartisan congressional support, within an otherwise politically contested US AI safety policy landscape.

Mission and strategy

US AISI's stated mission is to advance the science, practice, and adoption of AI safety, with emphasis on standards-setting (consistent with NIST's broader mission), AI safety methodology development, pre-deployment evaluation of frontier AI capabilities, and cross-organization AI safety research coordination through the AISIC framework. The strategy combines four threads. First, AI safety standards development through NIST's standards-setting mandate, including continued development of the NIST AI Risk Management Framework. Second, pre-deployment AI evaluation of frontier models through the industry agreements established under Biden-era arrangements. Third, the AI Safety Institute Consortium for cross-organization AI safety research coordination. Fourth, bilateral and multilateral cooperation with international AI safety bodies, including the UK AISI cooperation under the April 2024 Memorandum of Understanding.

The competitive premise is that AI safety research and standards-setting is a structurally important US government capability that requires sustained federal investment, that NIST's standards-setting mandate provides a politically durable institutional home that can survive US administration transitions better than freestanding AI policy bodies, and that the AISIC consortium framework anchors cross-organization cooperation that no individual institution alone can replicate.

Models and products

  • NIST AI Risk Management Framework. The NIST AI RMF, released January 2023 with subsequent updates. The principal US government AI risk-management standard.
  • AI Safety Institute Consortium (AISIC). Cross-organization research and policy coordination structure. Over 280 member organizations.
  • Pre-deployment evaluation reports. Including the December 2024 joint US AISI / UK AISI evaluation of OpenAI o1.
  • AI safety research papers. Output across dangerous-capability evaluation and adjacent AI safety research areas.
  • Bilateral cooperation with the UK AI Safety Institute. Memorandum of Understanding signed April 2024.
  • Standards-setting contributions to US AI policy. Through NIST's broader standards-development processes.

Distribution channels include NIST publication channels, AISIC member-organization coordination, US Congressional and Executive Branch policy submission, and bilateral cooperation publications with the UK AISI.

Benchmarks and standing

US AISI is not evaluated against horizontal AI benchmarks. The institute's standing is measured through the rigor and adoption of its AI safety standards (NIST AI RMF), the quality of its pre-deployment evaluation reports, the scale and engagement of the AISIC consortium, and its institutional credibility within the broader US and international AI safety community.

Industry coverage has consistently characterized US AISI as the United States government's principal AI safety research body, with the bilateral cooperation with the UK AISI, the AISIC consortium scale, and the December 2024 joint o1 evaluation as principal validating data points. Skeptical coverage through 2025 to 2026 has questioned whether US AISI's continued mandate and resourcing will be sustained under shifting US-policy context, and whether the institute's standards-setting work can maintain its existing institutional momentum without the Biden-era policy support that originally established the body.

Leadership

As of April 2026, US AISI's senior leadership has been subject to ongoing rotation reflecting US government transitions:

  • Director-level leadership has evolved through 2024 to 2026. Elizabeth Kelly was the founding Director from February 2024 through early 2025; subsequent leadership rotations have been reported.
  • Senior research and policy staff drawn from NIST, academic AI research, and industry.
  • AI Safety Institute Consortium participants include over 280 AI organizations and academic institutions providing additional research-and-policy coordination structure.

Funding and backers

US government funding through NIST and the Department of Commerce. Initial fiscal-year-2024 allocation reported at approximately $10 million, with subsequent fiscal-year allocations subject to US Congressional appropriations debate. Funding through 2025 and 2026 has been less stable than the UK AISI's parallel trajectory, reflecting the broader US AI policy contestation surrounding the institute since the January 2025 administration transition.

Industry position

US AISI occupies a structurally distinctive position as the United States government's principal AI safety research body, with the bilateral cooperation with the UK AI Safety Institute, the AISIC consortium framework, and the NIST standards-setting institutional anchor. Industry coverage has consistently characterized US AISI as one of the structurally consequential government AI safety bodies globally, despite the post-2025 US-policy contestation that has affected the institute's operating context.

Three structural risks stand out. First, the post-January 2025 US policy context has reduced the political support that originally established the institute, with continued Congressional appropriations debate over US AISI's mandate and resourcing. Second, the bilateral cooperation with the UK AISI depends on continued US government commitment, which has been less stable than the parallel UK commitment, and continued joint evaluation work has required navigating diverging UK and US AI policy directions. Third, leadership rotation through 2025 and 2026 has limited the institute's ability to sustain the research-program momentum the founding period had begun to establish.

Competitive landscape

  • UK AI Safety Institute. Bilateral cooperation partner under the April 2024 Memorandum of Understanding. Continues to operate with more stable government commitment than US AISI through 2025 to 2026.
  • EU AI Office. European Union's AI safety oversight body within the European Commission. Different regulatory mandate (AI Act enforcement) but adjacent AI safety policy positioning.
  • AI Safety Institute Consortium (AISIC) members. Over 280 AI organizations and academic institutions including frontier-AI labs, hyperscale-cloud providers, academic AI institutions, and AI-policy organizations.
  • Anthropic, OpenAI, Google DeepMind, Meta AI / FAIR, Microsoft AI safety teams. Commercial AI labs whose models are evaluated by US AISI under pre-deployment access agreements.
  • Apollo Research, METR, Transluce, Timaeus, Conjecture, Goodfire. Independent AI safety research peer organizations.
  • MIRI (Machine Intelligence Research Institute), Center for AI Safety, CHAI (Berkeley Center for Human-Compatible AI). Earlier-cohort AI safety research organizations.
  • Stanford HAI / CRFM, Berkeley BAIR, MIT CSAIL, CMU SCS. Academic AI research peers (also AISIC members).

Outlook

  • The institute's continued mandate and resourcing through 2026 and 2027 US Congressional budget cycles.
  • Continued frontier AI evaluation and research output through US AI policy transitions.
  • The bilateral cooperation with the UK AI Safety Institute and any restructuring under shifting US-policy context.
  • AISIC consortium activity and cross-organization AI safety coordination.
  • Continued NIST AI Risk Management Framework development as a politically durable standards-setting anchor.
  • The broader US AI safety policy direction through 2026 and 2027 and the institute's positioning within that landscape.

About the author
Nextomoro

AI Research Lab Intelligence

Keep track of what's happening from cutting edge AI Research institutions.