James Bradbury

James Bradbury is an American machine-learning systems engineer and Head of Compute at Anthropic, the public-benefit corporation that develops the Claude family of large language models. He is one of the named original creators of JAX, Google's open-source library for accelerator-oriented array computation, and was previously a research engineer at Google working on JAX, TPUs, and large language models. As of May 2026, he is responsible for ensuring Anthropic has the accelerator capacity to train and serve Claude.

Origins

Bradbury grew up in McLean, Virginia, and attended Stanford University as an undergraduate, matriculating in 2012 and majoring in linguistics. During his sophomore year, in early 2014, he wrote a series of opinion columns on international politics for The Stanford Daily under the title "Outside the Bubble".

His path from a linguistics degree to machine-learning systems engineering ran through neural natural-language processing, the subfield where his linguistic and computational interests converged. As a senior, from 2015 to 2016, he was a research intern at the deep-learning startup MetaMind, which Richard Socher had founded in 2014.

Career

Bradbury joined Salesforce Research full-time in April 2016, when Salesforce acquired MetaMind, and stayed on as a research scientist for roughly two years. The signature outputs of the period were the MetaMind Neural Machine Translation System for WMT 2016, which Bradbury and Richard Socher submitted to the WMT 2016 shared task and which placed second in the competition, and the ICLR 2017 paper "Quasi-Recurrent Neural Networks" with Stephen Merity, Caiming Xiong, and Socher. The QRNN replaces the recurrent matrix multiplications of an LSTM with convolutions applied in parallel across timesteps plus a minimal element-wise recurrence, and was reported as up to sixteen times faster than stacked LSTMs at comparable accuracy. During the same period he was an active contributor to the Chainer and PyTorch deep-learning frameworks.
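
The core of the QRNN fits in a few lines. The sketch below is a minimal illustration in JAX, assuming a single layer with a width-2 causal convolution; the function name, shapes, and parameters are invented for the example, and it is not the paper's reference implementation:

```python
import jax
import jax.numpy as jnp

def qrnn_layer(params, x):
    """Minimal QRNN layer sketch. x: [T, d_in] sequence -> [T, d_hidden]."""
    Wz, Wf, Wo = params  # each [2, d_in, d_hidden]: a width-2 causal conv

    # Causal convolution over time: timestep t sees x[t-1] and x[t], so the
    # gates for the entire sequence are computed in parallel.
    x_prev = jnp.pad(x, ((1, 0), (0, 0)))[:-1]
    z = jnp.tanh(x_prev @ Wz[0] + x @ Wz[1])        # candidate values
    f = jax.nn.sigmoid(x_prev @ Wf[0] + x @ Wf[1])  # forget gates
    o = jax.nn.sigmoid(x_prev @ Wo[0] + x @ Wo[1])  # output gates

    # fo-pooling: the only sequential step is element-wise, with no per-step
    # matrix multiplication, which is the source of the speedup over LSTMs.
    def step(c, gates):
        z_t, f_t, o_t = gates
        c = f_t * c + (1.0 - f_t) * z_t
        return c, o_t * c

    _, h = jax.lax.scan(step, jnp.zeros(z.shape[-1]), (z, f, o))
    return h
```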

Bradbury moved to Google Brain in 2018 as a research software engineer, joining what would become the JAX team. The first public release of JAX in 2018 carries his name first in the canonical author list, alongside Roy Frostig, Peter Hawkins, Matthew James Johnson, Yash Katariya, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. The project provides composable transformations of Python and NumPy programs: automatic differentiation, vectorization, and just-in-time compilation to GPUs and TPUs via XLA. Even after the move, he remained a co-author on the December 2019 NeurIPS paper "PyTorch: An Imperative Style, High-Performance Deep Learning Library" with Adam Paszke, Sam Gross, and the rest of the Facebook-led (now Meta) PyTorch team.
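
The three transformations compose freely, which is the library's signature property. A minimal sketch (the toy loss function is invented for the example and is not drawn from Bradbury's or the JAX team's code):

```python
import jax
import jax.numpy as jnp

# An ordinary NumPy-style function: mean-squared error of a linear model.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

grad_loss = jax.grad(loss)                    # automatic differentiation
per_example = jax.vmap(grad_loss,             # vectorization: per-example
                       in_axes=(None, 0, 0))  # gradients without a loop
fast = jax.jit(per_example)                   # JIT compilation via XLA

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(fast(w, x, y).shape)  # (8, 3): one gradient per example
```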

Inside Google Brain and later Google DeepMind, Bradbury worked at the intersection of JAX, TPU systems, and large-language-model training. He was a named co-author on the April 2022 paper "PaLM: Scaling Language Modeling with Pathways," which introduced the 540-billion-parameter Pathways Language Model trained on TPU v4 Pods using the Pathways distributed-dataflow system, and on "Efficiently Scaling Transformer Inference" (arXiv November 2022, MLSys 2023), led by Reiner Pope, on serving PaLM on TPU v4. After the April 2023 merger of Google Brain and DeepMind, his role transitioned into the Google DeepMind organization. The DeepMind "How To Scale Your Model" textbook, published in 2025, acknowledges Bradbury along with Reiner Pope and Blake Hechtman as having "originally derived many of the ideas in this manuscript" and having been "early to understanding the systems view of the Transformer."
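
In its simplest form, that systems view starts from back-of-envelope accounting. One standard example is the common ~6ND estimate of dense-Transformer training FLOPs, sketched below with the PaLM paper's reported figures of 540 billion parameters and 780 billion training tokens; the helper function is illustrative and does not come from any of the papers above:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    # Common dense-Transformer estimate: ~6 FLOPs per parameter per token
    # (roughly 2 in the forward pass and 4 in the backward pass).
    return 6.0 * n_params * n_tokens

# PaLM: 540e9 parameters trained on 780e9 tokens.
print(f"{training_flops(540e9, 780e9):.2e}")  # ~2.53e+24 FLOPs
```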

In February 2023 Bradbury announced his move to Anthropic on his Twitter (now X) account. His role is Head of Compute, focused on ensuring the company has the accelerator resources required to train and serve Claude. Over his tenure the Anthropic compute footprint has grown to include the AWS Trainium2 cluster announced as Project Rainier at AWS re:Invent 2024, expanded TPU usage, and continuing Nvidia GPU access through the company's hyperscaler partnerships. In September 2024 he was among more than one hundred current and former frontier-lab employees who signed an open letter, organized through Call to Lead, urging California Governor Gavin Newsom to sign Senate Bill 1047.

Affiliations

  • MetaMind: Research Intern, circa 2015 to 2016.
  • Salesforce Research: Research Scientist, April 2016 to circa 2018.
  • Google Brain: Research Software Engineer, circa 2018 to 2023; Google DeepMind from the April 2023 merger.
  • Anthropic: Head of Compute, February 2023 to present.

Notable contributions

Bradbury's body of work is concentrated in the open-source frameworks and large-scale distributed-training systems that turn machine-learning research into software that runs at scale. The publication record is heavily co-authored, with first-author credit on the canonical JAX citation and the QRNN paper.

Investments and boards

As of May 2026, there is no public record of personal angel-investing activity in AI, semiconductors, datacenters, software, or energy. Bradbury's footprint is concentrated in the Anthropic operating role and the prior research-engineering positions at Google and Salesforce rather than in a parallel investing portfolio. He is not part of the seven-person Anthropic co-founder cohort.

Network

Bradbury's professional relationships span the JAX co-creator group at Google and Google DeepMind, the Salesforce Research and MetaMind cohort, and the Anthropic compute organization led by Tom Brown as Chief Compute Officer. The original JAX author group includes Roy Frostig of Google Research, Matthew James Johnson, and Chris Leary as the early designers of the system; the broader JAX-team network includes Peter Hawkins, Skye Wanderman-Milne, Anselm Levskaya, and Adam Paszke. The PaLM and "Efficiently Scaling Transformer Inference" co-author networks at Google included Reiner Pope (who left to co-found MatX in 2023), Sholto Douglas, and Jeff Dean.

The Salesforce Research network from his MetaMind period centers on Richard Socher, his principal collaborator on the WMT 2016 system and the QRNN paper, alongside Stephen Merity and Caiming Xiong. At Anthropic, the working relationship with Tom Brown on training infrastructure and the partnership with AWS on the Project Rainier Trainium2 build-out are the most material public anchors of his current network.

Position in the field

As of May 2026, Bradbury sits in the small group of senior machine-learning systems engineers whose names are routinely associated with the major open-source frameworks and large-scale distributed-training systems of the 2018 to 2026 era. The JAX co-creator credit, including first position in the canonical 2018 JAX release citation, and the co-author credits on the PyTorch NeurIPS paper, the PaLM paper, and "Efficiently Scaling Transformer Inference" place him on the systems and frameworks side of the field. His combined Google Scholar citation count exceeds 110,000 as of May 2026, concentrated in the PyTorch, PaLM, JAX, and Gemini papers.

The Head of Compute role at Anthropic is one of two senior compute-and-infrastructure positions at the lab, alongside Tom Brown's Chief Compute Officer remit. Bradbury's public-talk cadence is comparatively low; the most prominent recorded appearance is the September 2024 PyTorch Conference panel, and the bulk of his external technical commentary runs through @jekbradbury on X.

The career path is structurally distinctive among senior AI-systems engineers. The Stanford linguistics degree and absence of a research doctorate set him apart from most peers in the JAX co-creator group and from most named authors on the PaLM and Gemini papers, who hold computer-science or physics doctorates. No Wikipedia entry exists for him as of May 2026.

Outlook

Open questions over the next 6 to 18 months:

  • Anthropic compute build-out. The activation cadence of additional AWS Trainium2 and Trainium3 capacity under the April 2026 expanded Amazon agreement.
  • Multi-cloud compute mix. The balance of Anthropic's compute across AWS Trainium, Google TPU, and Nvidia GPU as the next training cycle begins.
  • Public-talk and writing cadence. Whether Bradbury's conference-panel cadence increases as Anthropic's infrastructure footprint becomes more prominent in its external profile.
  • Anthropic systems publication record. Whether the company's practice of publishing on safety and interpretability extends to systems and infrastructure work, an area where his Google-period publication record was substantial.

About the author

nextomoro tracks progress for AI research labs, models, and what's next.