Sam McCandlish
Sam McCandlish is an American theoretical physicist and AI researcher, and a co-founder of Anthropic, the public-benefit corporation that develops the Claude family of large language models. He was previously a researcher at OpenAI, where he was the second-named author of the January 2020 paper "Scaling Laws for Neural Language Models." As of May 2026, he serves as Anthropic's Chief Architect, focused on pre-training and large-scale model training, a role he assumed in October 2025 after Rahul Patil joined as Chief Technology Officer.
At a glance
- Education: B.S. and M.S. in mathematics and physics, Brandeis University (2012); PhD in physics, Stanford University (2017), advised by Eva Silverstein; thesis "Depth Perception in Holography." Postdoctoral position at Boston University Physics (approximately 2017 to 2018).
- Current role: Co-founder and Chief Architect of Anthropic (since October 2025); previously Chief Technology Officer.
- Key contributions: lead author of "An Empirical Model of Large-Batch Training" (December 2018); second-named author of "Scaling Laws for Neural Language Models" (January 2020); senior co-author of "Language Models are Few-Shot Learners" (GPT-3, 2020) and "Constitutional AI: Harmlessness from AI Feedback" (December 2022); pre-AI publication record in quantum gravity and holography.
- Awards: NSF Graduate Research Fellowship (2012).
- X: @samsamoa; GitHub: @samsamoa; LinkedIn: Sam McCandlish.
Origins
Public biographical material on McCandlish is thin compared with that on the Anthropic co-founders who have Wikipedia entries. His path into AI ran through a theoretical-physics doctoral program rather than a computer-science track.
McCandlish completed his B.S. and M.S. in mathematics and physics at Brandeis University in 2012. As an undergraduate he worked in the Hagan and Baskaran groups, contributing to the 2012 Soft Matter paper "Spontaneous Segregation of Self-Propelled Particles with Different Motilities" with Aparna Baskaran and Michael Hagan. He was awarded the NSF Graduate Research Fellowship in 2012.
Career
McCandlish moved to Stanford University for graduate study under Eva Silverstein of the Stanford Institute for Theoretical Physics. He defended his PhD dissertation, "Depth Perception in Holography," on May 16, 2017. The thesis belonged to the AdS/CFT correspondence and quantum-gravity literature, overlapping the early-career interests of fellow Anthropic co-founder Jared Kaplan, whose 2009 Harvard thesis on holography was supervised by Nima Arkani-Hamed. After Stanford, McCandlish held a postdoctoral position at Boston University Physics for approximately one year before moving into AI research.
He joined OpenAI around 2018 as a research scientist. His first publication from that period was the December 2018 paper "An Empirical Model of Large-Batch Training," with McCandlish as lead author and co-authors Jared Kaplan, Dario Amodei, and the OpenAI Dota Team. The paper introduced the gradient noise scale, an empirical statistic predicting the largest efficient batch size across supervised learning, reinforcement learning, and generative-model training.
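The gradient noise scale admits a compact illustration. The sketch below is a toy model, not the paper's code: it assumes Gaussian per-example gradient noise and uses the paper's two-batch-size estimators to recover the simple noise scale, the ratio of gradient-noise variance to squared true gradient, which predicts the batch size beyond which parallelism stops paying off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a fixed "true" gradient G, per-example noise with covariance sigma^2 * I.
dim, sigma = 16, 2.0
G = rng.normal(size=dim)

def batch_grad(batch_size):
    """Mean gradient over a batch of noisy per-example gradients G + noise."""
    noise = rng.normal(scale=sigma, size=(batch_size, dim))
    return (G + noise).mean(axis=0)

def noise_scale_estimate(b_small=64, b_big=4096, trials=200):
    """Estimate B_simple = tr(Sigma) / |G|^2 from gradients at two batch sizes.

    Uses E[|g_B|^2] = |G|^2 + tr(Sigma)/B: measuring the squared gradient
    norm at two batch sizes gives two equations in the two unknowns.
    """
    g2_small = np.mean([np.sum(batch_grad(b_small) ** 2) for _ in range(trials)])
    g2_big = np.mean([np.sum(batch_grad(b_big) ** 2) for _ in range(trials)])
    g2 = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
    tr_sigma = (g2_small - g2_big) / (1.0 / b_small - 1.0 / b_big)
    return tr_sigma / g2

# Ground truth in this toy model: tr(Sigma) = dim * sigma^2.
true_scale = dim * sigma**2 / np.sum(G**2)
print(f"estimated noise scale {noise_scale_estimate():.1f} vs true {true_scale:.1f}")
```

The two-batch-size trick matters in practice because per-example gradients are expensive to materialize at scale; batch-level gradient norms are nearly free.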
McCandlish was the second-named author of the January 2020 paper "Scaling Laws for Neural Language Models," with Jared Kaplan as first author and co-authors including Tom Henighan, Tom Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. The paper documented that language-model test loss follows smooth power laws in model size, dataset size, and compute across seven orders of magnitude, and is widely credited as the empirical foundation for the GPT-3 training run later that year. He was also a senior co-author on "Language Models are Few-Shot Learners" (May 2020), the GPT-3 paper led by Tom Brown.
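The paper's headline result can be stated compactly. Writing L for test loss, N for non-embedding parameters, D for dataset size in tokens, and C_min for compute at the compute-optimal model size, the fitted forms are (exponents approximate, as reported in the paper):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c^{\min}}{C_{\min}}\right)^{\alpha_C^{\min}},
```

with fitted exponents of roughly \(\alpha_N \approx 0.076\), \(\alpha_D \approx 0.095\), and \(\alpha_C^{\min} \approx 0.050\): shallow power laws, each holding over many orders of magnitude when the other two resources are not binding.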
In December 2020 he left OpenAI alongside Dario Amodei, Daniela Amodei, Tom Brown, Jared Kaplan, Jack Clark, and Chris Olah, and helped incorporate Anthropic as a Delaware Public Benefit Corporation in early 2021. McCandlish initially served as Chief Scientist and subsequently as Chief Technology Officer through October 2025. The first public Claude model launched in March 2023, and the company has shipped successive Claude generations through Claude Opus 4.7 in April 2026.
In October 2025, Anthropic announced that former Stripe Chief Technology Officer Rahul Patil would join as the new CTO, and that McCandlish would transition to the role of Chief Architect. The announcement framed the new role as a remit to "deepen his focus on large-scale model training, continuing to lead pretraining while expanding his scope to include research productivity and RL infrastructure." Both Patil and McCandlish report to president Daniela Amodei.
Affiliations
- Brandeis University Hagan and Baskaran groups: Undergraduate researcher, through 2012.
- Stanford University Institute for Theoretical Physics: PhD student under Eva Silverstein, 2012 to 2017.
- Boston University Physics: Postdoctoral researcher, approximately 2017 to 2018.
- OpenAI: Research scientist, approximately 2018 to December 2020.
- Anthropic: Co-founder and Chief Scientist, then Chief Technology Officer, 2021 to October 2025; Chief Architect, October 2025 to present.
Notable contributions
McCandlish's body of work spans pre-AI theoretical physics and post-2018 AI research, with the latter concentrated on lead-author and senior-author credits at the foundation of the modern scaling paradigm.
- Pre-AI physics record (through approximately 2017). Doctoral and postdoctoral publications in theoretical physics, focused on the AdS/CFT correspondence, quantum gravity, and holography. An earlier undergraduate publication, "Spontaneous Segregation of Self-Propelled Particles with Different Motilities" (Soft Matter, 2012), was co-authored with Aparna Baskaran and Michael Hagan. Doctoral thesis "Depth Perception in Holography" defended at Stanford in May 2017 under Eva Silverstein.
- "An Empirical Model of Large-Batch Training" (December 2018, arXiv 1812.06162). Lead-authored OpenAI paper introducing the gradient noise scale, an empirical statistic predicting the largest efficient batch size for parallel training across supervised, reinforcement, and generative-model tasks. Co-authored with Jared Kaplan, Dario Amodei, and the OpenAI Dota Team.
- "Scaling Laws for Neural Language Models" (January 2020, arXiv 2001.08361). Second-named author behind Jared Kaplan on the OpenAI scaling-laws paper documenting that language-model test loss follows power laws in model size, dataset size, and compute across seven orders of magnitude. Widely cited as the empirical foundation for industrial frontier-model investment over the period that followed.
- "Language Models are Few-Shot Learners" (May 2020). The 175-billion-parameter GPT-3 paper led by Tom Brown, with McCandlish among the senior co-authors. Widely cited as one of the most influential AI publications of the past decade.
- "Constitutional AI: Harmlessness from AI Feedback" (December 2022). Anthropic methodology paper introducing the alignment technique in which a model trains to follow a written constitution of principles via self-critique and revision. Lead author Yuntao Bai, McCandlish among the named co-authors, and Jared Kaplan as the final-listed senior author. The basis for the alignment posture in shipped Claude models.
- "Training a Helpful and Harmless Assistant with RLHF" (April 2022). Anthropic paper documenting the RLHF pipeline used in early Claude precursors, with McCandlish among the named co-authors.
- Public-talk record. "Building Anthropic: A conversation with our co-founders" on the Anthropic YouTube channel in December 2024, a panel with the seven co-founders. His Google Scholar profile lists citations above 125,000 as of May 2026.
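The supervised phase of the Constitutional AI method listed above can be sketched structurally. Everything below is a toy illustration: `ask` is a canned stand-in for a real language-model call, and the constitution principles are paraphrased examples, not Anthropic's actual prompts.

```python
import random

# Paraphrased example principles; the real constitution is a longer written list.
CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest.",
]

def ask(prompt: str) -> str:
    """Placeholder for a language-model call; canned replies keep the loop runnable."""
    if "Critique" in prompt:
        return "The response could be more careful."
    if "Rewrite" in prompt:
        return "A revised, more careful response."
    return "An initial draft response."

def critique_and_revise(user_prompt: str, rounds: int = 2) -> str:
    """CAI supervised-phase loop: draft a response, then repeatedly critique it
    against a sampled constitutional principle and rewrite it."""
    response = ask(user_prompt)
    for _ in range(rounds):
        principle = random.choice(CONSTITUTION)
        critique = ask(f"Critique this response against the principle "
                       f"'{principle}':\n{response}")
        response = ask(f"Rewrite the response to address the critique "
                       f"'{critique}':\n{response}")
    return response
```

In the method as published, the revised responses become supervised fine-tuning targets, and a second phase replaces human preference labels with AI-generated ones for reinforcement learning.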
Investments and boards
- Anthropic (AI): Co-founder and Chief Architect (formerly Chief Technology Officer), 2021 to present. Public Benefit Corporation incorporated in Delaware. Approximately $73 billion cumulative funding through the February 2026 Series G at a $380 billion post-money valuation.
As of May 2026, there is no public record of personal angel-investment activity in AI, semiconductors, datacenters, software, or energy outside the Anthropic operating role. In January 2026 he joined the other six Anthropic co-founders in a pledge to donate 80% of their personal fortunes to address AI-driven inequality. Forbes estimated his net worth at approximately $3.7 billion in 2026, derived from his Anthropic equity position.
Network
McCandlish's longest-running professional relationships are with his six fellow Anthropic co-founders, all of whom he worked with at OpenAI before the 2021 founding: Dario Amodei, the chief executive; Daniela Amodei, the president; Tom Brown, the Chief Compute Officer and GPT-3 lead author; Jared Kaplan, the chief science officer; Jack Clark, the head of public benefit; and Chris Olah, the interpretability lead. The Kaplan collaboration is the closest of these research relationships in print: the two were senior co-leads on the scaling-laws paper and on the 2018 large-batch-training paper that preceded it, and have continued to co-author Anthropic research.
His pre-AI physics network is concentrated in the Stanford Institute for Theoretical Physics community under Eva Silverstein, and the Brandeis active-matter group with Aparna Baskaran and Michael Hagan.
Position in the field
As of May 2026, McCandlish is most often cited as the second-named author of the scaling-laws paper that informed the GPT-3 era and the broader frontier-training thesis adopted across OpenAI, Anthropic, Google DeepMind, xAI, and the Chinese frontier labs. Combined with his lead authorship of the 2018 large-batch-training paper, his senior co-authorship of the GPT-3 paper, and his founding role at Anthropic, this record places him among the small group of senior frontier-lab leaders whose research output and operating role are both widely cited.
Of the seven Anthropic co-founders, McCandlish has the lowest sustained public-commentary cadence. Where Dario Amodei, Jack Clark, and to a lesser extent Jared Kaplan appear regularly on podcasts, in congressional testimony, and in long-form essays, McCandlish does not maintain an active podcast presence and has no Wikipedia entry as of May 2026. The most prominent video appearance is the December 2024 co-founders panel on the Anthropic YouTube channel; the X account @samsamoa posts at low frequency.
The October 2025 transition from Chief Technology Officer to Chief Architect is structurally distinctive. The title is uncommon at the frontier-lab senior-leadership tier; the closest analogs at peer labs are Chief Scientist or VP of Pre-Training roles. The pairing with a separate CTO responsible for product engineering and inference infrastructure is read in industry coverage as Anthropic's response to the increasing operational scale of frontier model training and deployment.
Outlook
Open questions over the next 6 to 18 months:
- Successor scaling-laws results. Whether McCandlish or his Anthropic colleagues publish updated empirical work that revises or extends the original power-law thesis as compute, data, and training regimes move further from the 2020 era.
- Chief Architect remit. Publication and operating record from the new organization, including documented results from the pre-training and RL-infrastructure work that the October 2025 announcement described as the new role's focus.
- Pre-training cycle for the next Claude generation. The pre-training timeline for the Claude generation beyond the 4.x line, with McCandlish's organization as the operational counterpart to Tom Brown's compute build-out and Jared Kaplan's research direction.
- Public commentary cadence. Whether the historically low podcast and conference-talk frequency continues, or whether the Chief Architect role and the January 2026 pledge increase external visibility.
- Co-founder cohort stability. Whether the seven-person founding cohort, stable since 2021, remains intact through Anthropic's next funding cycle.
Sources
- Sam McCandlish - LinkedIn. McCandlish's LinkedIn profile listing the Anthropic co-founder role and prior OpenAI position.
- Sam McCandlish - Google Scholar. Google Scholar profile listing publications and citation counts, with total citations above 125,000 as of May 2026 and listed research interests in machine learning, artificial intelligence, and theoretical physics.
- Rahul Patil joins Anthropic as Chief Technology Officer. October 2025 Anthropic announcement of the leadership transition, with McCandlish moving to the Chief Architect role focused on pre-training and large-scale model training.
- Anthropic hires new CTO with focus on AI infrastructure. TechCrunch coverage of the October 2025 leadership change.
- PHYSICS Ph.D. DISSERTATION DEFENSE: Sam McCandlish. Stanford Physics Department announcement of McCandlish's May 16, 2017 PhD defense, with thesis title "Depth Perception in Holography" and advisor Eva Silverstein.
- An Empirical Model of Large-Batch Training. The December 2018 OpenAI paper led by McCandlish that introduced the gradient noise scale.
- Scaling Laws for Neural Language Models. The January 2020 OpenAI scaling-laws paper led by Jared Kaplan with McCandlish as second-named author.
- Language Models are Few-Shot Learners. The May 2020 GPT-3 paper led by Tom Brown with McCandlish among the senior co-authors.
- Constitutional AI: Harmlessness from AI Feedback. The December 2022 Anthropic Constitutional AI paper with McCandlish among the named co-authors.
- Spontaneous Segregation of Self-Propelled Particles with Different Motilities. The 2012 Soft Matter paper with McCandlish as a named author from his Brandeis undergraduate years.
- Building Anthropic: A conversation with our co-founders. The December 2024 co-founders panel discussion published on the Anthropic YouTube channel.
- Anthropic's billionaire cofounders are giving away 80% of their wealth. January 2026 Fortune coverage of the seven-co-founder pledge.
- Sam McCandlish (@samsamoa) on X. McCandlish's X account.
- Baskaran Group Alumni - Brandeis University. Brandeis Physics Department page listing McCandlish among former undergraduate researchers in the Baskaran group.
- Feature image: text-mode card generated via scripts/make_lab_card.py, used as a fallback because no Wikipedia portrait, Anthropic press-kit headshot, or other credit-cleared photograph of McCandlish was located in May 2026.