Jared Kaplan
Jared Daniel Kaplan is an American theoretical physicist and AI researcher, co-founder and Chief Science Officer of Anthropic, the public-benefit corporation that develops the Claude family of large language models. He is concurrently an associate professor of physics and astronomy at Johns Hopkins University, where he has held a faculty appointment since 2012, and was previously a researcher at OpenAI, where he was the lead author of the January 2020 paper "Scaling Laws for Neural Language Models." As of May 2026, in addition to the Chief Science Officer role he serves as Anthropic's Responsible Scaling Officer, a position created in October 2024 to oversee safety assessments before model releases under Anthropic's Responsible Scaling Policy.
At a glance
- Education: Bachelor of Science in physics and mathematics, Stanford University; PhD in physics, Harvard University (2009), advised by Nima Arkani-Hamed; thesis "Aspects of holography." Postdoctoral positions at SLAC and Stanford.
- Current roles: Co-founder and Chief Science Officer of Anthropic, since 2021; Responsible Scaling Officer at Anthropic, since October 2024; associate professor of physics and astronomy at Johns Hopkins University, since 2012.
- Key contributions: lead author of "Scaling Laws for Neural Language Models" (January 2020); senior co-author of "Constitutional AI: Harmlessness from AI Feedback" (December 2022); co-author of "Language Models are Few-Shot Learners" (the GPT-3 paper, 2020); pre-AI publication record in quantum gravity, holography, and conformal field theory.
- Awards: Hertz Foundation Fellowship (2005); Sloan Research Fellowship; National Science Foundation CAREER Award (PHY-1454083).
- LinkedIn: Jared Kaplan
- Wikipedia: Jared Kaplan (physicist)
Origins
Kaplan trained as a theoretical physicist before pivoting to AI research in his late thirties. He completed a Bachelor of Science in physics and mathematics at Stanford University and then moved to Harvard University for graduate study in theoretical physics. He received the Hertz Foundation Fellowship in 2005, through the same Hertz program that later recognized Anthropic chief executive Dario Amodei at its doctoral-thesis-prize tier in 2011 and 2012.
He completed his PhD at Harvard in 2009 under the supervision of Nima Arkani-Hamed, with a thesis titled "Aspects of holography." After Harvard he held postdoctoral positions at the Stanford Linear Accelerator Center and Stanford University before joining the Johns Hopkins faculty in 2012. The unusual feature of his subsequent career path is that the Hopkins appointment never lapsed: the move into AI in 2019 ran in parallel with the academic role rather than replacing it.
Career
Kaplan's early academic publication record is concentrated in particle physics, cosmology, and quantum gravity, with a particular focus on the AdS/CFT correspondence and conformal field theory. The 2011 paper "A Natural Language for AdS/CFT Correlators" with A. Liam Fitzpatrick, Joao Penedones, Suvrat Raju, and Balt C. van Rees in the Journal of High Energy Physics is among the most cited from this period. Through the early 2010s he received the Sloan Research Fellowship and the National Science Foundation CAREER Award (PHY-1454083) for his theoretical-physics research at Hopkins.
He began collaborating with researchers at OpenAI in 2019 on the question of how language-model loss scales with model size, dataset size, and training compute. The collaboration culminated in the January 2020 paper "Scaling Laws for Neural Language Models," with Kaplan as first author and a co-author list that includes Sam McCandlish, Tom Henighan, Tom Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. The paper established the empirical claim that test loss follows smooth power laws in the three quantities of model size, dataset size, and compute across seven orders of magnitude, and that architectural details mattered far less than the macro-scaling variables. The result became the empirical foundation for the GPT-3 training run that followed five months later, and it has been cited as the framework that informed much of subsequent industry-wide investment in frontier-scale training.
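The power-law relationships the paper reports can be written schematically as follows; the exponents are the approximate fitted values from the paper, while the normalization constants (N_c, D_c, C_c) are fitted scales and should be read as illustrative:

```latex
% Each law holds when the other two factors are not the bottleneck.
% L is test loss in nats; N is non-embedding parameters; D is dataset
% size in tokens; C_min is compute at the compute-efficient frontier.
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
L(C_{\min}) = \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C^{\min}}, \qquad \alpha_C^{\min} \approx 0.050
```

The small exponents are the practical point: loss falls smoothly but slowly, so each constant-factor improvement in loss requires a multiplicative increase in scale, which is the logic behind the frontier-training investments the paper is credited with informing.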
Kaplan was a co-author on "Language Models are Few-Shot Learners" (May 2020), the GPT-3 paper led by Tom Brown that documented the 175-billion-parameter language model and its few-shot prompting capabilities. He remained at OpenAI through 2020.
In December 2020 he left OpenAI alongside Dario Amodei, Daniela Amodei, Tom Brown, Sam McCandlish, Jack Clark, and Chris Olah and helped incorporate Anthropic as a Delaware Public Benefit Corporation in early 2021. Kaplan took the Chief Science Officer role at the new company, with continuing responsibility for research direction across the model line. The first public Claude model launched in March 2023, and Anthropic has shipped successive Claude generations through Claude Opus 4.7 in April 2026.
In October 2024, Anthropic announced that Kaplan would additionally serve as the company's Responsible Scaling Officer, the senior role responsible for safety assessments under the Responsible Scaling Policy before each model release. The dual title sits alongside continuing co-authorship on Anthropic interpretability and alignment papers, and the continuing associate-professor appointment at Johns Hopkins.
In 2024 Kaplan also took on an advisory role at Thinking Machines Lab, the AI company founded by former OpenAI Chief Technology Officer Mira Murati. The advisory role is unusual given his Anthropic Chief Science Officer position: Anthropic and Thinking Machines compete directly for senior research staff, and neither company has publicly addressed the structural implications of the dual relationship.
Affiliations
- Johns Hopkins University Department of Physics and Astronomy: Associate Professor, 2012 to present.
- OpenAI: Researcher, approximately 2019 to December 2020.
- Anthropic: Co-founder and Chief Science Officer, 2021 to present; additionally Responsible Scaling Officer, October 2024 to present.
- Thinking Machines Lab: Advisor, 2024 to present.
Notable contributions
Kaplan's body of work spans pre-AI theoretical physics and post-2019 AI research, with the latter concentrated on lead-author and senior-author credits at the foundation of the modern scaling-and-alignment paradigm.
- Pre-AI physics record (through approximately 2018). Published research in particle physics, cosmology, and quantum gravity, concentrated on the AdS/CFT correspondence and conformal field theory. Representative publication: "A Natural Language for AdS/CFT Correlators" (2011), Journal of High Energy Physics, with A. Liam Fitzpatrick, Joao Penedones, Suvrat Raju, and Balt C. van Rees.
- "Scaling Laws for Neural Language Models" (January 2020, arXiv 2001.08361). Kaplan-led OpenAI paper documenting that language-model test loss follows power laws in model size, dataset size, and compute across seven orders of magnitude. Co-authors include Sam McCandlish, Tom Henighan, Tom Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. The paper became the empirical underpinning for industrial frontier-model investment and has accumulated thousands of citations.
- "Language Models are Few-Shot Learners" (May 2020). The GPT-3 paper led by Tom Brown, with Kaplan among the named co-authors. The 175-billion-parameter language model established large-scale autoregressive pre-training as the dominant paradigm in language modeling.
- "Constitutional AI: Harmlessness from AI Feedback" (December 2022). Anthropic methodology paper introducing the alignment technique in which a model trains to follow a written constitution of principles via self-critique and revision. Kaplan is the final-listed senior author, with the lead author block carried by Yuntao Bai and a co-author list of more than fifty Anthropic researchers. Constitutional AI is the basis for the alignment posture in shipped Claude models.
- "Training a Helpful and Harmless Assistant with RLHF" (April 2022). Anthropic paper documenting the RLHF pipeline used in early Claude precursors, with Kaplan among the named co-authors.
- Anthropic Responsible Scaling Policy (September 2023, with revisions through 2025). Kaplan is the named Responsible Scaling Officer from October 2024, the senior role with formal authority to determine whether a model passes safety review before public release.
- Public-talk record. "Scaling and the Road to Human-Level AI" at the Y Combinator AI Startup School in San Francisco, June 2025; the TechCrunch Sessions: AI appearance on the future of AI agents (2025); a March 2022 talk at the New Technologies in Mathematics seminar on scaling laws and their implications for coding AI.
Investments and boards
- Anthropic (AI): Co-founder and Chief Science Officer, 2021 to present; Responsible Scaling Officer from October 2024. Public Benefit Corporation incorporated in Delaware. Approximately $73 billion cumulative funding through the February 2026 Series G at a $380 billion post-money valuation.
- Thinking Machines Lab (AI): Advisor, 2024 to present. AI company founded by Mira Murati. Approximately $2 billion seed round at a $12 billion valuation in July 2025, with a reported $50 billion follow-on under negotiation.
Kaplan has no public record of personal angel investing in AI, semiconductors, datacenters, software, or energy as of May 2026, beyond the Anthropic operating role and the Thinking Machines advisory role. His footprint in this section is concentrated in the founding role at Anthropic and the academic appointment at Johns Hopkins rather than a parallel investing program. Forbes has estimated his net worth at approximately $3.7 billion as of 2026, derived from his Anthropic equity position rather than a portfolio of disclosed transactions.
Network
Kaplan's longest-running professional relationships are with his six fellow Anthropic co-founders, all of whom he worked with at OpenAI before the 2021 founding: Dario Amodei, the chief executive officer; Daniela Amodei, the president; Tom Brown, the Chief Compute Officer and GPT-3 lead author; Sam McCandlish, the second-named author of the scaling-laws paper and a research-leadership co-founder; Jack Clark, the policy lead; and Chris Olah, the interpretability research lead. The seven-person founding cohort has been stable since 2021. The McCandlish collaboration is the closest of the research relationships in print: the two are the senior co-leads on the scaling-laws paper and have continued to co-author Anthropic research.
His pre-AI academic network is concentrated in the high-energy theory community. Doctoral advisor Nima Arkani-Hamed, now at the Institute for Advanced Study, supervised the 2009 Harvard thesis on holography. Frequent physics co-author A. Liam Fitzpatrick at Boston University was the closest collaborator on the AdS/CFT papers of the early 2010s.
The Thinking Machines Lab advisory role connects him to former OpenAI senior staff outside the Anthropic founding cohort, including Mira Murati, John Schulman, Barret Zoph, and Lilian Weng.
Position in the field
As of May 2026, Kaplan is most often cited in industry coverage as the lead author of the scaling-laws paper that informed the GPT-3 era and the broader frontier-training thesis adopted across OpenAI, Anthropic, Google DeepMind, xAI, and the Chinese frontier labs through 2025 and 2026. The first-author position on the January 2020 paper, combined with the Anthropic Chief Science Officer role and the Responsible Scaling Officer designation, places him among senior frontier-lab leaders with a paired research and operating record.
His career path is structurally distinctive among the Anthropic founding cohort. Where Tom Brown came in through eight years of consumer-and-developer-platform startup work and Chris Olah through self-directed research without an undergraduate degree, Kaplan came in through a full theoretical-physics doctoral program, postdoctoral training at SLAC and Stanford, and a tenure-track academic appointment at Johns Hopkins that runs continuously alongside the Anthropic role. The credential profile is closer to that of Dario Amodei, who completed a Princeton physics PhD before the OpenAI period.
The dual Johns Hopkins and Anthropic appointment is unusual at the frontier-lab senior-leadership tier; most peer chief science officers have departed academic appointments to take their industrial roles. The October 2024 Responsible Scaling Officer designation is similarly distinctive: the position has no exact analog at peer frontier labs, with the closest comparable function being the chief safety or alignment role that other labs split across multiple senior leaders. The combination of senior research leadership and named release-gating authority places the role closer to a regulatory-style remit than to a typical chief-science role.
Outlook
Open questions over the next 6 to 18 months:
- Successor scaling-laws results. Whether Kaplan or his Anthropic colleagues publish updated scaling-laws empirical work that revises or extends the original power-law thesis as compute, data, and training-recipe regimes move further from the 2020 era.
- Responsible Scaling Policy revisions. Future updates to the capability-threshold framework as Claude scales further and as competitor labs publish parallel policies. Kaplan's named authority over release decisions makes the cadence and substance of these revisions a watchable signal.
- Thinking Machines advisory role. Whether the dual Anthropic and Thinking Machines arrangement continues, is restructured, or ends as both companies scale through their next funding cycles and as Thinking Machines moves toward its first in-house model release in 2026.
- Johns Hopkins academic role. Whether the associate-professor appointment continues at the same level of engagement as Anthropic's footprint and Kaplan's Responsible Scaling Officer remit expand.
- Public commentary cadence. Whether the comparatively low podcast-and-keynote frequency of the post-2021 period continues or shifts as the Responsible Scaling Officer role becomes a more visible part of Anthropic's external posture.
- Successor Claude generations. The capability and safety profile of the next major Claude generation beyond the 4.x line, with Kaplan's Chief Science Officer organization as the operational counterpart to the engineering build-out led by Tom Brown.
Sources
- Jared Kaplan (physicist). Wikipedia biographical entry covering education, the Johns Hopkins faculty appointment, the OpenAI period, and the Anthropic founding.
- Jared Kaplan - LinkedIn. Kaplan's LinkedIn profile listing the Anthropic and Johns Hopkins roles.
- Scaling Laws for Neural Language Models. The January 2020 OpenAI scaling-laws paper led by Kaplan.
- Language Models are Few-Shot Learners. The May 2020 GPT-3 paper led by Tom Brown with Kaplan among the named co-authors.
- Constitutional AI: Harmlessness from AI Feedback. The December 2022 Anthropic Constitutional AI paper with Kaplan as the final-listed senior author.
- A Natural Language for AdS/CFT Correlators. The 2011 Journal of High Energy Physics paper representative of Kaplan's pre-AI publication record in quantum gravity and holography.
- Announcing our updated Responsible Scaling Policy. October 15, 2024 Anthropic announcement naming Kaplan as the company's Responsible Scaling Officer.
- Anthropic's Responsible Scaling Policy. The September 2023 policy framework defining capability thresholds.
- Scaling and the Road to Human-Level AI. Kaplan's June 2025 Y Combinator AI Startup School talk in San Francisco.
- Y Combinator Library: Scaling and the Road to Human-Level AI. Y Combinator's writeup of the June 2025 talk.
- Johns Hopkins Department of Physics and Astronomy. Kaplan's institutional affiliation since 2012.
- Hertz Foundation Fellowship. Record of the 2005 Hertz Fellowship awarded to Kaplan for his graduate study.
- Photo: Jared Kaplan (cropped).jpg, CC BY 2.0 Nicole Henderson / TechCrunch, June 2025.