Barret Zoph

Barret Zoph is an American AI researcher and engineering leader. He is the lead author of the 2017 paper "Neural Architecture Search with Reinforcement Learning," widely cited as the foundational publication of the neural-architecture-search subfield, and was previously co-founder and Chief Technology Officer of Thinking Machines Lab and Vice President of Research (Post-Training) at OpenAI. As of May 2026, he leads applied post-training at OpenAI under Fidji Simo, the company's CEO of Applications, having returned to OpenAI on January 14, 2026 following his departure from Thinking Machines Lab.

Origins

Public biographical material on Zoph is comparatively thin. He has no Wikipedia entry as of May 2026, and the available record is limited to his personal site, a 2024 USC alumni feature, and OpenAI and Thinking Machines coverage from 2024 to 2026.

Zoph completed a Bachelor of Science in computer science at the University of Southern California in 2016. As an undergraduate he conducted research at the USC Information Sciences Institute (ISI) in Marina del Rey, working with Kevin Knight and Daniel Marcu on statistical and neural machine translation. The 2024 ISI alumni feature describes Knight as Zoph's research advisor. The ISI period produced his first published papers, including "Multi-Source Neural Translation" (NAACL 2016) and "Transfer Learning for Low-Resource Neural Machine Translation" (EMNLP 2016) with Deniz Yuret and Knight.

Career

After his 2016 USC graduation, Zoph joined Google Brain through the Google Brain Residency program, working under Quoc Le and Jeff Dean. His first major publication, "Neural Architecture Search with Reinforcement Learning," was uploaded to arXiv in November 2016 with Zoph as lead author and Quoc Le as second author and was presented at ICLR 2017. The paper introduced an approach in which a recurrent network generates neural-network architecture descriptions and is trained with reinforcement learning to maximize validation accuracy on a target task; it is widely cited as the foundational publication of the neural-architecture-search subfield.

Through 2017 and 2018, Zoph extended the NAS line as lead author of "Learning Transferable Architectures for Scalable Image Recognition" (CVPR 2018, the NASNet paper) and as a senior co-author on "Efficient Neural Architecture Search via Parameter Sharing" (ICML 2018) and "Progressive Neural Architecture Search" (ECCV 2018). The 2019 to 2022 period broadened his record into data augmentation, sparse mixture-of-experts models, and computer-vision scaling: "AutoAugment" (CVPR 2019), "RandAugment" (NeurIPS 2020), "SpecAugment" (Interspeech 2019), "Switch Transformers" (2021, with William Fedus as lead author), "GLaM" (2021), and "ST-MoE" (2022). At Google Brain he served as a research team lead on the MUM Search initiative and on AutoML, and his personal-site bio describes the latter period as Staff Research Scientist with a focus on sparse language models.

In September 2022, Zoph joined OpenAI as a senior researcher. According to his September 2024 farewell post on X, he joined "right before ChatGPT" and helped build the OpenAI post-training team from scratch alongside John Schulman and others. He rose to Vice President of Research with a remit covering post-training, alignment, tool use, evaluations, ChatGPT, search, and multimodality. The OpenAI GPT-4 contributions page lists him among the contributors to flagship training runs, dataset construction, ChatML format, model safety, and refusals work, and industry coverage has identified him as a co-lead of GPT-4 post-training alongside Schulman.

On September 25, 2024, Zoph announced his departure from OpenAI in a post on X, saying it "felt like a natural point" to "explore new opportunities outside of OpenAI." His departure occurred within hours of Mira Murati's OpenAI departure announcement and Chief Research Officer Bob McGrew's departure the same day. Reuters and TechCrunch coverage on September 25 and 26, 2024 placed the three departures together as the most senior research-and-product cohort to leave OpenAI in the post-2023 wave.

In late 2024, Zoph joined Mira Murati's Thinking Machines Lab as a co-founder and Chief Technology Officer. Thinking Machines launched publicly in February 2025. Zoph participated in the Stanford HAI talk "ChatGPT and the Art of Post-Training" with John Schulman in February 2025; the talk was not recorded, but the slide deck was published.

On January 14, 2026, OpenAI's CEO of Applications Fidji Simo announced that Zoph would return to OpenAI alongside Luke Metz, also a Thinking Machines co-founder, and Sam Schoenholz, a former Thinking Machines researcher. The announcement followed Murati's communication to Thinking Machines staff that Zoph's employment had been terminated for "unethical conduct." Coverage in Wired, Fortune, and TechCrunch reported that Murati's leadership cited a workplace-relationship matter and concerns about confidential-information sharing, while Simo's internal memo at OpenAI stated that OpenAI did not share those concerns. Zoph's new role reports directly to Simo on the applications side rather than to research leadership.

Affiliations

  • USC Information Sciences Institute: Undergraduate researcher under Kevin Knight, through 2016.
  • Google Brain: Google Brain Residency, then Research Scientist and Staff Research Scientist on AutoML, neural architecture search, and sparse language models, 2016 to 2022.
  • OpenAI: Research scientist, then Vice President of Research (Post-Training), September 2022 to September 2024.
  • Thinking Machines Lab: Co-founder and Chief Technology Officer, late 2024 to January 2026.
  • OpenAI: Vice President reporting to CEO of Applications Fidji Simo, January 2026 to present.

Notable contributions

Zoph's body of work spans neural architecture search, data-augmentation methodology, sparse mixture-of-experts models, and the OpenAI post-training program that produced ChatGPT and GPT-4. His Google Scholar profile lists citations above 80,000 as of May 2026.

Investments and boards

No public personal investor activity is on record in AI, semiconductors, datacenters, software, or energy as of May 2026. Zoph's footprint is concentrated in his research and operating roles at Google Brain, OpenAI, and Thinking Machines Lab rather than in a parallel investing program.

Network

Zoph's longest-running professional relationships fall in three cohorts. The first is the USC ISI cohort under Kevin Knight, his undergraduate research advisor and co-author on the 2016 neural-machine-translation papers, with Daniel Marcu and Deniz Yuret as additional collaborators. The second is the Google Brain cohort under Quoc Le and Jeff Dean, where the Neural Architecture Search and Switch Transformer papers were produced; co-authors William Fedus, Noam Shazeer, Jonathon Shlens, and Vijay Vasudevan are the closest of these research relationships in print.

The third cohort is OpenAI and the Thinking Machines Lab founding team. John Schulman, the OpenAI co-founder and reinforcement-learning research leader, was Zoph's most material collaborator at OpenAI through the post-training program from 2022 to 2024 and his co-founder colleague at Thinking Machines from late 2024. Mira Murati, formerly Chief Technology Officer at OpenAI and now Chief Executive Officer of Thinking Machines, recruited him into the founding team. Other Thinking Machines co-founders include Lilian Weng (formerly OpenAI's Vice President of Safety Systems), Andrew Tulloch, and Luke Metz, all former OpenAI senior staff. Bob McGrew, formerly OpenAI's Chief Research Officer who departed alongside Murati and Zoph in September 2024, advises Thinking Machines, and Jared Kaplan of Anthropic advises the lab. The January 2026 split with Murati's leadership over the circumstances of Zoph's departure makes the future of these relationships an open question; Luke Metz returned to OpenAI alongside Zoph. His broader OpenAI peers include Sam Altman, Greg Brockman, Ilya Sutskever, and Andrej Karpathy, and his current reporting line is to Fidji Simo.

Position in the field

As of May 2026, Zoph occupies a structurally distinctive position. Lead authorship of the foundational Neural Architecture Search paper places him among the small group of researchers whose named contribution is core methodology with cross-lab adoption, and lead authorship of NASNet, together with senior-author positions on Switch Transformers, GLaM, and ST-MoE, establishes a sustained record across architecture search and sparse language modeling. The data-augmentation family of papers is a separately influential thread.

The OpenAI post-training co-lead role from 2022 through 2024 places him in a smaller group again. The post-training program he led with John Schulman is the methodology that produced ChatGPT, GPT-4, and successor consumer-facing models, and is the proximate cause of the 2023 to 2024 frontier-AI consumer adoption phase. He is one of a small group of senior OpenAI executives whose authorship credits span pre-OpenAI architecture-search foundations and the post-training methodology behind the company's flagship product line.

The January 2026 departure from Thinking Machines Lab is the most unusual entry in his record. The split with Mira Murati's leadership over the circumstances of his exit, the public disagreement between Thinking Machines leadership and OpenAI leadership over the framing of the events, and the immediate return to OpenAI under Fidji Simo on the applications side rather than research leadership are all without close precedent among senior frontier-lab figures. The events drew sustained press coverage in Wired, Fortune, TechCrunch, and The Information.

His public-commentary cadence has historically been low. The X account @barret_zoph is used at low frequency, and the Yannic Kilcher and Stanford CS25 appearances on Switch Transformers in 2022, the Stanford HAI talk in February 2025, and a small number of conference keynotes are the principal video record.

Outlook

Open questions over the next 6 to 18 months:

  • OpenAI applications scope. Whether the role under Fidji Simo grows from the narrower applications-side post-training mandate into a broader research-and-applications remit.
  • Public commentary on the January 2026 departure. Whether Zoph publicly addresses the misconduct framing offered by Thinking Machines leadership and the contrasting position taken by OpenAI in the internal memo to staff.
  • Research output cadence. Whether the new role at OpenAI produces named-author papers at the cadence of the 2016 to 2022 Google Brain period or the lower-cadence 2022 to 2024 OpenAI period.
  • Network with Thinking Machines. The future of the John Schulman, Mira Murati, and Lilian Weng relationships as Thinking Machines moves into in-house model releases through 2026.
  • Talent flow back to OpenAI. Whether Zoph's return alongside Luke Metz and Sam Schoenholz signals a wider reverse-flow from Thinking Machines to OpenAI in the period that follows.

About the author
Nextomoro

AI Research Lab Intelligence

nextomoro tracks progress for AI research labs, models, and what's next.