Barret Zoph
Barret Zoph is an American AI researcher and engineering leader. He is the lead author of the 2017 paper "Neural Architecture Search with Reinforcement Learning," widely cited as the foundational publication of the neural-architecture-search subfield. He was previously co-founder and Chief Technology Officer of Thinking Machines Lab and Vice President of Research (Post-Training) at OpenAI. As of May 2026, he leads applied post-training at OpenAI under Fidji Simo, the company's CEO of Applications, having returned to OpenAI on January 14, 2026 after departing Thinking Machines Lab.
At a glance
- Education: Bachelor of Science in computer science, University of Southern California (2016), with undergraduate research at the USC Information Sciences Institute under Kevin Knight.
- Current role: Vice President at OpenAI reporting to Fidji Simo, since January 2026.
- Key contributions: lead author of "Neural Architecture Search with Reinforcement Learning" (ICLR 2017) and "Learning Transferable Architectures for Scalable Image Recognition" (CVPR 2018, the NASNet paper); co-author of "Switch Transformers" (2021), "GLaM" (2021), and the AutoAugment / RandAugment / SpecAugment family; co-lead of the OpenAI post-training program through ChatGPT and GPT-4.
- X / Twitter: @barret_zoph
- LinkedIn: Barret Zoph
- Personal site: barretzoph.github.io
- Google Scholar: Barret Zoph
Origins
Public biographical material on Zoph is comparatively thin. He has no Wikipedia entry as of May 2026, and the available record runs through his personal site, the 2024 USC alumni feature, and OpenAI and Thinking Machines coverage of the 2024 to 2026 period.
Zoph completed a Bachelor of Science in computer science at the University of Southern California in 2016. As an undergraduate he conducted research at the USC Information Sciences Institute (ISI) in Marina del Rey, working with Kevin Knight and Daniel Marcu on statistical and neural machine translation. The 2024 ISI alumni feature describes Knight as Zoph's research advisor. The ISI period produced his first published papers, including "Multi-Source Neural Translation" (NAACL 2016) and "Transfer Learning for Low-Resource Neural Machine Translation" (EMNLP 2016) with Deniz Yuret and Knight.
Career
After his 2016 USC graduation, Zoph joined Google Brain through the Google Brain Residency program, working under Quoc Le and Jeff Dean. His first major publication, "Neural Architecture Search with Reinforcement Learning," was uploaded to arXiv in November 2016 with Zoph as lead author and Quoc Le as second author and was presented at ICLR 2017. The paper introduced an approach in which a recurrent network generates neural-network architecture descriptions and is trained with reinforcement learning to maximize validation accuracy on a target task; it is widely cited as the foundational publication of the neural-architecture-search subfield.
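The loop the paper describes can be illustrated with a toy sketch: a controller samples architecture choices, each sampled architecture is scored by validation accuracy, and the controller is updated by REINFORCE to favor higher-reward choices. Everything below is an illustrative stand-in (the choice space, the reward function, and all names are assumptions, not the paper's implementation, which used a recurrent controller and real child-network training):

```python
import math
import random

# Toy NAS-with-RL loop: the "controller" is a set of categorical logits,
# the "validation accuracy" is a fake stand-in, and the update rule is
# plain REINFORCE with a moving-average baseline.

CHOICES = {"filters": [32, 64, 128], "kernel": [3, 5, 7]}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample(logits):
    probs = softmax(logits)
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def toy_val_accuracy(arch):
    # Stand-in for "train the child network, measure validation accuracy":
    # pretends larger filter counts and kernel size 5 work best.
    return (0.5 + 0.1 * CHOICES["filters"].index(arch["filters"])
            + (0.2 if arch["kernel"] == 5 else 0.0))

def train_controller(steps=3000, lr=0.1, seed=0):
    random.seed(seed)
    logits = {k: [0.0] * len(v) for k, v in CHOICES.items()}
    baseline = 0.0
    for _ in range(steps):
        idx = {k: sample(logits[k]) for k in CHOICES}
        arch = {k: CHOICES[k][idx[k]] for k in CHOICES}
        reward = toy_val_accuracy(arch)
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
        adv = reward - baseline
        for k in CHOICES:  # REINFORCE: advantage * grad log-prob
            probs = softmax(logits[k])
            for i in range(len(probs)):
                grad = (1.0 if i == idx[k] else 0.0) - probs[i]
                logits[k][i] += lr * adv * grad
    # Return the highest-logit choice per decision.
    return {k: CHOICES[k][max(range(len(v)), key=v.__getitem__)]
            for k, v in logits.items()}

best = train_controller()
```

On this fake reward landscape the controller's logits drift toward the highest-accuracy architecture; the paper's actual controller was an RNN emitting a full architecture description token by token, trained on real child-network validation accuracy.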
Through 2017 and 2018, Zoph extended the NAS line as lead author of "Learning Transferable Architectures for Scalable Image Recognition" (CVPR 2018, the NASNet paper) and as a senior co-author on "Efficient Neural Architecture Search via Parameter Sharing" (ICML 2018) and "Progressive Neural Architecture Search" (ECCV 2018). The 2019 to 2022 period broadened his record into data augmentation, sparse mixture-of-experts models, and computer-vision scaling: "AutoAugment" (CVPR 2019), "RandAugment" (NeurIPS 2020), "SpecAugment" (Interspeech 2019), "Switch Transformers" (2021, with William Fedus as lead author), "GLaM" (2021), and "ST-MoE" (2022). At Google Brain he served as a research team lead on the MUM Search initiative and on AutoML, and his personal-site bio lists his final title there as Staff Research Scientist with a focus on sparse language models.
In September 2022, Zoph joined OpenAI as a senior researcher. According to his September 2024 farewell post on X, he joined "right before ChatGPT" and helped build the OpenAI post-training team from scratch alongside John Schulman and others. He rose to Vice President of Research with a remit covering post-training, alignment, tool use, evaluations, ChatGPT, search, and multimodality. The OpenAI GPT-4 contributions page lists him among the contributors to flagship training runs, dataset construction, ChatML format, model safety, and refusals work, and industry coverage has identified him as a co-lead of GPT-4 post-training alongside Schulman.
On September 25, 2024, Zoph announced his departure from OpenAI in a post on X, saying it "felt like a natural point" to "explore new opportunities outside of OpenAI." His announcement came within hours of Mira Murati's own departure announcement and Chief Research Officer Bob McGrew's exit the same day. Reuters and TechCrunch coverage on September 25 and 26, 2024 framed the three departures as the most senior research-and-product cohort to leave OpenAI in the post-2023 wave.
In late 2024, Zoph joined Mira Murati's Thinking Machines Lab as a co-founder and Chief Technology Officer. Thinking Machines launched publicly in February 2025. Zoph participated in the Stanford HAI talk "ChatGPT and the Art of Post-Training" with John Schulman in February 2025; the talk was not recorded, but the slide deck was published.
On January 14, 2026, OpenAI's CEO of Applications Fidji Simo announced that Zoph would return to OpenAI alongside Luke Metz, also a Thinking Machines co-founder, and Sam Schoenholz, a former Thinking Machines researcher. The announcement followed Murati's communication to Thinking Machines staff that Zoph's employment had been terminated for "unethical conduct." Coverage in Wired, Fortune, and TechCrunch reported that Murati's leadership cited a workplace-relationship matter and concerns about confidential-information sharing, while Simo's internal memo at OpenAI stated that OpenAI did not share those concerns. Zoph's new role reports directly to Simo on the applications side rather than to research leadership.
Affiliations
- USC Information Sciences Institute: Undergraduate researcher under Kevin Knight, through 2016.
- Google Brain: Google Brain Residency, then research scientist and Staff Research Scientist on AutoML, Neural Architecture Search, and sparse language models, 2016 to 2022.
- OpenAI: Research scientist, then Vice President of Research (Post-Training), September 2022 to September 2024.
- Thinking Machines Lab: Co-founder and Chief Technology Officer, late 2024 to January 2026.
- OpenAI: Vice President reporting to CEO of Applications Fidji Simo, January 2026 to present.
Notable contributions
Zoph's body of work spans neural architecture search, data-augmentation methodology, sparse mixture-of-experts models, and the OpenAI post-training program that produced ChatGPT and GPT-4. His Google Scholar profile shows a citation count above 80,000 as of May 2026.
- "Neural Architecture Search with Reinforcement Learning" (November 2016, ICLR 2017). Lead author with Quoc Le. Introduced the use of reinforcement learning to train a recurrent generator that produces neural-network architecture descriptions, with validation-set accuracy as the reward. Widely cited as the foundational paper of the NAS subfield.
- "Learning Transferable Architectures for Scalable Image Recognition" (CVPR 2018, the NASNet paper). Lead author. Introduced the NASNet search space and demonstrated transfer of architectures discovered on a small dataset to larger image-classification tasks.
- "Switch Transformers: Scaling to Trillion Parameter Models" (January 2021). Co-author with William Fedus as lead author and Noam Shazeer. Simplified mixture-of-experts routing into a top-1 procedure that enabled the first published trillion-parameter language models.
- "GLaM: Efficient Scaling of Language Models with Mixture-of-Experts" (December 2021) and "ST-MoE" (February 2022). Google sparse language-model papers; GLaM scaled to 1.2 trillion parameters, and ST-MoE analyzed stability and transfer learning of the Switch Transformer line.
- Data-augmentation family. "AutoAugment" (CVPR 2019), "RandAugment" (NeurIPS 2020), and "SpecAugment" (Interspeech 2019), the data-augmentation papers that became default training-pipeline components for image classification, object detection, and automatic speech recognition.
- "Multi-Source Neural Translation" (NAACL 2016) and "Transfer Learning for Low-Resource Neural Machine Translation" (EMNLP 2016). Lead-author USC ISI papers under Kevin Knight.
- OpenAI post-training program (2022 to 2024). Co-lead with John Schulman of the team that built and trained the models shipped into ChatGPT, the API, and successive GPT-4 generations. Listed on the GPT-4 contributions page for flagship training runs, ChatML format, model safety, refusals, and dataset work.
- "ChatGPT and the Art of Post-Training" (Stanford HAI, February 2025). Joint talk with John Schulman; the talk was not recorded, but the slide deck is publicly available.
- Public-talk record. "Sparse Expert Models with the Authors" on Yannic Kilcher's channel in 2022 with William Fedus; "Switch Transformers" Stanford CS25 lecture in 2022; "Neural Architecture Search and Beyond" opening keynote at the 2019 Neural Architects Workshop, ICCV 2019.
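The top-1 routing simplification named in the Switch Transformers entry above can be sketched in a few lines: each token's router logits are softmaxed, only the single highest-probability expert runs, and that expert's output is scaled by the router probability (which keeps the routing decision differentiable in a real implementation). The sketch below is illustrative only; expert functions, shapes, and names are assumptions, not the paper's code:

```python
import math

# Toy Switch-style top-1 routing: one expert per token, output gated by
# the router probability. Experts here are stand-in functions on a
# 2-dimensional "token" vector.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def switch_route(token, router_weights, experts):
    # Router logits: one score per expert for this token.
    logits = [sum(w * x for w, x in zip(row, token)) for row in router_weights]
    probs = softmax(logits)
    k = max(range(len(probs)), key=probs.__getitem__)  # top-1 expert
    # Only expert k runs; its output is scaled by the router probability.
    return k, [probs[k] * y for y in experts[k](token)]

experts = [
    lambda t: [2.0 * x for x in t],  # expert 0: doubles the token
    lambda t: [x + 1.0 for x in t],  # expert 1: shifts the token
]
router_weights = [[1.0, 0.0], [0.0, 1.0]]  # 2 experts, 2-dim tokens

k, out = switch_route([3.0, 0.0], router_weights, experts)
```

Because only one expert executes per token, compute per token stays roughly constant as the expert count (and hence parameter count) grows, which is what enabled the trillion-parameter scale described above; the paper pairs this with a load-balancing loss omitted from the sketch.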
Investments and boards
There is no public record of personal investing activity in AI, semiconductors, datacenters, software, or energy as of May 2026. Zoph's footprint is concentrated in his research and operating roles at Google Brain, OpenAI, and Thinking Machines Lab rather than in a parallel investing program.
Network
Zoph's longest-running professional relationships fall in three cohorts. The first is the USC ISI cohort under Kevin Knight, his undergraduate research advisor and co-author on the 2016 neural-machine-translation papers, with Daniel Marcu and Deniz Yuret as additional collaborators. The second is the Google Brain cohort under Quoc Le and Jeff Dean, where the Neural Architecture Search and Switch Transformer papers were produced; co-authors William Fedus, Noam Shazeer, Jonathon Shlens, and Vijay Vasudevan are the closest of these research relationships in print.
The third cohort is OpenAI and the Thinking Machines Lab founding team. John Schulman, the OpenAI co-founder and reinforcement-learning research leader, was Zoph's most material collaborator at OpenAI through the post-training program from 2022 to 2024 and his co-founder colleague at Thinking Machines from late 2024. Mira Murati, formerly Chief Technology Officer at OpenAI and now Chief Executive Officer of Thinking Machines, recruited him into the founding team. Other Thinking Machines co-founders include Lilian Weng (formerly OpenAI's Vice President of Safety Systems), Andrew Tulloch, and Luke Metz, all former OpenAI senior staff. Bob McGrew, formerly OpenAI's Chief Research Officer who departed alongside Murati and Zoph in September 2024, advises Thinking Machines, and Jared Kaplan of Anthropic advises the lab. The January 2026 split with Murati's leadership over the circumstances of Zoph's departure makes the future of these relationships an open question; Luke Metz returned to OpenAI alongside Zoph. His broader OpenAI peers include Sam Altman, Greg Brockman, Ilya Sutskever, and Andrej Karpathy, and his current reporting line is to Fidji Simo.
Position in the field
As of May 2026, Zoph occupies a structurally distinctive position. Lead authorship of the foundational Neural Architecture Search paper places him among the small group of researchers whose named contribution is core methodology with cross-lab adoption. Lead authorship of NASNet, plus senior-author positions on Switch Transformers, GLaM, and ST-MoE, establishes a sustained record across architecture search and sparse language modeling. The data-augmentation family of papers is a separately influential thread.
The OpenAI post-training co-lead role from 2022 through 2024 places him in an even smaller group. The post-training program he led with John Schulman produced the methodology behind ChatGPT, GPT-4, and successor consumer-facing models, and was the proximate driver of the 2023 to 2024 frontier-AI consumer adoption phase. He is one of a small group of senior OpenAI executives whose authorship credits span pre-OpenAI architecture-search foundations and the post-training methodology behind the company's flagship product line.
The January 2026 departure from Thinking Machines Lab is the most unusual entry in his record. The split with Mira Murati's leadership over the circumstances of his exit, the public disagreement between Thinking Machines leadership and OpenAI leadership over the framing of the events, and the immediate return to OpenAI under Fidji Simo on the applications side rather than research leadership are all without close precedent among senior frontier-lab figures. The events drew sustained press coverage in Wired, Fortune, TechCrunch, and The Information.
His public-commentary cadence has historically been low. The X account @barret_zoph is used at low frequency, and the Yannic Kilcher and Stanford CS25 appearances on Switch Transformers in 2022, the Stanford HAI talk in February 2025, and a small number of conference keynotes are the principal video record.
Outlook
Open questions over the next 6 to 18 months:
- OpenAI applications scope. Whether the role under Fidji Simo grows from the narrower applications-side post-training mandate into a broader research-and-applications remit.
- Public commentary on the January 2026 departure. Whether Zoph publicly addresses the misconduct framing offered by Thinking Machines leadership and the contrasting position taken by OpenAI in the internal memo to staff.
- Research output cadence. Whether the new role at OpenAI produces named-author papers at the cadence of the 2016 to 2022 Google Brain period or the lower-cadence 2022 to 2024 OpenAI period.
- Network with Thinking Machines. The future of the John Schulman, Mira Murati, and Lilian Weng relationships as Thinking Machines moves into in-house model releases through 2026.
- Talent flow back to OpenAI. Whether Zoph's return alongside Luke Metz and Sam Schoenholz signals a wider reverse-flow from Thinking Machines to OpenAI in the period that follows.
Sources
- Barret Zoph (personal site). Personal site listing his Google Brain, OpenAI, and Thinking Machines roles, publications, and talk record.
- Barret Zoph - LinkedIn. LinkedIn profile listing the Thinking Machines Lab co-founder and CTO role and prior senior positions at OpenAI and Google Brain.
- Barret Zoph - Google Scholar. Google Scholar profile listing publications and citation counts.
- Neural Architecture Search with Reinforcement Learning. The November 2016 / ICLR 2017 paper with Zoph as lead author and Quoc Le as second author.
- Learning Transferable Architectures for Scalable Image Recognition. The CVPR 2018 NASNet paper with Zoph as lead author.
- Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. The January 2021 paper with William Fedus as lead author and Zoph as second author.
- GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. The December 2021 Google sparse language-model paper.
- ST-MoE: Designing Stable and Transferable Sparse Expert Models. The February 2022 follow-up on Switch Transformer stability.
- AutoAugment: Learning Augmentation Policies from Data. The CVPR 2019 data-augmentation paper.
- RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. The NeurIPS 2020 data-augmentation paper.
- SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition. The Interspeech 2019 speech-recognition paper.
- Multi-Source Neural Translation. The NAACL 2016 ISI paper with Zoph as lead author and Kevin Knight as second author.
- Attention is All You Need: USC Alumni Paved Path for ChatGPT. USC ISI alumni feature on Zoph's USC and Google Brain trajectory and his September 2022 OpenAI start.
- GPT-4 contributions. OpenAI page listing Zoph among the contributors to flagship training runs, ChatML format, model safety, and refusals.
- Barret Zoph on X: I posted this note to OpenAI. The September 25, 2024 farewell post.
- OpenAI's chief research officer has left following CTO Mira Murati's exit. TechCrunch coverage of the September 25, 2024 OpenAI departures.
- Mira Murati's startup, Thinking Machines Lab, is losing two of its co-founders to OpenAI. TechCrunch coverage of the January 14, 2026 departure of Zoph and Luke Metz.
- Former OpenAI CTO Mira Murati's AI startup Thinking Machines suffers wave of defections. Fortune coverage of the January 2026 Thinking Machines departures.
- Sparse Expert Models (w/ the Authors). Yannic Kilcher's 2022 video conversation with Zoph and William Fedus on sparse expert models.
- Switch Transformers: Scaling to Trillion Parameter Models. Stanford CS25 lecture by Zoph on Switch Transformers, 2022.
- ChatGPT and the Art of Post-Training (slide deck). Slide deck for the February 2025 Stanford HAI talk by Zoph and John Schulman.
- Feature image: text-mode card generated via scripts/make_lab_card.py, used as a fallback because no Wikipedia portrait, lab press-kit headshot, or other credit-cleared photograph of Zoph was located in May 2026.