Mark Chen

Mark Chen is the Chief Research Officer of OpenAI, the lead researcher behind DALL-E, Codex, and the visual perception capabilities of GPT-4, and a former quantitative trader who joined the lab in 2018.
Mark Chen is an American AI researcher who serves as Chief Research Officer of OpenAI. He led the teams that produced Image GPT, DALL-E, Codex, and the vision capabilities of GPT-4, and contributed to the Sora video model and the o-series reasoning program. As of May 2026, he runs OpenAI's research organization alongside Chief Scientist Jakub Pachocki, having been promoted from Senior Vice President of Research to Chief Research Officer in March 2025.

At a glance

  • Education: Bachelor's degree in mathematics with computer science, Massachusetts Institute of Technology (2012).
  • Current role: Chief Research Officer of OpenAI since March 2025, having previously been Senior Vice President of Research from September 2024 and earlier Head of Frontiers Research.
  • Key contributions: lead author of the 2020 Generative Pretraining from Pixels paper introducing Image GPT; lead of the DALL-E and DALL-E 2 image-generation teams; lead of Codex and lead author on the 2021 paper that introduced the HumanEval coding benchmark; co-lead of the GPT-4 vision program; co-inventor of Sora; and senior research direction for the o-series reasoning models.
  • Volunteer role: coach for the United States team at the International Olympiad in Informatics, serving as deputy leader in 2019 and as team leader in 2022 and 2024.
  • X / Twitter: @markchen90
  • LinkedIn: Mark Chen
  • Google Scholar: Mark Chen

Origins

Chen was born in 1990 to a family of Taiwanese heritage and was raised in the United States. He showed early aptitude in competitive mathematics and informatics and entered the Massachusetts Institute of Technology in 2008.

He completed a bachelor's degree in mathematics with computer science at MIT in May 2012. During his undergraduate years he spent the summer of 2011 as a visiting scholar at Harvard University. His competitive-mathematics background, shared by many senior staff in OpenAI's reasoning-research function, has remained a defining feature of his public profile and is reflected in his ongoing volunteer coaching of the United States team for the International Olympiad in Informatics.

Career

After graduating from MIT in 2012, Chen spent six years in quantitative trading in New York, working in research roles at Tech Square Trading from August 2012 to January 2016 and as a partner in quantitative research at Integral Technology LLC from July 2016 to August 2018. Public profiles also place him at Jane Street Capital, where he built machine-learning models for futures and equities trading. The period gave him hands-on experience applying large-scale machine learning to noisy real-world data before he moved into AI research.

In October 2018 Chen joined OpenAI as a research scientist. His earliest contributions included an early version of the model-parallel training strategy used for GPT-3, the 175-billion-parameter language model whose 2020 release defined the capability frontier of the era, and he is a named author on the Language Models are Few-Shot Learners paper.

In June 2020 Chen led the team that produced Image GPT, the first OpenAI model to apply the GPT pretraining objective to images. The accompanying paper, Generative Pretraining from Pixels, with co-authors including Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever, demonstrated that a transformer trained to autoregressively predict pixel sequences could learn strong visual representations without 2D inductive bias. The technique of encoding pixels as tokens analogous to words in a sentence became a foundation for the subsequent image-generation line.
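The pixels-as-tokens idea can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's code: Image GPT itself clustered RGB values into a 512-entry k-means palette, whereas this sketch simply quantizes mean intensity into a small vocabulary.

```python
def image_to_tokens(image, levels=8):
    """Quantize each pixel's mean RGB intensity into one of `levels`
    discrete tokens and flatten the 2D grid row-major into a 1D sequence.
    (Image GPT itself used a 512-entry k-means color palette.)"""
    return [
        int(sum(pixel) / 3 * levels // 256)
        for row in image
        for pixel in row
    ]

# A 2x2 "image" of RGB pixels.
img = [
    [(0, 0, 0), (255, 255, 255)],
    [(128, 128, 128), (30, 60, 90)],
]
seq = image_to_tokens(img)  # [0, 7, 4, 1]
# Training pairs are then formed exactly as in language modeling:
inputs, targets = seq[:-1], seq[1:]
```

Once the image is a flat token sequence, the standard GPT objective of predicting the next token applies unchanged, which is why no 2D inductive bias is required.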

In January 2021 he led the team that produced DALL-E, the first publicly named OpenAI text-to-image model, and in April 2022 the team that released DALL-E 2, the higher-resolution successor based on CLIP-conditioned diffusion. In August 2021 he led the development of Codex, the GPT model fine-tuned on GitHub source code that became the engine behind GitHub Copilot. The accompanying paper, Evaluating Large Language Models Trained on Code, introduced the HumanEval benchmark for functional correctness of generated code and listed Chen as lead author.
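The Evaluating Large Language Models Trained on Code paper defines HumanEval's headline metric, pass@k, via an unbiased combinatorial estimator: given n generated samples per problem of which c pass the unit tests, the probability that at least one of k drawn samples is correct. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex/HumanEval paper:
    probability that at least one of k samples drawn without replacement
    from n generated solutions (c of which pass the tests) is correct."""
    if n - c < k:
        return 1.0  # every size-k draw must include a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

pass_at_k(10, 3, 1)  # ~0.3: with 3/10 correct, a single draw succeeds 30% of the time
```

Averaging this quantity over the benchmark's problems gives the reported pass@k score.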

For the March 2023 release of GPT-4, Chen served as Vision team co-lead and Deployment lead, integrating image recognition into the multimodal flagship. He also contributed to Sora, the text-to-video model previewed in February 2024 and released publicly in December 2024, and to the o-series reasoning line beginning with o1-preview in September 2024. By 2024 his published title was Head of Frontiers Research, with multimodal modeling and reasoning as the named focus areas.

On September 25, 2024, in the same announcement that confirmed Mira Murati's departure as Chief Technology Officer, OpenAI promoted Chen to Senior Vice President of Research, pairing him with Chief Scientist Jakub Pachocki at the head of the research function. The role replaced the Chief Research Officer position previously held by Bob McGrew, who had departed the same week. On March 24, 2025, OpenAI's leadership-updates announcement elevated Chen to Chief Research Officer, with the stated mandate to drive scientific progress, push the frontier in capability and safety, and integrate research and product development.

Affiliations

  • Tech Square Trading: Quantitative research, August 2012 to January 2016.
  • Jane Street Capital: Quantitative trader, dates approximate, mid-2010s.
  • Integral Technology LLC: Partner, quantitative research, July 2016 to August 2018.
  • OpenAI: Research Scientist, October 2018 onward.
  • OpenAI: Head of Frontiers Research, through September 2024.
  • OpenAI: Senior Vice President of Research, September 2024 to March 2025.
  • OpenAI: Chief Research Officer, March 2025 to present.

Notable contributions

Chen's body of public work spans the image-generation, code-generation, and reasoning research lines at OpenAI through the post-GPT-3 era.

  • GPT-3 (June 2020). Named author on the 175-billion-parameter language-model paper that introduced the in-context-learning paradigm. Chen contributed to an early version of the model-parallel training strategy used for the model.
  • Image GPT (June 2020). Lead author on the paper that adapted the GPT pretraining objective to images by autoregressively predicting pixel sequences. The paper has been cited as the first major demonstration that the language-model architecture can learn high-quality visual representations.
  • DALL-E (January 2021). Team lead on the original text-to-image model. Chen is among the named authors of the Zero-Shot Text-to-Image Generation paper.
  • Codex and the HumanEval benchmark (August 2021). Lead author on Evaluating Large Language Models Trained on Code, the paper introducing the GPT-fine-tuned-on-code line that became the engine behind GitHub Copilot, and the HumanEval evaluation that has become a standard for measuring code-generation correctness.
  • DALL-E 2 (April 2022). Team lead on the second-generation image model. Chen is a named author of the Hierarchical Text-Conditional Image Generation with CLIP Latents paper.
  • GPT-4 (March 2023). Vision team co-lead and Deployment lead for the multimodal flagship, per OpenAI's GPT-4 contributors page. Chen is among the credited authors of the GPT-4 Technical Report.
  • Sora (February 2024 preview, December 2024 launch). Contributor to the text-to-video research that produced OpenAI's first photorealistic video-generation model. The MIT Technology Review innovator profile credits Chen with co-invention.
  • o-series reasoning research (2024 onward). Senior research direction for the line of models that introduced reinforcement learning on chain-of-thought reasoning, beginning with o1-preview in September 2024 and continuing through o3 in early 2025.
  • MIT Technology Review Innovators Under 35 (2025). Named to the Innovators Under 35 list of the most consequential technologists of his generation.
  • International Olympiad in Informatics coaching. Volunteer coach for the United States team, serving as deputy leader in 2019 and as team leader in 2022 and 2024.

Investments and boards

As of May 2026, there is no public record of personal investing activity by Chen in AI, semiconductors, datacenters, software, or energy. His external footprint outside OpenAI is concentrated in his pro-bono coaching of the United States International Olympiad in Informatics team rather than in a parallel investing or board program.

Network

Chen's primary professional network is the OpenAI senior staff cohort with whom he has worked since 2018. His closest current peer is Jakub Pachocki, the Chief Scientist with whom he shares the twin-leadership structure of OpenAI's research function. The September 2024 X announcement of his SVP appointment, posted alongside Pachocki's promotion, reflects the public framing of the pair as research co-leads.

His broader OpenAI peer cohort includes Sam Altman, the chief executive who promoted him; Greg Brockman, the president; Ilya Sutskever, the former Chief Scientist who departed in May 2024 to found Safe Superintelligence; and Wojciech Zaremba, the OpenAI co-founder who has remained at the lab. The 2024 senior-departure cohort included Mira Murati, now at Thinking Machines Lab; John Schulman, now at Anthropic; Lilian Weng; Barret Zoph; and Bob McGrew, his immediate predecessor in the Chief Research Officer role. Liam Fedus departed in March 2025 to co-found Periodic Labs, and Andrej Karpathy, one of the OpenAI co-founders, departed in February 2024 to found Eureka Labs.

His co-author network on flagship OpenAI models includes Alec Radford, Prafulla Dhariwal, Jeffrey Wu, Rewon Child, Heewoo Jun, David Luan, and Ilya Sutskever, all named co-authors on the Image GPT, DALL-E, Codex, and GPT-4 papers.

Position in the field

As of May 2026, Chen is one of two named research leaders at OpenAI, sharing the senior-research function with Chief Scientist Jakub Pachocki since September 2024. The MIT Technology Review July 2025 profile of the pair characterized the division of labor as Pachocki setting long-term technical direction and roadmap while Chen runs research operations and manages the immediate company research-program needs. His own framing in interviews is that the two roles overlap with substantial fluidity, with both leaders pulling on technical research threads as needed.

His public profile is more product-and-launch oriented than his predecessor Bob McGrew or his peer Pachocki. The DALL-E and DALL-E 2 launches, the Codex release, the GPT-4 vision-capability rollout, and the Sora preview are flagship visual-AI events of the post-2020 era, and Chen has been the named research lead on each. Industry coverage including the February 2025 Big Technology Podcast on GPT-4.5 scaling, the July 2025 MIT Technology Review profile, the September 2025 a16z conversation on vibe coding and research, and the December 2025 Core Memory podcast frames him as the operational and external-facing voice of OpenAI's research function.

The competitive-mathematics heritage shared with Pachocki (whose background is the International Olympiad in Informatics and the ACM-ICPC) is unusual among frontier-lab research leaders. Chen's continued volunteer coaching of the United States International Olympiad in Informatics team is publicly cited as an active connection between his role and the broader competitive-programming pipeline.

Outlook

Open questions over the next 6 to 18 months:

  • Reasoning-and-multimodal integration. Whether the o-series reasoning capabilities and the visual-AI line that Chen has historically led converge into a unified multimodal frontier model, and whether the GPT-5.x family continues to combine the two paradigms.
  • Sora-line trajectory. Whether OpenAI's video-generation roadmap, against competition from Runway Gen-4, Kuaishou Kling, and Google Veo, sustains the pace of capability gains shown between Sora 1 and Sora 2.
  • Codex and coding agents. Whether Codex evolves from a model into a full agentic coding product and whether OpenAI's coding-agent line maintains capability parity with Anthropic's Claude Code and Google's Jules.
  • Twin-leadership stability. Whether the structural pairing with Jakub Pachocki continues unchanged, or whether further senior departures require restructuring the research function.
  • Public commentary cadence. Whether the February 2025 Big Technology podcast, the September 2025 a16z conversation, and the December 2025 Core Memory podcast mark a sustained shift toward a higher public-visibility cadence consistent with the Chief Research Officer role.
  • Long-term role at OpenAI. Whether Chen remains Chief Research Officer through the GPT-6 cycle and the corporate restructuring that has continued through 2025 and 2026.

About the author
Nextomoro

nextomoro tracks progress for AI research labs, models, and what's next.

AI Research Lab Intelligence
