Lilian Weng

Lilian Weng is a co-founder of Thinking Machines Lab, formerly Vice President of Research, Safety Systems at OpenAI, and the author of the Lil'Log technical blog widely read in the machine-learning community.
Lilian Weng is a Chinese-American AI researcher and engineering leader. She is a co-founder of Thinking Machines Lab and was previously Vice President of Research, Safety Systems at OpenAI, where, during a tenure from February 2018 to November 2024, she founded and led the Applied AI Research and Safety Systems teams. Weng is the author of Lil'Log, a long-form technical blog founded in 2017 that is widely cited in the machine-learning community for its surveys of reinforcement learning, large language models, agents, and AI safety.

Origins

Weng completed an undergraduate degree in information systems and computer science at Peking University in Beijing before moving to the United States for graduate study. Her PhD at the School of Informatics and Computing at Indiana University Bloomington was completed in April 2014 under Filippo Menczer, with research on online attention, information diffusion, and social-network analysis on Twitter. The most-cited papers from this period are "Competition Among Memes in a World with Limited Attention" (Scientific Reports, 2012) and "Virality Prediction and Community Structure in Social Networks" (Scientific Reports, 2013), both with Menczer as senior author.

Career

After completing her PhD in 2014, Weng moved into industry data-science and machine-learning roles in the San Francisco Bay Area. Bloomberg coverage, citing her LinkedIn record, places her at Facebook (now Meta), Dropbox, and the financial-technology company Affirm before her move into AI research, with her last pre-OpenAI role being staff machine-learning engineer at Affirm.

In February 2018, Weng joined OpenAI as a research scientist on the OpenAI Robotics team. Her first project was the in-hand manipulation program, which trained a Shadow Dexterous Hand to solve a Rubik's Cube; the paper was released in October 2019. The robotics team was disbanded in 2021 as OpenAI shifted toward language-model research.

Through 2021 and 2022, Weng founded and led the Applied AI Research team at OpenAI. Per the Fellows Fund announcement, the team shipped OpenAI's first generation of developer-platform tools, including the fine-tuning API, embeddings API, moderation endpoint, and the first applied safety evaluations. She is a contributing author on "Text and Code Embeddings by Contrastive Pre-Training" (2022).

Following the launch of GPT-4 in March 2023, Weng established the Safety Systems team to consolidate adversarial robustness, content moderation, evaluations, and red-teaming for the consumer-facing products and the API. The team grew beyond 80 scientists, researchers, and policy experts. She is a contributing author on the GPT-4 Technical Report, GPT-4o System Card, and o1 System Card, and on "Rule Based Rewards for Language Model Safety" (NeurIPS 2024) and "Diverse and Effective Red Teaming with Auto-generated Rewards" (2024). In August 2024 she was promoted to Vice President of Research, Safety Systems, with the mandate broadened to include the Preparedness team and a seat on the OpenAI board's Safety and Security Committee.

On November 8, 2024, Weng announced on X that she was leaving OpenAI after almost seven years to "reset and explore something new." Her last day was November 15, 2024, two months after Chief Technology Officer Mira Murati's September 25, 2024 departure. TechCrunch covered the exit as the latest in a 2024 cohort of senior OpenAI safety-and-research-staff departures. In December 2024 she joined Fellows Fund as a Distinguished Fellow.

In February 2025, Weng announced her co-founder role at Thinking Machines Lab, founded by Mira Murati. A functional title has not been publicly disclosed; her Google Scholar lists her affiliation as Thinking Machines as of May 2026. She has continued publishing on Lil'Log, including "Why We Think" in May 2025.

Affiliations

  • Facebook (now Meta), Dropbox, and Affirm: Software-engineer and machine-learning-engineer roles, 2014 to 2018, per Bloomberg.
  • OpenAI: Research scientist, Robotics team, February 2018 to 2021; founder and lead of the Applied AI Research team, 2021 to 2023; founder and lead of the Safety Systems team, 2023 to 2024; Vice President of Research, Safety Systems, August 2024 to November 2024.
  • Fellows Fund: Distinguished Fellow, December 2024 to present.
  • Thinking Machines Lab: Co-founder, 2025 to present.

Notable contributions

Weng's public work spans OpenAI's safety-research and applied-AI programs, contributing-author roles on flagship technical reports, and the long-form technical writing on Lil'Log, which has become her most-cited public artifact.

Investments and boards

No public personal investor activity on record in AI, semiconductors, datacenters, software, or energy as of May 2026. Her footprint is concentrated in research and operating roles at OpenAI and Thinking Machines Lab plus the Distinguished Fellow advisory role at Fellows Fund.

Network

Weng's longest-running professional relationship outside industry AI research is with Filippo Menczer, her PhD advisor at Indiana University and senior co-author on her network-science papers from 2012 through 2014.

Her OpenAI cohort, with whom she worked from 2018 through 2024, includes Sam Altman, the chief executive; Greg Brockman, president and co-founder; Ilya Sutskever, chief scientist and co-founder, who founded Safe Superintelligence in June 2024; John Schulman, her senior research peer at OpenAI through August 2024 and her co-founder colleague at Thinking Machines from February 2025; and Andrej Karpathy, founding research scientist at OpenAI through his early-2023 departure.

The Thinking Machines Lab founding cohort is concentrated in former senior OpenAI staff: Mira Murati, founder and Chief Executive Officer; John Schulman, Chief Scientist; Barret Zoph, co-founder and Chief Technology Officer through January 2026 before his return to OpenAI under Fidji Simo; and other co-founders Andrew Tulloch (formerly of OpenAI and Meta AI) and Luke Metz (formerly of Google Brain and OpenAI). Bob McGrew advises; Jared Kaplan of Anthropic also advises.

Among the broader frontier-AI-safety community, Weng's record places her alongside Jan Leike (formerly OpenAI Superalignment co-lead, now at Anthropic) and Paul Christiano (formerly OpenAI alignment, now at the US AI Safety Institute), and the safety-research staff she previously led, several of whom have since moved to Anthropic, Thinking Machines, and academic positions.

Position in the field

As of May 2026, Weng occupies an unusual position among senior frontier-AI researchers. The Lil'Log blog has exerted a sustained influence on machine-learning practice closer in shape to a long-running textbook than to a typical industry-researcher publication record, and it is regularly cited alongside the OpenAI Spinning Up reinforcement-learning curriculum and Andrej Karpathy's tutorial materials. Her Google Scholar profile lists more than 46,000 citations as of May 2026, driven primarily by contributing-author roles on flagship system cards rather than lead authorship.

The OpenAI Safety Systems leadership role from 2023 through 2024 placed her in a small group of senior executives directly responsible for the deployment-time safety posture of the company's consumer-facing products through the period that produced ChatGPT, GPT-4, and o1. TechCrunch identified her departure as a continuation of the 2024 senior-safety-staff exodus that included Jan Leike and Ilya Sutskever earlier in the year.

Her public profile is concentrated on Lil'Log, the @lilianweng account on X, and a small number of conference and event appearances rather than mainstream-media interviews or policy commentary. The October 2024 Bilibili Super Science Night keynote in Shanghai is the most-cited recent talk; the ICLR 2024 invited talk and the AAAI 2021 keynote are her principal Western conference appearances.

Outlook

Open questions over the next 6 to 18 months:

  • Functional role at Thinking Machines. Whether her co-founder role is publicly clarified into a specific title as the lab moves from a pre-product phase into model releases.
  • First Thinking Machines model release. John Schulman has publicly stated 2026 as the release year. Her specific contributions to the safety, post-training, or evaluations program of the first model are central to her near-term public record.
  • Lil'Log cadence and topics. Whether the blog continues at the 2017-to-2024 cadence and whether new posts reflect Thinking Machines internal work or stay a synthesis of public research.
  • OpenAI safety-staff network. Continued movement of former Safety Systems team members between OpenAI, Anthropic, Thinking Machines, and academia.
  • English-language public profile. Whether Weng increases her English-language conference and podcast presence beyond Lil'Log.
  • Public commentary on the November 2024 departure. Whether she addresses the 2024 senior-safety-staff exit framing or maintains the limited-statement posture she has held since leaving.

About the author
Nextomoro

AI Research Lab Intelligence

nextomoro tracks progress for AI research labs, models, and what's next.
