Lilian Weng
Lilian Weng is a Chinese-American AI researcher and engineering leader. She is a co-founder of Thinking Machines Lab and was previously Vice President of Research, Safety Systems at OpenAI, where she worked from February 2018 to November 2024 and founded and led the Applied AI Research and Safety Systems teams. Weng is the author of Lil'Log, a long-form technical blog founded in 2017 that is widely cited in the machine-learning community for its surveys of reinforcement learning, large language models, agents, and AI safety.
At a glance
- Education: Bachelor's degree in information systems and computer science, Peking University; PhD, School of Informatics and Computing, Indiana University Bloomington (2014), advised by Filippo Menczer, with research on online attention, social-network analysis, and meme virality.
- Current role: Co-founder of Thinking Machines Lab, since 2025.
- Key contributions: founded and led OpenAI's Applied AI Research team and the Safety Systems team, which grew to more than 80 scientists, researchers, and policy experts; contributing author on the OpenAI GPT-4 Technical Report, GPT-4o System Card, o1 System Card, and Solving Rubik's Cube with a Robot Hand; author of the Lil'Log technical blog.
- X / Twitter: @lilianweng
- LinkedIn: Lilian Weng
- Personal site / blog: lilianweng.github.io
- Google Scholar: Lilian Weng
- GitHub: @lilianweng
Origins
Weng completed an undergraduate degree in information systems and computer science at Peking University in Beijing before moving to the United States for graduate study. Her PhD at the School of Informatics and Computing at Indiana University Bloomington was completed in April 2014 under Filippo Menczer, with research on online attention, information diffusion, and social-network analysis on Twitter. The most-cited papers from this period are "Competition Among Memes in a World with Limited Attention" (Scientific Reports, 2012) and "Virality Prediction and Community Structure in Social Networks" (Scientific Reports, 2013), both with Menczer as senior author.
Career
After completing her PhD in 2014, Weng moved to industry data-science and machine-learning roles in the San Francisco Bay Area. Coverage in Bloomberg, citing her LinkedIn record, places her at Facebook (now Meta), Dropbox, and the financial-technology company Affirm before she moved into AI research; her last pre-OpenAI role was as a staff machine-learning engineer at Affirm.
In February 2018, Weng joined OpenAI as a research scientist on the OpenAI Robotics team. Her first project was the in-hand manipulation program, which trained a Shadow Dexterous Hand to solve a Rubik's Cube; the paper was released in October 2019. The robotics team was disbanded in 2021 as OpenAI shifted toward language-model research.
Through 2021 and 2022, Weng founded and led the Applied AI Research team at OpenAI. Per the Fellows Fund announcement, the team shipped OpenAI's first generation of developer-platform tools, including the fine-tuning API, embeddings API, moderation endpoint, and the first applied safety evaluations. She is a contributing author on "Text and Code Embeddings by Contrastive Pre-Training" (2022).
Following the launch of GPT-4 in March 2023, Weng established the Safety Systems team to consolidate adversarial robustness, content moderation, evaluations, and red-teaming for the consumer-facing products and the API. The team grew beyond 80 scientists, researchers, and policy experts. She is a contributing author on the GPT-4 Technical Report, GPT-4o System Card, and o1 System Card, and on "Rule Based Rewards for Language Model Safety" (NeurIPS 2024) and "Diverse and Effective Red Teaming with Auto-generated Rewards" (2024). In August 2024 she was promoted to Vice President of Research, Safety Systems, with the mandate broadened to include the Preparedness team and a seat on the OpenAI board's Safety and Security Committee.
On November 8, 2024, Weng announced on X that she was leaving OpenAI after almost seven years to "reset and explore something new." Her last day was November 15, 2024, two months after Chief Technology Officer Mira Murati's September 25, 2024 departure. TechCrunch covered the exit as the latest in a 2024 cohort of senior OpenAI safety-and-research-staff departures. In December 2024 she joined Fellows Fund as a Distinguished Fellow.
In February 2025, Weng announced her co-founder role at Thinking Machines Lab, founded by Mira Murati. A functional title has not been publicly disclosed; her Google Scholar lists her affiliation as Thinking Machines as of May 2026. She has continued publishing on Lil'Log, including "Why We Think" in May 2025.
Affiliations
- Facebook (now Meta), Dropbox, and Affirm: Software-engineer and machine-learning-engineer roles, 2014 to 2018, per Bloomberg.
- OpenAI: Research scientist, Robotics team, 2018-02 to 2021; founder and lead of the Applied AI Research team, 2021 to 2023; founder and lead of the Safety Systems team, 2023 to 2024; Vice President of Research, Safety Systems, 2024-08 to 2024-11.
- Fellows Fund: Distinguished Fellow, December 2024 to present.
- Thinking Machines Lab: Co-founder, 2025 to present.
Notable contributions
Weng's public work spans the OpenAI safety-research and applied-AI program, contributing-author roles on flagship technical reports, and the long-form technical writing on Lil'Log that has become her most widely cited public artifact.
- Lil'Log (founded 2017). Long-form technical blog on reinforcement learning, LLMs, agents, hallucinations, diffusion models, prompt engineering, RLHF, and AI safety. Notable posts include "LLM Powered Autonomous Agents" (June 2023), "What are Diffusion Models?" (July 2021), "Prompt Engineering" (March 2023), "Extrinsic Hallucinations in LLMs" (July 2024), and "Why We Think" (May 2025).
- OpenAI Applied AI Research team (2021 to 2023) and Safety Systems team (2023 to 2024). Founded and led both. Applied AI Research shipped OpenAI's first developer-platform tools (fine-tuning API, embeddings API, moderation endpoint); Safety Systems grew beyond 80 staff with responsibility for adversarial robustness, content moderation, evaluations, and red-teaming.
- GPT-4 Technical Report (March 2023), GPT-4o System Card (October 2024), and o1 System Card (December 2024). Contributing author on the OpenAI flagship-model reports across the GPT-4, GPT-4o, and o1 generations.
- Solving Rubik's Cube with a Robot Hand (October 2019) and Text and Code Embeddings by Contrastive Pre-Training (January 2022). Contributing author on the dexterous-manipulation paper and the embeddings paper underlying the embeddings API.
- "Rule Based Rewards for Language Model Safety" (NeurIPS 2024) and "Diverse and Effective Red Teaming with Auto-generated Rewards" (2024). Safety Systems team papers on rule-based reward modeling and automated red-teaming.
- Indiana University network-science papers. "Competition Among Memes in a World with Limited Attention" (Scientific Reports, 2012) and "Virality Prediction and Community Structure in Social Networks" (Scientific Reports, 2013) with Filippo Menczer.
- Public-talk record. Invited talk at the ICLR 2024 Workshop on Reliable and Responsible Foundation Models; keynote on "Asymmetric self-play for automatic goal discovery in robotic manipulation" at AAAI 2021; "AI Safety and Cultivation" keynote at the Bilibili Super Science Night in October 2024.
Investments and boards
No public personal investor activity on record in AI, semiconductors, datacenters, software, or energy as of May 2026. Her footprint is concentrated in research and operating roles at OpenAI and Thinking Machines Lab plus the Distinguished Fellow advisory role at Fellows Fund.
Network
Weng's longest-running professional relationship outside industry AI research is with Filippo Menczer, her PhD advisor at Indiana University and senior co-author on her network-science papers from 2012 through 2014.
Her OpenAI cohort, with whom she worked from 2018 through 2024, includes Sam Altman, the chief executive; Greg Brockman, president and co-founder; Ilya Sutskever, chief scientist and co-founder, who founded Safe Superintelligence in June 2024; John Schulman, her senior research peer at OpenAI through August 2024 and her co-founder colleague at Thinking Machines from February 2025; and Andrej Karpathy, founding research scientist at OpenAI through his early-2023 departure.
The Thinking Machines Lab founding cohort is concentrated in former senior OpenAI staff: Mira Murati, founder and Chief Executive Officer; John Schulman, Chief Scientist; Barret Zoph, co-founder and Chief Technology Officer through January 2026 before his return to OpenAI under Fidji Simo; and other co-founders Andrew Tulloch (formerly of OpenAI and Meta AI) and Luke Metz (formerly of Google Brain and OpenAI). Bob McGrew advises; Jared Kaplan of Anthropic also advises.
Among the broader frontier-AI-safety community, Weng's record places her alongside Jan Leike (formerly OpenAI Superalignment co-lead, now at Anthropic) and Paul Christiano (formerly OpenAI alignment, now at the US AI Safety Institute), and the safety-research staff she previously led, several of whom have since moved to Anthropic, Thinking Machines, and academic positions.
Position in the field
As of May 2026, Weng occupies an unusual position among senior frontier-AI researchers. Lil'Log has exerted sustained influence on machine-learning practice, closer in shape to a long-running academic textbook than to a typical industry-researcher publication record, and is regularly cited alongside the OpenAI Spinning Up reinforcement-learning curriculum and Andrej Karpathy's tutorial materials. Her Google Scholar profile lists more than 46,000 citations as of May 2026, driven primarily by contributing-author roles on flagship system cards rather than lead authorship.
The OpenAI Safety Systems leadership role from 2023 through 2024 placed her in a small group of senior executives directly responsible for the deployment-time safety posture of the company's consumer-facing products through the period that shipped GPT-4, GPT-4o, and o1. TechCrunch identified her departure as a continuation of the 2024 senior-safety-staff exodus that included Jan Leike and Ilya Sutskever earlier in the year.
Her public profile is concentrated on Lil'Log, the @lilianweng account on X, and a small number of conference and event appearances rather than mainstream-media interviews or policy commentary. The October 2024 Bilibili Super Science Night keynote in Shanghai is the most-cited recent talk; the ICLR 2024 invited talk and the AAAI 2021 keynote are her principal Western conference appearances.
Outlook
Open questions over the next 6 to 18 months:
- Functional role at Thinking Machines. Whether her co-founder role is publicly clarified into a specific title as the lab moves from a pre-product phase into model releases.
- First Thinking Machines model release. John Schulman has publicly stated 2026 as the release year. Her specific contributions to the safety, post-training, or evaluations program of the first model will be central to her near-term public record.
- Lil'Log cadence and topics. Whether the blog continues at its 2017-to-2024 cadence and whether new posts reflect Thinking Machines internal work or remain syntheses of public research.
- OpenAI safety-staff network. Continued movement of former Safety Systems team members between OpenAI, Anthropic, Thinking Machines, and academia.
- English-language public profile. Whether Weng increases her English-language conference and podcast presence beyond Lil'Log.
- Public commentary on the November 2024 departure. Whether she addresses the 2024 senior-safety-staff exit framing or maintains the limited-statement posture she has held since the departure.
Sources
- Lil'Log. Weng's long-form technical blog, founded in 2017.
- Lilian Weng on LinkedIn and Google Scholar. Profiles listing her Thinking Machines role, prior OpenAI positions, publications, and citation counts.
- Lilian Weng (@lilianweng) on X. Personal X account.
- OpenAI loses another lead safety researcher, Lilian Weng. TechCrunch coverage of the November 2024 departure and her OpenAI role progression.
- After working at OpenAI for almost 7 years, I decide to leave. Weng's November 8, 2024 departure post.
- Fellows Fund Welcomes Lilian Weng, ex-VP of Research, Safety at OpenAI. December 2024 announcement covering her education, OpenAI tenure, and contributions.
- Lilian Weng, vice president of research and safety at OpenAI, leads risk management for AI models. Bloomberg-sourced Yahoo Tech profile, October 2024, on her Facebook, Dropbox, Affirm, and OpenAI career history.
- OpenAI safety executive calls for responsible AI development at Bilibili event. SCMP coverage of her October 2024 Bilibili Super Science Night keynote.
- Thinking Machines Lab is my next adventure. Weng's February 2025 co-founder announcement.
- Asymmetric self-play for automatic goal discovery in robotic manipulation. The January 2021 OpenAI Robotics paper underlying the AAAI 2021 keynote.
- Solving Rubik's Cube with a Robot Hand. The October 2019 OpenAI Robotics paper.
- Text and Code Embeddings by Contrastive Pre-Training. The January 2022 OpenAI embeddings paper.
- GPT-4 Technical Report, GPT-4o System Card, and o1 System Card. OpenAI flagship-model reports.
- Diverse and Effective Red Teaming with Auto-generated Rewards and Rule Based Rewards for Language Model Safety. 2024 OpenAI Safety Systems papers.
- Competition Among Memes in a World with Limited Attention and Virality Prediction and Community Structure in Social Networks. 2012 and 2013 Scientific Reports papers from Weng's PhD work with Filippo Menczer.
- ICLR Invited Talk: Lilian Weng. ICLR 2024 Workshop on Reliable and Responsible Foundation Models.
- Asymmetric Play for Automatic Goal Discovery in Robotic Manipulation, AAAI 2021. The AAAI 2021 keynote.