Guillaume Lample

Guillaume Lample is a French AI researcher, co-founder and chief science officer of Mistral AI, and last author of the original LLaMA paper at Meta AI Research before leaving FAIR in 2023 to start Mistral.
Guillaume Lample is a French computer scientist, born on October 8, 1990, in Brest, France. He is the co-founder and Chief Science Officer of Mistral AI, the Paris-headquartered foundation-model company he established in April 2023 with Arthur Mensch and Timothée Lacroix, and the last-listed senior author on the February 2023 LLaMA paper at Meta AI FAIR Paris. As of May 2026, he leads model research at Mistral following the September 2025 €2 billion Series C at a €12 billion valuation led by ASML.

Origins

Lample was born on October 8, 1990, in Brest, the port city in Brittany on the western coast of France. He entered the engineering degree program at École Polytechnique with the class of 2011, during the same period in which his future Mistral co-founder Arthur Mensch was also a Polytechnique student. The third co-founder, Timothée Lacroix, entered École Normale Supérieure the same year; the three came together later through the overlapping Paris machine-learning research community at FAIR Paris and Inria.

After Polytechnique, Lample completed a Master of Language Technologies at Carnegie Mellon University's School of Computer Science, advised by Alex Waibel and Chris Dyer. The most-cited project from this period was Arnold, a Doom-playing reinforcement-learning agent co-developed with fellow CMU graduate student Devendra Chaplot, which won the Track 1 prize at the 2016 ViZDoom AI Competition and produced the Playing FPS Games with Deep Reinforcement Learning paper.

Career

Lample joined Facebook AI Research (FAIR) in Paris in 2016 as a doctoral researcher under a CIFRE arrangement, the French industry-academic doctoral funding mechanism. He pursued his PhD in parallel at Sorbonne Université (then UPMC), in the LIP6 Machine Learning and Information Access (MLIA) team, under Ludovic Denoyer. The thesis, "Unsupervised Machine Translation", was defended on October 17, 2019, with Marc'Aurelio Ranzato of FAIR on the jury. The thesis line generated three of his most-cited papers: Unsupervised Machine Translation Using Monolingual Corpora Only (ICLR 2018), Phrase-Based & Neural Unsupervised Machine Translation (EMNLP 2018, Best Paper Award), and Cross-lingual Language Model Pretraining (the XLM paper, 2019), each co-authored with Alexis Conneau at FAIR.

Lample stayed at FAIR Paris as a research scientist after completing the PhD and continued work across multilingual NLP, language modeling, and a sideline in symbolic mathematics. The Deep Learning for Symbolic Mathematics paper at ICLR 2020 with François Charton showed that sequence-to-sequence transformers could outperform commercial computer-algebra systems, including MATLAB and Mathematica, on symbolic integration and ODE solving.

The artifact most widely associated with Lample is the LLaMA paper, submitted to arXiv on February 27, 2023 and titled "LLaMA: Open and Efficient Foundation Language Models". The 14-author byline lists Lample last (the position commonly used for the senior author with overall research direction), with Hugo Touvron as first author. The paper introduced the foundation-model family in 7B, 13B, 33B, and 65B parameter sizes, established the posture that defined the LLaMA series (training only on publicly available data and releasing weights under a research license), and triggered the wave of open-weights frontier-class language models that followed through 2023 and 2024. Several future Mistral arrivals appear on the byline alongside Lacroix, listed sixth, including Baptiste Rozière, Marie-Anne Lachaux, and Thibaut Lavril.

Lample left Meta in early 2023 and co-founded Mistral AI in Paris with Arthur Mensch and Timothée Lacroix on April 28, 2023, taking the Chief Science Officer title. The €105 million seed round in June 2023, led by Lightspeed Venture Partners with Eric Schmidt, Xavier Niel, and JCDecaux, was among the largest seed rounds in European technology history at the time. Mistral 7B launched on September 27, 2023 as an Apache 2.0 open-weights release, followed by Mixtral 8x7B on December 9, 2023, the first widely deployed sparse mixture-of-experts model from a frontier-class lab.

The Mistral Large flagship line launched in February 2024 alongside the Le Chat consumer assistant. The €600 million Series B closed in June 2024, followed by Mistral Large 2 in July 2024 and the Codestral code, Mathstral math, and Pixtral multimodal lines later in the year. The €2 billion Series C in September 2025 was led by ASML at a €12 billion valuation, and Mistral Large 3 and the Ministral 3 family followed in December 2025. Lample's role across the period has been research leadership for Mistral's training pipeline and successor model architectures.

Affiliations

  • Carnegie Mellon University, Language Technologies Institute: Master of Language Technologies student, 2014 to 2017.
  • Sorbonne Université (UPMC), LIP6 MLIA team: PhD candidate (CIFRE), 2016 to 2019.
  • Meta AI (FAIR Paris): Research Scientist, 2016 to early 2023.
  • Mistral AI: Co-founder and Chief Science Officer, April 2023 to present.

Notable contributions

Lample's published record runs from reinforcement-learning game-playing agents at CMU to the multilingual-NLP and unsupervised-machine-translation line at FAIR, to symbolic-mathematics deep learning, to the LLaMA family, to the Mistral and Mixtral models. He has approximately 64,000 Google Scholar citations and an h-index of 41 as of mid-2026.

Investments and boards

  • Mistral AI (AI): Co-founder and Chief Science Officer, April 2023 to present. Privately held; approximately $4 billion in cumulative equity funding plus $830 million in debt. Lample crossed the billionaire threshold alongside his co-founders in 2024 per Challenges magazine.

No public personal angel-investor activity is on record in AI, semiconductors, datacenters, software, or energy outside the founder-and-operator role at Mistral AI as of May 2026.

Network

Lample's longest-running professional relationships are with his Mistral co-founders Arthur Mensch (Chief Executive Officer) and Timothée Lacroix (Chief Technology Officer). Lample and Mensch overlapped at École Polytechnique in the 2011 class; Lacroix was at École Normale Supérieure the same year. The three came together professionally through the Paris machine-learning research community at FAIR Paris, Inria, and Sorbonne / LIP6. His PhD advisor was Ludovic Denoyer at Sorbonne / LIP6, and his master's-era CMU advisors were Alex Waibel and Chris Dyer. His most consistent FAIR research collaborator was Alexis Conneau, with whom he co-authored the unsupervised-machine-translation and XLM lines.

The LLaMA-paper byline links him to Hugo Touvron as first author and to several authors who later followed him to Mistral, including Baptiste Rozière, Marie-Anne Lachaux, and Thibaut Lavril. Press coverage in 2025 noted that 11 of the 14 LLaMA-paper authors had left Meta by mid-2025, with five joining Mistral. Yann LeCun was the chief AI scientist of Meta during his FAIR period.

Among broader frontier-research peers, the Mistral founding period has placed Lample alongside Sam Altman of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of Google DeepMind, Aidan Gomez of Cohere, and Ilya Sutskever of Safe Superintelligence on conference and policy panels, though his public-facing presence is substantially lower-profile than Mensch's.

Position in the field

As of May 2026, Lample sits among a small group of senior research-leadership figures who moved from a frontier industrial lab into a chief-science role at an independent venture-backed company. His senior authorship on the LLaMA paper is the structurally distinguishing credential: he is the only named co-founder of a current frontier-class lab who appears as senior author on a foundational open-weights LLM paper from a prior employer.

Press coverage has consistently identified the three Mistral co-founders as filling complementary roles. Mensch is the public face on funding strategy, EU AI policy, and capability claims; Lacroix focuses on infrastructure and engineering systems; Lample carries technical leadership on model research and the training pipeline. Coverage has described Lample as research-focused and less media-facing than Mensch, with public communications channeled through product launches, X commentary, and a small number of conference and podcast appearances.

The artifact most often cited as evidence of Lample's continued technical leadership is the Mixtral 8x7B sparse mixture-of-experts release, which preceded comparable production-scale MoE releases from US-domiciled peers. Successor releases including Mistral Large 2 and Mistral Large 3 place Mistral below the latest US flagships on closed-weights benchmarks but ahead of any other European frontier-class model line.

Outlook

Open questions over the next 6 to 18 months:

  • Mistral Large 3 and successor benchmark performance. Whether Mistral closes the capability gap to the latest OpenAI, Anthropic, and Google DeepMind flagships once third-party leaderboards complete their evaluations.
  • Open-weights versus closed-weights research direction. Whether Lample-led model research continues to produce open-weights releases at frontier capability or shifts toward closed-weights flagship differentiation.
  • Bruyères-le-Châtel and Sweden compute build-out. Training capacity available to the research team through 2026 and 2027, and the resulting cadence of model releases.
  • Talent pipeline from FAIR Paris. Whether the migration of former FAIR LLaMA-team researchers to Mistral continues.
  • Research-paper output. Whether Mistral publishes technical papers on its frontier training methods at the cadence of US peers, or maintains a more product-focused publication posture.

About the author
Nextomoro

AI Research Lab Intelligence

nextomoro tracks progress for AI research labs, models, and what's next.
