François Chollet

François Chollet is a French software engineer and AI researcher, creator of the Keras deep learning framework, author of the ARC-AGI benchmark, and a co-founder of Ndea, the AI research lab pursuing artificial general intelligence through deep learning-guided program synthesis.

Bio

François Chollet is a French software engineer and artificial intelligence researcher, born October 20, 1989. He is the creator of the Keras deep learning framework released in March 2015, the author of the Abstraction and Reasoning Corpus (ARC-AGI) benchmark introduced in 2019, and a co-founder of Ndea, the San Francisco-based AI research lab he established in January 2025 with Mike Knoop, a co-founder of Zapier. He is also the author of the textbook Deep Learning with Python (Manning Publications) and a co-founder of the ARC Prize Foundation, the nonprofit running the annual ARC-AGI competition.

Origins

Chollet was born on October 20, 1989, in France. He completed his secondary education in France before entering ENSTA Paris (École Nationale Supérieure de Techniques Avancées), one of the engineering schools of the Institut Polytechnique de Paris consortium. He graduated in 2012 with a Diplôme d'Ingénieur, the French equivalent of a master's degree in engineering.

Career

Chollet joined the Google Brain team as a software engineer in 2015, shortly after releasing the open-source Keras deep learning library in March of that year. Keras was originally written as a high-level neural-network API on top of Theano and was subsequently extended to support Microsoft Cognitive Toolkit, Apache MXNet, and TensorFlow. The library prioritized accessibility for researchers and engineers without deep backgrounds in mathematical software, and rapidly became one of the most widely used deep-learning APIs in the Python ecosystem. Keras was officially adopted as the high-level API of TensorFlow 2.0 in 2019, and the Keras 3 release in 2023 restored multi-backend support spanning TensorFlow, JAX, and PyTorch.

The October 2016 paper "Xception: Deep Learning with Depthwise Separable Convolutions", presented at CVPR 2017, introduced the Xception architecture, which reinterpreted the Inception module as an intermediate point between a regular convolution and a depthwise separable convolution, and built a deep image-classification model entirely from depthwise separable convolution layers. Xception is among the most-cited papers in the CVPR proceedings, with citation counts exceeding 18,000 as of 2025.
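The efficiency argument behind depthwise separable convolutions can be sketched with standard parameter arithmetic (textbook formulas for one layer, not figures taken from the Xception paper itself):

```python
# Parameter counts for one convolutional layer (biases ignored).
def regular_conv_params(k, c_in, c_out):
    # A regular conv filters space and channels jointly:
    # every output channel gets a k x k x c_in kernel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k spatial filter per input channel.
    depthwise = k * k * c_in
    # Pointwise step: a 1 x 1 conv that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example: 3x3 conv, 256 input channels, 256 output channels.
regular = regular_conv_params(3, 256, 256)      # 589,824 parameters
separable = separable_conv_params(3, 256, 256)  # 67,840 parameters
print(f"regular: {regular}, separable: {separable}, "
      f"savings: {regular / separable:.1f}x")
```

Decoupling spatial filtering from channel mixing is what buys the roughly 8-9x parameter reduction at this layer size, which is why the architecture can go deeper at comparable cost.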

The November 2019 paper "On the Measure of Intelligence" introduced the Abstraction and Reasoning Corpus (ARC-AGI), a benchmark of grid-based pattern-completion puzzles designed to test abstract reasoning and efficient skill acquisition, capabilities with which scale-trained large language models have historically struggled. The benchmark and its accompanying philosophical framework became one of the most discussed evaluation frameworks for large language models through the early 2020s. ARC has been used in research from Google DeepMind, OpenAI, Anthropic, Poetiq, and a long tail of academic and industrial AI research groups.
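Each ARC task is distributed as a small JSON record: a few training input/output grid pairs that demonstrate a hidden transformation, plus test inputs the solver must complete. A minimal sketch of the format with a toy task (the grids and rule below are invented for illustration, not drawn from the actual corpus):

```python
# An ARC-style task: grids are lists of lists of integers 0-9 (colors).
# This toy task's hidden rule: reflect the grid left-to-right.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 4], [0, 5]], "output": [[4, 3], [5, 0]]},
    ],
    "test": [{"input": [[7, 0], [0, 8]]}],
}

def mirror(grid):
    # Candidate program: horizontal reflection of each row.
    return [list(reversed(row)) for row in grid]

# A solver must infer the rule from the "train" pairs alone...
assert all(mirror(p["input"]) == p["output"] for p in task["train"])

# ...then apply it to the previously unseen test input.
prediction = mirror(task["test"][0]["input"])
print(prediction)  # [[0, 7], [8, 0]]
```

The difficulty is that each task uses a different rule and provides only a handful of demonstrations, so memorized skills transfer poorly; this is the skill-acquisition-efficiency framing of the paper.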

Chollet authored the textbook Deep Learning with Python (Manning Publications, first edition 2017, second edition 2021, third edition 2025), which has sold more than 100,000 copies and is one of the most widely used introductory deep-learning textbooks in the field. He also co-authored Deep Learning with R (Manning Publications, 2018) with J.J. Allaire, the founder of Posit (formerly RStudio).

In June 2024, Chollet and Mike Knoop launched the ARC Prize, a $1 million competition for solutions to the ARC-AGI benchmark. The competition attracted submissions from frontier AI labs, academic research groups, and individual researchers, and was characterized in industry coverage as one of the most-watched AI evaluation events of 2024. The competition was extended into a non-profit foundation, the ARC Prize Foundation, in early 2025.

Chollet departed Google in November 2024 after more than nine years at the company, citing a desire to pursue a research-led foundation-model thesis. In January 2025, he co-founded Ndea with Mike Knoop, the Zapier co-founder and former head of AI at Zapier. Ndea pursues artificial general intelligence through deep learning-guided program synthesis, an alternative to the scale-driven foundation-model approach that has dominated the frontier labs. The company is headquartered in San Francisco and has reportedly raised approximately $43.5 million in disclosed funding from Y Combinator, Coatue Management, Factorial Capital, and Quiet Capital.

Affiliations

  • Google: Software engineer, 2015 to November 2024 (Senior Staff Engineer at the time of departure).
  • Ndea: Co-founder, January 2025 to present.
  • ARC Prize Foundation: Co-founder and President, early 2025 to present.

Notable contributions

  • Keras deep learning framework (March 2015). One of the most widely used Python deep-learning APIs, originally built on Theano and extended to TensorFlow, which adopted Keras as its high-level API in TensorFlow 2.0 in 2019.
  • Xception (October 2016). Image-classification architecture built from depthwise separable convolutions, presented at CVPR 2017. Among the most-cited CVPR papers, with citation counts exceeding 18,000.
  • ARC-AGI benchmark and "On the Measure of Intelligence" (November 2019). The Abstraction and Reasoning Corpus benchmark and its accompanying philosophical framework, designed to measure abstract reasoning capabilities with which scale-trained models have historically struggled.
  • ARC-AGI-2 and ARC-AGI-3 (subsequent versions). Iterations of the benchmark with increased difficulty and expanded coverage of generalization-and-reasoning capabilities.
  • ARC Prize (June 2024). The $1 million competition to solve ARC-AGI, co-launched with Mike Knoop. Extended into the ARC Prize Foundation nonprofit in early 2025.
  • Ndea co-founding (January 2025). Co-founder of the AI research lab pursuing artificial general intelligence through deep learning-guided program synthesis.
  • Textbooks. Deep Learning with Python (Manning, 2017, 2021, 2025) with sales of more than 100,000 copies; Deep Learning with R (Manning, 2018) with J.J. Allaire.
  • Public commentary on AGI capability and scaling. Chollet's longstanding public position is that pure scaling of next-token prediction has structural limitations on out-of-distribution generalization, and that program synthesis (the search for explicit programs that produce desired input-output behavior) is a complementary approach that addresses those limitations. The position is articulated through academic publications, the ARC-AGI benchmark, and active commentary on X/Twitter.
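The program-synthesis position in the last bullet can be illustrated with a toy enumerative search: given input/output examples, search over compositions of primitive operations for a program that reproduces them. A deliberately simplified sketch (the primitives and examples are invented for illustration; real systems search far richer domain-specific languages, with deep learning guiding the search rather than brute force):

```python
from itertools import product

# A tiny DSL of list-transforming primitives (illustrative only).
PRIMITIVES = {
    "reverse": lambda xs: list(reversed(xs)),
    "double": lambda xs: [2 * x for x in xs],
    "sort": lambda xs: sorted(xs),
}

def synthesize(examples, max_depth=2):
    """Enumerate primitive compositions, shortest first, and return the
    first program consistent with every input/output example."""
    names = list(PRIMITIVES)
    for depth in range(1, max_depth + 1):
        for program in product(names, repeat=depth):
            def run(xs, prog=program):
                for name in prog:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return program
    return None  # no program in the DSL fits within max_depth

# Examples consistent with doubling every element and reversing.
examples = [([1, 2, 3], [6, 4, 2]), ([5, 1], [2, 10])]
print(synthesize(examples))  # ('reverse', 'double')
```

The returned program is an explicit, inspectable artifact that generalizes exactly on its rule, which is the property Chollet argues next-token predictors lack; the deep-learning component in the hybrid thesis exists to make this kind of search tractable in large program spaces.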

Position in the field

Chollet occupies a distinctive position among AI researchers and engineers. The combination of Keras authorship (one of the most widely used Python deep-learning APIs in history), the ARC-AGI benchmark (one of the most discussed evaluation frameworks for general-intelligence claims), Deep Learning with Python (one of the best-selling introductory deep-learning textbooks), and the Ndea co-founding (a research-led, non-consensus paradigm bet) distinguishes him from the operator-executive trajectory of other senior AI figures.

The post-2024 Ndea and ARC Prize Foundation trajectory positions Chollet as one of the principal advocates for program-synthesis-augmented architectures as an alternative to pure scaling. That premise is contested within the broader AI research community; its empirical resolution depends on whether scale-trained foundation models continue to close the ARC-AGI gap or whether the gap proves structurally durable. Industry coverage has consistently characterized Chollet as one of the most influential public skeptics of the scaling-laws thesis, with credibility grounded in Keras and the ARC framework.

The figures most often paired with Chollet in coverage of the AGI-paradigm debate are Yann LeCun of AMI (the JEPA world-model alternative); Demis Hassabis of Google DeepMind (the AlphaGo-and-AlphaProof search-augmented frontier-lab approach); and the broader foundation-model leadership at OpenAI, Anthropic, and Google DeepMind. Chollet's position is structurally adjacent to LeCun's contrarian stance on LLM scaling, while pursuing a different alternative: program synthesis rather than world models.

Outlook

Open questions and watchable signals over the next 6 to 18 months:

  • Ndea's first public artifacts. Whether Ndea produces papers, models, or demonstrations during 2026 that validate the deep-learning-guided program-synthesis thesis.
  • ARC-AGI progression. Whether scale-trained foundation models continue to close the benchmark gap, or whether the gap proves durable and validates the program-synthesis paradigm.
  • ARC Prize Foundation competition rounds. The 2026 ARC Prize competition rounds and the participating research groups, as a window into the broader research-community engagement with ARC-AGI.
  • Public commentary on scaling. Chollet's continuing public commentary on LLM-frontier capability progression, and the comparison with peer commentary from Yann LeCun, Demis Hassabis, and frontier-lab chief executives.
  • Keras ecosystem evolution. The continued evolution of the multi-backend Keras framework (TensorFlow, JAX, and PyTorch since Keras 3) and Chollet's continued involvement in the post-Google period.
  • Series A or adjacent fundraising. Ndea's next pricing event beyond the disclosed approximately $43.5 million seed-stage capital base.


About the author
Nextomoro

nextomoro tracks progress for AI research labs, models, and what's next.

AI Research Lab Intelligence