Eric Zelikman

Eric Zelikman is an American computer scientist, co-founder and chief executive officer of Humans&, the human-centric AI lab, and lead author of the 2022 STaR (Self-Taught Reasoner) paper that anticipated the reasoning-model wave.

Bio

Eric Zelikman is an American computer scientist, co-founder and chief executive officer of Humans&, the San Francisco AI lab he established in late 2025 with Georges Harik, Yuchen He, Andi Peng, and Noah Goodman. He is the lead author of STaR: Self-Taught Reasoner, the March 2022 paper that introduced rationale-bootstrapping for language-model reasoning and anticipated the reasoning-model architectures that became standard across the field. As of May 2026, Zelikman leads Humans& following the January 2026 $480 million seed round at a $4.48 billion post-money valuation, one of the largest seed rounds in venture-capital history.

At a glance

  • Education: Bachelor of Science in Symbolic Systems with departmental honors, Stanford University (2016 to 2020); doctoral candidate in computer science at Stanford from September 2021, advised by Nick Haber and Noah Goodman, on leave from March 2024.
  • Current role: Co-founder and Chief Executive Officer of Humans& since late 2025.
  • Key contributions: lead author of STaR (NeurIPS 2022), the rationale-bootstrapping technique for self-taught reasoning; lead author of Quiet-STaR (March 2024), extending the approach to implicit thinking; early member of technical staff at xAI, where he contributed to Grok 2 pretraining, Grok 3 Thinking reinforcement learning, and Grok 4 agent infrastructure.
  • X / Twitter: @ericzelikman
  • LinkedIn: Eric Zelikman
  • Personal site: zelikman.me

Origins

Zelikman entered Stanford in September 2016 to study Symbolic Systems, the program that has produced Andrej Karpathy, Reid Hoffman, and a cohort of senior figures across the technology industry. He completed the degree with departmental honors in June 2020, concentrating in theoretical neuroscience and writing an undergraduate thesis on curiosity-driven spiking neural networks. His undergraduate years also included research internships that laid the foundations of his later doctoral work on reasoning in language models.

Career

After graduating, Zelikman spent 2020 and 2021 in industry roles, including a stint at Lazard and research internships at Microsoft Research and Google Research. He returned to Stanford in September 2021 to begin doctoral work in computer science, advised by Nick Haber and Noah Goodman in the Computation and Cognition Lab.

The principal research artifact of the doctoral period is STaR: Bootstrapping Reasoning With Reasoning, posted to arXiv in March 2022 with Yuhuai Wu, Jesse Mu, and Noah Goodman as co-authors. STaR proposes a simple iterative loop: the language model generates rationales for problems; rationales that lead to correct answers are added to a fine-tuning corpus; and problems the model answers incorrectly are reattempted with the correct answer provided as a hint, so the model can reason backward to a usable rationale. The technique was the first published method for training language models to reason in natural language using their own generated rationales as supervision, and is widely credited as a precursor to the reasoning-model architectures that emerged across the field through 2024 and 2025. The paper was published at NeurIPS 2022.
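The loop described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: `generate_rationale` stands in for a language model (here a trivial stub), and the fine-tuning step is noted only as a comment.

```python
# Illustrative sketch of the STaR bootstrapping loop (not the paper's code).

def generate_rationale(question, hint=None):
    """Placeholder for a language model that emits (rationale, answer).

    When `hint` (the known correct answer) is given, this models STaR's
    second chance: reasoning backward from the provided answer.
    """
    if hint is not None:
        return f"Working backward, the answer must be {hint}.", hint
    # Toy heuristic: answer arithmetic questions, fail on everything else.
    try:
        return f"Compute {question} directly.", str(eval(question))
    except Exception:
        return "Unsure.", None


def star_iteration(problems):
    """One STaR iteration over (question, gold_answer) pairs.

    Returns the fine-tuning corpus of (question, rationale, answer) triples:
    rationales are kept only when they lead to the correct answer.
    """
    corpus = []
    for question, gold in problems:
        rationale, answer = generate_rationale(question)
        if answer == gold:
            corpus.append((question, rationale, gold))
        else:
            # Retry with the correct answer as a hint; the hint-conditioned
            # rationale is kept, but the hint itself is not part of the prompt
            # at fine-tuning time.
            rationale, answer = generate_rationale(question, hint=gold)
            if answer == gold:
                corpus.append((question, rationale, gold))
    # In the real method, the model is now fine-tuned on `corpus` (always
    # starting from the original checkpoint) and the loop repeats.
    return corpus
```

In the paper, each iteration fine-tunes from the original pretrained model on the growing corpus, so the rationale generator improves between iterations; the stub above holds the "model" fixed for simplicity.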

Quiet-STaR, posted to arXiv in March 2024, extended the approach to implicit thinking by training language models to generate internal rationales between tokens throughout generation rather than only at problem boundaries. Subsequent papers in the period included Parsel (2023) on algorithmic reasoning through decomposition, Hypothesis Search (2024) on inductive reasoning, and the Self-Taught Optimizer (2024) on recursive code-generation improvement.

Zelikman took leave from the doctoral program in March 2024 to join xAI as a member of technical staff. By his own public account, his xAI tenure included key contributions to the pretraining data for Grok 2, the kickoff and scaling of reinforcement learning for reasoning on Grok 3 Thinking, and the agent reinforcement-learning infrastructure and recipe for Grok 4. He left xAI around September 2025.

In late 2025 Zelikman co-founded Humans& in San Francisco with Georges Harik (Google's seventh employee and a longtime Silicon Valley operator), Yuchen He (formerly xAI), Andi Peng (formerly Anthropic, where she worked on post-training for Claude versions 3.5 through 4.5), and Noah Goodman (his Stanford doctoral co-advisor and a Stanford professor of psychology and computer science). The founding thesis crystallized around a deliberate rejection of the autonomous-agent framing that had dominated 2024 and 2025 AI lab positioning, with the company's stated objective being to build AI that strengthens human teams as "deeper connective tissue" for organizations and communities.

The $480 million seed round at a $4.48 billion post-money valuation, announced on January 20, 2026, was led by SV Angel and Harik, with Nvidia, Jeff Bezos, Google Ventures, Emerson Collective, and Forerunner Ventures participating. The round was one of the largest seed-stage financings in venture-capital history and placed Humans& in the same capital tier as Thinking Machines Lab among 2025-vintage senior-team Insurgent labs.

Notable contributions

  • STaR (March 2022, NeurIPS 2022). Lead author of the paper introducing rationale-bootstrapping for self-taught reasoning. The technique iteratively generates and selects rationales that lead to correct answers, then fine-tunes on the resulting corpus, providing a path to reasoning capability that does not require human-labeled chain-of-thought data at scale. STaR is among the most-cited reasoning-model papers of the 2022 to 2025 period and is widely identified as a precursor to the reasoning-model architectures shipped by frontier labs from 2024 onwards.
  • Quiet-STaR (March 2024). Lead author on the extension of STaR to implicit thinking, with the model trained to produce internal rationales between every token. The paper provided one of the first formal frameworks for token-level latent reasoning.
  • xAI Grok contributions (March 2024 to September 2025). By his own public account, principal contributions to Grok 2 pretraining data, Grok 3 Thinking reinforcement-learning kickoff and scaling, and Grok 4 agent reinforcement-learning infrastructure.
  • Humans& founding (late 2025). Co-founded the San Francisco human-centric AI lab with Georges Harik, Yuchen He, Andi Peng, and Noah Goodman.
  • Public framing. Zelikman has positioned Humans& against the autonomous-agent framing dominant elsewhere in the field; his line at the lab's founding, "Chatbots are designed to answer questions. They're not good at asking them," anchors the human-collaboration product positioning.

Position in the field

As of May 2026, Zelikman occupies a distinctive position among 2025-vintage Insurgent-lab chief executives. The combination of a foundational reasoning-model research artifact (STaR), an early-employee track record at a frontier lab through three model generations, and the largest pre-product seed valuation outside Thinking Machines Lab is structurally unusual.

The reasoning-research lineage places Zelikman inside the field's technical-credibility set, while the chief-executive role places him outside the senior research-and-product population at frontier labs. The gap between the human-collaboration founding thesis and the autonomous-agent default of his peer set is the principal positioning question for Humans&'s product trajectory.

Outlook

Open questions over the next 6 to 18 months:

  • First product launch. Whether Humans&'s first public product validates the human-collaboration positioning against existing chatbot and enterprise-collaboration surfaces from OpenAI, Anthropic, Microsoft, and Slack.
  • First in-house model release. The technical direction of the in-house frontier model program, including architecture, training corpus, and capability targets relative to peer benchmarks.
  • STaR-lineage research at Humans&. Whether the lab continues to publish reasoning-model research artifacts under Zelikman's name, and whether Quiet-STaR-style implicit-reasoning approaches feature in the in-house program.
  • Compute partnership scale. The depth of the Nvidia connection beyond the seed-round investment, and any expansion of the disclosed compute base relative to frontier-lab peers.
  • Senior recruitment trajectory. Continued hiring momentum from Anthropic, xAI, OpenAI, Meta AI, and academic research groups as the team scales from approximately 20 employees at seed announcement.
  • Public commentary. Zelikman's positioning on reasoning-model architectures, agentic AI, and the human-collaboration product premise as the year unfolds.

About the author
Nextomoro

nextomoro tracks progress for AI research labs, models, and what's next.

AI Research Lab Intelligence