Conjecture
Conjecture is an AI alignment research company headquartered in London, founded in March 2022 by Connor Leahy, Sid Black, and Gabriel Alfour. Leahy and Black were both co-founders of EleutherAI, the volunteer open-source research collective that produced GPT-Neo, GPT-J, and the Pile training corpus between 2020 and 2022; Alfour brought a background in distributed-systems engineering. Conjecture's stated mission is to reduce existential risk from advanced AI systems through alignment research and public policy engagement. The company is structurally and rhetorically distinct from commercial frontier AI labs: its founders have argued publicly that capability progress at the leading frontier labs is on a trajectory that risks catastrophic outcomes, and that alignment research is a separate discipline that should be pursued without simultaneously advancing capabilities. As of April 2026, Conjecture is one of the principal independent AI alignment research organizations globally, alongside Apollo Research, METR, Transluce, and Timaeus.
At a glance
- Founded: March 2022 in London by Connor Leahy, Sid Black, and Gabriel Alfour. Leahy and Black were co-founders of EleutherAI (2020 to 2022).
- Status: Private. Operates as a hybrid commercial-and-research entity rather than a non-profit.
- Funding: More than $10 million across seed and follow-on rounds. Investors include Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, and other technology angels and small VCs. The company has not raised at frontier-lab scale.
- CEO: Connor Leahy, Founder and Chief Executive Officer. Public face of the existential-risk-focused AI safety position. German-American; previously co-founded EleutherAI; vocal advocate for AI development pauses and international AI governance.
- Other notable leadership: Sid Black, Co-Founder. Co-author on the GPT-Neo and GPT-J open-source language model releases. Gabriel Alfour, Co-Founder. Senior research staff including Andrea Miotti and other alignment researchers.
- Open weights: N/A. Conjecture's research output is alignment methodology and policy analysis rather than foundation models.
- Flagship outputs: The Cognitive Emulations (CoEm) alignment research program; the Compendium policy framework arguing for international AI governance; published alignment research and policy papers; substantial public-engagement output through ControlAI (the affiliated AI policy advocacy group).
Origins
Conjecture was founded in March 2022 in London by Connor Leahy, Sid Black, and Gabriel Alfour, immediately after Leahy and Black's departure from active EleutherAI leadership. EleutherAI, which the two had co-founded in 2020 with Leo Gao, had by early 2022 produced the Pile training corpus, GPT-Neo (March 2021), GPT-J (June 2021), and the GPT-NeoX-20B model (announced February 2022). Leahy's public position had evolved through 2021 and 2022 from open-source replication of frontier models toward the view that capability scaling was outpacing alignment research, and that existential risk from advanced AI was a central concern the field was substantially under-resourcing.
From the outset, Conjecture's research agenda was explicitly oriented around alignment without simultaneously contributing to capability advancement. The company incorporated as a private business rather than a non-profit, the founders arguing that commercial discipline and revenue would produce a more durable research operation than a grant-dependent alternative. Seed-stage capital came from technology angels including Nat Friedman, Daniel Gross, the Collison brothers, and Arthur Breitman.
Through 2022 and 2023, Conjecture published the early framework papers on Cognitive Emulations (CoEm), an alignment research direction proposing to build AI systems whose internal reasoning steps emulate identifiable, auditable human cognitive processes rather than the opaque internal representations that current large language models develop. The CoEm thesis is that alignment becomes substantially more tractable if a system's reasoning is constrained to a vocabulary of human-comprehensible operations, even at some cost to capability.
The 2023 to 2024 period saw Conjecture pivot more visibly toward AI policy advocacy. Leahy became one of the more prominent public voices arguing for international AI governance, including testimony before the UK House of Lords AI Select Committee and engagement with the November 2023 Bletchley AI Safety Summit. The affiliated ControlAI advocacy group, established in 2023, has run public campaigns calling for an international treaty on advanced AI development. Conjecture published The Compendium in October 2024, a longform document framing the company's position on AI risk and governance.
The 2024 to 2026 period has continued the dual research-and-advocacy posture. The senior research team has remained small (industry coverage has reported headcount in the range of 20 to 40), with the alignment research output anchored in the CoEm direction.
Mission and strategy
Conjecture's stated mission is to reduce existential risk from advanced AI through alignment research and public policy engagement. The strategy combines three threads. First, the Cognitive Emulations research program develops alignment methodology that constrains an AI system's internal reasoning to auditable, human-comprehensible operations. Second, public policy advocacy runs through the affiliated ControlAI group, with explicit campaign focus on international AI governance and capability-development moratoria. Third, applied AI safety consulting and research engagements with selected commercial customers provide the revenue base for the broader research operation.
The competitive premise is that commercial frontier labs face structural conflicts of interest between commercial pressure and safety research, and that an independent research-and-advocacy organization can pursue alignment work that frontier labs would not pursue at the necessary scale or with the necessary public-policy posture.
Models and products
- Cognitive Emulations (CoEm) alignment research program. Methodology research on alignment-tractable AI architectures.
- Published alignment papers and policy analysis. Active publication output through ArXiv, the Alignment Forum, and selected academic venues.
- The Compendium (October 2024). Long-form policy framework document arguing for international AI governance.
- ControlAI advocacy group. Affiliated public-policy and campaigning organization.
- Selective AI safety consulting engagements. With governments, frontier labs, and other organizations.
Distribution channels are predominantly research publications, public policy submissions, and consulting engagements rather than commercial product offerings.
Benchmarks and standing
Conjecture does not produce foundation models and is therefore not evaluated against standard AI capability benchmarks. The organization's standing is instead measured through alignment-research publication output, citation impact at AI safety venues, the scale of its public-policy engagement, and the visibility of Connor Leahy and other senior staff in international AI governance discussions.
Industry coverage has consistently grouped Conjecture with Apollo Research, METR, Transluce, Timaeus, and Goodfire as the principal independent AI alignment research organizations of the post-2022 era, alongside earlier-cohort organizations including the Center for AI Safety, MIRI (Machine Intelligence Research Institute), and the Berkeley Center for Human-Compatible AI (CHAI). Conjecture's distinguishing positioning within that group is the explicit existential-risk framing and the policy-advocacy posture.
Leadership
As of April 2026, Conjecture's senior leadership includes:
- Connor Leahy, Founder and Chief Executive Officer. EleutherAI co-founder. Public face of existential-risk-focused AI safety advocacy.
- Sid Black, Co-Founder. EleutherAI co-founder; co-author on GPT-Neo and GPT-J.
- Gabriel Alfour, Co-Founder.
- Andrea Miotti, Director of ControlAI affiliated advocacy group.
- Senior research staff across the Cognitive Emulations program and adjacent alignment research areas.
Funding and backers
Conjecture has raised more than $10 million in cumulative private capital across seed and follow-on rounds. Investors include Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, and other technology angels and small VCs. The company has explicitly chosen not to raise frontier-lab-scale capital, which the founders have publicly framed as consistent with a research-rather-than-capability-development posture.
Industry position
Conjecture occupies a distinctive position among AI alignment research organizations as a hybrid for-profit-research-and-advocacy entity with an explicit existential-risk framing and an unusually visible public profile through Connor Leahy's policy engagement. Industry coverage has consistently characterized Conjecture as one of the principal independent voices in the AI safety discourse, with views that have sometimes diverged from the broader frontier-lab safety positioning (Anthropic in particular).
Two structural risks stand out. First, the commercial revenue base is small relative to what frontier-lab alignment teams operate with, which limits research scale. Second, the explicit advocacy posture has produced public disagreement with frontier-lab leadership, which can limit the consulting engagements that provide the company's principal revenue.
Competitive landscape
- Anthropic. Commercial frontier lab with explicit safety positioning. Different scale (multi-billion-dollar capital base, large staff). Conjecture's founders have publicly argued that frontier labs face conflicts of interest that independent organizations do not.
- Apollo Research, METR, Transluce, Timaeus, Goodfire. Independent AI alignment research peer organizations of the post-2022 cohort. Different specific research focuses but overlapping organizational positioning.
- UK AI Safety Institute, US AI Safety Institute. Government AI safety research bodies. Different governance structure (state-affiliated rather than independent).
- EleutherAI. Founder Leahy's prior organization. Open-research focus rather than alignment focus.
- MIRI (Machine Intelligence Research Institute), CHAI (Berkeley Center for Human-Compatible AI), Center for AI Safety. Earlier-cohort AI safety research organizations.
- Schmidt Sciences, Open Philanthropy. Philanthropic funders supporting independent alignment research.
Outlook
- Continued Cognitive Emulations alignment research output.
- ControlAI campaign activity ahead of international AI governance discussions in 2026 and 2027.
- Public-policy engagement at AI safety summits and similar international venues.
- The competitive dynamic with frontier-lab safety teams as alignment research matures.
- Connor Leahy's continued public advocacy posture and the resulting visibility for the company's positioning.
Sources
- Conjecture official site. Company reference.
- Connor Leahy LinkedIn. Founder reference.
- The Compendium. October 2024 policy framework document.
- ControlAI. Affiliated AI policy advocacy group.
- Cognitive Emulations explainer (LessWrong). Research direction reference.