Johannes Hagemann
Johannes Hagemann is a German artificial intelligence engineer and researcher. He is the co-founder and chief technology officer of Prime Intellect, a San Francisco artificial intelligence company founded in 2023 that builds decentralized training infrastructure and the open-weights INTELLECT foundation-model line; Vincent Weisser is chief executive officer. He was previously a research engineer at Aleph Alpha (2022 to 2023), where he led the development of the Heidelberg lab's distributed training framework, and he holds a master's degree in IT-Systems Engineering from the Hasso Plattner Institute.
At a glance
- Education: Bachelor of Science in Informatik from TU Dortmund University (2016 to 2019); Master of Science in IT-Systems Engineering from the Hasso Plattner Institute at the University of Potsdam (2020 to 2023).
- Current role: Co-founder and chief technology officer of Prime Intellect since 2023.
- Key contributions: Architect of the Aleph Alpha distributed training framework (2022 to 2023); first author of Efficient Parallelization Layouts for Large-Scale Distributed Model Training (WANT@NeurIPS 2023, oral); co-author on OpenDiLoCo (2024); co-author on the INTELLECT-1 (December 2024), INTELLECT-2 (May 2025), and INTELLECT-3 (November 2025) technical reports.
- X / Twitter: @johannes_hage
- LinkedIn: johannes-hagemann-393b72142
- Personal site: hagemann.ai
- GitHub: JohannesHa
Origins
Hagemann is German and pursued his undergraduate studies at TU Dortmund University in North Rhine-Westphalia from 2016 to 2019, completing a Bachelor of Science in Informatik (computer science). He has stated on his personal site that reading the Scaling Hypothesis in 2020 shifted his focus entirely toward large language models.
In 2020 he enrolled at the Hasso Plattner Institute at the University of Potsdam, the German technical institute focused on IT-systems engineering and digital health, completing the Master of Science in IT-Systems Engineering in 2023. His Google Scholar profile lists his verified institutional email at student.hpi.de during this period, and his OpenReview profile lists code-generation research from 2021 to 2022 as an early research interest. The 2022 paper Less Is More: A Comparison of Active Learning Strategies for 3D Medical Image Segmentation, with Hasso Plattner Institute co-authors, dates to this period and reflects an early intersection between deep learning and applied medical imaging at the institute.
Career
Hagemann's early professional roles spanned co-founding the small German design-and-development consultancy dnd development and design UG, co-founding the project Space Browser, and a software-engineering position at Bunch, the Berlin gaming-and-social platform. His pre-AI career also includes a strategy-advisor role at VitaDAO, the decentralized longevity-research-funding DAO co-founded by Vincent Weisser in 2021. The VitaDAO connection is the documented origin of the working relationship between the two future Prime Intellect co-founders.
In early 2022 Hagemann joined Aleph Alpha, the German foundation-model company founded by Jonas Andrulis in Heidelberg in 2019. His personal site describes the role as helping build a novel distributed training framework from scratch with state-of-the-art efficiency, with a research focus on large-scale parallelization and distributed-systems engineering for the Luminous foundation-model line. The November 2023 paper Efficient Parallelization Layouts for Large-Scale Distributed Model Training, with Hagemann as first author and Aleph Alpha co-founder Samuel Weinbach among the co-authors, distilled the parallelization research into a published recommendation set. The paper was selected for an oral presentation at the WANT@NeurIPS 2023 workshop on advancing neural-network training. The Aleph Alpha tenure ran through 2023.
In late 2023 Hagemann co-founded Prime Intellect with Vincent Weisser, with Weisser as chief executive officer and Hagemann as chief technology officer. The founding thesis was that geographically dispersed compute, coordinated by efficient low-bandwidth training algorithms and crypto-economic incentives, could break the centralized-data-center concentration of frontier-AI training and enable an open-source path to superintelligence. The company is incorporated as a Delaware C corporation and headquartered in San Francisco.
Hagemann's technical-leadership role at Prime Intellect has produced a sustained research output. The July 2024 release of OpenDiLoCo, an open-source replication and scaling of Google DeepMind's Distributed Low-Communication (DiLoCo) training research, established the algorithmic foundation for the subsequent INTELLECT line. The November 2024 INTELLECT-1 release was the first 10B-parameter language model trained across geographically distributed compute; Hagemann is a co-author on the December 2024 technical report. The May 2025 INTELLECT-2 release was the first 32B-parameter model trained via globally distributed reinforcement learning, again with Hagemann as a co-author on the accompanying technical report. The November 2025 INTELLECT-3 release extended the line to a 106B-parameter Mixture-of-Experts reasoning model with 12B active parameters.
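The low-bandwidth coordination pattern behind this line of work can be illustrated with a toy, single-process sketch of a DiLoCo-style outer optimization step. This is not Prime Intellect's implementation; all names and numbers are illustrative. The point is the communication schedule: each worker trains locally for many steps, and only the resulting parameter deltas ("pseudo-gradients") are exchanged and averaged, so synchronization happens once per local phase rather than every step.

```python
# Toy single-scalar sketch of a DiLoCo-style outer step (illustrative only).
# Each worker runs several local SGD steps with no communication, then the
# parameter deltas are averaged and applied by an outer optimizer.

def local_inner_steps(theta, grads, lr=0.1):
    """Run the local (inner) SGD steps on one worker; no communication here."""
    for g in grads:
        theta -= lr * g
    return theta

def outer_step(theta_global, worker_grads, outer_lr=0.7):
    """Average per-worker pseudo-gradients and apply one outer update."""
    pseudo_grads = []
    for grads in worker_grads:
        theta_local = local_inner_steps(theta_global, grads)
        pseudo_grads.append(theta_global - theta_local)  # delta, not raw gradients
    avg = sum(pseudo_grads) / len(pseudo_grads)          # the only all-reduce
    return theta_global - outer_lr * avg                 # outer SGD on the delta

# Two workers with different local gradient streams:
theta = outer_step(1.0, [[0.2, 0.2], [0.4, 0.0]])
```

The sketch shows why the scheme tolerates slow links: the all-reduce in `outer_step` runs once per local phase, amortizing communication over all the inner steps.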
Affiliations
- TU Dortmund University: Bachelor of Science in Informatik, 2016 to 2019.
- Hasso Plattner Institute (University of Potsdam): Master of Science in IT-Systems Engineering, 2020 to 2023.
- Bunch: Software engineer, Berlin, dates per public profile data.
- VitaDAO: Strategy advisor, dates per public profile data.
- Aleph Alpha: AI research engineer, distributed training framework, early 2022 to 2023.
- Prime Intellect: Co-founder and chief technology officer, late 2023 to present.
Notable contributions
Hagemann's public output is concentrated on distributed-and-decentralized training systems, large-scale-parallelization research, and the foundational papers and product releases of the Prime Intellect INTELLECT line.
- Prime Intellect (2023). Co-founder and chief technology officer of the decentralized-AI-training infrastructure company. Public artifacts include the INTELLECT-1 release (November 2024, the first 10B-parameter language model trained across geographically distributed compute on the Llama-3 architecture), the INTELLECT-2 release (May 2025, the first 32B-parameter model trained via globally distributed reinforcement learning on the QwQ-32B base), and the INTELLECT-3 release (November 2025, a 106B-parameter Mixture-of-Experts reasoning model with 12B active parameters).
- OpenDiLoCo (July 2024). Open-source replication and scaling of Google DeepMind's Distributed Low-Communication training research; co-author on the accompanying paper presented at the ES-FoMo-II workshop. The work is the algorithmic foundation for the INTELLECT line.
- INTELLECT-1 Technical Report (December 2024). Multi-author paper describing the November 2024 INTELLECT-1 distributed-training run.
- INTELLECT-2 Technical Report (May 2025). Multi-author paper on the globally decentralized reinforcement-learning training of the INTELLECT-2 reasoning model.
- TOPLOC (ICML 2025). Co-author on the locality-sensitive hashing scheme for trustless verifiable inference.
- METAGENE-1 (NeurIPS 2024 Workshop). Co-author on the metagenomic foundation model for pandemic monitoring, a Prime Intellect collaboration with the University of Southern California and Nucleic Acid Observatory.
- PRIME-RL (2025). Co-author on the asynchronous and decentralized reinforcement-learning training framework underlying the prime-rl open-source repository.
- Efficient Parallelization Layouts for Large-Scale Distributed Model Training (WANT@NeurIPS 2023, oral; published in COLM). First-author paper at Aleph Alpha distilling the parallelization research for large-language-model training and recommending micro-batch-size-1 layouts.
- Public-talk record. The Cognitive Revolution podcast (February 2025) with Vincent Weisser on the Prime Intellect distributed-training thesis; Sequoia Capital's Training Data podcast (2025) with Will Brown on the RL Environments Hub; Ray Summit 2025 talk; Prime Intellect Day 2025 talk.
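The micro-batch-size-1 recommendation in the parallelization-layouts paper concerns how a fixed global batch is split across a layout. A toy sketch of that arithmetic (the numbers and function name are made up, not from the paper): since the training recipe fixes the global batch size, the micro-batch size only trades gradient-accumulation steps against activation memory per forward pass.

```python
# Toy layout arithmetic (illustrative; numbers are not from the paper).
# global batch = data-parallel replicas x micro-batch x accumulation steps.

def layout(global_batch, data_parallel, micro_batch):
    """Return the gradient-accumulation steps a given layout implies."""
    per_replica = global_batch // data_parallel
    assert per_replica % micro_batch == 0, "micro-batch must divide per-replica batch"
    return per_replica // micro_batch  # accumulation steps per optimizer step

# Micro-batch 1 maximizes accumulation steps and minimizes activation memory
# per forward pass, freeing memory for other parts of the layout.
steps_mb1 = layout(global_batch=1024, data_parallel=64, micro_batch=1)  # 16
steps_mb4 = layout(global_batch=1024, data_parallel=64, micro_batch=4)  # 4
```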
Investments and boards
Entries below are limited to AI, semiconductors, datacenters, software, and energy.
- Prime Intellect (AI): Co-founder and chief technology officer, 2023 to present. Cumulative private capital approximately $70 million across three publicly disclosed rounds through December 2025.
No public personal angel-investor activity on record outside the Prime Intellect role in AI, semiconductors, datacenters, software, or energy as of May 2026.
Network
Hagemann's longest-running professional partnership is with Vincent Weisser, his Prime Intellect co-founder and chief executive officer, with the working relationship documented through Hagemann's earlier strategy-advisor role at VitaDAO, the decentralized longevity-research-funding DAO that Weisser co-founded in 2021. The Prime Intellect senior team has expanded through the 2024-to-2026 period, with notable collaborators including Sami Jaghouar on the OpenDiLoCo and INTELLECT engineering work and Will Brown on the Environments Hub product line.
The Aleph Alpha period (2022 to 2023) produced sustained research collaborations with Samuel Weinbach, the Aleph Alpha co-founder and chief operating officer who was a co-author on the Efficient Parallelization Layouts paper, and with Hasso Plattner Institute co-authors Konstantin Dobler, Maximilian Schall, and Gerard de Melo. The Hasso Plattner Institute and TU Dortmund networks anchor the German AI research community ties, with the institute's distributed-systems and IT-systems-engineering programs as the principal academic-research origin.
The Prime Intellect investor base provides the broader network anchor. Founders Fund, Menlo Ventures, Distributed Global, and CoinFund are the institutional investors. Individual investors include Andrej Karpathy, Clem Delangue of Hugging Face, Tri Dao of Together AI, Dylan Patel of SemiAnalysis, and Emad Mostaque, formerly of Stability AI. Among decentralized-AI peer founders, the Prime Intellect chief-technology-officer position runs in parallel with Karan Malhotra at Nous Research, and the open-weights distribution thesis overlaps with Aidan Gomez at Cohere and the broader LAION and EleutherAI open-source AI research communities.
Position in the field
As of May 2026, Hagemann occupies a structurally distinctive position among insurgent-lab chief technology officers through the combination of the German Hasso Plattner Institute systems-engineering background, the operating credential built at Aleph Alpha on a sovereign-AI foundation-model program, and the decentralized-training architectural bet at Prime Intellect. The career arc through TU Dortmund, the Hasso Plattner Institute, Aleph Alpha, and Prime Intellect maps onto a technical-founder profile uncommon among the broader insurgent-AI cohort, where chief-technology-officer paths typically run through US frontier labs, prior big-tech research positions, or quantitative-trading firms.
Industry coverage has consistently characterized the Prime Intellect technical program under Hagemann's leadership as the principal commercial demonstration of the decentralized-AI-training thesis at large-foundation-model scale, alongside Nous Research on the post-training and open-weights side. The INTELLECT-1, INTELLECT-2, and INTELLECT-3 releases are the most-public decentralized-training program demonstrations to date. Hagemann's first-author position on the Efficient Parallelization Layouts paper provides an additional technical-research credential that anchors the Prime Intellect engineering output in the broader academic large-language-model-training literature.
The Aleph Alpha distributed-training-framework operating credential gives Hagemann an unusual chief-technology-officer profile in the insurgent-AI cohort: practical foundation-model training-stack experience built at a European sovereign-AI lab, combined with the Hasso Plattner Institute systems-engineering background.
Outlook
Open questions over the next 6 to 18 months:
- INTELLECT-4 and successor model release. Capability profile of the next-generation INTELLECT model relative to peer commercial open-weights offerings, and the technical architecture of the next-generation distributed-training run.
- PRIME-RL and the Environments Hub. Adoption trajectory of the prime-rl framework and the Environments Hub product line relative to peer post-training and reinforcement-learning platforms.
- Decentralized-training scaling. Whether the INTELLECT line scales beyond the 100B-parameter Mixture-of-Experts class through fully distributed training, and whether geographically distributed runs replace the centralized-cluster pattern of INTELLECT-3.
- TOPLOC adoption. Whether the trustless-verifiable-inference research line in the TOPLOC paper becomes a published deployment in the Prime Intellect commercial platform.
- Public-talk and research-paper cadence. Whether Hagemann continues the technical-publication schedule on distributed-training systems and whether Prime Intellect publishes additional technical papers on the INTELLECT architecture.
- Senior engineering recruitment and retention. Whether the Prime Intellect engineering team continues to expand through 2026 and 2027 as commercial scale-up intensifies.
Sources
- Johannes Hagemann personal site. Self-published biographical, project, and research-interests information.
- Johannes Hagemann LinkedIn profile. Public profile listing the Prime Intellect, Aleph Alpha, Bunch, and educational roles.
- Johannes Hagemann on X (@johannes_hage). Public X account.
- Johannes Hagemann on GitHub (JohannesHa). Public GitHub profile listing Prime Intellect open-source repositories including prime-rl and prime-diloco.
- Johannes Hagemann Google Scholar profile. Publication record with verified Hasso Plattner Institute student email.
- Johannes Hagemann OpenReview profile. Academic profile listing institutional affiliations and dates including Hasso Plattner Institute (2020 to 2023), TU Dortmund (2016 to 2019), and Aleph Alpha (2022 to 2023).
- Hasso Plattner Institute. Hagemann's master's-degree institution.
- Efficient Parallelization Layouts for Large-Scale Distributed Model Training. November 2023 first-author paper at Aleph Alpha presented at WANT@NeurIPS 2023.
- OpenDiLoCo paper. July 2024 ES-FoMo-II workshop paper on the open-source globally distributed low-communication training framework.
- INTELLECT-1 Technical Report. December 2024 arXiv paper.
- INTELLECT-2 Technical Report. May 2025 arXiv paper.
- INTELLECT-3: A 100B+ MoE trained with large-scale RL. November 26, 2025 release announcement.
- TOPLOC: A Locality Sensitive Hashing Scheme for Trustless Verifiable Inference. ICML 2025 paper on the trustless-verifiable-inference research line.
- METAGENE-1: A Metagenomic Foundation Model for Pandemic Monitoring. NeurIPS 2024 Workshop paper on the metagenomic foundation model.
- OpenDiLoCo on GitHub. Open-source replication and scaling of distributed-low-communication training.
- prime-rl on GitHub. Open-source asynchronous and decentralized reinforcement-learning training framework.
- $15M to Build The Open Superintelligence Stack. Prime Intellect blog post on the February 2025 $15 million Founders Fund-led seed extension.
- Distributed Training, Decentralized AI: Prime Intellect's Master Plan to Make AI Too Cheap to Meter. The Cognitive Revolution podcast (February 2025) with Vincent Weisser and Johannes Hagemann.
- Building the GitHub for RL Environments: Prime Intellect's Will Brown & Johannes Hagemann. Sequoia Capital's Training Data podcast episode 78.