Nous Research
Nous Research is an open-source artificial intelligence research startup founded in 2023 by Karan Malhotra, Jeffrey Quesnelle, and a small founding team. The company is headquartered in the United States and develops open-weights fine-tuned language models in the Hermes family, the Forge agentic-AI development platform, and the DisTrO decentralized training network built on the Solana blockchain. Nous has been characterized in industry coverage as one of the principal commercially-funded organizations in the open-source AI movement, and in April 2025 closed a $50 million Series A led by Paradigm at a reported $1 billion valuation, an unusually large round for an open-source AI organization.
At a glance
- Founded: 2023 in the United States by Karan Malhotra, Jeffrey Quesnelle, and a small founding team that emerged from open-source AI research collaborations on Discord.
- Status: Private. Series A closed April 2025.
- Funding: Approximately $65 million cumulative, including a $50 million Series A in April 2025 led by Paradigm at a reported $1 billion token valuation, plus roughly $15 million in earlier seed and angel rounds.
- CEO: Karan Malhotra, co-founder. Senior figure in the open-source AI fine-tuning community.
- Other notable leadership: Jeffrey Quesnelle (co-founder; lead author of the YaRN long-context paper). Senior research and engineering leadership across the Hermes, Forge, and DisTrO product lines.
- Open weights: Yes. Hermes models are released open-weights through Hugging Face. Training code, fine-tuning recipes, and other research artifacts are released under permissive licenses.
- Flagship outputs: Hermes 4 (latest fine-tuned open-weights line, including 70-billion-parameter variants), Forge (agentic-AI development platform), DisTrO (decentralized training network), YaRN (long-context fine-tuning method).
Origins
Nous Research emerged in 2023 from open-source AI research collaborations on Discord, with founders Karan Malhotra, Jeffrey Quesnelle, and other collaborators producing fine-tuned variants of open-weights base models (initially Meta AI / FAIR's Llama 2). The Nous-Capybara 7B fine-tune in August 2023 and the Hermes 7B fine-tune shortly after established the Nous Research brand as a leading source of high-quality open-weights instruction-tuned models built on third-party base models.
The 2023 to 2024 release sequence built out the Hermes family across multiple base models and parameter scales. The Hermes line produced fine-tuned variants of Llama 2 (7B, 13B, 70B), Llama 3 (8B, 70B), and other base models, with each variant focused on improving instruction-following, reasoning, and other capabilities through Nous-curated post-training. The cumulative download count for Hermes models on Hugging Face exceeded 55 million by 2025, making Nous one of the most-downloaded fine-tuning organizations in the open-source AI ecosystem.
The YaRN (Yet another RoPE extensioN) research paper in 2023, lead-authored by Jeffrey Quesnelle, was an academic contribution that introduced an efficient method for extending the context window of pre-trained language models. YaRN became widely adopted in the open-source community and was a structurally important contribution to long-context AI research.
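The family of techniques YaRN belongs to can be illustrated with plain RoPE position interpolation: compressing position indices so a longer sequence reuses the rotation-angle range the model was trained on. The sketch below is a deliberate simplification of that shared idea, not the published YaRN method (YaRN interpolates different frequency bands differently); the function names and the 4096-to-16384 figures are illustrative assumptions.

```python
import numpy as np

def rope_angles(positions, dim=64, base=10000.0, scale=1.0):
    # Rotary embedding angles: theta_i = base^(-2i/dim), angle = pos * theta_i.
    # scale < 1 compresses positions (linear "position interpolation"),
    # mapping a longer sequence back into the trained angle range.
    # NOTE: this is a simplified sketch; YaRN itself applies a more careful
    # per-frequency interpolation rather than one uniform scale.
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(np.asarray(positions) * scale, inv_freq)

trained_len, extended_len = 4096, 16384  # illustrative window sizes

# Angles seen during training, and angles for the extended window after
# interpolating positions by trained_len / extended_len.
angles_orig = rope_angles(np.arange(trained_len))
angles_ext = rope_angles(np.arange(extended_len),
                         scale=trained_len / extended_len)

# The extended window's maximum rotation angle stays inside the trained range.
assert np.isclose(angles_ext.max(), angles_orig.max(), rtol=1e-3)
```

The design point this illustrates is why such methods are cheap: no architecture change is needed, only a re-mapping of position indices followed by a short fine-tune.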
The 2024 release of Genstruct (a synthetic-data-generation model) and Forge (an agentic-AI development platform) extended Nous Research beyond pure fine-tuning into adjacent open-source AI infrastructure. The DeMo (Decoupled Momentum Optimization) research paper in 2024, co-authored with Diederik P. Kingma (co-creator of the Adam optimizer), contributed methodology for distributed training across heterogeneous compute infrastructure.
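The general idea behind communication-efficient distributed optimizers in DeMo's family can be sketched as follows: each worker accumulates optimizer momentum locally and transmits only a small, compressed slice of it, keeping the residual on the worker. This is a toy illustration under those assumptions, not the published DeMo algorithm (which extracts "fast-moving" components with a DCT rather than the top-k stand-in used here); all function names are hypothetical.

```python
import numpy as np

def topk_mask(x, k):
    # Keep only the k largest-magnitude entries; a crude stand-in for
    # DeMo's frequency-domain extraction of fast-moving momentum components.
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def demo_like_step(params, grads, momenta, lr=0.1, beta=0.9, k=2):
    # Each worker accumulates momentum locally; only a sparse slice is
    # shared, so per-step communication is k values instead of len(params).
    shared = []
    for i, g in enumerate(grads):
        momenta[i] = beta * momenta[i] + g    # local momentum accumulation
        fast = topk_mask(momenta[i], k)       # slice chosen for transmission
        momenta[i] = momenta[i] - fast        # residual stays on the worker
        shared.append(fast)
    update = np.mean(shared, axis=0)          # stand-in for an all-reduce
    return params - lr * update, momenta

rng = np.random.default_rng(0)
dim, workers = 8, 4
params = np.ones(dim)
momenta = [np.zeros(dim) for _ in range(workers)]
grads = [rng.normal(size=dim) for _ in range(workers)]
params, momenta = demo_like_step(params, grads, momenta)
```

The relevance to heterogeneous compute is that shrinking per-step communication makes slow consumer-grade network links far less of a bottleneck than in conventional synchronous data-parallel training.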
The DisTrO decentralized training network, announced in 2024 and 2025, is the most distinctive Nous Research contribution. Built on the Solana blockchain, DisTrO coordinates heterogeneous compute (consumer-grade RTX 4090, datacenter A100, and H100 GPUs) into a distributed training network that allows AI models to be trained without centralized infrastructure. The crypto-and-AI integration is unusual among open-source AI organizations and reflects the founder team's research interest in censorship-resistant and infrastructure-decentralized AI capability.
The April 2025 $50 million Series A round led by Paradigm at a reported $1 billion token valuation was significant for two reasons: it was an unusually large round for an open-source AI organization, and the use of a token-based valuation framework reflects the broader Nous Research crypto-and-AI strategic positioning. The funding supports continued development of the Hermes line, Forge agentic platform, and DisTrO decentralized training network.
The 2026 trajectory has emphasized commercial-product development through the Hermes 4 release, the Forge agentic platform, and the Hermes Agent self-learning AI runtime. Industry coverage has positioned Nous Research as the principal organization at the intersection of open-source AI, decentralized infrastructure, and crypto-aligned AI development.
Mission and strategy
Nous Research's stated mission is to advance open-source AI research and to develop AI infrastructure that is accessible to the global research community. The framing differs from peer open-source AI nonprofits (Allen Institute for AI, EleutherAI, LAION) in its commercial-investor structure and its explicit integration of decentralized-infrastructure (Solana-based) AI training and deployment.
The strategy combines four threads. First, the Hermes family of open-weights fine-tuned language models, distributed through Hugging Face for community use and as commercial-API offerings for enterprise deployment. Second, the Forge agentic-AI platform and the Hermes Agent self-learning AI runtime, providing developer infrastructure for agentic AI applications. Third, the DisTrO decentralized training network on Solana, providing infrastructure for distributed AI training that does not depend on centralized cloud providers. Fourth, research outputs (YaRN, DeMo, Genstruct) that contribute to the open-source AI methodology base.
The competitive premise is that open-source AI infrastructure benefits from commercial investment and from decentralized deployment. The combination of high-quality fine-tuning (Hermes), agentic-development platform (Forge), and decentralized training infrastructure (DisTrO) produces a strategically distinctive open-source AI offering that complements the partial-open-weights releases from commercial frontier labs and the fully-open-research outputs from nonprofit research organizations.
The crypto-and-AI integration is a structural strategic differentiator. Paradigm's $50 million Series A reflects the crypto-VC ecosystem's interest in AI applications and provides Nous Research with continued access to crypto-aligned capital and customer base.
Models and products
- Hermes 4 (70B and earlier variants). Latest open-weights instruction-tuned language model line. Multiple parameter scales building on Llama and other open-weights base models.
- Hermes 3 (405B and other variants). 2024 release including a 405-billion-parameter Llama-based fine-tune.
- Earlier Hermes generations. Back-catalog of Hermes fine-tunes across 7B, 8B, 13B, 70B, and other parameter scales since August 2023.
- Hermes Agent. Self-learning AI runtime that integrates Hermes models with agentic-AI capability.
- Forge. Agentic-AI development platform, released December 2024 and updated through 2026.
- DisTrO. Decentralized training network on Solana. Coordinates heterogeneous compute (consumer GPUs alongside datacenter GPUs) for distributed AI training.
- YaRN. Long-context fine-tuning method published in 2023.
- DeMo (Decoupled Momentum Optimization). Distributed-training methodology research published in 2024.
- Genstruct. Synthetic-data-generation model released July 2024.
- Nous Research API and Forge platform. Commercial offerings for paying enterprise and developer customers.
The principal distribution channels are Hugging Face for open-weights model releases, the Nous Research API and Forge platform for commercial customers, and GitHub for training code and other research artifacts.
Benchmarks and standing
Hermes models are consistently ranked among the leading open-source instruction-tuned models on community benchmarks (LMArena, AlpacaEval, MT-Bench, and others). Specific benchmark positions vary by base model and parameter scale, with Hermes fine-tunes generally outperforming their base-model releases on instruction-following and other post-training-sensitive tasks.
The cumulative 55-million download count for Hermes models on Hugging Face is among the highest for any open-source AI organization and indicates broad community and developer adoption. The 120-plus mobile applications and products powered by Hermes point to commercial traction beyond research-community use.
YaRN long-context methodology has been adopted across the broader open-source AI community and is cited in subsequent academic AI research on long-context language modeling. DeMo and other Nous Research methodology contributions have similar academic-community citation profiles.
The company's standing in the open-source AI ecosystem is anchored on the founding-era fine-tuning leadership, the methodology contributions (YaRN, DeMo), the Forge agentic-AI platform, and the distinctive DisTrO decentralized training network. The Paradigm-led Series A reinforces the commercial credibility of the open-source-AI commercial-investor model.
Leadership
As of April 2026, Nous Research's senior leadership includes:
- Karan Malhotra, co-founder. Senior figure in the open-source AI fine-tuning community. Public face for Nous Research on company strategy, the Hermes line, and the broader open-source AI movement.
- Jeffrey Quesnelle, co-founder. Lead author of the YaRN long-context paper. Senior research leadership for the open-weights model and methodology research.
The company has hired aggressively from the open-source AI research community, with volunteer and contractor contributions alongside the core team. Specific senior research and engineering leadership beyond the two co-founders has been less widely profiled in international media than at some peer commercial AI organizations.
Funding and backers
Nous Research's funding history through April 2026 includes approximately $65 million cumulative across multiple rounds. The Series A of $50 million in April 2025 was led by Paradigm at a reported $1 billion token valuation. Earlier seed and angel rounds totaled approximately $15 million.
Paradigm is the principal venture-capital lead and is one of the largest crypto-focused VC firms globally. The Paradigm-led structure reflects the company's distinctive positioning at the intersection of open-source AI and decentralized infrastructure. Other investors include crypto-aligned VCs and individual investors connected to the Solana ecosystem.
The token-valuation framework (a $1 billion token valuation, distinct from a conventional equity-valuation framework) reflects the company's intention to release a token associated with the DisTrO network and other crypto-and-AI integrations. As of April 2026, the specific token-issuance details have not been broadly publicly disclosed.
Industry position
Nous Research occupies a structurally distinctive position in the open-source AI ecosystem. The combination of the Hermes fine-tuning leadership, the Forge agentic-AI platform, the DisTrO decentralized training network, the Paradigm-led commercial investor base, and the crypto-and-AI strategic positioning produces a profile that no other open-source AI organization matches.
Industry coverage has frequently characterized Nous Research as one of the most commercially ambitious open-source AI organizations, with the Solana-based decentralized training network as a particularly distinctive technical-and-commercial differentiator. The company's willingness to combine open-source AI commitments with commercial-investor capital and crypto-infrastructure integration distinguishes it from peer nonprofit open-source AI research organizations.
Strategic risks include the regulatory environment for crypto-aligned AI infrastructure (US and international policy on token issuance and decentralized AI), competitive pressure from frontier labs whose increasingly capable open-weights base models narrow the headroom for fine-tuning improvements, and the operational complexity of simultaneously running fine-tuning, agentic-development, and decentralized-training product lines. Strategic strengths include the founder team's open-source AI credentials, the Hermes line's commercial-traction track record, the Paradigm strategic-investor relationship, and the distinctive crypto-and-AI integration.
Competitive landscape
Nous Research collaborates with and competes for community-and-developer adoption with several open-source AI organizations:
- Allen Institute for AI, EleutherAI, Hugging Face, LAION, BigScience, MILA. Peer open-AI-research organizations, with collaboration through Hugging Face distribution and other infrastructure.
- Meta AI / FAIR, Mistral AI, DeepSeek, Alibaba Qwen, Cohere. Commercial open-weights-base-model providers. Nous Research fine-tunes their releases; the base-model providers are upstream collaborators rather than direct competitors.
- Anthropic, OpenAI, Google DeepMind. Closed-weights frontier labs. Less direct competitive overlap given Nous's open-source positioning.
- Together AI, Replicate, Modal. AI infrastructure platforms competing for AI-developer attention. DisTrO is structurally distinct in its decentralized-infrastructure approach.
- Bittensor, Akash, and other decentralized-AI-infrastructure projects. Direct peers in the decentralized AI infrastructure category.
- Crypto-AI projects including various Solana-ecosystem AI projects. Both partners and competitors in the broader crypto-AI ecosystem.
Outlook
Several open questions affect Nous Research's trajectory in 2026 and 2027:
- The Hermes 5 release timing and capability profile.
- The development of the Forge agentic-AI platform and the Hermes Agent self-learning runtime; commercial traction is the central commercial question.
- The DisTrO decentralized training network progression, including any major model trained on the network and the scaling characteristics across heterogeneous compute.
- The specific token-issuance details associated with the $1 billion token valuation, and the regulatory environment for crypto-aligned AI infrastructure.
- The competitive dynamic with frontier-lab open-weights releases, particularly as base models become increasingly capable and reduce fine-tuning headroom.
- Continued senior research-and-engineering talent recruitment, particularly given the company's distinctive crypto-and-AI positioning.
- US and international regulatory developments affecting open-source-AI distribution and decentralized AI infrastructure.
Sources
- Nous Research official site. Company overview and product information.
- The Block: Paradigm leads $50 million Series A round for decentralized AI project Nous Research. April 2025 Series A coverage.
- Yahoo Finance: Crypto VC giant Paradigm makes $50 million bet on decentralized AI startup Nous Research. Series A and token valuation context.
- Hermes Agent. Agentic-AI runtime documentation.
- arXiv: YaRN: Efficient Context Window Extension of Large Language Models. YaRN methodology paper.
- GitHub: NousResearch/DisTrO. Decentralized training network.
- Nous Research on Hugging Face. Open-weights model distribution.