Rnj-1 (Ramanujan)
Rnj-1, pronounced "range-1" and named in homage to the Indian mathematician Srinivasa Ramanujan, is an 8-billion-parameter dense open-weights language model family developed by Essential AI, the foundation-model startup co-founded in 2023 by transformer paper co-authors Ashish Vaswani and Niki Parmar. The model is released as a base and instruction-tuned pair, both trained from scratch by the Essential AI team and optimized for code generation and STEM reasoning. Rnj-1 was released open-weights under the Apache 2.0 license through the EssentialAI organization on Hugging Face, and is the company's first contribution to the open-source canon.
At a glance
- Lab: Essential AI
- Released: December 2025. Base and Instruct variants released together.
- Modality: Text. 8-billion-parameter dense base model and instruction-tuned variant.
- Open weights: Yes. Released under Apache 2.0 with unrestricted commercial use through the EssentialAI organization on Hugging Face.
- Context window: 32,000 tokens. Pretrained natively at a shorter context length, with YaRN applied to extend the window to 32k.
- Pricing: Free for self-hosted deployment under Apache 2.0. Hosted access through OpenRouter and other community inference providers.
- Distribution channels: EssentialAI/rnj-1 on Hugging Face, Ollama, LM Studio, OpenRouter, and adjacent inference platforms.
Origins
Essential AI was founded in 2023 in San Francisco by Ashish Vaswani and Niki Parmar after the two left Adept AI, the enterprise-agent startup they had co-founded in 2021 with former Google director David Luan and others. Both founders carry an unusually direct claim on the modern foundation-model wave: they were credited authors on the 2017 "Attention Is All You Need" paper that defined the transformer architecture underlying essentially every contemporary frontier language model. Industry coverage characterized the departure from Adept and the founding of Essential AI as the senior-research half of the Adept founding team striking out to pursue a research-led foundation-model thesis.
Essential AI emerged from stealth in December 2023 with a $56.5 million Series A led by March Capital and including Google, Nvidia, AMD, KB Investment, Franklin Venture Partners, and Thrive Capital. Earlier seed capital of $8.3 million from Thrive had funded the 2023 stealth period. A subsequent Series B of $175 million at a $1 billion post-money valuation, led by Lightspeed Venture Partners with Thrive Capital participating, brought the company to unicorn status with cumulative private capital exceeding $240 million as of early 2026.
The 2024 to 2025 period was comparatively quiet on the public-product side, with the company focused on foundation-model research and full-stack enterprise-automation prototypes. Rnj-1 was released in December 2025 as Essential AI's first open-weights contribution, framed in the company's announcement as "Building Instruments of Intelligence" and characterized as a contribution to the open-source canon rather than a productized release.
The Ramanujan name pays tribute to Srinivasa Ramanujan, the early-20th-century Indian mathematician whose unaided derivations of advanced mathematical results from sparse formal training have become a recurring metaphor in machine learning research for capability emergent from limited explicit supervision. The model name is "Rnj-1," with "Ramanujan" as the unabbreviated form, and the spoken pronunciation "range-1" mirrors a tradition of Sanskrit-derived ML codenames in the academic research community.
The release was paired with the Series B unicorn round and increased the company's public-attention surface in early 2026. The Rnj-1 line has been positioned as a recurring research surface, with subsequent releases under the Rnj name expected as the company's open-source research output continues.
Capabilities
Rnj-1 is built specifically for code and STEM workloads. The model family includes a base model, intended for fine-tuning and as a research substrate, and an instruction-tuned variant for direct dialogue and tool use. Three capability features distinguish Rnj-1 from peer 8-billion-parameter open-weights models.
The first is the architecture inheritance. Rnj-1 is an 8-billion-parameter dense model that roughly follows the open-source Gemma 3 architecture, employing global self-attention with YaRN applied to extend the context to 32,000 tokens. The architectural choice is a deliberate alignment with the dominant dense small-model design pattern, which positions Rnj-1 in direct comparison with Gemma 3, Llama-3.1-8B, Qwen-3-8B, and adjacent peers rather than in a novel-architecture niche.
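YaRN-style context extension is typically declared in a model's Hugging Face `config.json` as a `rope_scaling` entry. The sketch below is a hypothetical illustration of that pattern, not Rnj-1's actual configuration: the native pretraining length is not disclosed in this writeup, so the 8,192-token value and the exact keys are assumptions.

```python
# Hypothetical sketch of a YaRN rope-scaling config entry in the style used
# by Hugging Face model configs. All specific values are assumptions for
# illustration; Rnj-1's released config.json may differ.

ORIGINAL_CONTEXT = 8192   # assumed native pretraining context length
TARGET_CONTEXT = 32_000   # extended context window reported for Rnj-1

rope_scaling = {
    "rope_type": "yarn",  # YaRN position-interpolation variant
    # Interpolation factor: ratio of target to native context length.
    "factor": TARGET_CONTEXT / ORIGINAL_CONTEXT,
    "original_max_position_embeddings": ORIGINAL_CONTEXT,
}

print(rope_scaling["factor"])
```

Inference runtimes that support YaRN read an entry like this to rescale rotary position embeddings, letting the model attend over sequences longer than its native pretraining length.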
The second is the code-and-STEM training emphasis. The Rnj-1 training corpus and post-training methodology emphasize algorithmic code generation, software engineering, and STEM reasoning. The instruction-tuned variant has been optimized for code-completion and code-editing tasks rather than for general dialogue, with a corresponding benchmark profile that leads on coding evaluations and approaches frontier-tier performance on advanced mathematics evaluations.
The third is the from-scratch training provenance. Rnj-1 is the first publicly released model trained from scratch by the Essential AI team using its own training stack, rather than being fine-tuned from a third-party open-weights base. The from-scratch provenance is the structural argument for the model line's research credibility: capability claims trace directly to Essential AI's training methodology rather than to a third party's underlying base.
Benchmarks and standing
Rnj-1's principal disclosed benchmarks emphasize code generation, software engineering, and mathematical reasoning.
On algorithmic code generation benchmarks including HumanEval+, MBPP+, and BigCodeBench, both Rnj-1 Base and Rnj-1 Instruct compete with the strongest open-weights models in the 8-billion-parameter class, in some cases outperforming larger models including the GPT-OSS 20B variant under shared evaluation conditions.
On SWE-bench Verified, the standard real-repository software-engineering benchmark, Rnj-1 reports 20.8 percent in bash-only agentic mode. The bash-only score places Rnj-1 ahead of Gemini 2.0 Flash and Qwen-2.5-Coder-32B-Instruct under the same agentic framework. Essential AI characterizes the result as "an order of magnitude stronger" than comparably sized peer models on SWE-bench, with Rnj-1's small-model SWE-bench performance approaching the capabilities of substantially larger models.
On AIME 2025, the advanced mathematics competition benchmark, Rnj-1 Instruct reports mathematical-reasoning performance characterized in the company's announcement as on par with the strongest open-weights models. The reported score is in line with the 8-billion-parameter open-weights frontier on advanced math.
On code infilling evaluations including HumanEval-FIM Python (average), the Rnj-1 base model scores 82.49 percent and Rnj-1 Instruct scores 86.21 percent. Code infilling is a structural capability for IDE integration and agentic coding workflows.
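Fill-in-the-middle (FIM) evaluation hinges on a prompt layout in which the model receives the code before and after a gap and generates the missing span. The sketch below shows the common PSM (prefix-suffix-middle) layout; the sentinel token strings are placeholders, since Rnj-1's actual FIM tokens would be defined in its tokenizer configuration and are not given here.

```python
# Minimal sketch of a PSM-style fill-in-the-middle prompt. The sentinel
# strings below are placeholders common in FIM-trained code models; they
# are NOT confirmed to match Rnj-1's tokenizer.

FIM_PREFIX = "<|fim_prefix|>"   # marks the code before the gap
FIM_SUFFIX = "<|fim_suffix|>"   # marks the code after the gap
FIM_MIDDLE = "<|fim_middle|>"   # the model generates tokens after this

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a PSM infilling prompt; the model completes the middle."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prompt = build_fim_prompt(
    prefix="def mean(xs):\n    return ",
    suffix=" / len(xs)\n",
)
```

An IDE integration would send a prompt like this at the cursor position and splice the generated middle back between the prefix and suffix.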
The standard horizontal language model benchmarks (Artificial Analysis Intelligence Index, LMArena, GPQA Diamond, AIME 2025, HumanEval+, SWE-bench Verified) provide the principal comparison framework. Rnj-1 places in the upper tier of the 8-billion-parameter open-weights segment on coding-and-STEM-focused benchmarks, while general-purpose dialogue benchmarks (LMArena head-to-head and adjacent human-preference evaluations) have been less emphasized in Essential AI's positioning.
Industry coverage has consistently characterized Rnj-1 as one of the strongest small open-weights coding-and-STEM models of late 2025, distinguished from peer releases by the from-scratch training provenance and the transformer-paper founder credibility.
Access and pricing
Rnj-1 weights are distributed through the EssentialAI organization on Hugging Face under the Apache 2.0 license, with separate model repositories for the base model and the instruction-tuned variant. The principal repository is EssentialAI/rnj-1 on Hugging Face.
Self-hosted deployment of the open-weights variants is free under the Apache 2.0 license. The 8-billion-parameter dense profile is feasible to deploy on consumer-grade GPU configurations and on a wide range of commodity server hardware, including CPU-based deployment for cost-sensitive applications.
Hosted API access is available through community inference providers. OpenRouter exposes Rnj-1 Instruct at standard per-token pricing for the parameter scale; LM Studio, Ollama, and the broader local-deployment community provide alternative distribution paths. Essential AI has not published a hosted API for Rnj-1 directly under its own brand.
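OpenRouter exposes models through an OpenAI-compatible chat-completions endpoint. The sketch below constructs such a request; the model slug `essentialai/rnj-1-instruct` is an assumption (check OpenRouter's model list for the exact identifier), and the network call is left commented out so the sketch stays self-contained.

```python
# Sketch of an OpenAI-compatible chat-completions request to OpenRouter.
# The model slug is an assumed identifier, not confirmed against OpenRouter's
# catalog; the API key is a placeholder.
import json

payload = {
    "model": "essentialai/rnj-1-instruct",  # assumed slug
    "messages": [
        {
            "role": "user",
            "content": "Write a Python function that reverses a linked list.",
        }
    ],
}
headers = {
    "Authorization": "Bearer <OPENROUTER_API_KEY>",  # placeholder key
    "Content-Type": "application/json",
}

# To actually send the request (requires network access and a real key):
# import urllib.request
# req = urllib.request.Request(
#     "https://openrouter.ai/api/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers=headers,
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the endpoint follows the OpenAI wire format, existing OpenAI SDK clients can target it by overriding the base URL.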
The model is positioned as a research and developer release rather than as a productized enterprise offering. Essential AI's commercial revenue model centers on the in-development full-stack enterprise-automation product line rather than on hosted Rnj-1 inference.
Comparison
Direct competitors and adjacent open-weights small language models:
- Gemma 3 (Google DeepMind). The architecture Rnj-1 roughly follows. Direct peer at the 8-billion-parameter dense scale; Rnj-1 reports leading benchmark performance on code-and-STEM evaluations against Gemma 3 at comparable parameter scales.
- Llama 3.1 8B and Llama 3.2 (Meta AI). The principal US open-weights small-model line. Broader community ecosystem and longer-running tooling support; Rnj-1 emphasizes the code-and-STEM benchmark lead.
- Qwen 2.5 and Qwen 3 small variants (Alibaba Qwen). Chinese open-weights peers with broad capability coverage and Apache 2.0 licensing. Qwen-2.5-Coder is the closest direct competitor on coding-specific benchmarks; Rnj-1 reports leading performance on SWE-bench Verified bash-only mode against Qwen-2.5-Coder-32B-Instruct.
- GPT-OSS 20B (OpenAI). OpenAI's open-weights release. Rnj-1 reports outperforming GPT-OSS 20B on certain code-generation evaluations despite the smaller parameter count.
- Mistral small variants and Codestral (Mistral AI). European open-weights peers. Codestral is the principal direct coding-model competitor; Rnj-1 emphasizes the from-scratch training provenance and the broader code-and-STEM scope.
- DeepSeek small variants and DeepSeek-Coder (DeepSeek). Chinese open-weights peers. Strong coding-specific performance; Rnj-1 emphasizes the Apache 2.0 license and US-origin training provenance.
- Phi-3 and Phi-4 (Microsoft). Direct small-model peers from a frontier-adjacent lab. Different curation provenance.
- Zamba2 and AFM-4.5B. Adjacent insurgent small-model peers from Zyphra and Arcee AI respectively. Different architectural and commercial positioning.
Rnj-1's distinctive position among 2025-vintage open-weights small language models rests on four elements: Apache 2.0 licensing; from-scratch training provenance under a founder team of transformer-paper co-authors; code-and-STEM optimization that produces leading benchmark results in those categories at the 8-billion-parameter scale; and Series B, unicorn-tier institutional-investor backing.
Outlook
Open questions for Rnj-1 over the next 6 to 18 months:
- Successor cadence. Rnj-1 was Essential AI's first open-weights release. Whether subsequent releases extend the family to adjacent parameter scales, additional modalities, or specialized capability variants will signal the company's open-source research investment.
- Public-product visibility. Essential AI's commercial product surface has been comparatively under-disclosed publicly. Whether the in-development enterprise-automation product line is launched in 2026, and how Rnj-1 relates to the productized offering, will shape the company's commercial trajectory.
- Strategic-investor distribution. The Series A investor cohort across Google, Nvidia, AMD, and the Series B Lightspeed lead provides distribution-relevant strategic relationships. Whether those relationships translate into specific deployment or distribution channels for the Rnj line is an open question.
- Coding benchmark stability. SWE-bench Verified and adjacent coding evaluations are evolving rapidly, with frontier models compressing performance differences quarter over quarter. Rnj-1's leading SWE-bench bash-only score among small models will be tested by successive peer releases.
- Series C or growth round trajectory. The Series B closed at $1 billion post-money. Whether Essential AI raises a Series C and at what terms will affect the operating runway for both Rnj research and the in-development enterprise-automation line.
Sources
- Essential AI: Announcing Rnj-1. Official announcement and technical writeup.
- Hugging Face: EssentialAI/rnj-1. Model weights and model card.
- AI Business Review: Transformer Co-Creator Launches Rnj-1. Release coverage.
- Benchable: EssentialAI Rnj-1 Instruct benchmarks. Independent benchmark summary.
- OpenRouter: EssentialAI Rnj-1 Instruct. Hosted inference performance metrics.
- Medium: Rnj-1, the best Coding and STEM Small LLM. Independent technical review.
- Attention Is All You Need (2017). Founders' transformer paper.
- Business Wire: Essential AI raises $56.5M Series A. Series A reference.