Mistral Large 2
Mistral Large 2 is a closed-weights large language model developed by Mistral AI, released in July 2024 as the second generation of the Mistral Large flagship series. It supports text generation, multilingual dialogue, tool use, and function calling, and is accessible through the Mistral API, Hugging Face, and a set of major cloud provider partnerships. As of April 2026, it remains competitive on open evaluation leaderboards for the period before Mistral Large 3's December 2025 release, and it is still the Mistral AI model with the most complete third-party benchmark coverage.
At a glance
- Lab: Mistral AI
- Released: July 2024
- Modality: Text only
- Open weights: No. Mistral Large 2's weights are published on Hugging Face under the Mistral Research License, which permits research and non-commercial use only, so the model does not qualify as open weights. Smaller Mistral-family models (Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, Nemo, Pixtral, Ministral) carry permissive open-weights licensing; the Mistral Large series does not.
- Context window: 128,000 tokens
- Pricing: Per-token pricing through the Mistral API (La Plateforme) and partner cloud endpoints; self-hosting the weights is free only for research under the Mistral Research License; commercial access requires API credentials or a cloud provider agreement
- Distribution channels: Mistral API (La Plateforme), Hugging Face (model card and research-license weights), Vertex AI, Amazon Bedrock, Azure AI Foundry, IBM Watsonx
Origins
Mistral AI was founded in April 2023 in Paris by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, three researchers who had previously worked at Meta AI and Google DeepMind. The founding thesis positioned the company as a European counterweight to US-domiciled frontier labs, combining open-weights releases for smaller models with closed-weights commercial products for the frontier tier.
The company's first model, Mistral 7B, launched in September 2023 as an open-weights Apache 2.0 release that quickly became the most-downloaded model in its parameter class on Hugging Face. Mixtral 8x7B followed in December 2023, introducing sparse mixture-of-experts architecture to production-scale deployment and broadening Mistral's developer reach into the mid-size model segment. Mixtral 8x22B arrived in April 2024, extending the MoE line to larger parameter counts.
The Mistral Large series runs on a separate track from the Mixtral open-weights family. Mistral Large, the first generation, was released in February 2024 as a closed-weights frontier model positioned at the time against GPT-4 and Claude 3. Mistral Large 2 followed in July 2024 with a substantially larger parameter count, a context window extended to 128,000 tokens, and improved coding and multilingual performance relative to its predecessor. Mistral Large 3, the successor, released in December 2025 as a sparse MoE model with 41 billion active and 675 billion total parameters, represents the current state of the Mistral Large line.
Mistral Large 2's release came at a point when the lab had raised its Series B at a $6 billion valuation in June 2024 and had approximately 400 employees. The model's benchmark results on coding, multilingual output, and function calling received favorable coverage in the AI developer community on release, establishing it as a competitive alternative to closed frontier models in the cost-performance range below GPT-4o and Claude 3.5 Sonnet at the time.
Capabilities
Mistral Large 2 is a text-only model and does not support image or audio inputs. Its primary capabilities are text generation, multi-turn dialogue, multilingual understanding and generation, tool use, function calling, and instruction following.
Multilingual performance is the most historically distinctive characteristic of the Mistral Large series. Mistral AI, headquartered in Paris, has prioritized European language coverage throughout its model development, and Mistral Large 2 performs measurably better than most models from non-European frontier labs on French, German, Spanish, Italian, Portuguese, and other European languages, both in generation quality and in following language-specific instructions. This capability is commercially relevant in regulated European sectors where outputs must meet standards in national languages.
Tool use and function calling in Mistral Large 2 follow a structured JSON format, compatible with the OpenAI function-calling API convention to facilitate migration. This compatibility, combined with Mistral's enterprise customer base in Europe, has made the model a common choice for agentic applications that must comply with European data residency and sovereignty requirements.
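The OpenAI-compatible tool format described above can be sketched as a plain request body. This is a hedged illustration, not Mistral's documented reference: the endpoint path and the "mistral-large-latest" alias follow Mistral's public API docs, while the `get_weather` tool and its parameters are hypothetical examples.

```python
import json

# Hypothetical tool definition in the OpenAI-style JSON schema that
# Mistral's chat completions API also accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body for POST https://api.mistral.ai/v1/chat/completions
# (endpoint and model alias as published in Mistral's API reference).
request_body = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "What is the weather in Lyon?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

payload = json.dumps(request_body)
```

Because the schema mirrors the OpenAI convention, an application migrating from the OpenAI API can typically reuse its existing tool definitions with only the endpoint and model name changed.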
The architecture of Mistral Large 2 has not been described in a research paper. It is a dense transformer, not a mixture-of-experts model -- that architecture appeared with Mixtral 8x7B and Mixtral 8x22B in the open-weights line, and later in Mistral Large 3. Mistral's release announcement states a parameter count of 123 billion, consistent with third-party estimates based on inference cost and latency.
Codestral, a separate 22-billion-parameter model released by Mistral in May 2024, covers the specialized coding-model segment and is not a variant of Mistral Large 2. Mistral Large 2 handles general code generation and completion at competitive quality, but Codestral is the lab's dedicated coding product.
Benchmarks and standing
As of April 2026, third-party benchmark coverage for Mistral AI centers on Mistral Large 2. Mistral Large 3 (December 2025) had not yet been added to most third-party leaderboards at the time of publication. April 2026 standings for Mistral Large 2:
- LMArena General ELO: #10, score 1045
- LMArena Coding ELO: #9, score 1042
- SWE-bench Verified: #9, at 47.2 percent
- HumanEval+: #8, at 78.9 percent
Comparable figures for the closed frontier leaders from the same period: Claude Opus 4.7 scores 57.28 on the Artificial Analysis Intelligence Index; GPT-5.5 scores 60.24; Gemini 3.1 Pro scores 57.18. Among open-weights peers, Llama 4 Maverick benchmarks in the mid-40s on the same index.
Mistral Large 2's SWE-bench score of 47.2 is materially below Claude Opus 4.7's 74.0 and GPT-5.5's top-tier results, but remains ahead of many models in its approximate price tier. The HumanEval+ score of 78.9 percent places it in the top ten for function-completion coding as of the measurement date.
These results reflect third-party data current through April 2026. Mistral Large 3's entrance into leaderboard coverage will update this picture; the December 2025 release specs (675 billion total parameters, 41 billion active) suggest substantially improved benchmark performance over Large 2 across all measured categories.
Benchmark leadership is point-in-time and shifts with each lab release cycle. The figures above should be treated as a reference snapshot rather than a stable ranking.
Access and pricing
Mistral Large 2 is available through several distribution channels:
Mistral API (La Plateforme): The primary access point is mistral.ai, where the model is available via API key under per-token pricing. Current pricing tiers are published on the Mistral pricing page. Developers and enterprise customers use this endpoint for production deployments.
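A direct call to La Plateforme can be sketched with only the standard library. This is an assumption-laden sketch: the endpoint path and model alias follow Mistral's published API reference, and the request is built but deliberately not sent, so the example runs without a real key or network access.

```python
import json
import os
import urllib.request

# Endpoint and model alias as documented on Mistral's API reference;
# adjust if they change.
API_URL = "https://api.mistral.ai/v1/chat/completions"
api_key = os.environ.get("MISTRAL_API_KEY", "sk-placeholder")

body = {
    "model": "mistral-large-latest",
    "messages": [
        {"role": "user", "content": "Summarise the EU AI Act in one sentence."}
    ],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is omitted here
# so the sketch stays runnable offline.
```

Partner cloud endpoints expose the same model through each provider's own SDK and authentication, so this shape applies only to direct La Plateforme access.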
Hugging Face: Mistral publishes the Mistral Large Instruct 2407 weights on Hugging Face under the Mistral Research License, which limits use to research and non-commercial purposes; commercial self-hosting requires a separate license from Mistral. The Hugging Face presence also provides model documentation and hosted access through Hugging Face Inference Endpoints.
Partner cloud endpoints: Mistral Large 2 is available as a managed endpoint on Vertex AI (Google Cloud), Amazon Bedrock, Azure AI Foundry, and IBM Watsonx. These endpoints let organizations access the model within existing cloud infrastructure agreements without a separate Mistral account. Pricing on partner endpoints typically differs from La Plateforme direct pricing.
Le Chat: The consumer-facing Le Chat assistant at chat.mistral.ai provides access to Mistral models under a tiered subscription structure (Free, Pro, Team, Enterprise). Le Chat exposes Mistral Large 2 and its successors through a chat interface rather than an API.
There is no openly licensed version of Mistral Large 2; the research-license weights on Hugging Face do not permit commercial self-hosting. Users seeking self-hosted open-weights Mistral models are directed to Mixtral 8x7B, Mixtral 8x22B, Nemo 12B, Pixtral 12B, or the Ministral series, all of which are available on Hugging Face under open licenses.
Comparison
Direct competitors to Mistral Large 2 in the closed-weights text model category, as of April 2026:
- GPT-5.5 (OpenAI). The benchmark leader at 60.24 on the Intelligence Index, ahead of Mistral Large 2 on every publicly measured axis. GPT-5.5 is more expensive per token. Mistral Large 2 is competitive for European-language workloads where GPT-5.5 pricing and data residency terms create friction.
- Claude Opus 4.7 (Anthropic). Second on the Intelligence Index at 57.28, with SWE-bench Verified at 74.0 against Mistral Large 2's 47.2. Claude Opus 4.7 leads clearly in coding and reasoning benchmarks; Mistral Large 2 competes on European-language tasks and cost within European cloud procurement frameworks.
- Gemini 3.1 Pro (Google DeepMind). Third on the Intelligence Index at 57.18, with a 2 million-token context window substantially larger than Mistral Large 2's 128,000-token limit. For very-long-document workloads the context comparison is decisive; in standard-length production use cases the gap is less material.
- Llama 4 (Meta AI). Llama 4 Maverick is not a closed-weights peer but is the closest open-weights alternative. Its Intelligence Index score in the mid-40s places it near Mistral Large 2 on composite evaluations. Organizations that prioritize self-hosted deployment will prefer Llama 4 Maverick; those that need European jurisdiction or Mistral's enterprise agreements will stay on Mistral Large 2.
- DeepSeek V4 (DeepSeek). Competitive on benchmark scores and significantly lower cost per token, but its origin in a Chinese lab introduces supply-chain and policy considerations for European enterprise buyers that reduce its practical substitutability in the market Mistral serves most directly.
- Qwen 3 (Alibaba). Similar dynamics to DeepSeek V4 for European buyers. Strong multilingual scores including on Asian languages; origin considerations apply equally.
Mistral Large 2's distinctive position in this comparison set is European-jurisdiction availability, strong European-language performance, and an enterprise customer relationship built around French and broader EU regulatory compliance. It is not the benchmark leader in any category, but it serves a market segment -- European enterprise, regulated sectors, data-sovereignty-constrained deployments -- where those structural advantages often outweigh raw benchmark placement.
Outlook
Open questions for Mistral Large 2 and the Mistral Large line over the next 6 to 18 months:
- Mistral Large 3 benchmark coverage. The December 2025 release of Mistral Large 3 (675 billion total parameters, 41 billion active, sparse MoE) should deliver substantially improved scores across the categories where Large 2 trails the frontier. When third-party leaderboards complete their Large 3 evaluations, that data will supersede the April 2026 snapshot in this profile.
- Multimodal expansion. Mistral Large 2 is text-only. Pixtral 12B added vision capability at the open-weights level in September 2024. Whether a future Mistral Large generation integrates native multimodality -- following the Pixtral approach -- is an open product question.
- EU AI Act compliance positioning. Mistral AI's European-jurisdiction compute capacity and regulatory relationship provide structural leverage as the EU AI Act's enforcement provisions mature. The degree to which this translates into procurement preference in the public sector and regulated industries will shape Large 3 and Large 4 revenue.
- Commercial licensing volume. Mistral's stated target of more than $1 billion ARR by end of 2026 depends heavily on enterprise and API revenue from the Large series. Whether Large 2 (now the secondary product behind Large 3) retains commercial relevance as a lower-cost tier or is retired depends on Mistral's product-line management.
- Open-weights competition at the frontier tier. DeepSeek V4, Qwen 3, and Llama 4 have made the frontier open-weights tier more competitive than at Mistral Large 2's July 2024 release. If frontier-capable open-weights models continue improving, the closed-weights Mistral Large products face structural price pressure from the self-hosted tier.
Sources
- Mistral AI: Mistral Large 2 announcement. July 2024 release post with capability and benchmark details.
- Hugging Face: Mistral Large Instruct 2407 model card. Model metadata and access documentation.
- Mistral AI: La Plateforme API documentation. API reference, pricing, and model availability.
- Wikipedia: Mistral AI. Company history, funding, and model family reference.
- Artificial Analysis Intelligence Index. Composite frontier benchmark scores; April 2026 data used in this profile.
- LMArena Leaderboard. Human preference ELO evaluations for Mistral Large 2 in the general and coding categories.
- TechCrunch: Mistral AI launches Mistral Large 2. Coverage at launch with context on competitive positioning.