Claude Opus 4.7
Claude Opus 4.7 is Anthropic's flagship multimodal model, released in November 2025 as the largest and most capable model in the Claude 4 generation. It processes text and images, supports hybrid extended-thinking inference for computationally intensive tasks, and is distributed through the Anthropic API, claude.ai, Claude Code, Amazon Bedrock, and Google Cloud Vertex AI. As of April 2026, it ranks second on most major frontier benchmarks behind GPT-5.5, and leads all frontier models on the SWE-bench Verified software engineering benchmark.
At a glance
- Lab: Anthropic
- Released: November 2025
- Modality: Text and multimodal (vision)
- Open weights: No (closed)
- Context window: 200,000 tokens (extends to 1,000,000 tokens for select customers)
- Pricing: Per-token pricing on the Anthropic API; claude.ai Free, Pro, Max, Team, and Enterprise subscription tiers
- Distribution channels: Anthropic API, claude.ai (web and mobile), Claude Code CLI, Amazon Bedrock, Google Cloud Vertex AI
Origins
The Claude family originated in 2023 when Anthropic launched its first public model, Claude 1, as a safety-oriented alternative to GPT-4. The initial release positioned Constitutional AI (CAI), Anthropic's methodology for training models to follow a written set of principles through self-critique and revision, as the defining technical contribution. Claude 2 followed later in 2023 with expanded context and improved instruction following.
Claude 3, released in March 2024, introduced the tiered naming convention still in use: Opus (the largest and most capable variant), Sonnet (mid-tier), and Haiku (the smallest and fastest). Claude 3 Opus briefly led the Artificial Analysis Intelligence Index at its release before GPT-4o narrowed the gap. Claude 3.5 Sonnet, released in June 2024, improved the family's coding performance substantially and remained a widely used production model through late 2025.
The Claude 4 generation opened in mid-2025. Claude 4.0 Opus introduced expanded agentic capabilities, with Anthropic adding native tool use, computer use, and Model Context Protocol (MCP) integrations. The 4.5 and 4.6 point releases refined reasoning performance and multimodal handling. Claude Opus 4.7 shipped in November 2025 with the addition of hybrid extended-thinking inference, which allows users to toggle between fast standard inference and a slower deliberate-reasoning mode that produces intermediate chain-of-thought steps before generating a final answer. This hybrid approach was positioned as a structural response to OpenAI's o-series reasoning line, which had operated as a separate product rather than a mode of a unified model.
Anthropic has not published architecture details for Claude Opus 4.7. The parameter count, mixture-of-experts configuration, and training data composition have not been disclosed as of April 2026.
Capabilities
Claude Opus 4.7's strongest documented capability is software engineering. On SWE-bench Verified, a benchmark testing real-world repository bug-fixing on actual GitHub issues, Claude Opus 4.7 recorded a score of 74.0, the leading score across all public frontier models as of April 2026. This performance has translated into enterprise adoption: teams using Claude for code review, refactoring, test generation, and repository-scale reasoning frequently cite Anthropic models as the preferred choice over GPT-5.5 for engineering-intensive tasks.
Vision is supported natively. Claude Opus 4.7 processes images and text in the same context window, handling chart interpretation, document analysis, screenshot-based workflows, and visual question answering. The vision capability is available through the API and across all claude.ai subscription tiers.
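As a sketch of how multimodal input is structured, the Messages API accepts image content blocks alongside text within a single user turn. The helper below builds such a message; the sample bytes and the question are placeholders for illustration.

```python
import base64

# Build a single user turn that pairs a base64-encoded image block
# with a text question, in the Messages API content-block format.
# The sample bytes and question below are placeholders.

def image_message(image_bytes: bytes, media_type: str, question: str) -> dict:
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,  # e.g. "image/png"
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": question},
        ],
    }

msg = image_message(b"<png bytes here>", "image/png", "What trend does this chart show?")
```

Because image and text blocks share one context window, follow-up questions in the same conversation can reference the image without resending it.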
Hybrid extended thinking is the architectural feature introduced with the 4.7 release. In standard mode, the model produces outputs through direct inference at low latency. In extended-thinking mode, it generates a chain-of-thought reasoning trace before finalizing an answer. The reasoning trace is surfaced to the user in the API response. This mode improves performance on tasks where intermediate steps matter: multi-step mathematical derivations, long-horizon planning, and problems that benefit from explicit verification. Extended thinking adds compute cost and latency, and is best suited to tasks where accuracy justifies the additional time.
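A minimal sketch of the toggle at the request level, assuming the Messages API's `thinking` parameter; the model id used here is illustrative, not a confirmed identifier for Opus 4.7.

```python
import json

def build_request(prompt: str, extended_thinking: bool = False) -> dict:
    """Build a Messages API request body; optionally enable the
    slower extended-thinking mode with a reasoning-token budget."""
    body = {
        "model": "claude-opus-4-7",  # illustrative id, not confirmed
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }
    if extended_thinking:
        # budget_tokens caps the tokens spent on the intermediate
        # reasoning trace before the final answer is generated
        body["thinking"] = {"type": "enabled", "budget_tokens": 8192}
    return body

fast = build_request("Summarize this changelog.")
slow = build_request("Derive the bound step by step.", extended_thinking=True)
print(json.dumps(slow, indent=2))
```

The same model id serves both modes, which is the structural contrast with a separate reasoning product line: the caller chooses per request whether the added latency and compute are worth it.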
Agentic capabilities are first-class. Claude Opus 4.7 supports tool use, function calling, computer use (the ability to control a desktop interface through the API), and MCP integrations. Claude Code, Anthropic's coding agent CLI, runs on top of Claude Opus 4.7 and extends these capabilities into software development workflows through IDE integrations, terminal commands, and multi-file code editing. Managed Agents, Anthropic's hosted agent platform, provides stable session management and durable state for long-running agentic workflows.
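As an illustration of the tool-use format, a tool is declared with a name, a description, and a JSON Schema for its input, then attached to a request. The weather tool is hypothetical and the model id is a placeholder.

```python
# Declare a hypothetical tool in the Messages API tool-use format,
# then attach it to a request so the model can emit a tool call.

get_weather = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

request = {
    "model": "claude-opus-4-7",  # placeholder id
    "max_tokens": 1024,
    "tools": [get_weather],
    "messages": [{"role": "user", "content": "Is it raining in Oslo?"}],
}
```

When the model decides a tool is needed, it returns a structured tool-call block with arguments matching the schema; the caller executes the tool and feeds the result back as a follow-up message.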
The 200,000-token context window handles documents, codebases, and conversation histories that exceed what most models support. For select API customers, Anthropic extends this to 1,000,000 tokens, enabling use cases such as full-codebase indexing and long-document analysis at a scale beyond the standard window.
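For a rough sense of what fits, a common back-of-envelope check divides character count by about four characters per token. This is a heuristic, not the model's actual tokenizer, and the reserve value below is an arbitrary choice.

```python
# Rough feasibility check: will a set of documents fit in the context
# window? Uses the common ~4 characters-per-token approximation.

STANDARD_WINDOW = 200_000
EXTENDED_WINDOW = 1_000_000   # select customers only
CHARS_PER_TOKEN = 4           # rough heuristic, not the real tokenizer

def estimated_tokens(texts):
    return sum(len(t) for t in texts) // CHARS_PER_TOKEN

def fits(texts, window=STANDARD_WINDOW, reserve=8_000):
    """Leave `reserve` tokens of headroom for the model's reply."""
    return estimated_tokens(texts) + reserve <= window

docs = ["x" * 400_000]   # ~100k estimated tokens
print(fits(docs))        # True under the standard window
```

A corpus that fails this check against the standard window but passes against the extended one is the kind of workload (full-codebase indexing, long-document analysis) that the 1,000,000-token tier targets.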
Benchmarks and standing
As of April 2026, Claude Opus 4.7 ranks second behind GPT-5.5 on most major frontier benchmarks, with software engineering as the notable exception.
On the Artificial Analysis Intelligence Index, which aggregates performance across a set of reasoning, language, and multimodal tasks into a composite score, Claude Opus 4.7 scores 57.28. This places it ahead of Gemini 3.1 Pro at 57.18 and behind GPT-5.5 at 60.24. The gap to the leader is roughly three points on the composite.
LMArena ELO scores, based on human preference judgments in head-to-head comparisons, place Claude Opus 4.7 at 1298 (general), 1398 (coding), and 1287 (vision). The coding score at 1398 is the second-highest on the LMArena coding leaderboard, behind GPT-5.5 at 1456 and ahead of all other evaluated frontier models.
On domain-specific benchmarks:
- GPQA Diamond (graduate-level scientific reasoning): 91.8% (#2)
- HumanEval+ (function-completion coding): 93.8% (#2)
- ARC-AGI Challenge (novel problem solving): 88.7 (#2)
- AIME 2025 (competitive mathematics): 93.3% (#2)
The single category where Anthropic holds first place across frontier models is SWE-bench Verified. Claude Opus 4.7 records 74.0 against GPT-5.5's 68.5, a gap that Anthropic's enterprise sales materials present as a distinguishing capability for engineering buyers.
Frontier benchmark standings change with each major model release. The positions above are as of April 2026 and will shift as labs ship updates.
Access and pricing
Claude Opus 4.7 is available through the following channels.
The Anthropic API at https://www.anthropic.com/api provides programmatic access for text and vision tasks, tool use, extended thinking, computer use, and MCP integrations. Pricing follows a per-token model published on the Anthropic API pricing page; Claude Opus 4.7 is the highest-priced tier in the Claude API offering, consistent with its position as the flagship model.
claude.ai is the consumer and professional product surface. The Free tier provides access to Claude Opus 4.7 with usage limits. The Pro tier ($20/month) raises those usage ceilings substantially. The Max tier ($100/month or $200/month depending on usage level) provides the highest usage caps, priority access, and access to extended-thinking mode. The Team tier adds team management and administrative controls for collaborative use. The Enterprise tier provides advanced compliance features, expanded context, and custom deployment options for organizational use.
Claude Code is Anthropic's CLI-based coding agent, available as a separate product integrated with Claude Opus 4.7. It provides IDE integrations, terminal-based multi-file code editing, and agentic code execution capabilities.
Amazon Bedrock provides a managed deployment of Claude Opus 4.7 within the AWS ecosystem. This is the primary channel for enterprises with existing AWS agreements or requirements for data residency and compliance configurations.
Google Cloud Vertex AI provides an equivalent managed deployment within the Google Cloud ecosystem, maintained through Anthropic's cloud partnership with Google.
Comparison
Direct competitors to Claude Opus 4.7 in the frontier text and multimodal category, as of April 2026:
- GPT-5.5 (OpenAI). The primary benchmark leader, scoring 60.24 on the Artificial Analysis Intelligence Index versus Claude Opus 4.7's 57.28. GPT-5.5 leads on most LMArena ELO categories and on GPQA Diamond (94.2% versus 91.8%), ARC-AGI Challenge (92.3 versus 88.7), and AIME 2025 (96.7% versus 93.3%). The exception is SWE-bench Verified, where Claude Opus 4.7 leads at 74.0. The competitive dynamic between the two is close: neither has a decisive advantage across all capability categories, and enterprise buyers frequently evaluate both on cost, compliance posture, and task-specific fit rather than aggregate benchmark score.
- Gemini 3.1 Pro (Google DeepMind). The third-place model on the Artificial Analysis Intelligence Index at 57.18, within a point of Claude Opus 4.7. Gemini 3.1 Pro's strategic advantage is distribution rather than raw capability: it is embedded into Google Search, Workspace, and Android, giving it access paths that neither Anthropic nor OpenAI can replicate through API channels. Claude Opus 4.7 ranks above Gemini 3.1 Pro on the LMArena coding leaderboard (1398 ELO), consistent with the two models' different enterprise target profiles.
- Grok 4.20 (xAI). Real-time access to content from the X platform differentiates Grok 4.20 from Claude Opus 4.7 for tasks requiring current-events data. On aggregate capability benchmarks, Grok 4.20 trails both Claude Opus 4.7 and GPT-5.5. For buyers outside the X ecosystem, the capability comparison and pricing are the primary decision criteria, where Claude Opus 4.7's SWE-bench position gives it a clearer case for software engineering buyers.
- DeepSeek V4 (DeepSeek). An open-weights Chinese frontier model that benchmarks at a competitive level while being available for self-hosted deployment at near-zero marginal inference cost. For enterprise buyers that can operate self-hosted models and are not constrained by data-sovereignty requirements, DeepSeek V4 presents a cost argument that Claude Opus 4.7's closed-weights pricing cannot match. Anthropic's response to this substitution risk is its safety positioning and enterprise support structure, which open-weights deployments do not provide by default.
Outlook
Open questions for the next 6 to 18 months following Claude Opus 4.7's release:
- Claude 5 timeline. Anthropic has not publicly disclosed when Claude 5 or the next major-version step beyond the 4.x line will ship. The Claude 4 generation has moved in point-release increments (4.0, 4.5, 4.6, 4.7), and a major-version step could reset the benchmark competitive picture significantly.
- $800 billion valuation. Reports in April 2026 indicated that Anthropic was receiving investor offers at an $800 billion valuation, offers the company had so far declined to act on. How those offers resolve (a new round, a hold-out for a higher mark, or a move toward an IPO) is the near-term financial question with the most significant strategic implications for the company's trajectory.
- Computer use and agentic expansion. Computer use through the API and Claude Code are early deployments of agentic capability at the product level. The degree to which those capabilities mature, gain reliability in production environments, and generate recurring enterprise revenue is the main product-side question through 2026 and 2027.
- Pricing pressure from frontier competition. Both GPT-5.5 and open-weights alternatives like DeepSeek V4 create pricing pressure at the high end of the API market. Anthropic's ability to sustain per-token pricing for Claude Opus 4.7 depends on the durability of its capability differentiation and enterprise relationships.
Sources
- Anthropic: Introducing Claude Opus 4.7. Official launch announcement, November 2025.
- Artificial Analysis Intelligence Index. Composite benchmark scores; Claude Opus 4.7 at 57.28, April 2026 data.
- LMArena Leaderboard. Head-to-head ELO scores across general, coding, and vision categories.
- SWE-bench Verified leaderboard. Software engineering benchmark covering real repository bug-fixing tasks; Anthropic's leading position documented here.
- Anthropic: Claude Code. Product page for the CLI coding agent built on Claude Opus 4.7.
- Anthropic API pricing. Per-token pricing for Claude Opus 4.7 and other Claude models.
- Bloomberg: Anthropic Attracts Investor Offers at an $800 Billion Valuation. April 2026 valuation context.