Magic
Magic is an American artificial intelligence research company founded in 2022 by Eric Steinberger and Sebastian De Ro. The company is headquartered in San Francisco and develops AI software-engineering systems built on a proprietary Long-Term Memory (LTM) architecture, designed to support context windows substantially larger than standard transformer attention allows. Magic publicly demonstrated LTM-2, the second generation of the architecture, in August 2024 with a reported 100-million-token context window, and has positioned its longer-term research as a path toward artificial general intelligence through autonomous software-engineering agents.
At a glance
- Founded: 2022 in San Francisco by Eric Steinberger and Sebastian De Ro.
- Status: Private. Not yet a frontier-model deployer in the conventional sense; LTM-2 has been demonstrated but not deployed at scale.
- Funding: Approximately $466 million cumulative across four reported rounds. Most recent major round: $320 million in August 2024, with investors including Eric Schmidt, Alphabet's CapitalG, Atlassian, Elad Gil, Jane Street, Nat Friedman and Daniel Gross, and Sequoia Capital.
- CEO: Eric Steinberger (co-founder)
- Other notable leadership: Sebastian De Ro (co-founder), Russ Salakhutdinov (Chief Scientist; Carnegie Mellon professor and former director of AI research at Apple)
- Open weights: None. The LTM family is closed.
- Flagship products: LTM-2 (Long-Term Memory v2, August 2024). Successor models in development.
Origins
Magic was founded in 2022 by Eric Steinberger and Sebastian De Ro, who met through ClimateScience.org, an environmental volunteer organization. Steinberger had been a researcher and engineer with a stated long-term interest in artificial general intelligence. De Ro came from FireStart, a German business-process management firm where he had been Chief Technology Officer. The founding thesis combined a focus on autonomous software engineering with a research bet on context-window scaling as the principal architectural lever for capability.
The company was founded in the same Bay Area cohort as the early generation of LLM-era AI startups, but the LTM research direction was distinct from the transformer-and-scale orthodoxy that dominated the contemporaneous founding wave. The thesis is that sufficiently long context windows allow models to maintain detailed knowledge of entire codebases and to operate as continuous coding agents rather than as turn-based co-pilots.
In August 2024, Magic announced LTM-2, the second generation of the Long-Term Memory architecture. The reported context window of 100 million tokens was described in industry coverage as substantially larger than any contemporaneous frontier-model deployment, allowing the demonstration of large-codebase tasks (review, debug, modify) that exceed the working-set capacity of mainstream transformers.
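The scale gap behind that claim can be shown with back-of-envelope arithmetic: dense self-attention materializes an n-by-n score matrix, so memory grows quadratically with context length. The sketch below is illustrative only; the byte sizes and single-head, single-layer framing are assumptions, not Magic's published figures.

```python
# Illustrative arithmetic: why dense O(n^2) self-attention is impractical at a
# 100-million-token window. Figures are assumptions for scale, not published specs.

def dense_attention_score_bytes(n_tokens: int, bytes_per_score: int = 2) -> int:
    """Bytes needed to materialize one n x n attention-score matrix
    (fp16 scores, a single head in a single layer)."""
    return n_tokens * n_tokens * bytes_per_score

n = 100_000_000  # LTM-2's reported context window
size = dense_attention_score_bytes(n)
print(f"{size / 1e15:.0f} petabytes per head per layer")  # prints "20 petabytes per head per layer"
```

Even before multiplying by heads and layers, a single score matrix at this length is far beyond any hardware budget, which is why ultra-long context requires departing from the standard attention pattern.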
The August 2024 round of $320 million brought total funding to approximately $466 million. The investor list included Eric Schmidt, Alphabet's growth-equity arm CapitalG, Atlassian (the developer-tooling company), AI investors Nat Friedman and Daniel Gross, prominent angel investor Elad Gil, Jane Street, and Sequoia Capital.
Mission and strategy
Magic's stated mission is "to build superhuman AI software engineers." The framing distinguishes the company from co-pilot-style competitors (GitHub Copilot, Cursor, OpenAI Codex, Anthropic Claude Code) by emphasizing autonomous capability rather than developer augmentation. Steinberger has publicly framed Magic's research as a path to AGI through software-engineering capability specifically, on the basis that programming requires extensive context, multi-step reasoning, and goal-directed planning of the kind that generalizes to broader intelligence.
The strategy combines two threads. First, the LTM architectural research line, which targets capability gains from context-window scaling rather than from parameter scaling alone. Second, autonomous software-engineering products, building toward a system that can take complex coding tasks and produce complete implementations without continuous human direction. The two threads reinforce each other: long context allows the system to hold an entire codebase in working memory, which is necessary for autonomous operation at production scale.
The competitive premise is that the dominant transformer-and-scale paradigm pursued by OpenAI, Anthropic, and Google DeepMind hits practical limits on context-window length, and that an architecturally different approach can produce capability advantages on tasks where context is the binding constraint. Software engineering on large codebases is the canonical example, but the broader thesis extends to any agentic task where state must persist across long horizons.
Models and products
- LTM-2. Long-Term Memory v2, released August 2024 with a reported 100-million-token context window. Closed weights. Demonstrated on large-codebase tasks but not broadly deployed as a commercial product as of April 2026.
- LTM-3 and successor models. In development. Public details have not been released.
- Magic developer products. The company's product roadmap includes commercial AI software-engineering tools built on the LTM architecture, but a broadly available consumer product comparable to Cursor or GitHub Copilot has not been launched as of April 2026.
The product positioning is explicitly aimed at autonomous engineering rather than developer assistance, a deliberate contrast with the rest of the AI-coding-tools market.
Benchmarks and standing
LTM-2 has not been comprehensively evaluated on standardized coding benchmarks (SWE-bench Verified, HumanEval+) as of April 2026. The model has been demonstrated in technical-blog and conference contexts on tasks that exceed the context-window limits of transformer competitors, but published quantitative benchmark comparisons are limited.
The benchmark profile reflects the architectural focus: capability claims are framed in terms of context-window scaling rather than leaderboard performance. Whether LTM architectures translate into leadership on standardized leaderboards depends on factors that have not yet been publicly demonstrated, including training-data scale, the engineering quality of the deployed system, and the post-training process.
The company's standing in the industry rests on the LTM architectural thesis, the founders' research credibility, the depth of investor support, and the strategic alignment with software-engineering automation as a high-value commercial market.
Leadership
As of April 2026, Magic's senior leadership includes:
- Eric Steinberger, Chief Executive Officer and co-founder. Public face for the company on the LTM research thesis and the autonomous-software-engineering strategy. Has stated publicly that he committed at age 15 to building superhuman AI.
- Sebastian De Ro, co-founder. Former Chief Technology Officer at FireStart, a German business-process management company. Technical leadership on LTM architecture and engineering.
- Russ Salakhutdinov, Chief Scientist. Carnegie Mellon University professor of computer science. Former Director of AI Research at Apple (2016 to 2020). Brings academic-research credentials and senior machine-learning leadership.
The company has hired aggressively from research and engineering programs adjacent to the LTM architectural focus. Specific senior-leadership additions beyond the named cohort have not been broadly profiled in industry coverage.
Funding and backers
Magic's funding history through April 2026 includes approximately $466 million across four reported rounds. The most significant round was the August 2024 $320 million raise, with investors including Eric Schmidt, Alphabet's CapitalG, Atlassian, Nat Friedman, Daniel Gross, Elad Gil, Jane Street, and Sequoia Capital.
The investor base is unusually dense in AI insiders. Nat Friedman (former GitHub CEO, now in Meta Superintelligence Labs leadership) and Daniel Gross (also in Meta Superintelligence Labs leadership; co-founder of Safe Superintelligence) invested through their prominent Friedman-Gross partnership. Elad Gil is a high-profile AI investor with positions across the founder cohort, and Eric Schmidt's participation reinforces the strategic-credibility signal.
The valuation at the August 2024 round has been reported at approximately $1.5 billion, which is moderate compared to peer Insurgent labs. The funding scale is sufficient to support the LTM research program but is below the largest Insurgent rounds (SSI, Thinking Machines Lab, AMI, Reflection AI).
Industry position
Magic occupies a structurally distinctive position among Insurgent labs through the LTM architectural focus. The combination of the long-context-window thesis, the autonomous software-engineering positioning, the senior-investor credentials, and the differentiated technical direction produces a profile no other Insurgent lab matches.
Strategic risks include the absence of a broadly deployed commercial product comparable to peer coding-AI offerings; competitive pressure from frontier labs as they expand their own context windows (notably Anthropic's 256k-token context in Claude and Google DeepMind's 2-million-token context in Gemini variants); and the open question of whether LTM architectural advantages translate into commercial coding-tool wins against established competitors with extensive deployment ecosystems.
Strategic strengths include the differentiated architectural thesis, the depth of investor support, the senior research credibility through Salakhutdinov's involvement, and the focused product orientation toward a commercially valuable market (autonomous software engineering). The Atlassian strategic-investor relationship provides a potential commercial channel into developer-tools markets.
Competitive landscape
Magic competes with several Frontier and Insurgent labs:
- OpenAI. Direct competitor on autonomous coding via Codex.
- Anthropic. Direct competitor on coding agents via Claude Code; Anthropic's #1 position on SWE-bench Verified is the leading benchmark signal.
- Reflection AI. Closest peer Insurgent on the autonomous-coding thesis. Asimov competes directly with Magic's LTM-based products.
- Cursor. The leading IDE-based coding co-pilot. Competitor in the developer-tools market that Magic is also targeting, though with a co-pilot framing rather than an autonomous-agent framing.
- GitHub Copilot. Microsoft's coding co-pilot, built on OpenAI models. Established market position with Microsoft distribution.
- Codeium / Windsurf. Coding-agent competitor, also a Specialized lab in the broader taxonomy.
- Google DeepMind. Less direct competition; DeepMind's coding capabilities ship via Gemini and Gemma rather than as standalone products.
Outlook
Several open questions affect Magic's trajectory in 2026 and 2027:
- LTM-3 and subsequent model releases. The capability profile of successor models will indicate whether the LTM architectural thesis continues to scale.
- Commercial product launch. A broadly available autonomous-coding product would test the market thesis against the established Cursor and GitHub Copilot incumbents.
- The Atlassian strategic relationship. Atlassian's investor role suggests potential product integration with Atlassian developer tools (Jira, Bitbucket).
- Continued senior-talent recruitment, particularly research engineers familiar with LTM-style long-context architectures.
- Frontier-lab response. As OpenAI, Anthropic, and Google DeepMind continue to expand context windows in their own models, the architectural moat from LTM may compress.
- Possible follow-on funding rounds at higher valuations if LTM-3 demonstrates clear capability advantages.
Sources
- TechCrunch: Generative AI coding startup Magic lands $320M investment from Eric Schmidt, Atlassian and others. August 2024 funding round.
- CapitalG: Magic, Reimagining Software Engineering with AI. Lead-investor strategic perspective.
- Tracxn: Magic AI 2026 Company Profile. Funding history reference.
- Maginative: Magic AI Secures $117 Million to Build an AI Software Engineer. Earlier funding round context.
- Magic.dev official site. Product page and company information.