Liquid AI

Liquid AI is a Boston-based AI research company spun out of MIT CSAIL in 2023, developer of Liquid Foundation Models (LFMs) built on a liquid-neural-network architecture optimized for on-device and edge deployment.

Liquid AI is an American artificial intelligence research company founded in 2023 by Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus, spun out of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The company is headquartered in Boston, Massachusetts, and develops Liquid Foundation Models (LFMs), a model family built on the liquid-neural-network architecture that the founding team developed at MIT. Liquid AI's positioning emphasizes on-device and edge deployment of capable models that are substantially smaller and more efficient than the transformer-family flagships from OpenAI, Anthropic, and Google DeepMind.

At a glance

  • Founded: 2023 in Boston (MIT spinout from CSAIL).
  • Status: Private.
  • Funding: Approximately $297 million cumulative. Most significant round: $250 million Series A in December 2024 at a $2.35 billion valuation, led by AMD, with participation from OSS Capital, PagsGroup, and other investors.
  • CEO: Ramin Hasani (co-founder).
  • Other notable leadership: Mathias Lechner (co-founder), Alexander Amini (co-founder), Daniela Rus (co-founder; Director of MIT CSAIL).
  • Open weights: Mixed. Several LFM variants have been released as open weights; some commercial variants are closed.
  • Flagship models: LFM family (1.3B, 7B, 40B parameter variants), with successor models in development.

Origins

Liquid AI was founded in 2023 as a spinout from MIT CSAIL, building on academic research the founding team had conducted on liquid neural networks. Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus had collaborated at MIT on a research line that produced "liquid time-constant networks," a neural-architecture family whose internal dynamics adapt continuously to the input rather than passing it through the fixed stack of discrete attention layers characteristic of transformers. Daniela Rus, the senior co-founder, is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of MIT CSAIL.
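
The founding research gives that description a concrete form. The sketch below is a minimal illustration of the liquid time-constant update Hasani and colleagues published in 2021, with a single tanh layer standing in for the learned gating network; it illustrates the academic formulation, not Liquid AI's production LFM architecture:

    import numpy as np

    def ltc_step(x, u, tau, W, b, A, dt=0.01):
        # One explicit-Euler step of a liquid time-constant (LTC) cell,
        # following dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
        # where f is a learned network (a single tanh layer here).
        f = np.tanh(W @ np.concatenate([x, u]) + b)  # input-dependent gate
        dxdt = -(1.0 / tau + f) * x + f * A          # input-dependent decay
        return x + dt * dxdt

    # Toy usage: 8 hidden units integrating a 3-dimensional input stream.
    rng = np.random.default_rng(0)
    x, tau = np.zeros(8), 1.0
    W, b, A = 0.1 * rng.standard_normal((8, 11)), np.zeros(8), np.ones(8)
    for _ in range(100):
        x = ltc_step(x, rng.standard_normal(3), tau, W, b, A)

The effective time constant of each unit depends on the current input, which is what lets the network adjust its dynamics continuously to streaming data rather than recomputing attention over a fixed context.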

The founding thesis combined a research bet on liquid neural networks as an architectural alternative to transformers with a commercial bet on on-device and edge AI as a high-value market segment. The architectural advantage of liquid networks, in the founders' framing, is dramatically improved parameter efficiency: capability per parameter is reportedly higher than for transformer-family models, particularly on time-series and continuous-input tasks where the liquid architecture's continuous-time dynamics align with the underlying data structure.

The company emerged from stealth in December 2023 with seed funding. The first LFM models, released in September 2024 and expanded through early 2025, included variants at 1.3 billion, 7 billion, and 40 billion parameters. The releases were positioned as evidence that the liquid-network architectural advantage scales beyond research demonstrations into deployable models with capability comparable to transformer-family models several times larger.

In December 2024, Liquid AI raised $250 million in a Series A round at a $2.35 billion valuation, led by AMD with participation from OSS Capital, PagsGroup, and other investors. AMD's lead-investor role is significant because LFM efficiency on commodity hardware is a strategic complement to AMD's silicon strategy. Cumulative funding through April 2026 is approximately $297 million.

In 2026, Liquid AI announced a strategic partnership with Mercedes-Benz to embed LFMs into the Mercedes-Benz User Experience (MBUX) infotainment system across third- and fourth-generation MBUX deployments, with first North American rollout scheduled for the second half of 2026. The Mercedes-Benz integration is the most prominent commercial reference for Liquid AI's edge-AI strategy.

Mission and strategy

Liquid AI's stated mission is to "build the most capable and efficient AI systems at every scale." Hasani and the founders have framed the strategy as a deliberate counter to the parameter-count-and-data-center-dependent approach pursued by frontier labs. The premise is that a substantial share of practical AI applications cannot afford the latency, cost, and connectivity assumptions of frontier-tier inference, and that on-device and edge AI is a structural market that requires architecturally different model families.

The strategy combines three threads. First, fundamental research and continued model development on liquid neural networks, scaled into the LFM family. Second, edge and on-device distribution: LFMs are positioned to run on phones, laptops, vehicles, factory equipment, and other constrained-compute environments where transformer-family flagships are not viable. Third, strategic partnerships with hardware and product companies (notably AMD on silicon and Mercedes-Benz on automotive) that align Liquid AI's efficiency thesis with each partner's product economics.

The competitive premise is that frontier-tier capability and edge-tier deployment have become separable markets, and that LFM architectural advantages produce a moat in the edge market that transformer-family competitors cannot match without comparable architectural innovation. Industry coverage has noted that Liquid AI's approach (much smaller models, much faster inference, deployment on consumer hardware) is structurally distinct from the parameter-scaling consensus that has dominated frontier AI through 2025.

Models and products

  • LFM family. Liquid Foundation Models with 1.3 billion, 7 billion, and 40 billion parameter variants released through 2024 and early 2025. Built on the liquid-neural-network architecture rather than the standard transformer. Optimized for parameter efficiency and inference cost.
  • Successor models. In development. Public details on the next-generation LFM line have not been disclosed.
  • Edge-deployment platform. Liquid AI has invested in tooling and infrastructure for embedding LFMs in mobile, automotive, industrial, and consumer-electronics products. The Mercedes-Benz MBUX integration is the leading reference deployment.

The company's commercial model combines open-weights releases of selected LFM variants on Hugging Face with closed-weights commercial variants distributed through enterprise partnerships. The Mercedes-Benz partnership is the most public commercial deployment as of April 2026.
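
For the open-weights variants, access follows the standard Hugging Face workflow. The sketch below assumes a released checkpoint exposes the usual transformers causal-language-model interface; the repository name is illustrative, not a confirmed model ID.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LiquidAI/LFM-1.3B"  # illustrative repo name, not confirmed
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Generate a short completion locally.
    inputs = tokenizer("Edge deployment matters because", return_tensors="pt")
    output = model.generate(**inputs.to(model.device), max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))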

Benchmarks and standing

LFM models have not consistently appeared on the standardized frontier benchmarks (Artificial Analysis Intelligence Index, LMArena, GPQA Diamond, SWE-bench Verified) because the model line is designed for a different point on the capability-versus-efficiency frontier. Where LFMs have been evaluated on small-model and edge-deployment benchmarks, the architectural advantages have produced competitive results at substantially smaller parameter counts.

Liquid AI's published claims include models "up to 1,000 times smaller" than frontier LLMs that deliver comparable output quality on specialized applications. The capability claims have not been independently audited at scale, but the model releases through 2024 and 2025 have been broadly received as supporting the architectural thesis.

Liquid AI's industry standing rests on the founders' MIT credentials, the AMD strategic-investor relationship, the Mercedes-Benz commercial deployment, and the architectural differentiation from the transformer-family majority of the market.

Leadership

As of April 2026, Liquid AI's senior leadership includes:

  • Ramin Hasani, Chief Executive Officer and co-founder. Postdoctoral researcher at MIT CSAIL prior to founding Liquid AI. Co-developer of the liquid-time-constant-network research line. Public face for the company on the LFM architectural thesis and commercial strategy.
  • Mathias Lechner, co-founder. Researcher at MIT CSAIL with deep credentials in liquid-neural-network architecture and continuous-time dynamics.
  • Alexander Amini, co-founder. Researcher at MIT CSAIL contributing to the autonomous-systems and robotics research line that informs LFM applications in physical-world contexts.
  • Daniela Rus, co-founder and senior advisor. Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of MIT CSAIL. Brings academic-research credentials and senior leadership in robotics and AI research.

The team includes additional senior-research and engineering leadership recruited from MIT, Boston-area academic networks, and enterprise AI deployment roles. Specific senior-leadership additions beyond the named cohort have not been broadly profiled in industry coverage.

Funding and backers

Liquid AI's funding history through April 2026 includes a seed round in late 2023 (terms not fully disclosed publicly), the December 2024 $250 million Series A at a $2.35 billion valuation led by AMD, and additional smaller follow-on financings that bring cumulative funding to approximately $297 million.

The investor base reflects the strategic positioning. AMD's lead-investor role in the Series A is the most strategically significant: AMD silicon is a natural deployment target for LFM workloads, and Liquid AI's efficiency thesis benefits AMD's competitive position against NVIDIA in markets where parameter efficiency matters more than peak capability. OSS Capital and PagsGroup add venture-capital scale. Strategic-corporate participation in subsequent financings has not been disclosed publicly.

The $2.35 billion valuation places Liquid AI in the mid-tier of Insurgent labs by capital scale, behind the largest pre-product cohort (SSI, Thinking Machines Lab, AMI Labs) but well above many smaller Insurgents.

Industry position

Liquid AI occupies a structurally distinctive position among Insurgent labs through the architectural and edge-deployment focus. The combination of the liquid-neural-network architectural thesis, the MIT CSAIL founder credentials, the AMD strategic partnership, the Mercedes-Benz commercial deployment, and the focused edge-AI commercial market produces a profile no other company has matched.

Strategic risks include the open question of whether liquid neural networks scale to frontier-tier capability or remain confined to the smaller-parameter and edge-deployment segments, the competitive pressure as frontier labs invest in their own efficient inference (notably Microsoft AI's Phi family, Mistral AI's Ministral series, Google DeepMind's Gemma family), and the commercial complexity of selling into automotive, industrial, and consumer-electronics customers with long product cycles.

Strategic strengths include the architectural differentiation from the transformer-family majority, the AMD silicon alignment, the Mercedes-Benz reference deployment, the academic credibility through MIT CSAIL, and the focused commercial positioning on a market segment (edge AI) that is large but underserved by frontier-tier providers.

The April 2026 industry context, including the Mercedes-Benz second-half-2026 rollout schedule and continuing AMD and partner activity, supports the commercial premise underlying Liquid AI's strategy.

Competitive landscape

Liquid AI competes with several Frontier and Insurgent labs:

  • Microsoft AI. The Phi family of small open-weights models is the closest competitor on edge-and-efficient deployment, though Phi uses transformer architectures rather than liquid networks.
  • Google DeepMind. The Gemma family of small open-weights models competes on parameter-efficient deployment.
  • Mistral AI. The Ministral 3 line (3B, 7B, 14B variants) directly competes on edge-deployment positioning.
  • Meta AI / FAIR. Smaller Llama variants compete on open-weights and edge deployment, though Meta's strategic focus has shifted toward closed-weights frontier with Muse Spark.
  • DeepSeek and Alibaba Qwen. Open-weights competitors at smaller parameter scales.
  • Apple. Apple's on-device 3B Apple Foundation Model is a direct competitor on the on-device deployment thesis, though Apple distributes through its own hardware exclusively.
  • NVIDIA Nemotron and adjacent edge-AI offerings. Competitor on the silicon-and-model integrated edge stack.

Outlook

Several open questions affect Liquid AI's trajectory in 2026 and 2027:

  • The Mercedes-Benz MBUX rollout in the second half of 2026 and any subsequent automotive partnerships, which determine whether the edge-AI commercial thesis scales.
  • LFM successor model releases and whether the architectural thesis scales toward larger parameter classes that compete more directly with frontier-tier models.
  • Continued AMD strategic relationship and any hardware-specific optimizations that strengthen the silicon-plus-model commercial position.
  • Open-weights releases versus closed-weights commercial monetization mix, which shapes both developer adoption and revenue growth.
  • Competitive response from Microsoft Phi, Google Gemma, Mistral Ministral, and Apple Foundation Models on the edge-deployment frontier.
  • Possible follow-on funding rounds or potential strategic-acquisition interest from automotive, industrial, or hardware partners.
