Alibaba Qwen / DAMO
Alibaba Qwen is the artificial intelligence research and product line of Alibaba Group, the Chinese technology and e-commerce conglomerate. Research and engineering run through the company's DAMO Academy and Tongyi Lab, with commercial distribution through Alibaba Cloud. The Qwen family of large language and multimodal models (spanning the Qwen3.6, Qwen3.5-Omni, Qwen-VL, Qwen-Audio, and Wan video lines) is among the most-downloaded open-weights model series globally, with most releases distributed under the permissive Apache 2.0 license. As of April 2026, Qwen sits in the leading group of Chinese open-weights frontier developers alongside DeepSeek and competes with Meta AI / FAIR's Llama line for the global open-weights leadership position.
At a glance
- Founded: DAMO Academy established in 2017 in Hangzhou as Alibaba's research arm. Tongyi Qianwen / Qwen large-language-model program launched April 2023. Open-weights releases began September 2023.
- Status: Research and product unit of Alibaba Group, which is listed on the New York Stock Exchange and Hong Kong Stock Exchange as BABA / 9988.HK.
- Funding: Alibaba Group internal R&D budget. Public-company financial reporting; no separate disclosed funding round for the Qwen unit.
- CEO: Eddie Wu (Group CEO of Alibaba; leads the AI task force formed in March 2026).
- Other notable leadership: Wu Zeming (Alibaba Group CTO), Zhou Jingren (Alibaba Cloud CTO; assumed direct oversight of Qwen following the March 2026 reorganization), Zhou Hao (formerly senior staff research scientist at Google DeepMind, joined April 2026 to lead post-training research). Lin Junyang, the founding head of the Qwen team, departed in March 2026.
- Open weights: Yes, predominantly. Most Qwen models ship under the Apache 2.0 license. The 2026 flagship Qwen3.6-Max-Preview is closed-weights and API-only; smaller Qwen3.6 variants remain open-weights.
- Flagship models: Qwen3.6-Max-Preview (April 2026, closed-weights, API-only), Qwen3.6-27B (April 2026, open-weights), Qwen3.5-Omni (multimodal, April 2026).
Origins
Alibaba's research investment in artificial intelligence dates to the formation of DAMO Academy in October 2017, announced as a $15 billion three-year research commitment focused on machine learning, IoT, fintech, and quantum computing. DAMO Academy was structured as Alibaba's primary fundamental-research organization, complementing the product engineering teams in Alibaba Cloud and Alibaba's e-commerce businesses. By 2019, DAMO researchers had published StructBERT, one of the early Chinese-developed pre-trained language models built on the BERT architecture.
The Tongyi Qianwen program (commonly referenced internationally as Qwen) was announced in April 2023, with Qwen-7B released open-weights in September 2023 as the first model in the family. The release pattern that followed was unusually prolific: Qwen-14B, Qwen-72B, the multimodal Qwen-VL line, Qwen-Audio, Qwen-Image, and the Qwen-Math and Qwen-Coder specialized families followed across 2024 and 2025, with most models released under Apache 2.0 licensing and distributed through Hugging Face. The number of distinct Qwen variants released by late 2025 exceeded 45 across text, vision, audio, image, and video modalities.
The Qwen3 family launched in early 2025, introducing a hybrid "thinking and non-thinking" reasoning architecture that allowed the same model to operate in low-latency mode for short-context tasks and in extended-reasoning mode for complex problems. Qwen3.5 followed in February 2026, with Qwen3.5-Omni in April 2026 introducing native cross-modal interaction across image, video, audio, and text. The Qwen3.6 family in April 2026 included both open-weights releases (Qwen3.6-27B, Qwen3.6-35B-A3B) and the closed-weights flagship Qwen3.6-Max-Preview, the first time Alibaba gated its top-tier Qwen model behind API-only access.
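The hybrid-mode idea described above amounts to a per-request dispatch decision between a low-latency path and an extended-reasoning path. The sketch below is purely illustrative: the function name, heuristics, and latency budget are assumptions for exposition, not Alibaba's API (Qwen3 exposes the real toggle through its chat template, not through an interface like this).

```python
# Illustrative sketch of hybrid thinking/non-thinking dispatch.
# All names and thresholds here are hypothetical.

def route_request(prompt: str, latency_budget_ms: int) -> str:
    """Pick a reasoning mode for a single request.

    Short prompts under a tight latency budget go to the fast,
    non-thinking path; everything else gets extended reasoning.
    """
    looks_complex = len(prompt) > 500 or any(
        kw in prompt.lower() for kw in ("prove", "derive", "step by step")
    )
    if latency_budget_ms < 1000 and not looks_complex:
        return "non-thinking"  # low-latency, short-context path
    return "thinking"          # extended-reasoning path

print(route_request("What is the capital of France?", 500))     # non-thinking
print(route_request("Prove that sqrt(2) is irrational.", 5000)) # thinking
```

In production systems the equivalent decision is typically made by the caller per request rather than by a server-side heuristic, which is what makes a single model serving both modes attractive.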
A leadership transition occurred in March 2026. Lin Junyang, the founding head of the Qwen unit, departed Alibaba; Yu Bowen, head of Qwen post-training, departed in the same period. Alibaba responded by forming a foundation-model task force led by Group CEO Eddie Wu, Group CTO Wu Zeming, and Alibaba Cloud CTO Zhou Jingren, and by hiring Zhou Hao from Google DeepMind to lead Qwen post-training research. The reorganization separates pre-training, post-training, text, and multimodality into distinct teams, replacing the more vertically integrated structure that Lin had operated.
Mission and strategy
Alibaba's stated AI mission, as articulated by Eddie Wu in the March 2026 task-force formation announcement, is to maintain a leading position in foundation-model research while integrating Qwen capability across Alibaba's commercial business units. The company has framed AI as "a long-term strategic priority" and committed to maintaining the open-source release cadence that has driven Qwen adoption.
The strategy combines three threads. First, prolific open-weights releases under the Apache 2.0 license, building developer-community momentum and making Qwen a default option for Chinese and international developers building on open foundations. Second, monetization through Alibaba Cloud's DashScope and Bailian platforms, where Qwen models are sold as managed APIs and integrated with Alibaba Cloud's compute infrastructure rather than licensed directly. Third, vertical integration with Alibaba's commercial businesses, including e-commerce search and recommendation, the AutoNavi maps and travel businesses, and emerging robotics applications through DAMO's RynnBrain line.
The competitive premise is that open-weights distribution drives developer adoption, which drives Alibaba Cloud's compute and API revenue, and that the cloud-as-distribution model is structurally distinct from the closed-weights API-revenue model that dominates US frontier labs. The closed-weights gating of Qwen3.6-Max-Preview in April 2026 represents a partial departure from this strategy and signals that Alibaba intends to capture frontier-tier capability rents directly when the capability gap warrants premium pricing.
Models and products
- Qwen3.6-Max-Preview. April 2026 flagship. Closed-weights, API-only access through Alibaba Cloud DashScope and Bailian. Reported top-rank performance on SWE-Bench Pro, Terminal-Bench 2.0, and several agentic-coding benchmarks.
- Qwen3.6-27B and Qwen3.6-35B-A3B. April 2026 open-weights releases under Apache 2.0. Dense and mixture-of-experts variants positioned for community fine-tuning and on-premises deployment.
- Qwen3.5 and Qwen3.5-Omni. February to April 2026 releases. Qwen3.5-Omni provides native cross-modal interaction across image, video, audio, and text in a single model.
- Qwen 3 (Qwen3-235B-A22B, Qwen3-32B, smaller variants). 2025 releases. Hybrid thinking-mode architecture. Apache 2.0 licensing.
- Qwen2.5 series. 2024 releases. The series that established Qwen's position as the leading open-weights option for Chinese-language tasks and a competitive option for English-language work.
- Qwen-VL, Qwen-Audio, Qwen-Image. Multimodal lines in vision, audio, and image-generation domains. Open-weights through 2024 and 2025.
- Wan. Video-generation model line, released open-weights December 2024 through February 2025.
- DAMO research lines. RynnBrain (robotics foundation model), DAMO-2K (text), and other specialist releases distributed alongside the main Qwen line.
The commercial distribution channels for paying customers are Alibaba Cloud DashScope (the developer API) and Bailian (the enterprise model-as-a-service platform). Open-weights models are distributed through Hugging Face and ModelScope (Alibaba's Hugging Face equivalent). Qwen is also embedded in Alibaba Group consumer products, including search and the announced AI-in-cars integration with Chinese automakers.
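As a sketch of what targeting DashScope looks like for a developer, the snippet below builds a request body for DashScope's OpenAI-compatible mode. The base URL reflects Alibaba Cloud's published compatibility-mode endpoint; the model name is a hypothetical stand-in, and no network call is made.

```python
# Build a chat-completions request body for DashScope's
# OpenAI-compatible mode. The model name "qwen3.6-max-preview"
# is a hypothetical stand-in, not a confirmed identifier.
import json

DASHSCOPE_BASE = "https://dashscope.aliyuncs.com/compatible-mode/v1"

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Return the JSON body for POST {DASHSCOPE_BASE}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

body = build_chat_request("qwen3.6-max-preview",
                          "Summarize the Apache 2.0 license in one line.")
print(json.dumps(body, indent=2))
```

Because the endpoint speaks the OpenAI wire format, existing OpenAI-compatible client libraries can usually be pointed at it by swapping the base URL and API key.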
Benchmarks and standing
Qwen3.6-Max-Preview reports top-rank performance on six agentic-coding and reasoning benchmarks at release, including SWE-Bench Pro, Terminal-Bench 2.0, SkillsBench, QwenClawBench, QwenWebBench, and SciCode. The closed-weights flagship is positioned to compete directly with the closed-weights frontier from OpenAI, Anthropic, and Google DeepMind on agentic and coding tasks.
Qwen3.6-27B, the open-weights companion, has been characterized in industry coverage as outperforming substantially larger open-weights mixture-of-experts models on agentic coding benchmarks, with reports of dense-model leadership against 397B-parameter MoE competitors.
Qwen models have historically dominated Hugging Face download metrics among Chinese-developed models and remained competitive on English-language benchmarks. Qwen's download counts and community fine-tune ecosystem are among the largest globally, behind only Meta AI / FAIR's Llama family.
Benchmark leadership is point-in-time and rotates with the release cadence of competing labs. The strategic claim of the Qwen line is consistency of capability across an unusually broad model portfolio rather than peak benchmark leadership on any single model.
Leadership
As of April 2026, leadership oversight of the Qwen and DAMO AI program includes:
- Eddie Wu, Group CEO of Alibaba. Leads the AI task force formed March 2026 to coordinate foundation-model development across Alibaba's business units.
- Wu Zeming, Group CTO of Alibaba. Co-leads the AI task force.
- Zhou Jingren, Alibaba Cloud CTO. Direct oversight of the Qwen technical organization following the March 2026 reorganization. Alibaba Cloud research leader with multiple Qwen-line technical contributions.
- Zhou Hao, head of Qwen post-training research. Joined April 2026 from Google DeepMind, where he had been a senior staff research scientist. The hire reinforces Alibaba's effort to maintain Qwen capability after the March 2026 departures.
The leadership transition followed the departures of Lin Junyang (founding head of the Qwen team) and Yu Bowen (Qwen post-training lead) in March 2026. Alibaba publicly disputed press characterizations that framed the departures as a coordinated resignation event, while acknowledging the broader reorganization of the team structure away from a startup-style integrated unit toward a more functional separation of pre-training, post-training, text, and multimodality teams.
Funding and backers
Alibaba does not disclose AI-specific R&D figures separate from its consolidated public-company reporting. The DAMO Academy was originally announced with a $15 billion three-year research commitment in 2017. Since 2023, Alibaba has expanded AI investment through both DAMO and Alibaba Cloud, and the company has stated publicly that AI infrastructure investment is a long-term strategic priority.
The capital structure is the publicly listed Alibaba Group, which had market capitalization in the $200 billion to $300 billion range across 2025 and 2026. Alibaba's cash position has supported continuous investment in compute infrastructure for Qwen training and inference, including deployment of domestic Chinese AI accelerators. Alibaba Cloud's revenue trajectory and AI-services growth are followed in the company's quarterly earnings reports and provide the principal commercial validation of the Qwen monetization thesis.
Industry position
Alibaba Qwen and DAMO occupy a structurally distinctive position among Chinese AI labs. The combination of unmatched release breadth across modalities, the Apache 2.0 open-weights default, the cloud-as-distribution monetization model, and the parent-company scale produces a profile that no peer Chinese lab matches, and that no US lab matches at the same scale of permissive open-weights distribution. Industry coverage has frequently characterized Qwen as the most consequential open-weights model series globally outside of Meta AI / FAIR's Llama line.
Strategic risks include execution of the leadership transition following the March 2026 reorganization; intensifying domestic competition from DeepSeek (the open-weights frontier challenger) and from the product-distribution strategies of Tencent, ByteDance, and Moonshot; and the partial closed-weights pivot represented by Qwen3.6-Max-Preview, which may erode developer-community goodwill if it expands. Strategic strengths include model-portfolio breadth, the cloud-distribution moat, the parent company's financial and infrastructure base, and the global developer-community position.
Competitive landscape
Alibaba Qwen competes with several frontier and open-weights labs:
- DeepSeek. Direct open-weights competitor in China. DeepSeek and Qwen are the two leading Chinese open-weights frontier developers; DeepSeek's V4 and Qwen's 3.6 series are the principal head-to-head benchmark competitors as of April 2026.
- Meta AI / FAIR. Direct open-weights competitor globally. Llama and Qwen are the dominant non-Chinese and Chinese permissive-license open-weights lines respectively.
- OpenAI, Anthropic, Google DeepMind. Closed-weights frontier competitors. Qwen3.6-Max-Preview is positioned to compete on capability, while the open-weights Qwen line competes on cost and developer ergonomics.
- Mistral AI. European open-weights competitor at smaller scale than Qwen's full portfolio.
- Tencent Hunyuan, Baidu, ByteDance Seed, MiniMax, Moonshot, Z.AI, StepFun. Peer Chinese frontier labs. Each pursues distinct strategies; Qwen's distinguishing features are the open-weights default, the model-portfolio breadth, and the Alibaba Cloud distribution channel.
- Reflection AI. US-based insurgent positioning itself as the open-weights frontier challenger to DeepSeek; Qwen is a second-order target given its position as the second principal Chinese open-weights line.
Outlook
Several open questions affect Alibaba Qwen's trajectory in 2026 and 2027:
- The post-reorganization Qwen release cadence and the capability of the next major model release after Qwen3.6.
- The balance between open-weights and closed-weights distribution. The expansion or rollback of the Qwen3.6-Max-Preview closed-weights model is a key signal.
- The execution and technical contribution of Zhou Hao and the post-March-2026 leadership structure.
- Alibaba Cloud's AI services revenue growth as the commercial validation of the open-weights-driven cloud-distribution thesis.
- The degree of integration between Qwen and Alibaba's commercial business units, including the announced auto-industry voice-agent integrations.
- Continued senior-talent recruitment from Chinese and international AI research organizations, particularly given the post-March-2026 personnel transitions.
- US export-control developments affecting Alibaba's compute infrastructure and the broader Chinese AI hardware-and-software ecosystem.
Sources
- Alibaba Cloud: Alibaba Introduces Qwen3, Setting New Benchmark in Open-Source AI with Hybrid Reasoning. Qwen3 release context.
- Pandaily: Alibaba Approves Qwen Lead Lin Junyang's Resignation; CTO Zhou Jingren Assumes Control, DeepMind's Zhou Hao Joins. March 2026 leadership transition coverage.
- VentureBeat: Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release. Reorganization analysis.
- MarkTechPost: Alibaba Qwen Team Releases Qwen3.6-27B. April 2026 open-weights release coverage.
- TokenMix: Qwen3.6-Max-Preview Review. Closed-weights flagship benchmark coverage.
- CNBC: Alibaba's Qwen AI is coming to cars. Commercial integration context.
- Wikipedia: Qwen. Comprehensive series-history reference.
- Qwen on Hugging Face. Open-weights distribution channel.