LMArena (LMSYS)
LMArena (formerly known as LMSYS, the Large Model Systems Organization) is an open-research collaboration that operates Chatbot Arena, a widely used crowdsourced leaderboard for evaluating foundation models. The organization was founded in 2023 as LMSYS by Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Hao Zhang, and other founding-team members from the UC Berkeley Sky Computing Lab and the broader Berkeley AI Research (BAIR) community. As of April 2026, LMArena is one of the principal foundation-model evaluation organizations, with Chatbot Arena positioned as the leading crowdsourced leaderboard alongside commercial peers such as Artificial Analysis.
At a glance
- Founded: 2023 as LMSYS at the UC Berkeley Sky Computing Lab. Renamed LMArena in 2024.
- Status: Open-research collaboration with cross-institution participation. A commercial spinout, Arena Intelligence, was formed in 2024 to expand the Arena platform.
- Funding: Arena Intelligence, the commercial entity, is backed by Andreessen Horowitz and other investors.
- CEO / Lead: Wei-Lin Chiang, Chief Executive Officer of Arena Intelligence (the commercial entity).
- Other notable leadership: Lianmin Zheng, Ying Sheng, Hao Zhang, Co-Founders.
- Open source: Partial. Selected research outputs, including SGLang, are released as open source on GitHub.
- Flagship outputs: the Chatbot Arena crowdsourced evaluation leaderboard; the SGLang structured-generation framework; published research on foundation-model evaluation.
Origins
LMSYS was founded in 2023 at the UC Berkeley Sky Computing Lab by Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Hao Zhang, and other founding-team members. During this period the team built Chatbot Arena, a crowdsourced evaluation leaderboard that ranks models through pairwise comparisons judged by human raters. Chatbot Arena rapidly became one of the most widely cited foundation-model leaderboards through 2023 and 2024.
The 2024 renaming to LMArena and the subsequent spinout of Arena Intelligence as a commercial entity, backed by Andreessen Horowitz, anchored the organization's commercial-product expansion. From 2024 through 2026, the Chatbot Arena leaderboard has continued to grow alongside a steady stream of published research on foundation-model evaluation.
Mission and strategy
LMArena's mission is to advance crowdsourced foundation-model evaluation through Chatbot Arena and related evaluation infrastructure. The strategy combines two threads: operating Chatbot Arena as a crowdsourced evaluation platform, and publishing research on evaluation methodology.
Distribution channels include the Chatbot Arena public leaderboard, research published through major academic venues, and Arena Intelligence's commercial products.
Models and products
- Chatbot Arena. Crowdsourced foundation-model evaluation leaderboard, ranking models through pairwise comparisons judged by human raters.
- SGLang. Open-source framework for structured generation and efficient serving of large language models.
- Published research on foundation-model evaluation methodology.
- Arena Intelligence. Commercial entity charged with expanding the Arena platform.
Benchmarks and standing
LMArena's evaluation activity centers on the Chatbot Arena leaderboard and its published research. Industry coverage has consistently characterized Chatbot Arena as the leading crowdsourced foundation-model leaderboard; it ranks models across providers using comparison metrics derived from human-rater preferences.
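The pairwise-preference ranking described above can be illustrated with a minimal Elo-style update. This is a sketch of the general idea only: the actual leaderboard fits a Bradley-Terry model over the full vote history rather than applying an online update, and the model names, K-factor, and battle log here are illustrative assumptions.

```python
# Sketch: turning pairwise human-preference votes into a ranking,
# in the spirit of crowdsourced leaderboards like Chatbot Arena.
# (The real system uses a Bradley-Terry fit, not this online Elo update.)

def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that model A is preferred over model B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Apply one pairwise vote: winner gains, loser loses, by surprise margin."""
    e_w = expected_score(ratings[winner], ratings[loser])
    delta = k * (1.0 - e_w)
    ratings[winner] += delta
    ratings[loser] -= delta

# Hypothetical battle log: (winner, loser) pairs from human raters.
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}
for w, l in votes:
    update(ratings, w, l)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard)  # → ['model-a', 'model-b', 'model-c']
```

Because each update moves ratings in proportion to how surprising the outcome was, upsets shift the ranking more than expected wins, which is what lets a fixed stream of noisy human votes converge toward a stable ordering.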
Leadership
As of April 2026, LMArena's senior leadership includes:
- Wei-Lin Chiang, Chief Executive Officer of Arena Intelligence.
- Lianmin Zheng, Co-Founder.
- Ying Sheng, Co-Founder.
- Hao Zhang, Co-Founder.
- Senior research staff across the Chatbot Arena and SGLang programs.
Funding and backers
Arena Intelligence, the commercial entity, is backed by Andreessen Horowitz and other investors.
Industry position
LMArena occupies a distinctive position among foundation-model evaluation organizations, combining the Chatbot Arena crowdsourced leaderboard, the SGLang structured-generation framework, and published research on evaluation methodology.
Competitive landscape
- Artificial Analysis. Commercial foundation-model leaderboard peer.
- Apollo Research, METR, Transluce, Timaeus. AI safety evaluation peer organizations.
- Berkeley BAIR, UC Berkeley Sky Computing Lab. Founding-team university affiliation.
- Hugging Face Open LLM Leaderboard. Alternative open-weights model leaderboard.
- Allen Institute for AI (Ai2), EleutherAI. Open-research peer organizations with evaluation cooperation.
Outlook
- Continued Chatbot Arena leaderboard expansion through 2026 and 2027.
- Continued commercial-product expansion at Arena Intelligence.
- Continued publication of research on foundation-model evaluation.
Sources
- Chatbot Arena. Crowdsourced foundation-model evaluation leaderboard.
- LMArena official site. Organization reference.
- SGLang on GitHub. Structured-generation framework.
- UC Berkeley Sky Computing Lab. Founding-team university affiliation.
- Wei-Lin Chiang LinkedIn. Founder and leadership reference.