CoreWeave

CoreWeave is an American GPU-cloud-infrastructure company headquartered in Roseland, New Jersey, founded in 2017 by Mike Intrator, Brian Venturo, and Brannin McBee. The company operates a specialized cloud focused on the high-density GPU compute required by frontier AI model training and inference, with strategic partnerships with NVIDIA, Microsoft, and OpenAI anchoring the customer base. CoreWeave listed on NASDAQ (CRWV) in March 2025 in one of the largest US technology IPOs of the year. As of April 2026, CoreWeave is the principal AI-specialized cloud provider in the US market, with multi-billion-dollar reserved-capacity backlogs from frontier-AI labs and continued infrastructure expansion across new data-center markets in the United States and Europe.

At a glance

  • Founded: 2017 in Hoboken, New Jersey, by Mike Intrator, Brian Venturo, and Brannin McBee. Originally founded as Atlantic Crypto for cryptocurrency-mining infrastructure; pivoted to AI cloud beginning 2019.
  • Status: Public. Listed on NASDAQ (CRWV) since March 2025 IPO at approximately $40 billion valuation.
  • Funding: Cumulative private capital exceeded $12 billion before the IPO, including the Series C of $1.1 billion at $19 billion valuation in May 2024 led by Coatue with NVIDIA strategic-investor participation. Public-market capitalization since IPO has been in the multi-tens-of-billions range.
  • CEO: Mike Intrator, Co-Founder and Chief Executive Officer.
  • Other notable leadership: Brian Venturo, Co-Founder and Chief Strategy Officer (former Chief Technology Officer). Brannin McBee, Co-Founder and Chief Development Officer. Nitin Agrawal, Chief Financial Officer (since 2023).
  • Open weights: N/A. CoreWeave is an infrastructure provider, not a model producer.
  • Flagship products: CoreWeave Cloud (GPU-on-demand and reserved-capacity cloud); Mission Control (cluster orchestration for AI training); high-density NVIDIA GB200 NVL72 and Hopper-class compute infrastructure.

Origins

CoreWeave was founded in 2017 as Atlantic Crypto by Mike Intrator, Brian Venturo, and Brannin McBee, with the original commercial premise of operating cryptocurrency-mining infrastructure on Ethereum and other proof-of-work blockchains. Intrator and Venturo had backgrounds in commodities trading at JPMorgan and Hudson River Trading; McBee had a quantitative-finance background. The mining business produced operational expertise in high-density compute, low-latency power delivery, and data-center economics that translated directly into the subsequent AI-cloud business.

The 2019 pivot rebranded the company as CoreWeave with a strategic shift to GPU cloud services for visual-effects rendering, video-game streaming, and emerging AI training applications. The 2020 to 2022 period saw measured commercial growth, with CoreWeave becoming a recognized alternative to hyperscale clouds for GPU-intensive workloads.

The structurally consequential inflection came in 2023. Microsoft's accelerated AI infrastructure scale-up included a multi-billion-dollar reserved-capacity contract with CoreWeave, providing the capital and customer-relationship anchor for rapid infrastructure expansion. NVIDIA's strategic-investor participation in the May 2024 Series C ($1.1 billion at $19 billion valuation) cemented the supply-chain-priority relationship for high-allocation GPU access during the global GPU shortage of 2023 to 2025. CoreWeave's customer base expanded to include OpenAI, with the company reporting multi-billion-dollar contract values across frontier-AI-lab customers.

The March 2025 IPO on NASDAQ (CRWV) at approximately $40 billion valuation provided public-market access. The public listing followed approximately $12 billion in cumulative private capital and one of the largest US technology IPO offerings of the year. Post-IPO, CoreWeave reported a multi-year reserved-capacity backlog exceeding $30 billion as of late 2025, anchored on long-term contracts with frontier-AI customers.

Mission and strategy

CoreWeave's stated mission is to provide the specialized cloud infrastructure that AI development requires, with a focus on high-density GPU compute, low-latency InfiniBand networking, and the operational scale that frontier-AI training demands. The strategy is differentiated from the hyperscale clouds (AWS, Microsoft Azure, Google Cloud Platform) along three axes. First, AI-specialized infrastructure: high-density NVIDIA-GPU clusters with the InfiniBand fabric, water cooling, and power-delivery characteristics required for trillion-parameter-class model training. Second, NVIDIA strategic-partner status: priority allocation on supply-constrained GPU SKUs (H100, GB200 NVL72) ahead of hyperscale competitors. Third, customer-relationship breadth across frontier-AI labs that hyperscale clouds cannot fully serve given competitive dynamics with their internal AI businesses.

The competitive premise is that frontier-AI compute is structurally separate from general-purpose cloud workloads and that an AI-specialized cloud can sustain higher margins and customer retention than a general-purpose hyperscaler in the AI segment. The premise has been validated by the multi-billion-dollar reserved-capacity backlog and the IPO valuation.

Models and products

  • CoreWeave Cloud. The core GPU cloud platform. On-demand and reserved-capacity NVIDIA GPU access (H100, H200, B200, GB200 NVL72) with InfiniBand networking and high-density configurations.
  • Mission Control. Cluster orchestration and observability platform for AI training workloads. Manages distributed-training jobs across thousands of GPUs.
  • CoreWeave Object Storage. S3-compatible object storage optimized for AI training-dataset access patterns.
  • AI Training Infrastructure. The reserved-capacity offering for frontier-AI labs. Multi-year contracts with dedicated cluster allocations.
  • AI Inference Infrastructure. Lower-latency, higher-utilization compute for production model serving.

Distribution channels include direct enterprise sales for reserved-capacity contracts, self-serve cloud access for on-demand workloads, and strategic-partner relationships with NVIDIA, Microsoft, and major frontier-AI labs.

Benchmarks and standing

CoreWeave is not evaluated against foundation-model benchmarks. The company's standing is measured on infrastructure performance metrics such as cluster MFU (model FLOPs utilization), training throughput, and time-to-training-completion, and on commercial metrics such as revenue growth, reserved-capacity backlog, and customer concentration.
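As a rough illustration of the first metric, cluster MFU can be estimated from achieved training throughput. The sketch below uses the common approximation of roughly 6 FLOPs per parameter per token for dense transformer training; all numbers are hypothetical and this is not a CoreWeave-published formula.

```python
def cluster_mfu(tokens_per_sec, num_params, num_gpus, peak_flops_per_gpu):
    """Estimate model FLOPs utilization (MFU) for a training cluster.

    Uses the common ~6 * params FLOPs-per-token approximation for dense
    transformer training. All inputs here are illustrative placeholders.
    """
    achieved_flops = 6 * num_params * tokens_per_sec   # model FLOPs actually performed
    peak_flops = num_gpus * peak_flops_per_gpu         # theoretical hardware peak
    return achieved_flops / peak_flops

# Hypothetical example: 70B-parameter model, 1,024 GPUs at 1 PFLOP/s peak each
print(round(cluster_mfu(1_000_000, 70e9, 1024, 1e15), 3))  # prints 0.41
```

Real-world MFU depends on precision, parallelism strategy, and interconnect efficiency, which is why dense InfiniBand fabrics matter for the full-cluster throughput figures the section describes.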

Industry coverage has characterized CoreWeave as the principal AI-specialized cloud globally, with the strategic-partner status with NVIDIA and the reserved-capacity contracts with Microsoft and OpenAI as the principal validating data points. The MLPerf training-benchmark results published by NVIDIA in cooperation with CoreWeave have positioned the company's infrastructure as competitive with hyperscale-cloud AI training systems on per-GPU and full-cluster training-throughput metrics.

Leadership

As of April 2026, CoreWeave's senior leadership includes:

  • Mike Intrator, Co-Founder and Chief Executive Officer.
  • Brian Venturo, Co-Founder and Chief Strategy Officer.
  • Brannin McBee, Co-Founder and Chief Development Officer.
  • Nitin Agrawal, Chief Financial Officer.
  • Senior infrastructure-engineering and customer-engagement leadership across the data-center, networking, and platform organizations.

The founder-led leadership has remained intact through the IPO transition, with the three co-founders continuing in operational roles and the post-IPO public-market reporting cadence anchored by Agrawal as CFO.

Funding and backers

Cumulative private capital exceeded $12 billion before the March 2025 IPO. Notable rounds included the Series A in 2019 with Hudson Bay Capital, the Series B of $221 million in 2022, the Series C of $1.1 billion at $19 billion valuation in May 2024 led by Coatue with NVIDIA strategic-investor participation, and the Series D in late 2024. The March 2025 NASDAQ IPO (CRWV) at approximately $40 billion valuation provided public-market access. Public-market capitalization through 2025 to 2026 has been in the multi-tens-of-billions range with periodic volatility tied to AI-infrastructure-cycle commentary.

Industry position

CoreWeave occupies a structurally distinctive position as the principal AI-specialized cloud globally, with the NVIDIA strategic-partner status, the multi-billion-dollar reserved-capacity backlog, the public-market capitalization, and the operational track record at frontier-AI training scale. Industry coverage has consistently characterized CoreWeave as the leading hyperscale-cloud alternative for AI-training workloads, with the reserved-capacity contract structure providing revenue visibility that the hyperscale clouds cannot match in the equivalent AI-segment business.

The competitive relationship with the hyperscalers is structurally asymmetric. Microsoft Azure, AWS, and Google Cloud Platform all operate large AI-training infrastructures internally and serve external AI customers; CoreWeave operates in a vendor-neutral mode often characterized as more customer-aligned for frontier-AI labs that perceive competitive risk in hosting on hyperscale clouds owned by AI competitors.

Competitive landscape

  • Microsoft Azure, AWS, Google Cloud Platform. The hyperscale-cloud competitors. Each operates AI-specialized compute offerings; competitive dynamics are mediated by the parent-company AI businesses.
  • Lambda Labs, Nebius (spun out of Yandex). AI-specialized cloud peers operating at smaller reserved-capacity scale.
  • Crusoe Energy, Applied Digital. AI-data-center-infrastructure peers with different business-model structures (energy-stranded data centers, modular construction).
  • NVIDIA DGX Cloud. NVIDIA's first-party cloud offering. Operates in partnership with hyperscale clouds rather than as a direct CoreWeave competitor.
  • Together AI, Replicate. Inference-specialized AI-cloud peers with different workload focus.

Outlook

  • Continued data-center expansion across US and European markets through 2026 to 2027.
  • Conversion of the reserved-capacity backlog into recognized revenue.
  • Continued NVIDIA strategic-partner allocation priority on next-generation GPU SKUs (Rubin and successors).
  • Public-market valuation trajectory tied to AI-infrastructure-cycle commentary.
  • Customer-concentration evolution as the frontier-AI customer base broadens.
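The backlog-conversion dynamic in the list above can be sketched with a simple straight-line model. The contract value and duration below are hypothetical, and real reserved-capacity contracts typically recognize revenue on capacity actually delivered rather than on a flat schedule.

```python
def straight_line_quarterly_revenue(total_contract_value, contract_years):
    """Illustrative straight-line conversion of a reserved-capacity
    contract from backlog into recognized quarterly revenue.

    Real contracts generally recognize revenue as capacity is delivered,
    so actual recognition is lumpier than this flat schedule suggests.
    """
    return total_contract_value / (contract_years * 4)

# Hypothetical $12B five-year reserved-capacity contract
print(straight_line_quarterly_revenue(12e9, 5))  # 600 million per quarter
```

Even this simplified view shows why a $30 billion-plus backlog translates into multi-year revenue visibility: each long-dated contract drips into recognized revenue over many quarters.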
