Michael Rabbat

Michael Rabbat is a Canadian-American electrical engineer, co-founder and Vice President of World Models at AMI Labs, and an associate industry member of Mila who led the V-JEPA video world-model research line at Meta FAIR.
Michael Rabbat is an electrical engineer and machine-learning researcher, co-founder and Vice President of World Models at AMI, and an associate industry member of Mila, the Quebec Artificial Intelligence Institute. He spent 2017 to 2025 at Meta's Fundamental AI Research organization, where he was a founding member of the Montreal FAIR lab and led the V-JEPA video world-model research line that became the technical foundation of AMI's founding thesis. As of May 2026, he runs AMI's Montreal office and is a senior researcher on the company's world-model program following the $1.03 billion seed round announced March 9, 2026.

Origins

Rabbat was raised in the United States and pursued electrical engineering across three American research universities. He completed his Bachelor of Science at the University of Illinois at Urbana-Champaign in 2001, then a Master of Science at Rice University in Houston, Texas, in 2003. The Rice period exposed him to statistical signal processing and the wavelet-and-compressed-sensing community that has shaped much of the discipline since.

He moved to the University of Wisconsin-Madison in 2003 for doctoral work, joining the group of Robert Nowak, then a leading researcher in statistical signal processing, sensor networks, and decentralized estimation. He completed his PhD in 2006 with a dissertation on distributed optimization in sensor networks, building on work co-authored with Nowak and presented at the International Symposium on Information Processing in Sensor Networks. The thesis line on in-network data processing and gossip algorithms produced several of his most-cited papers over the following decade and continues to shape his outlook on distributed and federated learning.

Career

Rabbat joined McGill University in Montreal in 2007 as Assistant Professor in the Department of Electrical and Computer Engineering. He was promoted to Associate Professor in 2013 and remained on the McGill faculty through 2018, when he transitioned to an adjunct professor role that he retains as of May 2026. The McGill years produced a sustained research line on distributed algorithms, optimization, and graph signal processing. His 2010 paper "Gossip Algorithms for Distributed Signal Processing", co-authored with Alex Dimakis, Anna Scaglione, and others, is among his most-cited works.

In 2017 he joined Facebook AI Research (FAIR) as one of the four founding members of the new Montreal lab, alongside Joelle Pineau, Pascal Vincent, and Nicolas Ballas. The Montreal lab was Yann LeCun's expansion of FAIR into Canada, and was launched in partnership with Mila under Pineau's directorship. Rabbat held the title of Director of Research Science at FAIR through the 2025 restructuring. The FAIR years pulled his research focus toward representation learning, self-supervised learning, and the scaling problems that come with frontier-tier model training.

The middle FAIR years anchored Rabbat in the Joint Embedding Predictive Architecture (JEPA) program LeCun proposed in 2022. Rabbat held the senior-author position on I-JEPA (CVPR 2023), V-JEPA (2024), and V-JEPA 2 (June 2025), the three published model generations of the JEPA program. He was also a co-author on DINOv2, the visual self-supervised foundation model that has been cited more than 10,000 times.

In late 2025 Rabbat co-founded AMI with Yann LeCun, Alexandre LeBrun, Saining Xie, Pascale Fung, and Laurent Solly, taking the role of Vice President of World Models. The AMI launch followed LeCun's departure from Meta on November 19, 2025. AMI's $1.03 billion seed round, announced March 9, 2026, valued the company at $3.5 billion pre-money, or approximately $4.5 billion post-money. Rabbat heads AMI's Montreal office, one of four operating sites alongside the Paris headquarters, New York, and Singapore, and retains his McGill adjunct appointment and Mila affiliation in parallel.

Affiliations

  • University of Illinois at Urbana-Champaign: BSc in Electrical Engineering, 1997 to 2001.
  • Rice University: MSc in Electrical and Computer Engineering, 2001 to 2003.
  • University of Wisconsin-Madison: PhD in Electrical Engineering, 2003 to 2006.
  • McGill University: Assistant Professor (2007 to 2013), Associate Professor (2013 to 2018), Adjunct Professor (2018 to present).
  • Meta AI / FAIR: Founding member of FAIR Montreal and Director of Research Science, 2017 to 2025.
  • Mila, the Quebec Artificial Intelligence Institute: Associate industry member, late 2010s to present.
  • AMI: Co-founder and Vice President of World Models, late 2025 to present.

Notable contributions

Rabbat's published record divides into two phases. The first, anchored at McGill from 2007 to 2018, sits in distributed signal processing, sensor networks, and gossip-style optimization algorithms. The second, anchored at FAIR from 2017 onward, sits in representation learning, self-supervised pre-training, and the JEPA family of world models that became the technical foundation of AMI.

  • Distributed optimization and gossip algorithms (2004 to 2015). His PhD work with Robert Nowak on "Distributed optimization in sensor networks" (IPSN 2004) opened the in-network estimation line that ran through his McGill years. The 2010 paper "Gossip Algorithms for Distributed Signal Processing" remains a standard reference.
  • I-JEPA (CVPR 2023). With first author Mahmoud Assran and senior authors Rabbat, LeCun, and Ballas. Demonstrated that self-supervised learning could predict missing image regions in latent space rather than at the pixel level, a structural alternative to mask-and-reconstruct methods like Masked Autoencoders.
  • V-JEPA (2024). "Revisiting Feature Prediction for Learning Visual Representations from Video" extended JEPA to video, training on internet-scale unlabeled video data.
  • V-JEPA 2 and V-JEPA 2-AC (June 2025). Scaled V-JEPA to more than one million hours of internet video, then post-trained an action-conditioned variant on 62 hours of robot-interaction data from the DROID dataset. The system performed zero-shot pick-and-place on Franka arms with no environment-specific training, planning roughly sixteen times faster than diffusion-based world models with comparable success rates.
  • DINOv2 (2023). Co-author on the Meta self-supervised visual foundation model. The paper has been cited more than 10,000 times on Google Scholar and remains one of the most widely deployed open-weights vision encoders.
  • Federated learning research at FAIR. His 2023 FLOW Seminar talk on pre-training and initialization in federated learning, and the earlier EPFL lecture on federated learning at scale, are public statements of his distributed-training line.
  • Recent AMI-era pre-prints. The dblp record lists 2026 pre-prints titled "Learning Latent Action World Models In The Wild," "Parallel Stochastic Gradient-Based Planning for World Models," and "V-JEPA 2.1: Unlocking Dense Features in Video Self-Supervised Learning," extending the V-JEPA program toward AMI's physical-world AI roadmap.
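The gossip-algorithms line above refers to a simple, well-studied primitive: nodes in a network repeatedly average with random neighbors until every node holds the network-wide mean, with no central coordinator. The sketch below is a generic randomized pairwise-gossip illustration, not code from Rabbat's papers; the graph and values are made up.

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip: at each step one random edge (i, j)
    activates and both endpoints replace their values with the pair
    average. On a connected graph every node converges to the global
    mean, and the sum of all values is preserved at every step."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# A ring of 5 sensors, each holding one local measurement (mean is 4.0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
est = gossip_average([10.0, 0.0, 5.0, 3.0, 2.0], edges)
```

Each pairwise exchange is purely local, which is what made the scheme attractive for sensor networks and, later, an intellectual ancestor of decentralized and federated training.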
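The structural point of I-JEPA, predicting masked regions in latent space rather than reconstructing pixels, can be caricatured in a few lines. This is a deliberately toy sketch under invented assumptions (linear "encoders," mean-pooled context, random data), nothing like the actual ViT-based architecture; only the shape of the computation matches the paper's idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(patches, W):
    return patches @ W  # toy linear stand-in for an image encoder

# Toy "image": 8 patches of dimension 16, latent dimension 8
patches = rng.normal(size=(8, 16))
W_ctx = rng.normal(size=(16, 8)) * 0.1   # context-encoder weights
W_tgt = W_ctx.copy()                     # target encoder (EMA copy)
W_pred = rng.normal(size=(8, 8)) * 0.1   # predictor weights

visible, masked = [0, 1, 2, 3, 4], [5, 6, 7]

# Context path: encode visible patches, pool, predict masked-patch latents
ctx = encode(patches[visible], W_ctx).mean(axis=0)
pred = np.stack([ctx @ W_pred for _ in masked])

# Target path: encode the masked patches themselves (no gradient in practice)
tgt = encode(patches[masked], W_tgt)

# Regression loss in latent space -- no pixel reconstruction anywhere
loss = np.mean((pred - tgt) ** 2)

# Slow EMA update of the target encoder, as in I-JEPA
tau = 0.996
W_tgt = tau * W_tgt + (1 - tau) * W_ctx
```

Because the loss lives in representation space, the model is free to ignore unpredictable pixel-level detail, which is the contrast with mask-and-reconstruct methods like Masked Autoencoders.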
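The V-JEPA 2-AC planning claim rests on a standard pattern: roll candidate action sequences through a learned latent world model and pick the sequence whose predicted end state is closest to a goal. The sketch below shows that pattern with a cross-entropy-method planner over an invented random linear dynamics function; `world_model`, `A`, `B`, and all dimensions are placeholders, not the published system.

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(4, 4)) * 0.3
B = rng.normal(size=(2, 4)) * 0.3

def world_model(state, action):
    # Stand-in latent dynamics: a fixed random nonlinear system.
    return np.tanh(state @ A + action @ B)

def plan(state, goal, horizon=5, samples=256, iters=4, elite=32):
    """Cross-entropy-method planning: sample action sequences, roll each
    through the world model, keep the lowest-cost elites, refit the
    sampling distribution, and return the first action of the result."""
    mu = np.zeros((horizon, 2))
    sigma = np.ones((horizon, 2))
    for _ in range(iters):
        acts = mu + sigma * rng.normal(size=(samples, horizon, 2))
        costs = np.empty(samples)
        for k in range(samples):
            s = state
            for t in range(horizon):
                s = world_model(s, acts[k, t])
            costs[k] = np.sum((s - goal) ** 2)  # distance to goal latent
        best = acts[np.argsort(costs)[:elite]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu[0]

a0 = plan(np.zeros(4), goal=np.ones(4) * 0.5)
```

Planning by rollout in latent space is also what the reported speed comparison against diffusion-based world models is measuring: the cost of evaluating candidate futures.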

Investments and boards

  • AMI (AI): Co-founder and Vice President of World Models, late 2025 to present. AMI announced a $1.03 billion seed round at a $3.5 billion pre-money valuation on March 9, 2026.

No public personal angel-investor activity is on record in AI, semiconductors, datacenters, software, or energy as of May 2026.

Network

Rabbat's longest research relationships sit in the FAIR Montreal cohort. The four founding researchers (Joelle Pineau, Pascal Vincent, Nicolas Ballas, and Rabbat) have been close colleagues since 2017. Ballas is the most-frequent senior co-author on the JEPA-family papers. Vincent is a co-author on I-JEPA. Pineau led FAIR through 2025 before leaving for Cohere.

The JEPA collaboration with Yann LeCun is the most consequential of his FAIR-era research relationships. The LeCun-Rabbat-Ballas senior-author triad has anchored the program through three full model generations. The professional relationship carried directly into AMI's founding cohort. Mahmoud Assran, first author on I-JEPA and V-JEPA 2, is among the most-cited junior collaborators across the line.

The McGill networks-and-signal-processing community is the deeper academic layer of his ties, including McGill colleague Mark Coates and PhD advisor Robert Nowak. The Mila industrial-affiliate community connects him to Yoshua Bengio and the broader Montreal AI ecosystem.

The AMI founding cohort places Rabbat alongside Yann LeCun, Alexandre LeBrun, Saining Xie, Pascale Fung, and Laurent Solly. Because it was Rabbat, rather than LeCun directly, who led the JEPA research line at FAIR, industry coverage has paired his recruitment with LeCun's research credentials as the principal technical-credibility anchor for AMI's world-model thesis.

Position in the field

Rabbat occupies an unusual structural position among senior AI researchers of his cohort. The combination of a distributed-systems doctoral lineage, a decade on the McGill faculty, and an eight-year period leading a major research line at a frontier industrial lab is rare. His h-index of 62 on Google Scholar reflects sustained publication impact across two distinct research lines.

The Vice President of World Models role places him in the operating leadership of the most-watched non-LLM frontier-research bet of 2026. Industry coverage has framed Rabbat's recruitment as one of the principal technical-credibility data points for AMI's $4.5 billion post-money seed valuation, since the JEPA program he led at FAIR is the published research foundation that AMI's roadmap continues.

The dual academic appointment at McGill and Mila alongside the AMI operating role mirrors the structure used by Yann LeCun and Saining Xie at NYU and AMI. The pattern of senior researchers retaining academic affiliations alongside frontier-lab leadership is a defining feature of the 2026 insurgent-lab cohort.

Outlook

Open questions over the next 6 to 18 months:

  • First AMI publications under Rabbat's name. Whether AMI will produce papers led by his Montreal group, and whether they extend the V-JEPA line or open new world-modeling directions.
  • V-JEPA 3 and beyond. The annual JEPA cadence (I-JEPA 2023, V-JEPA 2024, V-JEPA 2 in 2025) suggests a 2026 release is likely. Whether the next generation appears under Meta or AMI ownership is open.
  • Robotics and physical-world integration. The V-JEPA 2-AC manipulation result is most directly aligned with AMI's physical-world AI focus. Whether AMI announces a robotics platform, partnership, or scaled deployment is a watchable signal.
  • Montreal hiring. Whether the AMI Montreal site grows into a substantial research operation, and which FAIR or Mila researchers move with him, will indicate the depth of AMI's Canadian footprint.
  • Federated and distributed learning at AMI. Whether Rabbat's earlier distributed-learning line is integrated into AMI's training infrastructure or held separately is open given the company's seed-stage compute investment.

About the author

Nextomoro

nextomoro tracks progress for AI research labs, models, and what's next.