Michael Rabbat
Michael Rabbat is an electrical engineer and machine-learning researcher, co-founder and Vice President of World Models at AMI, and an associate industry member of Mila, the Quebec Artificial Intelligence Institute. He spent 2017 to 2025 at Meta's Fundamental AI Research organization, where he was a founding member of the Montreal FAIR lab and led the V-JEPA video world-model research line that became the technical foundation of AMI's founding thesis. As of May 2026, he runs AMI's Montreal office and is a senior researcher on the company's world-model program following the $1.03 billion seed round announced March 9, 2026.
At a glance
- Education: Bachelor of Science in Electrical Engineering, University of Illinois at Urbana-Champaign (2001); Master of Science in Electrical and Computer Engineering, Rice University (2003); Doctor of Philosophy in Electrical Engineering, University of Wisconsin-Madison (2006), advised by Robert Nowak.
- Current roles: Co-founder and Vice President of World Models at AMI since late 2025; associate industry member at Mila; adjunct professor at McGill University Department of Electrical and Computer Engineering since 2018.
- Key contributions: Senior author on the I-JEPA (CVPR 2023), V-JEPA (2024), and V-JEPA 2 (2025) papers; co-author on DINOv2 (2023, more than 10,000 citations); founding work on distributed optimization and gossip algorithms during the McGill years; founding member of FAIR Montreal in 2017 alongside Joelle Pineau, Pascal Vincent, and Nicolas Ballas.
- Recognition: h-index of 62 (Google Scholar, May 2026); cited more than 24,000 times across publications spanning signal processing, distributed optimization, and self-supervised learning.
- LinkedIn: michael-rabbat-66a00b7
- Google Scholar: Michael Rabbat
- OpenReview: Michael Rabbat
- DBLP: Michael G. Rabbat
Origins
Rabbat was raised in the United States and pursued electrical engineering across three American research universities. He completed his Bachelor of Science at the University of Illinois at Urbana-Champaign in 2001, then a Master of Science at Rice University in Houston, Texas, in 2003. The Rice period exposed him to statistical signal processing and the wavelet-and-compressed-sensing community that has shaped much of the discipline since.
He moved to the University of Wisconsin-Madison in 2003 for doctoral work, joining the group of Robert Nowak, then a leading researcher in statistical signal processing, sensor networks, and decentralized estimation. He completed his PhD in 2006 with a dissertation on distributed optimization in sensor networks, work co-authored with Nowak and presented at the International Symposium on Information Processing in Sensor Networks. The thesis line on in-network data processing and gossip algorithms produced several of his most-cited papers across the next decade and continues to shape his outlook on distributed and federated learning.
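The gossip-algorithm line from this period rests on a simple primitive: nodes on a network repeatedly average with random neighbors, and every local value converges to the global mean without any central coordinator. A toy sketch of randomized pairwise gossip (illustrative only; function and variable names are this sketch's, not from the papers):

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip: each step activates one random edge
    (i, j) and both endpoints replace their values with the pair's
    average. The global average is invariant under each step, so on a
    connected graph every node's value converges to the network mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# A ring of five sensors, each holding one local measurement.
values = [10.0, 0.0, 4.0, 6.0, 5.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
estimates = gossip_average(values, edges)
# After enough rounds, every node's estimate approaches the mean, 5.0.
```

The appeal for sensor networks is that each step touches only one pair of neighbors, so no node needs global knowledge of the topology or the data.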
Career
Rabbat joined McGill University in Montreal in 2007 as Assistant Professor in the Department of Electrical and Computer Engineering. He was promoted to Associate Professor in 2013 and remained on the McGill faculty through 2018, when he transitioned to an adjunct professor role that he retains as of May 2026. The McGill years produced a sustained research line on distributed algorithms, optimization, and graph signal processing. His 2010 paper "Gossip Algorithms for Distributed Signal Processing", co-authored with Alex Dimakis, Anna Scaglione, and others, is among his most-cited works.
In 2017 he joined Facebook AI Research (FAIR) as one of the four founding members of the new Montreal lab, alongside Joelle Pineau, Pascal Vincent, and Nicolas Ballas. The Montreal lab was Yann LeCun's expansion of FAIR into Canada, launched in partnership with Mila under Pineau's directorship. Rabbat held the title of Director of Research Science at FAIR through the 2025 restructuring. The FAIR years pulled his research focus toward representation learning, self-supervised learning, and the scaling problems that come with frontier-scale model training.
The middle FAIR years anchored Rabbat in the Joint Embedding Predictive Architecture (JEPA) program LeCun proposed in 2022. Rabbat was a senior author on I-JEPA (CVPR 2023), V-JEPA (2024), and V-JEPA 2 (June 2025), the three published model generations of the JEPA program. He was also a co-author on DINOv2, the visual self-supervised foundation model that has been cited more than 10,000 times.
In late 2025 Rabbat co-founded AMI with Yann LeCun, Alexandre LeBrun, Saining Xie, Pascale Fung, and Laurent Solly, taking the role of Vice President of World Models. The AMI launch followed LeCun's departure from Meta on November 19, 2025. AMI's $1.03 billion seed round, announced March 9, 2026, valued the company at $3.5 billion pre-money, or approximately $4.5 billion post-money. Rabbat heads AMI's Montreal office, one of four operating sites alongside the Paris headquarters, New York, and Singapore, and retains his McGill adjunct appointment and Mila affiliation in parallel.
Affiliations
- University of Illinois at Urbana-Champaign: BSc in Electrical Engineering, 1997 to 2001.
- Rice University: MSc in Electrical and Computer Engineering, 2001 to 2003.
- University of Wisconsin-Madison: PhD in Electrical Engineering, 2003 to 2006.
- McGill University: Assistant Professor (2007 to 2013), Associate Professor (2013 to 2018), Adjunct Professor (2018 to present).
- Meta AI / FAIR: Founding member of FAIR Montreal and Director of Research Science, 2017 to 2025.
- Mila, the Quebec Artificial Intelligence Institute: Associate industry member, late 2010s to present.
- AMI: Co-founder and Vice President of World Models, late 2025 to present.
Notable contributions
Rabbat's published record divides into two phases. The first, anchored at McGill from 2007 to 2018, sits in distributed signal processing, sensor networks, and gossip-style optimization algorithms. The second, anchored at FAIR from 2017 onward, sits in representation learning, self-supervised pre-training, and the JEPA family of world models that became the technical foundation of AMI.
- Distributed optimization and gossip algorithms (2004 to 2015). His PhD work with Robert Nowak on "Distributed optimization in sensor networks" (IPSN 2004) opened the in-network estimation line that ran through his McGill years. The 2010 paper "Gossip Algorithms for Distributed Signal Processing" remains a standard reference.
- I-JEPA (CVPR 2023). First-authored by Mahmoud Assran, with Rabbat, LeCun, and Ballas as senior authors, the paper demonstrated that self-supervised learning could predict missing image regions in latent space rather than at the pixel level, a structural alternative to mask-and-reconstruct methods like Masked Autoencoders.
- V-JEPA (2024). "Revisiting Feature Prediction for Learning Visual Representations from Video" extended JEPA to video, training on internet-scale unlabeled video data.
- V-JEPA 2 and V-JEPA 2-AC (June 2025). Scaled V-JEPA to more than one million hours of internet video, then post-trained an action-conditioned variant on 62 hours of robot-interaction data from the Droid dataset. The system performed zero-shot pick-and-place on Franka arms with no environment-specific training, planning roughly sixteen times faster than diffusion-based world models with comparable success rates.
- DINOv2 (2023). Co-author on the Meta self-supervised visual foundation model. The paper has been cited more than 10,000 times on Google Scholar and remains one of the most widely deployed open-weights vision encoders.
- Federated learning research at FAIR. His 2023 FLOW Seminar talk on pre-training and initialization in federated learning, and the earlier EPFL lecture on federated learning at scale, are public statements of his distributed-training line.
- Recent AMI-era pre-prints. The dblp record lists 2026 pre-prints titled "Learning Latent Action World Models In The Wild," "Parallel Stochastic Gradient-Based Planning for World Models," and "V-JEPA 2.1: Unlocking Dense Features in Video Self-Supervised Learning," all extending the V-JEPA program toward AMI's physical-world AI roadmap.
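The JEPA papers above share one structural idea: a predictor is trained to match the latent representations a target encoder assigns to masked regions, rather than to reconstruct their pixels. A toy numpy sketch of that loss, with the encoders reduced to linear maps (all names and shapes are illustrative, not the published architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: an "image" is 8 patches of 16 features each.
n_patches, d_patch, d_latent = 8, 16, 4

# Stand-ins for the context encoder, target encoder, and predictor.
# In the real models these are transformers; here, linear maps.
W_context = rng.normal(size=(d_patch, d_latent))
W_target = W_context.copy()   # in practice an EMA copy of the context weights
W_pred = rng.normal(size=(d_latent, d_latent))

patches = rng.normal(size=(n_patches, d_patch))
masked = np.array([2, 5])     # indices of the masked (target) patches
visible = np.setdiff1d(np.arange(n_patches), masked)

# The context pathway sees only visible patches; here we summarize them
# by their mean latent and predict the masked patches' latents from it.
context = patches[visible] @ W_context   # (6, d_latent)
pred = context.mean(axis=0) @ W_pred     # predicted latent, (d_latent,)
targets = patches[masked] @ W_target     # (2, d_latent), held fixed during training

# JEPA-style loss: distance measured in latent space, not pixel space.
loss = np.mean((pred - targets) ** 2)
```

Predicting in latent space lets the model ignore unpredictable pixel-level detail, which is the structural contrast with mask-and-reconstruct methods noted above.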
Investments and boards
- AMI (AI): Co-founder and Vice President of World Models, late 2025 to present. AMI announced a $1.03 billion seed round at a $3.5 billion pre-money valuation on March 9, 2026.
No public personal angel-investor activity is on record in AI, semiconductors, datacenters, software, or energy as of May 2026.
Network
Rabbat's longest research relationships sit in the FAIR Montreal cohort. The four founding researchers (Joelle Pineau, Pascal Vincent, Nicolas Ballas, and Rabbat) have been close colleagues since 2017. Ballas is the most-frequent senior co-author on the JEPA-family papers. Vincent is a co-author on I-JEPA. Pineau led FAIR through 2025 before leaving for Cohere.
The JEPA collaboration with Yann LeCun is the most consequential of his FAIR-era research relationships. The LeCun-Rabbat-Ballas senior-author triad has anchored the program through three full model generations. The professional relationship carried directly into AMI's founding cohort. Mahmoud Assran, first author on I-JEPA and V-JEPA 2, is among the most-cited junior collaborators across the line.
The McGill networks-and-signal-processing community is the deeper academic layer of his ties, including McGill colleague Mark Coates and PhD advisor Robert Nowak. The Mila industrial-affiliate community connects him to Yoshua Bengio and the broader Montreal AI ecosystem.
The AMI founding cohort places Rabbat alongside Yann LeCun, Alexandre LeBrun, Saining Xie, Pascale Fung, and Laurent Solly. Industry coverage has presented Rabbat's recruitment, paired with LeCun's research credentials, as the principal technical-credibility anchor for AMI's world-model thesis, since it was Rabbat rather than LeCun who directly led the JEPA research line at FAIR.
Position in the field
Rabbat occupies an unusual structural position among senior AI researchers of his cohort. The combination of a distributed-systems doctoral lineage, a decade on the McGill faculty, and an eight-year run leading a major research line at a frontier industrial lab is rare. The h-index of 62 on Google Scholar reflects sustained publication impact across two distinct research lines.
The Vice President of World Models role places him in the operating leadership of the most-watched non-LLM frontier-research bet of 2026. Industry coverage has framed Rabbat's recruitment as one of the principal technical-credibility data points for AMI's $4.5 billion post-money seed valuation, since the JEPA program he led at FAIR is the published research foundation that AMI's roadmap continues.
The dual academic appointment at McGill and Mila alongside the AMI operating role mirrors the arrangement of Yann LeCun and Saining Xie, who hold NYU appointments alongside their AMI roles. The pattern of senior researchers retaining academic affiliations alongside frontier-lab leadership is a defining feature of the 2026 insurgent-lab cohort.
Outlook
Open questions over the next 6 to 18 months:
- First AMI publications under Rabbat's name. Whether AMI will produce papers led by his Montreal group, and whether they extend the V-JEPA line or open new world-modeling directions.
- V-JEPA 3 and beyond. The annual JEPA cadence (I-JEPA in 2023, V-JEPA in 2024, V-JEPA 2 in 2025) suggests a 2026 release is likely. Whether the next generation appears under Meta or AMI ownership is open.
- Robotics and physical-world integration. The V-JEPA 2-AC manipulation result is most directly aligned with AMI's physical-world AI focus. Whether AMI announces a robotics platform, partnership, or scaled deployment is a watchable signal.
- Montreal hiring. Whether the AMI Montreal site grows into a substantial research operation, and which FAIR or Mila researchers move with him, will indicate the depth of AMI's Canadian footprint.
- Federated and distributed learning at AMI. Whether Rabbat's earlier distributed-learning line is integrated into AMI's training infrastructure or held separately is open given the company's seed-stage compute investment.
Sources
- Michael Rabbat Mila profile. Mila Quebec Artificial Intelligence Institute associate-industry-member page.
- Michael Rabbat AI at Meta profile. Meta AI personnel page documenting his FAIR-era role and education history.
- Michael Rabbat Google Scholar profile. Citation record covering distributed optimization, gossip algorithms, and self-supervised learning.
- Michael Rabbat dblp profile. Computer-science publication-record index, including the 2026 AMI-era pre-prints.
- Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture. The 2023 I-JEPA paper, with Rabbat as senior co-author alongside LeCun and Ballas.
- Revisiting Feature Prediction for Learning Visual Representations from Video. The 2024 V-JEPA paper extending JEPA to video.
- V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning. The 2025 V-JEPA 2 paper with Rabbat in the senior-author triad alongside LeCun and Ballas.
- Introducing the V-JEPA 2 world model and new benchmarks for physical reasoning. Meta AI blog post announcing V-JEPA 2 in June 2025.
- DINOv2: Learning Robust Visual Features without Supervision. The 2023 self-supervised vision foundation-model paper Rabbat co-authored at FAIR.
- Yann LeCun's AMI Labs raises $1.03B to build world models. TechCrunch on the March 2026 AMI seed announcement and founding leadership team.
- Advanced Machine Intelligence (AMI) is Enabling the Next AI Revolution. Cathay Innovation announcement of the AMI seed and founding leadership.
- Celebrating Five Years of AI Breakthroughs from Our Montreal Research Lab. Meta press post documenting the 2017 founding of FAIR Montreal with Rabbat among the four founding members.
- FLOW Seminar #98: Mike Rabbat (Meta) On the Impact of Pre-Training and Initialization in FL. Federated Learning One World Seminar talk, April 2023.
- Federated Learning at Scale (Prof. Mike Rabbat, Meta AI). EPFL Center for Intelligent Systems lecture, July 2022.
- Distributed optimization in sensor networks. The 2004 IPSN paper with Robert Nowak that opened Rabbat's distributed-systems research line.