Reka AI

Reka AI is a Singapore-based multimodal AI company founded in 2022 by former DeepMind, Google Brain, and Meta researchers Dani Yogatama, Yi Tay, and Qi Liu. It develops the Reka Core, Flash, and Edge multimodal models for enterprise customers.
Reka AI is a multimodal artificial intelligence company headquartered in Singapore with research offices in San Francisco and London, founded in May 2022 by Dani Yogatama, Yi Tay, Qi Liu, and Aaron Ng. Yogatama was a Senior Staff Research Scientist at Google DeepMind; Tay was a Staff Research Scientist at Google Research with a prolific publication record (more than 100 papers and over 17,000 citations) on transformer-architecture variants and language-model scaling; Liu was a research scientist at Meta AI / FAIR and an Assistant Professor at the University of Hong Kong; Ng was the founding engineer. Reka develops the Reka Core, Flash, and Edge multimodal foundation models, with explicit positioning as a Singapore-domiciled alternative to US and Chinese frontier-model providers and as a multimodal-by-default model line that handles text, image, video, and audio input through a single model rather than through bolted-on modality adapters. As of April 2026, Reka AI is the principal frontier-class AI company headquartered outside the US and China, and one of the few Singapore-based AI Insurgents at frontier-research scale.

At a glance

  • Founded: May 2022 in Singapore by Dani Yogatama, Yi Tay, Qi Liu, and Aaron Ng.
  • Status: Private. September 2024 acquisition discussions with Snowflake at a reported $1 billion valuation did not close; the company has continued operating independently.
  • Funding: Approximately $103 million Series A in June 2024 led by DST Global Partners, with NVIDIA, Snowflake Ventures, Smash Capital, and existing investors participating. Earlier seed rounds with Index Ventures, Sequoia Capital India (now Peak XV), and DST.
  • CEO: Dani Yogatama, Co-Founder and Chief Executive Officer. Indonesian-born; PhD CMU; former Senior Staff Research Scientist at Google DeepMind. Adjunct Associate Professor at the University of Southern California's Viterbi School of Engineering.
  • Other notable leadership: Yi Tay, Co-Founder and Chief Scientist. PhD Nanyang Technological University. Former Staff Research Scientist at Google Research; co-author on T5, UL2, and a substantial body of transformer-scaling research. Qi Liu, Co-Founder. PhD University of Oxford; former Meta AI researcher. Aaron Ng, Co-Founder and founding engineer.
  • Open weights: Yes, partial. Selected smaller Reka models (Reka Flash 21B research checkpoints) released open-weights through Hugging Face. Reka Core and most production model versions are closed-weights.
  • Flagship products: Reka Core, Flash, and Edge — the principal multimodal foundation-model line. Reka Studio for enterprise model deployment. Reka Vision for video understanding. The Reka API for direct developer access.

Origins

Reka AI was founded in May 2022 in Singapore by four researchers who had spent the prior years at frontier AI labs. The founding cohort emerged from the Google research diaspora that produced multiple frontier-AI Insurgents across 2022 and 2023, alongside Anthropic, Cohere, Inflection AI, Adept, Character.AI, and Mistral AI. What distinguished Reka from those peers was the Singapore-headquartered structure and the explicit multimodal-by-default architectural focus. Singapore offered a regulatory and immigration environment friendlier than US or European alternatives for the founders' international team, and the multimodal positioning differentiated the company from text-first frontier-model competitors at a moment when the multimodal frontier was less crowded.

Yogatama brought research credentials in language modeling and meta-learning from his time at Google DeepMind, where he had been a Senior Staff Research Scientist. Tay brought a substantial publication record on transformer-architecture variants — including co-authorship on T5 (the encoder-decoder text-to-text transformer) and UL2 (a unified language-model objective combining causal, prefix, and span-corruption pre-training) — from his time at Google Research. Liu brought multimodal-research and theoretical-machine-learning credentials from Meta AI and the University of Hong Kong. Ng anchored the engineering organization.

The April 2024 release of Reka Core, Flash, and Edge was the company's principal public-facing model launch. Reka Core was positioned as a frontier-class multimodal model, and the company's published benchmark results placed it competitively against GPT-4 and Claude 3 Opus on multimodal evaluations including video understanding (a category where most peer models still required separate frame-extraction-and-text-summarization pipelines rather than native video reasoning). The June 2024 Series A of $103 million was led by DST Global Partners with NVIDIA, Snowflake Ventures, and Smash Capital participating.

The September 2024 acquisition discussions with Snowflake at a reported $1 billion valuation reflected the strategic-investor relationship that had developed during the Series A and a broader Snowflake interest in vertical AI offerings for enterprise data customers. The transaction did not close — public reporting attributed the breakdown to valuation disagreement and to Reka's continued strategic positioning as an independent multimodal-foundation-model provider rather than a captive vendor — and Reka has continued operating independently through 2025 and into 2026.

The 2024 to 2026 period has seen continued Reka model iteration alongside enterprise-customer development. The company has reported partnerships with Asia-Pacific and US enterprise customers including AI Singapore, Shutterstock, and adjacent organizations. The Singapore-domiciled structure has positioned Reka as a beneficiary of the AI Singapore National AI Strategy and adjacent Singapore-government AI-investment programs.

Mission and strategy

Reka AI's stated mission is to build multimodal AI models that natively understand text, image, video, and audio inputs and that are small enough and efficient enough to run across a range of deployment surfaces (cloud API, customer-private cloud, on-device). The strategy combines three threads. First, frontier-class multimodal foundation-model research with the Reka Core, Flash, and Edge tiered model line covering different size-and-capability points. Second, the Reka Studio platform for enterprise customers requiring custom fine-tuning, on-premise deployment, or specialized data-handling. Third, geographic positioning as a Singapore-headquartered alternative to US and Chinese frontier-model providers, with explicit appeal to Asia-Pacific enterprise customers who want neither US nor Chinese sovereignty exposure.

The competitive premise is that multimodal AI is a structurally different commercial problem from text-first AI, that a research-team-led Insurgent built around multimodal-from-day-one architectures can compete on multimodal capabilities with frontier labs that bolted multimodality onto text-first models, and that the Singapore headquarters provides a structural advantage in Asia-Pacific markets that US-headquartered competitors cannot fully match.

Models and products

  • Reka Core. The largest production model in the Reka line. Frontier-class multimodal capabilities across text, image, video, and audio.
  • Reka Flash. Mid-sized multimodal model. Reka Flash 21B was released open-weights as a research checkpoint.
  • Reka Edge. Smaller model targeting on-device and edge-deployment use cases. Designed for low-latency inference on constrained compute.
  • Reka Studio. Enterprise platform for fine-tuning, deployment, and integration of Reka models on customer infrastructure.
  • Reka Vision. Video-understanding product with emphasis on long-form video reasoning.
  • Reka API. Direct developer access to the model line via REST API.

Distribution channels combine Reka API access for developers, Reka Studio for enterprise customers, and selected open-weights releases through Hugging Face for the research community.
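For developers, access runs through the Reka API. As a rough illustration only — the endpoint path, field names, and model identifier below are assumptions for the sketch, not documented Reka API specifics — a multimodal chat request combining text and an image reference might be assembled like this:

```python
import json

def build_chat_request(prompt, image_url=None):
    """Build a hypothetical multimodal chat payload.

    All field names ("model", "messages", "content") and the model id
    are illustrative assumptions, not the documented Reka API schema.
    """
    content = [{"type": "text", "text": prompt}]
    if image_url:
        # Attach an image reference alongside the text turn, reflecting
        # the multimodal-by-default model line described above.
        content.append({"type": "image_url", "image_url": image_url})
    return {
        "model": "reka-flash",  # assumed model identifier
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 512,
    }

payload = build_chat_request(
    "Describe the key objects in this image.",
    image_url="https://example.com/frame.jpg",
)
body = json.dumps(payload)

# Sending the request would then look roughly like (API key required):
#   import requests
#   requests.post("https://api.reka.ai/...",  # assumed endpoint
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 data=body)
```

The point of the sketch is the shape of the request, not the exact schema: a single message turn carries both text and image parts, rather than routing each modality through a separate adapter endpoint.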

Benchmarks and standing

Reka has published benchmark results positioning Core and Flash competitively against frontier-class multimodal alternatives. The April 2024 Reka Core release reported scores comparable to GPT-4 and Claude 3 Opus on a range of multimodal evaluations, with particular strength on video-understanding tasks. Independent third-party evaluation through LMArena and adjacent comparative leaderboards has placed Reka models among the top non-US-non-Chinese frontier-model offerings.

Industry coverage has consistently characterized Reka as the principal Singapore-headquartered AI Insurgent at frontier-research scale, with the Yogatama-Tay-Liu founder-team research credibility and the multimodal-by-default architectural positioning as principal validating data points. The September 2024 Snowflake acquisition discussions and the reported $1 billion valuation provided implicit market confirmation of the company's frontier-model positioning.

Leadership

As of April 2026, Reka AI's senior leadership includes:

  • Dani Yogatama, Co-Founder and Chief Executive Officer.
  • Yi Tay, Co-Founder and Chief Scientist.
  • Qi Liu, Co-Founder.
  • Aaron Ng, Co-Founder and founding engineer.
  • Senior research and engineering leadership across the Singapore, San Francisco, and London offices.

The founding cohort has remained intact through the 2024 acquisition discussions and the subsequent independent-operations period.

Funding and backers

  • Seed and pre-Series A (2022 to 2023): Approximately $58 million across multiple rounds with Index Ventures, Sequoia Capital India / Peak XV, and DST Global Partners.
  • Series A (June 2024): $103 million led by DST Global Partners with NVIDIA, Snowflake Ventures, Smash Capital, and existing investors. Reported $1 billion valuation context with the Snowflake acquisition discussions later that year.
  • Cumulative capital approximately $160 million as of April 2026.

Industry position

Reka AI occupies a distinctive position as the principal frontier-class AI Insurgent headquartered outside the US and China, with the multimodal-by-default architectural focus, the Singapore headquarters, and the founder-team frontier-lab research credibility. Industry coverage has consistently characterized Reka as one of the structurally consequential frontier-model Insurgents of the post-2022 cohort, alongside Mistral AI and a handful of other research-team-led companies.

There are two structural risks. First, the multimodal-frontier competitive landscape has tightened — OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, Google DeepMind Gemini 2.5 Pro, and the Chinese frontier models have all delivered substantial multimodal capability at scale, and Reka's earlier multimodal-architecture lead has narrowed. Second, the absence of a frontier-lab-scale capital base (multi-billion dollars rather than multi-hundred-million) limits the compute scaling that frontier-model training requires, and continued capital raising at frontier-competitive scale remains an open commercial question.

Outlook

  • Continued Reka Core, Flash, and Edge model iteration through 2026 to 2027.
  • Enterprise customer expansion in Asia-Pacific and US markets.
  • Potential additional fundraising at frontier-competitive scale or further strategic-partnership development.
  • The competitive dynamic with frontier US, Chinese, and European multimodal-model providers.
  • Whether Singapore-government AI-investment programs produce sustained commercial-customer demand for Reka in the Asia-Pacific market.
