Ben Mildenhall

Ben Mildenhall is an American computer scientist, a co-founder of World Labs, and the lead author of the 2020 paper that introduced Neural Radiance Fields (NeRF), the method that defined the modern neural-rendering research line.

Ben Mildenhall is an American computer scientist whose research spans computer graphics, 3D computer vision, neural rendering, and generative models of 3D scenes. He is the lead author of the 2020 paper that introduced Neural Radiance Fields (NeRF), the volumetric scene representation that catalyzed the modern neural-rendering research line, and a co-founder of World Labs, the San Francisco spatial-intelligence company launched in 2024 by Stanford professor Fei-Fei Li. As of May 2026 he is a co-founder and researcher at World Labs, having joined from a research-scientist position at Google Research, where he and the surrounding "NeRF group" published much of the canonical follow-on work between 2021 and 2023.

Origins

Mildenhall is American and based in the San Francisco Bay Area. He studied at Stanford University from 2011 to 2015, graduating with a BS in computer science (with honors) and mathematics. His undergraduate research covered probabilistic-inference applications, including reinforcement learning, handwriting recognition, and procedural content generation, as well as a summer 2014 collaboration with Pixar Research on prefiltering geometric data for rendering.

In 2015 Mildenhall moved from Stanford to the University of California, Berkeley, joining Ren Ng's computer-graphics research group with the support of a 2015 Hertz Foundation Graduate Fellowship. Ng, the founder of the light-field-camera company Lytro and himself a Stanford computer-graphics PhD, leads a Berkeley group that combines imaging, graphics, computer vision, and machine learning. Mildenhall's graduate years coincided with the rapid rise of deep learning in computer vision and graphics, and his research direction shifted over the course of the PhD from light-field-camera and computational-imaging methods toward neural-network-based scene representations.

Career

Mildenhall's PhD years at Berkeley, from 2015 to 2020, produced the publications that defined the modern neural-rendering research line, written with Pratul Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. During the doctorate he interned at Google Research under Marc Levoy (2017) and at Fyusion (2018), following the earlier undergraduate work with Pixar Research. His 2020 dissertation, "Neural Scene Representations for View Synthesis," received the 2020 David J. Sakrison Memorial Prize (the Berkeley EECS award for outstanding doctoral research) and a 2021 ACM Doctoral Dissertation Award Honorable Mention, the runner-up to the ACM-wide best-thesis prize.

The capstone Berkeley publication, "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", was presented as an oral paper at the European Conference on Computer Vision (ECCV) in 2020 and received the conference's Best Paper Honorable Mention. The NeRF method represents a 3D scene as a continuous volumetric function, parameterized by a small fully-connected neural network that takes a 3D position and viewing direction as input and outputs volume density and view-dependent radiance. Classical volume rendering integrates the network's outputs along camera rays to produce photorealistic renderings from arbitrary viewpoints, given only an unstructured set of input photographs with known camera poses. The paper has been cited more than 18,000 times on Google Scholar, placing it among the most-cited computer-vision papers of the 2020s, and it produced a research subfield ("neural radiance fields" or "neural rendering") that has since spawned thousands of follow-on papers across graphics, vision, robotics, and content-creation tooling.
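The rendering step described above can be sketched in a few lines. The following is a minimal NumPy illustration of the volume-rendering quadrature from the NeRF paper; the function and variable names are illustrative, not taken from the released code:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Volume-rendering quadrature in the style of the NeRF paper.

    sigmas: (N,) volume densities at N samples along a camera ray
    colors: (N, 3) RGB radiance predicted at those samples
    deltas: (N,) distances between adjacent samples
    Returns the composited RGB color for the ray.
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of segment i
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i = prod_{j < i} (1 - alpha_j): transmittance up to sample i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas            # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)
```

With all densities at zero the ray renders black, and a near-opaque first sample returns that sample's color, since later samples receive essentially no transmittance.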

Mildenhall joined Google Research as a research scientist after the PhD, holding the role from 2021 to 2023. The Google Research period produced the principal NeRF follow-on work alongside Barron, Pratul Srinivasan, and Matthew Tancik (collectively the "NeRF group"): Mip-NeRF (ICCV 2021, anti-aliased multiscale rendering), Mip-NeRF 360 (CVPR 2022, unbounded scenes), Block-NeRF (CVPR 2022, large-scale scene reconstruction with Waymo collaborators), RawNeRF (CVPR 2022, HDR synthesis from raw images), Zip-NeRF (ICCV 2023, anti-aliased grid-based NeRF), and DreamFusion (ICLR 2023, text-to-3D using 2D diffusion). DreamFusion was awarded the 2023 ICLR Outstanding Paper Award and is the canonical reference for score-distillation-sampling-based text-to-3D generation.

In early 2024 Mildenhall co-founded World Labs with Fei-Fei Li (chief executive officer), Justin Johnson (a University of Michigan computer-science professor and former Stanford PhD student of Li's), and Christoph Lassner (formerly Meta Reality Labs Research). The founding thesis is that "spatial intelligence," AI systems that understand, generate, and reason about three-dimensional environments, is a research direction separable from the dominant LLM and 2D-image-generation paradigms. Mildenhall's research record in neural rendering and 3D scene representation matched the founding thesis. World Labs raised a $230 million seed at a $1 billion-class valuation in September 2024 (led by Andreessen Horowitz, Radical Ventures, and NEA) and a $1 billion strategic round in February 2026 (with Autodesk contributing $200 million as strategic anchor). The company's first commercial product, Marble, generates and edits persistent 3D environments from text, images, video, or 3D layouts.

Notable contributions

  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ECCV 2020 oral, Best Paper Honorable Mention). With Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Lead-author paper introducing the NeRF method: a small fully-connected network that maps 3D position and viewing direction to volume density and view-dependent radiance, with classical volume rendering used to synthesize photorealistic novel views from unstructured input photographs. The most-cited single paper of Mildenhall's career, with more than 18,000 Google Scholar citations, and the founding artifact of the modern neural-rendering research line. Project page at matthewtancik.com/nerf; code released at github.com/bmild/nerf.
  • Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (NeurIPS 2020). With Tancik, Srinivasan, Barron, Ramamoorthi, Ng, and others. Demonstrated that mapping coordinates through a Fourier feature embedding before passing them to a multi-layer perceptron substantially improves the network's ability to model high-frequency content, providing the theoretical scaffolding for the positional-encoding strategy used in NeRF and many subsequent neural-field architectures.
  • Mip-NeRF (ICCV 2021). With Barron, Srinivasan, Tancik, Peter Hedman, and Ng. Anti-aliased multiscale extension of NeRF that fits Gaussian frustums to camera rays at multiple scales, addressing aliasing artifacts in the original method.
  • Mip-NeRF 360 (CVPR 2022). With Barron, Verbin, Srinivasan, and Hedman. Extended Mip-NeRF to unbounded outdoor scenes with non-linear scene parameterizations.
  • Block-NeRF (CVPR 2022). With Tancik, Casser, Yan, Pradhan, Barron, Kretzschmar, and others (with Waymo collaborators). City-scale neural reconstruction by partitioning into geographic blocks.
  • RawNeRF (CVPR 2022 Best Paper Honorable Mention). With Hedman, Martin-Brualla, Srinivasan, and Barron. NeRF-based HDR synthesis directly from raw image data, with applications to noisy and dark-scene capture.
  • DreamFusion: Text-to-3D Using 2D Diffusion (ICLR 2023 Outstanding Paper Award). With Ben Poole, Ajay Jain, and Jonathan T. Barron. Score-distillation-sampling method that uses a pretrained 2D image-diffusion model as a generative prior to optimize a NeRF-style 3D representation from a text prompt, without requiring 3D training data. The canonical reference for the text-to-3D research line.
  • Zip-NeRF (ICCV 2023 Best Paper Honorable Mention). With Barron, Verbin, Srinivasan, and Hedman. Combination of grid-based NeRF acceleration with anti-aliasing and gridded-feature methods.
  • World Labs co-founding and the spatial-intelligence research program (2024 onward). Operationalization of the neural-rendering and 3D-scene research line as a commercial program in San Francisco, with Marble as the first commercial product.
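The positional-encoding idea that links the NeRF and Fourier-features entries above can be sketched briefly. The following is a minimal NumPy illustration of mapping coordinates through sinusoids at geometrically spaced frequencies (the 2^k π convention used in the NeRF paper); the function name is illustrative:

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Fourier-feature / positional encoding of input coordinates.

    Maps each coordinate p to [sin(2^0 pi p), ..., sin(2^{L-1} pi p),
    cos(2^0 pi p), ..., cos(2^{L-1} pi p)] so that a small MLP can fit
    high-frequency functions of low-dimensional inputs.

    x: (..., D) coordinates; returns (..., D * 2 * num_freqs).
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi      # 2^k * pi
    angles = x[..., None] * freqs                    # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)
```

Without a mapping like this, a plain MLP on raw coordinates tends to fit only low-frequency variation; the encoding is what lets a compact NeRF-style network represent fine texture and geometry.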

Investments and boards

The entries below are limited to AI, semiconductors, datacenters, software, and energy.

  • World Labs (AI): Co-founder, 2024 to present. $230 million seed at a $1 billion-class valuation, September 2024 (Andreessen Horowitz, Radical Ventures, NEA leads); $1 billion strategic round in February 2026 with Autodesk contributing $200 million as strategic anchor.

No personal angel investments in AI, semiconductors, datacenters, software, or energy companies have been publicly disclosed as of May 2026.

Network

Mildenhall's longest-running professional relationship is with Ren Ng, his Berkeley PhD advisor and a co-author on the NeRF paper. The Berkeley computational-imaging-and-graphics group around Ng included Pratul Srinivasan and Matthew Tancik, the two most frequent NeRF co-authors and the longest-running research collaborators across the Berkeley and Google periods. Srinivasan was Mildenhall's co-recipient of the 2025 SIGGRAPH Significant New Researcher Award.

The Google Research "NeRF group" produced Mildenhall's principal post-PhD collaborator network. Jonathan T. Barron, a senior staff research scientist at Google Research and a NeRF co-author, was Mildenhall's most frequent post-PhD collaborator across the Mip-NeRF, Block-NeRF, RawNeRF, DreamFusion, and Zip-NeRF papers. Peter Hedman, Dor Verbin, and Ben Poole are additional Google Research collaborators across the same period. Ravi Ramamoorthi, the senior co-author on the original NeRF paper from UC San Diego, contributed graphics-and-rendering domain expertise.

The World Labs co-founder team is Mildenhall's primary current commercial collaborator group: Fei-Fei Li, the chief executive officer and Stanford computer-vision faculty (on partial leave); Justin Johnson, University of Michigan computer-vision faculty and a former Stanford Vision Lab PhD student of Li's; and Christoph Lassner, formerly senior researcher at Meta Reality Labs Research and a graphics-and-3D-reconstruction specialist. The four-co-founder team operates the spatial-intelligence research program.

Position in the field

Mildenhall's standing in the senior computer-vision and graphics cohort rests on a single, exceptionally widely cited body of work: the NeRF paper and its derivatives. Industry coverage of the modern neural-rendering research line consistently treats the 2020 NeRF paper as the foundational artifact, and the cumulative citation count on his Google Scholar profile (roughly 49,000 across all papers, with the original NeRF accounting for more than 18,000) places him among the most-cited graphics-and-vision researchers of his generation despite a comparatively short publication record.

The Berkeley dissertation prize (2020 Sakrison) and the 2021 ACM Doctoral Dissertation Award Honorable Mention bracketed the start of the post-PhD career; the 2025 SIGGRAPH Significant New Researcher Award (shared with Pratul Srinivasan) is the principal mid-career honor. SIGGRAPH characterizes the recognition as "for outstanding contributions to new representations for 3D graphics, neural rendering, novel view synthesis, and generative models of 3D scenes," noting that Mildenhall and Srinivasan "revolutionized the full gamut of 3D graphics from the capture of real scenes with neural representations, to computational imaging, all the way to generative models for the synthesis of 3D scenes."

The World Labs role frames Mildenhall's current public posture as a working co-founder of a commercial frontier-research lab whose thesis is a direct continuation of his published research. The handful of senior neural-rendering researchers who have made comparable transitions to commercial AI labs include Justin Johnson (also at World Labs), Christoph Lassner (Meta Reality Labs Research to World Labs), and the broader neural-radiance-fields community now distributed across Google DeepMind, Nvidia Research, and a small number of focused startups.

Outlook

Open questions over the next 6 to 18 months:

  • World Labs research and product cadence. Whether Mildenhall's research output through the World Labs research line continues at the publication cadence of the Berkeley and Google years, and whether World Labs follows Marble with successor 3D-scene-generation products and open research artifacts that draw on the neural-radiance-fields line.
  • Marble adoption and the spatial-AI commercial market. Marble's traction across gaming, visual-effects, virtual-reality, and 3D-design industries, and whether spatial intelligence becomes a separable commercial market large enough to support a $2 billion-class company.
  • Neural-rendering research direction. The trajectory of the NeRF-and-Gaussian-splatting research line, including whether World Labs's research program produces follow-on architectural innovations comparable to the Mip-NeRF, Zip-NeRF, and DreamFusion progression of the Google period.
  • Open-source releases. Whether Mildenhall and the World Labs research team continue the open-code tradition of the Berkeley NeRF release, and whether World Labs publishes papers and reference implementations alongside the Marble commercial product.
  • Public profile. Continued conference talks, keynotes, and tutorial appearances at CVPR, ECCV, ICCV, and SIGGRAPH, where Mildenhall has been a recurring invited speaker on the neural-radiance-fields research line.

About the author
Nextomoro — AI Research Lab Intelligence
nextomoro tracks progress for AI research labs, models, and what's next.