Geoffrey Hinton


Bio

Geoffrey Everest Hinton is a British-Canadian cognitive scientist and computer scientist, born December 6, 1947, in Wimbledon, London, England. He is University Professor Emeritus at the University of Toronto, chief scientific advisor at the Vector Institute, and a co-recipient of the 2018 ACM Turing Award (with Yoshua Bengio and Yann LeCun) and one half of the 2024 Nobel Prize in Physics (with John Hopfield) for foundational work on artificial neural networks. He resigned from Google in May 2023 after a decade at Google Brain to speak publicly about the long-term risks of artificial intelligence, and is one of the most-cited researchers in the modern history of computer science.

At a glance

  • Education: BA in experimental psychology, King's College, Cambridge (1970); PhD in artificial intelligence, University of Edinburgh (1978), advised by Christopher Longuet-Higgins.
  • Current roles: University Professor Emeritus, University of Toronto Department of Computer Science; chief scientific advisor, Vector Institute (since 2017).
  • Key contributions: the 1986 backpropagation paper with David Rumelhart and Ronald Williams; the Boltzmann machine (1983 to 1985, with Terry Sejnowski); deep belief networks (2006); AlexNet (2012, with Alex Krizhevsky and Ilya Sutskever); dropout regularization; t-SNE visualization; capsule networks; the Forward-Forward algorithm.
  • Awards: 2024 Nobel Prize in Physics (shared with John Hopfield); 2018 ACM Turing Award (shared with Yoshua Bengio and Yann LeCun); Rumelhart Prize (2001); Royal Society Fellow (1998); IJCAI Award for Research Excellence (2005); Princess of Asturias Award for Technical and Scientific Research (2022); VinFuture Grand Prize (2024); Queen Elizabeth Prize for Engineering (2025).
  • Professional bodies: Fellow of the Royal Society of London; Fellow of the Royal Society of Canada; foreign associate of the National Academy of Engineering; former president of the Cognitive Science Society.
  • Wikipedia: Geoffrey Hinton
  • Personal site: cs.toronto.edu/~hinton

Origins

Hinton was born in Wimbledon, London, on December 6, 1947, into a family with a long lineage in British science. His father, Howard Hinton, was an entomologist and Fellow of the Royal Society. His great-great-grandfather was the logician George Boole, whose Boolean algebra underpins digital computing, and his great-grandfather was the mathematician and author Charles Howard Hinton, known for his writings on the fourth dimension. The middle name Everest comes from a relative, the surveyor George Everest, after whom the mountain is named.

He attended Clifton College in Bristol before reading natural sciences at King's College, Cambridge, switching subjects multiple times and ultimately graduating with a Bachelor of Arts in experimental psychology in 1970. After a series of jobs and a brief postgraduate stint at the University of Sussex, he entered the doctoral programme at the University of Edinburgh in 1973 and completed his PhD in artificial intelligence in 1978 under the supervision of Christopher Longuet-Higgins. The Edinburgh thesis applied connectionist models to problems in cognition at a time when symbolic AI dominated the field and neural-network research was widely considered a dead end.

Career

After Edinburgh, Hinton spent short periods at the University of Sussex and the MRC Applied Psychology Unit before moving to the United States as a visiting scholar at the University of California, San Diego, where he collaborated with David Rumelhart and others in the Parallel Distributed Processing group. The 1986 paper "Learning representations by back-propagating errors", co-authored by Rumelhart, Hinton, and Ronald Williams, popularized the use of backpropagation for training multi-layer neural networks and remains one of the foundational publications of modern deep learning.

In 1982 Hinton joined Carnegie Mellon University, where he co-developed the Boltzmann machine with Terry Sejnowski between 1983 and 1985. The Boltzmann machine drew on tools from statistical physics to construct a stochastic recurrent network capable of learning internal representations from data, and would later be cited specifically in the 2024 Nobel Prize in Physics citation. He left Carnegie Mellon in 1987 to join the University of Toronto as a full professor in the Department of Computer Science, in part because of his discomfort with US military funding of academic AI work. Aside from a 1998 to 2001 period as founding director of the Gatsby Computational Neuroscience Unit at University College London, he has remained at Toronto for the rest of his academic career.

Through the 1990s and 2000s Hinton continued to publish on neural networks at a time when the field was unfashionable. The 2006 paper "A Fast Learning Algorithm for Deep Belief Nets" with Simon Osindero and Yee Whye Teh is widely credited with reviving academic and industrial interest in deep neural networks by demonstrating effective unsupervised pre-training for layered architectures. The 2012 ImageNet competition entry from his Toronto group, AlexNet, written by Alex Krizhevsky with Ilya Sutskever and Hinton, achieved a top-5 error rate of 15.3 percent and beat the runner-up by more than ten percentage points, an event widely cited as the beginning of the modern deep-learning era in computer vision. Krizhevsky, Sutskever, and Hinton incorporated DNNresearch in 2012 and sold the company to Google in March 2013 for a reported $44 million. Hinton split his time between Toronto and Google Brain in Mountain View through the following decade, contributing to research on attention mechanisms, capsule networks, and the Forward-Forward algorithm.

In 2017, Hinton became the founding chief scientific advisor of the Vector Institute, a Toronto-based AI research collaboration that, alongside Mila in Montreal and Amii in Edmonton, anchors the Pan-Canadian AI Strategy. He resigned from Google on May 1, 2023, citing a desire to speak freely about the risks of advanced AI without having to weigh how his comments might affect his employer. From mid-2023 onward he has been a frequent public voice on the longer-term risks of artificial intelligence, including a widely cited estimate of a roughly 10 to 20 percent probability of AI-driven human extinction within three decades.

Affiliations

  • University of Sussex and MRC Applied Psychology Unit: Postdoctoral and research positions, late 1970s.
  • University of California, San Diego: Visiting scholar with the Parallel Distributed Processing group, early 1980s.
  • Carnegie Mellon University: Faculty, 1982 to 1987.
  • University of Toronto: Faculty, 1987 to present (University Professor Emeritus from 2014).
  • University College London: Founding director, Gatsby Computational Neuroscience Unit, 1998 to 2001.
  • Google Brain: Vice President and Engineering Fellow, 2013 to May 2023.
  • Vector Institute: Co-founder and chief scientific advisor, 2017 to present.

Notable contributions

Hinton's published record runs across nearly five decades of work on artificial neural networks, from connectionist models of cognition through the modern deep-learning era. Citation counts place him among the most-cited researchers in computer science.

  • Backpropagation (1986). Co-author of "Learning representations by back-propagating errors" with David Rumelhart and Ronald Williams in Nature. The paper popularized the application of the backpropagation algorithm to multi-layer networks and is one of the foundational publications of modern neural-network research.
  • Boltzmann machines (1983 to 1985). Co-developer with Terry Sejnowski of the Boltzmann machine, a stochastic recurrent network that draws on statistical-physics formulations to learn distributions over data; his later work on restricted Boltzmann machines and contrastive-divergence training built on the same framework. Cited specifically in the 2024 Nobel Prize in Physics announcement.
  • Deep belief networks (2006). Co-author of "A Fast Learning Algorithm for Deep Belief Nets" with Simon Osindero and Yee Whye Teh. The paper demonstrated effective unsupervised layer-wise pre-training and is widely credited with reviving interest in deep neural networks.
  • AlexNet (2012). Co-author with Alex Krizhevsky and Ilya Sutskever of the ImageNet 2012 winning entry, "ImageNet Classification with Deep Convolutional Neural Networks". The result is widely cited as the beginning of the modern deep-learning era in computer vision.
  • Dropout regularization (2012). Co-author with Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov of "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", introducing a regularization technique that became standard practice in deep-learning training.
  • t-distributed Stochastic Neighbor Embedding (t-SNE) (2008). Co-author with Laurens van der Maaten of the t-SNE algorithm for visualizing high-dimensional data, one of the most widely used data-visualization tools in machine learning.
  • Capsule networks (2017). With Sara Sabour and Nicholas Frosst, proposed capsule networks as an alternative to convolutional networks for learning spatially structured representations.
  • Forward-Forward algorithm (2022). Proposed an alternative to backpropagation that uses two forward passes rather than a backward pass, intended in part as a more biologically plausible learning rule.
  • 2018 ACM Turing Award, shared with Yoshua Bengio and Yann LeCun "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing." The three are commonly described as the godfathers of deep learning.
  • 2024 Nobel Prize in Physics, shared with John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Hinton's Boltzmann machine work is cited explicitly in the prize materials.
  • Public commentary on AI risk (2023 onward). Since the May 2023 Google resignation, Hinton has been a frequent public voice on existential and near-term risks of advanced AI through major outlets, podcasts, parliamentary testimony, and the responsible-AI talk at University of Toronto Convocation Hall in April 2024.
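
Of the techniques listed above, dropout is the most self-contained to illustrate in code. The sketch below is a minimal "inverted dropout" layer in Python with NumPy (the function name and interface are illustrative, not from the original paper): during training each activation is zeroed with probability p and the survivors are rescaled by 1/(1 - p), so the expected activation is unchanged and the layer can be skipped entirely at inference time.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    # Inverted dropout: during training, zero each unit with probability p
    # and scale the survivors by 1/(1 - p) so the expected value is unchanged.
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

# At inference time the layer is the identity: no mask, no rescaling.
h = np.ones((4, 8))
h_train = dropout(h, p=0.5, rng=np.random.default_rng(0))  # entries are 0.0 or 2.0
h_eval = dropout(h, p=0.5, training=False)                 # unchanged
```

Because each random subnetwork is trained on a different mini-batch, the technique acts like an inexpensive ensemble of thinned networks, which is the intuition the 2012 paper gives for its regularizing effect.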

Honors

  • Royal Society Fellow (1998), Fellow of the Royal Society of Canada (1996), foreign associate of the United States National Academy of Engineering (2016), Fellow of the Association for the Advancement of Artificial Intelligence.
  • Rumelhart Prize (2001), the inaugural award; IJCAI Award for Research Excellence (2005); Killam Prize in Engineering (2012); NSERC Herzberg Gold Medal (2010).
  • Companion of the Order of Canada (2018).
  • Princess of Asturias Award for Technical and Scientific Research (2022), shared with Yann LeCun, Yoshua Bengio, and Demis Hassabis.
  • 2018 ACM Turing Award, shared with Bengio and LeCun.
  • VinFuture Grand Prize (2024), shared with Yann LeCun, Yoshua Bengio, Jen-Hsun Huang, and Fei-Fei Li.
  • 2024 Nobel Prize in Physics, shared with John Hopfield.
  • Queen Elizabeth Prize for Engineering (2025), shared with Yoshua Bengio, Bill Dally, John Hopfield, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li.

Notable students

The roster of researchers trained or supervised by Hinton at Carnegie Mellon, Toronto, and the Gatsby Unit constitutes one of the most influential teacher-student lineages in modern machine learning.

  • Yann LeCun: Postdoctoral fellow under Hinton at the University of Toronto from 1987 to 1988; subsequently a Bell Labs researcher, co-founder of Facebook AI Research, and 2018 Turing co-recipient.
  • Ilya Sutskever: Doctoral and postdoctoral student at Toronto; co-author of AlexNet, co-founder of OpenAI, and founder of Safe Superintelligence.
  • Alex Krizhevsky: Doctoral student at Toronto; first author of AlexNet and DNNresearch co-founder.
  • Ruslan Salakhutdinov: Doctoral student at Toronto; subsequent Carnegie Mellon professor, former director of AI research at Apple.
  • Brendan Frey, Radford Neal, Yee Whye Teh, Peter Dayan, Zoubin Ghahramani, Max Welling, Richard Zemel, Alex Graves: senior researchers across academic and industrial AI laboratories.

Position in the field

Hinton occupies a distinctive position in the history of machine learning. The combination of more than four decades of continuous work on artificial neural networks, the 1986 backpropagation paper, the 2006 deep belief networks paper, the 2012 AlexNet result, the 2018 Turing Award, and the 2024 Nobel Prize in Physics is unmatched among living computer scientists. Industry coverage and academic histories of deep learning consistently treat Hinton, Yoshua Bengio, and Yann LeCun as the three figures whose continued investment in connectionist research through the AI winter set the stage for the modern frontier era.

The student lineage is itself a substantial element of his influence. The 2012 AlexNet team built the deep-learning research bench at Google through DNNresearch, then seeded OpenAI and Safe Superintelligence through Sutskever, and the broader teacher-student tree extends into senior research roles at Google DeepMind, Anthropic, Meta AI, and academic institutions across North America and Europe.

The May 2023 Google resignation reframed Hinton's public profile from elder researcher to one of the most prominent voices on AI safety from inside the senior research community. The position contrasts with Yann LeCun, who has continued to argue that current LLM-family architectures are insufficient for general intelligence and that doomerist commentary is overstated, and aligns more closely with Yoshua Bengio, whose post-2023 work has concentrated on AI-safety policy and the LawZero nonprofit.

Outlook

Open questions and watchable signals over the next 6 to 18 months:

  • Continued public commentary. Hinton's appearances at international AI policy convenings, his commentary in major-media outlets, and his testimony before legislative bodies keep him among the most influential individual voices on AI risk from within the senior research community.
  • Vector Institute trajectory. The continued growth and research output of the Toronto institute as Pan-Canadian AI Strategy funding evolves through subsequent budget cycles.
  • Student-researcher trajectories. The directions taken by senior students including Ilya Sutskever at Safe Superintelligence, Ruslan Salakhutdinov, and others, given Hinton's continued informal mentorship.
  • Forward-Forward and beyond-backpropagation research. Whether Hinton's biologically plausible learning research produces follow-on results from his Toronto group or research collaborators.
  • AI-policy engagement. The trajectory of Hinton's contributions to AI-safety regulation in the UK, Canada, and the United States, particularly during the post-2024 international AI safety summit process.
