Hinton's research concerns the use of neural networks for machine learning, memory, perception, and symbol processing. He has written or co-written more than 200 peer-reviewed publications.

In the 1980s, Hinton was part of the "Parallel Distributed Processing" group at Carnegie Mellon University, which included notable scientists like Terrence Sejnowski, Francis Crick, David Rumelhart, and James McClelland. This group favoured the connectionist approach during the AI winter. Their findings were published in a two-volume set. The connectionist approach adopted by Hinton suggests that capabilities in areas like logic and grammar can be encoded into the parameters of neural networks, and that neural networks can learn them from data.
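As a minimal illustration of this claim, the hypothetical NumPy sketch below trains a small two-layer network to compute the logical XOR function purely from examples, using gradient descent with backpropagated error signals (all sizes and learning rates here are illustrative choices, not taken from any of Hinton's papers):

```python
import numpy as np

# Four examples of XOR: the rule itself is never programmed in; the
# network has to encode it in its weights by fitting the examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)            # hidden ("internal") representation
    out = sigmoid(h @ W2 + b2)          # predicted probability of output 1
    d_out = out - y                     # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)  # error backpropagated to the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out).ravel())
```

After training, the hidden layer holds a learned internal representation of the inputs from which XOR is linearly decodable, which is the sense in which the capability ends up "encoded into the parameters".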
Symbolists, on the other hand, advocated for explicitly programming knowledge and rules into AI systems.

His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines and product of experts. An accessible introduction to Geoffrey Hinton's research can be found in his articles in
Scientific American in September 1992 and October 1993.

In 1995, Hinton and colleagues proposed the wake-sleep algorithm, involving a neural network with separate pathways for recognition and generation, trained with alternating "wake" and "sleep" phases. In 2007, Hinton coauthored an unsupervised learning paper titled "Unsupervised Learning of Image Transformations". In 2008, he developed the visualization method
t-SNE with Laurens van der Maaten.
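t-SNE embeds high-dimensional data in two or three dimensions so that nearby points stay nearby. A minimal usage sketch, assuming the scikit-learn implementation (`sklearn.manifold.TSNE`) rather than van der Maaten's original code:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# 64-dimensional points: 8x8 grayscale images of handwritten digits.
X = load_digits().data[:500]

# Embed each 64-d image as a 2-d point; images of the same digit tend
# to form clusters in the embedding.
embedding = TSNE(n_components=2, perplexity=30.0, init="pca",
                 random_state=0).fit_transform(X)

print(embedding.shape)  # (500, 2)
```

The 2-d coordinates are typically passed to a scatter plot, which is how t-SNE is most often used in practice.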
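The wake-sleep algorithm mentioned above can be sketched for a single layer of stochastic binary units. This is a heavily simplified, hypothetical NumPy illustration of the two alternating phases (the 1995 Helmholtz machine stacks several such layers, and the toy data here is random, so the model only learns marginal statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

n_vis, n_hid, lr = 8, 3, 0.05
R = np.zeros((n_vis, n_hid))  # recognition weights (visible -> hidden)
G = np.zeros((n_hid, n_vis))  # generative weights  (hidden -> visible)
b = np.zeros(n_hid)           # generative prior over the hidden units

# Toy binary "observations" (random here; real data would have structure).
data = sample(np.full((500, n_vis), 0.5))

for v in data:
    # Wake phase: recognise a hidden code for real data, then train the
    # generative pathway to reconstruct the data from that code.
    h = sample(sigmoid(v @ R))
    G += lr * np.outer(h, v - sigmoid(h @ G))
    b += lr * (h - sigmoid(b))
    # Sleep phase: generate a "fantasy" from the generative model, then
    # train the recognition pathway to recover the code that produced it.
    h_f = sample(sigmoid(b))
    v_f = sample(sigmoid(h_f @ G))
    R += lr * np.outer(v_f, h_f - sigmoid(v_f @ R))
```

Each pathway is trained with a simple delta rule on targets supplied by the other pathway, which is the core idea of the wake-sleep scheme.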
While Hinton was a postdoc at UC San Diego, David Rumelhart, Hinton and
Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. Hinton said that "David Rumelhart came up with the basic idea of backpropagation, so it's his invention." Although this work was important in popularising backpropagation, it was not the first to suggest the approach.

In 2021, Hinton presented GLOM, a speculative architecture aiming to improve image understanding by modeling part-whole relationships in neural networks. The same year, he co-authored a widely cited paper proposing a framework for contrastive learning in computer vision. The technique involves pulling together the representations of augmented versions of the same image, and pushing apart the representations of dissimilar images.

The Forward-Forward algorithm, which Hinton proposed in 2022 as an alternative to backpropagation, is well-suited for what he calls "mortal computation", where the knowledge learned is not transferable to other systems and thus dies with the hardware, as can be the case for certain analog computers used for machine learning.

== Honours and awards ==