Mathematical foundations Today's deep neural networks are based on statistical work dating back more than 200 years. The simplest kind of
feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The
mean squared errors between these calculated outputs and the given target values are minimized by adjusting the weights. This technique has been known for over two centuries as the
method of least squares or
linear regression. It was used as a means of finding a good rough linear fit to a set of points by
Legendre (1805) and
Gauss (1795) for the prediction of planetary movement.
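In modern notation, this procedure is a short linear algebra problem. The following minimal sketch (the data and variable names are illustrative, not from the historical sources) fits a single-layer linear network by least squares using NumPy:

```python
import numpy as np

# Toy data: inputs X (one row per example) and target values y.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

# Add a constant column so the fit includes a bias (intercept) term.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

# Least squares: choose weights w minimizing ||Xb @ w - y||^2.
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Each output is the sum of products of weights and inputs, as described above.
predictions = Xb @ w
print("weights:", w)
print("mean squared error:", np.mean((predictions - y) ** 2))
```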
Perceptrons Historically, digital computers such as the
von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of
connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) created a mathematical model of artificial neurons capable of representing logical functions. This model paved the way for research to split into two approaches: one approach focused on biological processes, while the other focused on the application of neural networks to artificial intelligence. In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's
perceptron and the
Hopfield network. Farley and
Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by
Rochester, Holland, Habit and Duda (1956). In 1958, psychologist
Frank Rosenblatt described the
perceptron, one of the first implemented artificial neural networks, funded by the United States
Office of Naval Research. R. D. Joseph (1960) mentions an even earlier perceptron-like device by B. G. Farley and W. A. Clark of the MIT Lincoln Laboratory; Rosenblatt later cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e.,
deep learning. The perceptron raised public excitement for research in artificial neural networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI", fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
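The perceptron's learning rule itself is simple enough to state in a few lines. The sketch below is a modern illustration on a toy linearly separable problem (the AND function); the data and learning rate are assumptions chosen for the example, not taken from Rosenblatt's papers:

```python
import numpy as np

# Toy linearly separable problem: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # target labels

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative)

# Perceptron rule: when an example is misclassified, nudge the
# weights toward (or away from) that input.
for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0
        error = target - prediction
        w += lr * error * xi
        b += lr * error

print("weights:", w, "bias:", b)
print("outputs:", [(1 if xi @ w + b > 0 else 0) for xi in X])
```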
Historical foundations and the Dartmouth proposal Artificial neural networks were identified as a promising direction for artificial intelligence research in the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence. In the proposal, researchers suggested that simplified computational models of biological neurons, described as "neuron nets," might enable machines to learn, form concepts, and improve performance through experience. The first working deep learning algorithm was the group method of data handling, published by Alexey Ivakhnenko and Lapa in 1965; they regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." The first deep learning
multilayer perceptron (MLP) trained by
stochastic gradient descent was published in 1967 by
Shun'ichi Amari. In computer experiments conducted by Amari's student S. Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function; the rectifier has since become the most popular activation function for deep learning. Nevertheless, research stagnated in the United States following the work of
Minsky and
Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967). In 1976,
transfer learning was introduced.
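The exclusive-or limitation can be made concrete: a single-layer perceptron cannot represent XOR, but a small multilayer network trained by stochastic gradient descent can. The following sketch is a modern illustration under assumed hyperparameters, not a reconstruction of Amari's 1967 experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

# One hidden layer of 4 sigmoid units; weights initialized randomly.
W1 = rng.normal(0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1.0, (4, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate (illustrative)

for step in range(5000):
    i = rng.integers(len(X))          # stochastic: one example at a time
    x, t = X[i:i+1], y[i:i+1]
    h = sigmoid(x @ W1 + b1)          # forward pass
    out = sigmoid(h @ W2 + b2)
    # Gradients of the squared error, derived via the chain rule.
    d_out = (out - t) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * x.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```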
Backpropagation Interest in neural networks revived during the 1980s with the development of the
backpropagation algorithm, which allowed multi-layer neural networks to be trained efficiently by propagating error gradients backward through network layers. Backpropagation is an efficient application of the
chain rule derived by
Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement it. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986,
David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
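The idea reduces to the chain rule applied in reverse through cached intermediate values. A minimal sketch with made-up numbers, checked against a finite-difference estimate:

```python
import numpy as np

# A two-node "network": y = f(g(x)) with g(x) = x**2 and f(u) = sin(u).
x = 1.5

# Forward pass, caching the intermediate value u.
u = x ** 2
y = np.sin(u)

# Backward pass: chain rule dy/dx = dy/du * du/dx, applied in reverse.
dy_du = np.cos(u)
du_dx = 2 * x
dy_dx = dy_du * du_dx

# Numerical check by central finite differences.
eps = 1e-6
numeric = (np.sin((x + eps) ** 2) - np.sin((x - eps) ** 2)) / (2 * eps)
print(dy_dx, numeric)  # should agree to several decimal places
```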
Convolutional neural networks Deep learning architectures for
convolutional neural networks (CNNs) with convolutional layers, downsampling layers, and weight replication began with the
neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation. Max pooling, a popular downsampling procedure for CNNs, came later. CNNs have become an essential tool for
computer vision. The
time delay neural network (TDNN) was introduced in 1987 by
Alex Waibel to apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989,
Yann LeCun et al. created a CNN called
LeNet for
recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on
optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms.
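The two layer types that define this family, convolution (one small set of shared weights applied at every image location) and downsampling, can be sketched directly; the image, filter, and sizes below are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: the same kernel (shared weights) is
    applied at every spatial position of the image."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fm, size=2):
    """Downsample a feature map by taking the maximum of each
    size x size block."""
    H, W = fm.shape
    return np.array([[fm[i:i+size, j:j+size].max()
                      for j in range(0, W - size + 1, size)]
                     for i in range(0, H - size + 1, size)])

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])      # illustrative 2x2 filter
fm = conv2d(image, kernel)    # 5x5 feature map
print(max_pool(fm).shape)     # (2, 2) after 2x2 downsampling
```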
LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images. From 1988 onward, the use of neural networks transformed the field of
protein structure prediction, in particular when the first cascading networks were trained on
profiles (matrices) produced by multiple
sequence alignments.
Recurrent neural networks One origin of RNN was
statistical mechanics. In 1972,
Shun'ichi Amari proposed to modify the weights of an Ising model by a Hebbian learning rule as a model of associative memory, adding the component of learning. This was popularized as the Hopfield network by
John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901,
Cajal observed "recurrent semicircles" in the
cerebellar cortex.
Hebb considered a "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past. In 1982, a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequent situations. By eliminating the external supervisor, it introduced the self-learning method to neural networks. In cognitive psychology, the journal
American Psychologist in the early 1980s carried a debate on the relation between cognition and emotion. Social psychologist
Robert Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while
Richard Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982, the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation, an example of an AI system, a recurrent neural network, contributing to an issue simultaneously addressed by cognitive psychology. Two early influential works were the
Jordan network (1986) and the
Elman network (1990), which applied RNN to study
cognitive psychology. In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991,
Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in
ChatGPT) and neural
knowledge distillation. In 1991,
Sepp Hochreiter's diploma thesis identified and analyzed the
vanishing gradient problem and proposed recurrent
residual connections to solve it. He and Schmidhuber introduced
long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM; that required the forget gate, introduced in 1999, after which LSTM became the default choice for RNN architectures. During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by
Terry Sejnowski,
Peter Dayan,
Geoffrey Hinton, and others, including the
Boltzmann machine,
restricted Boltzmann machine,
Helmholtz machine, and the
wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
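The associative-memory scheme described above, Hebbian weights on an Ising-style network, can be sketched in a few lines; the stored pattern and update schedule below are illustrative, not taken from Amari's or Hopfield's papers:

```python
import numpy as np

# Store one +1/-1 pattern in a Hopfield-style network using the
# Hebbian outer-product rule: w_ij proportional to s_i * s_j.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

# Recall: start from a corrupted version of the pattern and update
# units one at a time until the state settles.
state = pattern.copy()
state[0] *= -1  # flip two bits as "noise"
state[3] *= -1
for _ in range(10):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered original:", np.array_equal(state, pattern))
```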
Modern deep learning Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in
pattern recognition and
handwriting recognition. In 2011, a CNN named
DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci,
Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how
max-pooling CNNs on GPU improved performance significantly. In October 2012,
AlexNet by
Alex Krizhevsky,
Ilya Sutskever, and Geoffrey Hinton won the large-scale
ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and
Andrew Zisserman and Google's
Inceptionv3. In 2012,
Ng and
Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images.
Unsupervised pre-training and increased computing power from
GPUs and
distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function networks were introduced in 1988 and wavelet networks in the early 1990s. These can be shown to offer best approximation properties and have been applied in
nonlinear system identification and classification applications.
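A sketch of the radial-basis-function idea, with arbitrarily chosen centers and width: the hidden layer is fixed, so the output weights reduce to a linear least-squares problem:

```python
import numpy as np

# Fit y = sin(x) on [0, 2*pi] with a small Gaussian RBF network.
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)

centers = np.linspace(0, 2 * np.pi, 10)  # basis-function centers (illustrative)
width = 0.5                              # shared Gaussian width (illustrative)

# Design matrix: one Gaussian bump per center, evaluated at each x.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# The hidden layer is fixed, so the output weights solve a linear
# least-squares problem; no gradient descent is needed.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print("max fit error:", np.max(np.abs(Phi @ w - y)))
```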
Generative adversarial networks (GANs) (
Ian Goodfellow et al., 2014) became state of the art in generative modeling in 2014–2018. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a
zero-sum game, where one network's gain is the other network's loss. The first network is a
generative model that models a
probability distribution over output patterns. The second network learns by
gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by
Nvidia's
StyleGAN (2018) based on the Progressive GAN by Tero Karras et al. Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerning
deepfakes.
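The adversarial principle itself fits in a short loop. The sketch below uses the simplest possible players, a two-parameter generator and a logistic discriminator on scalar data; all models and hyperparameters are illustrative assumptions, not from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Real" data comes from N(4, 1). The generator g(z) = mu + sigma*z and
# the discriminator d(x) = sigmoid(a*x + b) play the zero-sum game.
mu, sigma = 0.0, 1.0       # generator parameters
a, b = 0.0, 0.0            # discriminator parameters
lr = 0.01

for step in range(5000):
    x_real = rng.normal(4.0, 1.0)
    z = rng.normal()
    x_fake = mu + sigma * z

    # Discriminator step: push d(x_real) toward 1, d(x_fake) toward 0.
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(x_fake) toward 1 (the adversarial signal).
    d_fake = sigmoid(a * x_fake + b)
    grad_x = (1 - d_fake) * a        # gradient of log d(x) w.r.t. x
    mu += lr * grad_x
    sigma += lr * grad_x * z

print("generator mean ~", round(mu, 2), "(data mean is 4.0)")
```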
Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as
DALL·E 2 (2022) and
Stable Diffusion (2022). In 2014, the state of the art was training "[a] very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in
training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the
highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
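That relationship can be stated precisely: a highway layer gates between a transformed input and the input itself, while a residual layer always passes the identity through and adds the transform. A sketch with an arbitrary tanh transform (shapes and weights are illustrative):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, W_t):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x,
    where the gate T decides how much of the input to transform."""
    H = np.tanh(x @ W_h)        # candidate transformation
    T = sigmoid(x @ W_t)        # transform gate in (0, 1)
    return T * H + (1 - T) * x

def residual_layer(x, W_h):
    """Residual layer: y = x + F(x); the skip (carry) path is always
    open, which is the sense in which ResNet is an open-gated Highway Net."""
    return x + np.tanh(x @ W_h)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
W_h, W_t = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(highway_layer(x, W_h, W_t).shape, residual_layer(x, W_h).shape)
```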
Transformers During the 2010s, the
seq2seq model was developed, and attention mechanisms were added. It led to the modern
transformer architecture in 2017 in
Attention Is All You Need. Its attention mechanism requires computation time quadratic in the size of the context window; Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear transformer. Many modern
large language models such as
ChatGPT,
GPT-4, and
BERT use this architecture.
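At the core of the transformer is scaled dot-product attention, whose all-pairs score matrix is also the source of the quadratic cost mentioned above. A minimal single-head sketch with illustrative dimensions and no learned projections:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends to all keys,
    and the output is the softmax-weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
x = rng.normal(size=(seq_len, d_model))
# Self-attention in its simplest form: queries, keys, and values all
# come from the same sequence (learned projections omitted for brevity).
out = attention(x, x, x)
print(out.shape)  # (5, 16): one mixed representation per position
```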