The theoretical basis for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse each time a signal travels along it.
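Hebb's formulation was qualitative, but the rule is often written in modern notation as <math>\Delta w_i = \eta \, x_i \, y</math>: each weight grows in proportion to the product of its presynaptic input and the postsynaptic output. The following is a minimal sketch of that rule, assuming NumPy; the linear neuron model, learning rate, and variable names are illustrative rather than taken from any historical implementation.

<syntaxhighlight lang="python">
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # Hebb's rule: each weight grows in proportion to the product of
    # its presynaptic input x_i and the postsynaptic output y.
    return w + lr * x * y

w = np.full(3, 0.01)                 # small initial synaptic weights
pattern = np.array([1.0, 0.0, 1.0])  # input presented repeatedly

for _ in range(20):
    y = w @ pattern                  # postsynaptic activity (linear neuron)
    w = hebbian_update(w, pattern, y)

print(w)  # weights of the active inputs (indices 0 and 2) have grown;
          # the weight of the silent input (index 1) is unchanged
</syntaxhighlight>

In this plain form the weights of co-active synapses grow without bound; later variants such as Oja's rule add a normalization term to stabilize learning.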
In 1956, Gunnar Svaetichin discovered the functioning of second-order retinal cells (horizontal cells), which proved fundamental to the understanding of neural networks. Artificial neural networks were originally used to model biological neural networks, starting in the 1930s under the approach of connectionism. However, beginning with the mathematical model of artificial neurons proposed by Warren McCulloch and Walter Pitts in 1943, and continuing with Frank Rosenblatt's introduction of the perceptron and its hardware implementation in the late 1950s, artificial neural networks were instead used increasingly for machine learning applications and diverged ever further from their biological counterparts. In 1969,
Marvin Minsky and Seymour Papert analyzed the limitations of single-layer perceptrons in their book ''Perceptrons'', and this critique led to a decline in funding and interest in neural-network research that some authors describe as an "AI winter". Research on neural networks revived in the 1980s with the development and popularization of multilayer networks trained by back-propagation, and from the 2000s onward the combination of large datasets, faster hardware (notably GPUs), and algorithmic advances has driven the rise of deep learning.
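As an illustration of the single-layer limitation at the center of Minsky and Papert's critique, the sketch below is a modern reconstruction, assuming NumPy and the standard perceptron learning rule rather than Rosenblatt's original hardware: a single threshold unit learns the linearly separable AND function exactly, but no choice of weights lets it realize XOR, so training on XOR never converges.

<syntaxhighlight lang="python">
import numpy as np

def train_perceptron(X, t, epochs=20, lr=1.0):
    # Perceptron learning rule: nudge weights by the prediction error
    # on each example; converges whenever the data are linearly separable.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1 if w @ x + b > 0 else 0   # threshold unit
            w += lr * (target - y) * x       # error-driven update
            b += lr * (target - y)
    return w, b

def predict(w, b, X):
    return [1 if w @ x + b > 0 else 0 for x in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t_and = np.array([0, 0, 0, 1])  # linearly separable
t_xor = np.array([0, 1, 1, 0])  # not linearly separable

w, b = train_perceptron(X, t_and)
print(predict(w, b, X))  # [0, 0, 0, 1] -- AND is learned exactly

w, b = train_perceptron(X, t_xor)
print(predict(w, b, X))  # at least one of the four outputs is wrong,
                         # no matter how many epochs are run
</syntaxhighlight>

Stacking such units into a multilayer network removes this limitation, which is precisely what the back-propagation-trained networks of the 1980s revival exploited.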