He is remembered for his work with Joannes Gregorius Dusser de Barenne from Yale and later with Walter Pitts from the University of Chicago. He provided the foundation for certain brain theories in a number of classic papers, including "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) and "How We Know Universals: The Perception of Auditory and Visual Forms" (1947), both published in the Bulletin of Mathematical Biophysics. The former is "widely credited with being a seminal contribution to neural network theory, the theory of automata, the theory of computation, and cybernetics". In the 1960s, in the last years of his life, he worked on loops, oscillations, and triadic relations with Moreno-Díaz; on the reticular formation with Kilmer; and on dynamic models of memory with Da Fonseca. His work in the 1960s was summarized in a 1968 paper.
== Neuroscience ==
He studied the excitation of the brain by strychnine neuronography, a method for mapping brain connections: applying strychnine at one point of the brain causes excitation at other points. Bailey, Bonin, and McCulloch conducted a series of studies in the 1940s that identified connections in the brains of macaques and chimpanzees that are consistent with the modern understanding of the vertical occipital fasciculus (VOF).
== Mathematical logic ==
In 1919 he began to work mainly on mathematical logic, and by 1923 he was attempting to construct a logic of transitive verbs. His goal in psychology was to define a "psychon", or "least psychic event": a binary atomic event with necessary causes, such that psychons can be combined to create complex logical propositions concerning their antecedents. He noticed in 1929 that these might correspond to the all-or-nothing firings of neurons in the brain. He worked with Manuel Blum on studying how a neural network can be "logically stable", that is, implement a fixed Boolean function even when the activation thresholds of its individual neurons are varied. They were inspired by the problem of how the brain can keep performing the same functions, such as breathing, under the influence of caffeine or alcohol, which shift the activation threshold across the entire brain.
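Logical stability can be illustrated with a minimal sketch (hypothetical code, not taken from McCulloch and Blum's work; the names `mp_neuron` and `and_gate` are assumptions): a McCulloch-Pitts threshold neuron wired as an AND gate with a margin keeps computing AND as long as the global threshold shift stays below that margin.

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts formal neuron: fires (outputs 1) iff the weighted
    # sum of its inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A two-input AND built with a safety margin: the weighted sum is 0, 2, or 4,
# so any threshold strictly between 2 and 4 computes AND.
def and_gate(x1, x2, shift=0.0):
    return mp_neuron([x1, x2], [2, 2], 3 + shift)

# "Logically stable": the gate computes the same Boolean function for any
# global threshold shift smaller than the margin of 1.
for shift in (-0.9, 0.0, 0.9):
    outputs = [and_gate(a, b, shift) for a in (0, 1) for b in (0, 1)]
    assert outputs == [0, 0, 0, 1]
```

The margin plays the role of redundancy in the brain: a drug that shifts every neuron's threshold by the same small amount leaves the computed function unchanged.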
== How we know universals ==
In the 1947 paper "How We Know Universals", they studied the problem of recognizing objects despite changes in representation: for example, recognizing a square under different viewing angles and lighting conditions, or recognizing a phoneme at different loudnesses and pitches. That is, recognizing objects invariant under the action of some symmetry group. This problem was partly inspired by a practical problem in designing a reading machine for the blind (recounted in Wiener's Cybernetics). The paper proposed two solutions. The first computes an invariant by averaging over the symmetry group: let the symmetry group be G, let x be the object to be recognized, and let a neural network implement a function T. Then the group-invariant representation is \frac{1}{|G|}\sum_{g \in G} T(gx), the average over the group action. The second solution is a negative feedback circuit that drives the input toward a canonical representation. Consider the problem of recognizing whether an object is a square: the circuit moves the eye so that the "center of gravity of brightness" of the object is brought to the middle of the visual field. This effectively converts each object into a canonical representation, which can then be compared with a representation stored in the brain.
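The group-averaging solution can be sketched in a few lines (illustrative code, not from the 1947 paper; the feature map `T` is an arbitrary stand-in): averaging a feature map over the four 90-degree rotations of a small square image yields a representation that is the same for every rotated copy of the image.

```python
def rotate90(img):
    # Rotate a square image (a list of rows) 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def T(img):
    # A stand-in "neural" feature map; any function of the image works here,
    # weighted by row index so it is NOT rotation-invariant by itself.
    return sum((i + 1) * v for i, row in enumerate(img) for v in row)

def invariant(img):
    # (1/|G|) * sum over g in G of T(g x), with G the four rotations.
    orbit = [img]
    for _ in range(3):
        orbit.append(rotate90(orbit[-1]))
    return sum(T(g) for g in orbit) / len(orbit)

x = [[1, 0], [0, 0]]
# The averaged representation is identical for every rotated copy of x.
assert invariant(rotate90(x)) == invariant(x)
```

The invariance holds because rotating the input merely permutes the orbit over which the sum is taken.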
== Neural network modelling ==
In the 1943 paper McCulloch and Pitts attempted to demonstrate that a Turing machine program could be implemented in a finite network of formal neurons (as it turned out, the Turing machine subsumes their model of the brain, but the converse is not true), and that the neuron was the basic logical unit of the brain. In the 1947 paper they offered approaches to designing "nervous nets" that recognize visual inputs despite changes in orientation or size. From 1952 McCulloch worked at the Research Laboratory of Electronics at MIT, working primarily on neural network modelling. His team examined the visual system of the frog in light of McCulloch's 1947 paper, discovering that the eye provides the brain with information that is already, to a degree, organized and interpreted, instead of simply transmitting an image. With Roberto Moreno-Díaz, he studied a formalized problem of memory. Given that neural networks can store memory as a pattern of oscillations in a loop, they studied the number of possible oscillation patterns that can be sustained by some neural network with N neurons. This comes out to K(N) = \sum_{k=1}^{2^N-1} \binom{2^N}{k}\, k! (Schnabel, 1966). They also proved a universality theorem: for each N, there exists a neural network (possibly with more than N neurons) with \log_2 K(N) binary inputs such that, for any oscillation pattern realizable by some neural network with N neurons, there is a binary input to this universal network under which it exhibits the same pattern.
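As a quick numeric sketch, Schnabel's count, read as K(N) = \sum_{k=1}^{2^N-1} \binom{2^N}{k} k!, can be evaluated directly for small N (illustrative code; the function name `K` is an assumption):

```python
from math import comb, factorial

def K(N):
    # Schnabel's count of sustainable oscillation patterns:
    # sum over k = 1 .. 2^N - 1 of C(2^N, k) * k!
    states = 2 ** N  # number of distinct firing states of N neurons
    return sum(comb(states, k) * factorial(k) for k in range((1), states))

print(K(1), K(2))  # 2 40
```

The count grows super-exponentially in N, which is why the universal network of the theorem needs only \log_2 K(N) binary inputs to index every realizable pattern.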
== Control ==
McCulloch considered the problem of contradictory information and motives, which he called a "heterarchy" of motives: the motives are not linearly ordered, but may be ordered cyclically, as in A > B > C > A. He posited "poker chip" reticular formations as a model of how the brain deals with contradictory information in a democratic, somatotopical neural network; specifically, how the brain can commit the animal to a single course of action when the situation is ambiguous. They designed a prototypical example neural network, "RETIC", with "12 anastomotically coupled modules stacked in columnar array", which can switch between unambiguous stable modes based on ambiguous inputs. The concept of heterarchy was later developed by
von Foerster and
Pask in their study of
self-organization and by Pask in his
Conversation Theory and
Interactions of Actors Theory.
== Publications ==