Grossberg is a pioneer of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses on the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object, scene, and event learning; pattern recognition and search; audition, speech, and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders. Grossberg also collaborates with experimentalists to design experiments that test theoretical predictions and fill conceptually important gaps in the experimental literature, carries out analyses of the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published 18 books or journal special issues and over 560 research articles, and holds 7 patents.

Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at
Dartmouth College in 1957. At that time, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions. This paradigm is helping to solve the classical mind/body problem, and it is the basic mathematical formalism used in biological neural network research today.

In particular, in 1957–1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of this term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning).

One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models. This learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. Outstar and Instar learning were combined by Grossberg in 1976 in a three-layer network for the learning of multi-dimensional maps from any m-dimensional input space to any n-dimensional output space; Hecht-Nielsen called this architecture Counter-propagation in 1987.

Building on his 1964 Rockefeller PhD thesis, in the 1960s and 1970s Grossberg generalized the Additive and Shunting models to a class of dynamical systems that included these models, as well as non-neural biological models, and proved content-addressable memory theorems for this more general class. As part of this analysis, he introduced a Liapunov functional method to help classify the limiting and oscillatory dynamics of competitive systems by keeping track of which population is winning through time. This Liapunov method led him and Michael Cohen to discover in 1981, and publish in 1982 and 1983, a Liapunov function that they used to prove that global limits exist in a class of dynamical systems with symmetric interaction coefficients that includes the Additive and Shunting models. This model is often called the Cohen-Grossberg model and Liapunov function. John Hopfield published the special case of the Cohen-Grossberg Liapunov function for the Additive model in 1984.
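The global-convergence result just described can be illustrated numerically. The following is a minimal sketch, not code from any of the papers cited above: it integrates the Additive model with a symmetric weight matrix (a special case of the Cohen-Grossberg class) and evaluates the corresponding Liapunov function, whose value should be nonincreasing along the trajectory. The signal function, weights, inputs, and step size are all illustrative assumptions.

```python
import math

# Illustrative sketch: Additive model  dx_i/dt = -x_i + sum_j w_ij f(x_j) + I_i
# with symmetric weights (w_ij = w_ji), a special case of the Cohen-Grossberg
# class, so the Cohen-Grossberg Liapunov function is nonincreasing along
# trajectories. All parameter values here are made up for the demo.

def f(x):
    """Monotone nondecreasing (logistic sigmoid) signal function."""
    return 1.0 / (1.0 + math.exp(-x))

def fprime(x):
    return f(x) * (1.0 - f(x))

def step(x, w, I, dt=0.01):
    """One forward-Euler step of the Additive model."""
    n = len(x)
    s = [f(v) for v in x]
    return [x[i] + dt * (-x[i] + sum(w[i][j] * s[j] for j in range(n)) + I[i])
            for i in range(n)]

def liapunov(x, w, I, m=400):
    """Cohen-Grossberg Liapunov function specialized to the Additive model:
    V = sum_i int_0^{x_i} s f'(s) ds - sum_i I_i f(x_i)
        - (1/2) sum_{j,k} w_jk f(x_j) f(x_k),
    with the integral approximated by the midpoint rule."""
    n = len(x)
    s = [f(v) for v in x]
    V = -0.5 * sum(w[j][k] * s[j] * s[k] for j in range(n) for k in range(n))
    V -= sum(I[i] * s[i] for i in range(n))
    for xi in x:
        h = xi / m
        V += sum((h * (k + 0.5)) * fprime(h * (k + 0.5)) * h for k in range(m))
    return V

# Demo: V decreases as the network settles toward a limit.
w = [[0.0, 0.5, -0.3],
     [0.5, 0.0, 0.2],
     [-0.3, 0.2, 0.0]]   # symmetric, as the theorem requires
I = [0.1, -0.2, 0.3]
x = [1.0, -1.0, 0.5]
vals = []
for _ in range(500):
    vals.append(liapunov(x, w, I))
    x = step(x, w, I)
```

Because the weights are symmetric and the signal function is monotone, dV/dt = -sum_i f'(x_i) (dx_i/dt)^2 <= 0, so the recorded values of V decrease monotonically up to small Euler-discretization and quadrature error.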
In 1987, Bart Kosko adapted the Cohen-Grossberg model and Liapunov function, which proved global convergence of STM, to define an Adaptive Bidirectional Associative Memory that combines STM and LTM and that also converges globally to a limit.

Grossberg has introduced, and developed with his colleagues, fundamental concepts, mechanisms, models, and architectures across a wide spectrum of topics about brain and behavior, collaborating with over 100 PhD students and postdoctoral fellows. These models have provided unified and principled explanations of psychological and neurobiological data about processes including auditory and visual perception, attention, consciousness, cognition, cognitive-emotional interactions, and action, in both typical, or normal, individuals and clinical patients. This work models how particular brain breakdowns or lesions cause behavioral symptoms of mental disorders such as Alzheimer's disease, autism, amnesia, PTSD, ADHD, visual and auditory agnosia and neglect, and slow-wave sleep. The models have also been applied in many large-scale applications to engineering, technology, and AI; taken together, they provide a blueprint for designing autonomous adaptive intelligent algorithms, agents, and mobile robots.

These results are combined in a self-contained, non-technical exposition, written in a conversational style, in Grossberg's 2021 book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind, which won the 2022 PROSE book award in Neuroscience from the Association of American Publishers.

Models that Grossberg introduced and helped to develop include:
* the foundations of neural network research: competitive learning, self-organizing maps, instars, and masking fields (for classification); outstars (for spatial pattern learning); avalanches (for serial order learning and performance); and gated dipoles (for opponent processing)
* perceptual and cognitive development, social cognition, working memory, cognitive information processing, planning, numerical estimation, and attention: Adaptive Resonance Theory (ART), ARTMAP, STORE, CORT-X, SpaN, LIST PARSE, lisTELOS, SMART, CRIB
* visual perception, attention, consciousness, object and scene learning, recognition, predictive remapping, and search: BCS/FCS, FACADE, 3D LAMINART, aFILM, LIGHTSHAFT, Motion BCS, 3D FORMOTION, MODE, VIEWNET, dARTEX, cART, ARTSCAN, pARTSCAN, dARTSCAN, 3D ARTSCAN, ARTSCAN Search, ARTSCENE, ARTSCENE Search
* auditory streaming, perception, speech, and language processing: SPINET, ARTSTREAM, NormNet, PHONET, ARTPHONE, ARTWORD
* cognitive-emotional dynamics, reinforcement learning, motivated attention, and adaptively timed behavior: CogEM, START, MOTIVATOR, Spectral Timing
* visual and spatial navigation: SOVEREIGN, STARS, ViSTARS, GRIDSmap, GridPlaceMap, Spectral Spacing
* adaptive sensory-motor control of eye, arm, and leg movements: VITE, FLETE, VITEWRITE, DIRECT, VAM, CPG, SACCART, TELOS, SAC-SPEM
* autism: iSTART

==Career and infrastructure development==