
Auditory system

The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs (the ears) and the auditory parts of the central nervous system.

System overview
The outer ear funnels sound vibrations to the eardrum, increasing the sound pressure in the middle frequency range. The middle-ear ossicles further amplify the vibration pressure roughly 20 times. The base of the stapes couples the vibrations into the cochlea via the oval window, which vibrates the perilymph (the fluid present throughout the inner ear) and causes the round window to bulge out as the oval window bulges in.

Perilymph vibrations in the vestibular duct bend the outer hair cells of the organ of Corti (arranged in four rows). The motor protein prestin, in the outer hair cell membrane, makes the cells elongate and contract (the somatic motor), while deflection of the hair bundles also influences the basilar membrane's movement (the hair-bundle motor). Together these outer-hair-cell motors amplify the traveling wave amplitudes over 40-fold. The outer hair cells (OHC) are minimally innervated by the spiral ganglion in slow (unmyelinated) reciprocal communicative bundles (30+ hair cells per nerve fiber); this contrasts with the inner hair cells (IHC), which have only afferent innervation (30+ nerve fibers per hair cell) but are heavily connected. There are three to four times as many OHCs as IHCs.

The basilar membrane (BM) is a barrier between the scalae, along the edge of which the IHCs and OHCs sit. Basilar membrane width and stiffness vary to control the frequencies best sensed by the IHCs: at the cochlear base the BM is narrowest and stiffest (high frequencies), while at the cochlear apex it is widest and least stiff (low frequencies). The tectorial membrane (TM) helps facilitate cochlear amplification by stimulating the OHCs (directly) and the IHCs (via endolymph vibrations). TM width and stiffness parallel the BM's and similarly aid in frequency differentiation.

The superior olivary complex (SOC), in the pons, is the first point of convergence of the left and right cochlear signals. The SOC has 14 described nuclei; their abbreviations are used here (see Superior olivary complex for their full names).
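The roughly 20-fold middle-ear amplification mentioned above is often explained by two mechanical factors: the area ratio between the tympanic membrane and the much smaller stapes footplate, and the lever action of the ossicles. A minimal sketch, using commonly quoted textbook values (which are approximate and vary by source, not figures from this article):

```python
import math

# Illustrative textbook values (approximate; vary by source):
eardrum_area_mm2 = 55.0      # effective area of the tympanic membrane
oval_window_area_mm2 = 3.2   # area of the stapes footplate
ossicular_lever = 1.3        # malleus/incus lever ratio

# Pressure gain = area ratio scaled by the ossicular lever
pressure_gain = (eardrum_area_mm2 / oval_window_area_mm2) * ossicular_lever
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~ {pressure_gain:.1f}x ({gain_db:.0f} dB)")
```

With these assumed values the estimate comes out near the 20-fold figure cited in the text, on the order of 27 dB.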
The MSO determines the angle a sound came from by measuring time differences between the left- and right-ear signals. The LSO normalizes sound levels between the ears and uses the sound intensities to help determine the sound angle. The LSO innervates the IHCs, and the VNTB innervates the OHCs. The MNTB inhibits the LSO via glycine; the LNTB is glycine-immune and used for fast signalling. The DPO is high-frequency and tonotopic; the DLPO is low-frequency and tonotopic; the VLPO has the same function as the DPO but acts in a different area. The PVO, CPO, RPO, VMPO, ALPO and SPON (inhibited by glycine) are various signalling and inhibiting nuclei.

The trapezoid body is where most of the cochlear nucleus (CN) fibers decussate (cross from left to right and vice versa); this crossing aids in sound localization. The CN divides into ventral (VCN) and dorsal (DCN) regions. The VCN has three nuclei. Bushy cells transmit timing information; their shape averages out timing jitter. Stellate (chopper) cells encode sound spectra (peaks and valleys) by spatial neural firing rates based on auditory input strength (rather than frequency). Octopus cells have close to the best temporal precision while firing; they decode the auditory timing code. The DCN has two nuclei and also receives information from the VCN. Fusiform cells integrate information to determine spectral cues to location (for example, whether a sound originated in front or behind).

Cochlear nerve fibers (30,000+) each have a most sensitive frequency and respond over a wide range of levels. Simplified, the nerve fibers' signals are transported by bushy cells to the binaural areas in the olivary complex, signal peaks and valleys are noted by stellate cells, and signal timing is extracted by octopus cells.

The lateral lemniscus has three nuclei: the dorsal nuclei respond best to bilateral input and have complexity-tuned responses; the intermediate nuclei have broad tuning responses; and the ventral nuclei have broad and moderately complex tuning curves.
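The interaural time differences the MSO evaluates can be sketched with the classic spherical-head (Woodworth) approximation, ITD ≈ (r/c)(θ + sin θ). The head radius and speed of sound below are assumed typical values, not figures from this article:

```python
import math

HEAD_RADIUS_M = 0.0875     # assumed average adult head radius
SPEED_OF_SOUND = 343.0     # m/s in air at ~20 °C

def itd_seconds(azimuth_deg):
    """Woodworth approximation of the interaural time difference
    for a source at the given azimuth (0° = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2}° -> ITD ~ {itd_seconds(az) * 1e6:.0f} µs")
```

A source directly to one side (90°) yields an ITD of roughly 650 µs under these assumptions, which is the order of magnitude the MSO must resolve.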
The ventral nuclei of the lateral lemniscus help the inferior colliculus (IC) decode amplitude-modulated sounds by giving both phasic and tonic responses (short and long notes, respectively). The IC receives inputs not shown, including:
• visual areas (pretectal area: moves the eyes toward a sound; superior colliculus: orientation and behavior toward objects, as well as eye movements (saccades)),
• pons (superior cerebellar peduncle: thalamus-to-cerebellum connection, for hearing a sound and learning a behavioral response),
• spinal cord (periaqueductal grey: hearing a sound and instinctively moving), and
• thalamus.
These inputs implicate the IC in the startle response and ocular reflexes. Beyond multi-sensory integration, the IC responds to specific amplitude-modulation frequencies, allowing for the detection of pitch. The IC also determines time differences in binaural hearing.

The medial geniculate nucleus divides into:
• ventral (relay and relay-inhibitory cells: frequency, intensity, and binaural information relayed topographically),
• dorsal (broadly and complexly tuned nuclei: connection to somatosensory information), and
• medial (broadly, complexly, and narrowly tuned nuclei: relay intensity and sound duration).

The auditory cortex (AC) brings sound into awareness and perception. The AC identifies sounds (sound-name recognition) and also identifies the sound's origin location. The AC is a topographical frequency map, with bundles reacting to different harmonies, timing, and pitch. The right-hemisphere AC is more sensitive to tonality; the left-hemisphere AC is more sensitive to minute sequential differences in sound. The rostromedial and ventrolateral prefrontal cortices are involved in activation during tonal space and in storing short-term memories, respectively. Heschl's gyrus (the transverse temporal gyrus) includes Wernicke's area and its functionality; it is heavily involved in emotion-sound, emotion-facial-expression, and sound-memory processes. The entorhinal cortex is the part of the hippocampal system that aids in and stores visual and auditory memories.
The supramarginal gyrus (SMG) aids in language comprehension and is responsible for compassionate responses. The SMG links sounds to words with the angular gyrus and aids in word choice. The SMG integrates tactile, visual, and auditory information.
Structure
Outer ear
The folds of cartilage surrounding the ear canal are called the auricle. Sound waves are reflected and attenuated when they hit the auricle, and these changes provide additional information that helps the brain determine the direction a sound came from. The sound waves enter the auditory canal, a deceptively simple tube. The ear canal amplifies sounds that are between 3 and 12 kHz. In the hair cells of the inner ear, it is thought that a calcium-driven motor causes a shortening of the tip links between stereocilia to regenerate their tension. This regeneration of tension allows for the perception of prolonged auditory stimulation.

Neurons
Afferent neurons innervate the cochlear inner hair cells at synapses where the neurotransmitter glutamate communicates signals from the hair cells to the dendrites of the primary auditory neurons. There are far fewer inner hair cells in the cochlea than afferent nerve fibers: many auditory nerve fibers innervate each hair cell. The neural dendrites belong to neurons of the auditory nerve, which in turn joins the vestibular nerve to form the vestibulocochlear nerve, or cranial nerve VIII. The region of the basilar membrane supplying the inputs to a particular afferent nerve fiber can be considered its receptive field. Efferent projections from the brain to the cochlea also play a role in the perception of sound, although this is not well understood. Efferent synapses occur on outer hair cells and on afferent (toward the brain) dendrites under inner hair cells.
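The ear canal's amplification in the 3–12 kHz band described above is often modeled as the resonance of a tube open at one end (the auricle) and closed at the other (the eardrum). A minimal sketch, assuming a typical adult canal length of about 2.5 cm (an illustrative value, not from this article):

```python
SPEED_OF_SOUND = 343.0   # m/s in air
CANAL_LENGTH_M = 0.025   # assumed adult ear canal length (~2.5 cm)

# A tube closed at one end resonates at its quarter-wave frequency
# and at odd multiples of it.
fundamental_hz = SPEED_OF_SOUND / (4 * CANAL_LENGTH_M)
harmonics = [fundamental_hz * n for n in (1, 3)]

print(f"fundamental resonance ~ {fundamental_hz:.0f} Hz")
print(f"next resonance       ~ {harmonics[1]:.0f} Hz")
```

The predicted fundamental falls near 3.4 kHz, and the next odd harmonic near 10 kHz, both inside the 3–12 kHz band the text mentions.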
Neuronal structure
Cochlear nucleus
The cochlear nucleus is the first site of neuronal processing of the newly converted "digital" data from the inner ear (see also binaural fusion). In mammals, this region is anatomically and physiologically split into two regions: the dorsal cochlear nucleus (DCN) and the ventral cochlear nucleus (VCN). The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN).

Trapezoid body
The trapezoid body is a bundle of decussating fibers in the ventral pons that carry information used for binaural computations in the brainstem. Some of these axons come from the cochlear nucleus and cross over to the other side before traveling on to the superior olivary nucleus. This is believed to help with the localization of sound.

Superior olivary complex
The superior olivary complex is located in the pons and receives projections predominantly from the ventral cochlear nucleus, although the dorsal cochlear nucleus projects there as well, via the ventral acoustic stria. Within the superior olivary complex lie the lateral superior olive (LSO) and the medial superior olive (MSO). The former is important in detecting interaural level differences, while the latter is important in distinguishing interaural time differences.

Inferior colliculus
The inferior colliculus also receives descending inputs from the auditory cortex and the auditory thalamus (or medial geniculate nucleus).

Medial geniculate nucleus
The medial geniculate nucleus is part of the thalamic relay system.

Primary auditory cortex
The primary auditory cortex is the first region of the cerebral cortex to receive auditory input. Perception of sound is associated with the left posterior superior temporal gyrus (STG).
The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. We know from research in nonhuman primates that the primary auditory cortex can probably be divided further into functionally differentiable subregions. Neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies, with selective responses to harmonic pitches. Neurons integrating information from the two ears have receptive fields covering a particular region of auditory space. The primary auditory cortex is surrounded by, and interconnects with, secondary auditory cortex. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise.

The auditory ventral and dorsal streams
(Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.)
From the primary auditory cortex emerge two separate pathways: the auditory ventral stream and the auditory dorsal stream. The auditory ventral stream includes the anterior superior temporal gyrus, anterior superior temporal sulcus, middle temporal gyrus, and temporal pole. Neurons in these areas are responsible for sound recognition and the extraction of meaning from sentences. The auditory dorsal stream includes the posterior superior temporal gyrus and sulcus, inferior parietal lobule, and intraparietal sulcus. Both pathways project in humans to the inferior frontal gyrus.
The most established role of the auditory dorsal stream in primates is sound localization. In humans, the auditory dorsal stream in the left hemisphere is also responsible for speech repetition and articulation, phonological long-term encoding of word names, and verbal working memory.

Ascending Auditory Pathway: Coding Mechanisms and Experience-Dependent Plasticity
The ascending auditory pathway transmits acoustic information from the cochlea to the auditory cortex through a series of subcortical nuclei, including the cochlear nucleus, superior olivary complex, inferior colliculus, and medial geniculate body. Throughout these stages, several organizational principles, particularly tonotopy (the ordered mapping of frequency), are preserved from cochlea to cortex. The pathway supports both sound localization and sound identification, with the latter being critical for speech and language.

Rate Coding in the Auditory Nerve
The basilar membrane converts frequency information into a place code, but intensity is also encoded through firing rate. Increased sound levels produce larger basilar membrane displacements, leading to greater hair-cell depolarization and higher firing rates in auditory nerve fibers. Auditory nerve fibers vary in spontaneous firing rate: high-spontaneous-rate fibers respond at very low thresholds but saturate quickly, whereas medium- and low-spontaneous-rate fibers encode higher intensities. This "division of labor" enables fine-grained representation of intensity across a wide dynamic range, crucial for speech perception.

Limitations of Place Coding
Pure place coding cannot account for all aspects of pitch perception. As harmonic frequencies increase, cochlear filters become too broad to resolve adjacent harmonics. This limitation becomes more pronounced at higher sound intensities, motivating the need for temporal coding mechanisms.
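The cochlear place code described above is commonly summarized by Greenwood's place-frequency function, F = A(10^(ax) − k), which maps position along the basilar membrane to characteristic frequency. A sketch using the often-quoted human constants (approximate fits, not values from this article):

```python
def greenwood_frequency(x):
    """Greenwood place-frequency map for the human cochlea.
    x: fractional distance along the basilar membrane,
    from apex (0.0) to base (1.0). Constants A, a, k are the
    commonly quoted human fits; treat them as approximate."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# The apex (wide, floppy membrane) maps to low frequencies;
# the base (narrow, stiff membrane) maps to high frequencies.
print(f"apex : {greenwood_frequency(0.0):>8.1f} Hz")
print(f"mid  : {greenwood_frequency(0.5):>8.1f} Hz")
print(f"base : {greenwood_frequency(1.0):>8.1f} Hz")
```

With these constants the map spans roughly 20 Hz at the apex to about 20 kHz at the base, matching the low-apex/high-base gradient described earlier in the article.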
Temporal Coding and Phase Locking
In temporal (time-based) coding, auditory nerve fibers synchronize their firing to specific phases of the acoustic waveform, a process known as phase locking. Although neurons do not fire on every cycle, they fire preferentially near waveform peaks, allowing representation of periodicity even when place cues are ambiguous. Phase locking underlies the frequency-following response (FFR), a scalp-recorded electrophysiological signal that mirrors the timing, pitch, and harmonic structure of the stimulus. Speech-evoked FFRs reproduce stimulus periodicity with such fidelity that the response waveform can generate intelligible speech when played back.

Subcortical Response Timing
Auditory processing in the brainstem occurs on the scale of milliseconds. Recordings from humans and nonhuman primates show activation in the inferior colliculus within 5–10 ms of stimulus onset, with thalamic and cortical responses following shortly after. This rapid temporal precision is essential for speech-onset detection, phoneme discrimination, and sound localization.

Experience-Dependent Plasticity in the Auditory Brainstem
Although cortical areas exhibit the most dramatic plasticity, evidence shows that the auditory brainstem is also shaped by experience via descending corticofugal pathways.

Musical Training
Musicians show:
• larger FFR amplitudes
• faster onset responses
• enhanced representation of F0 and harmonics
• superior pitch tracking of dynamic contours (e.g., Mandarin tones)
These enhancements appear to reflect strengthened cortico-brainstem interactions.

Short-Term Training
Even brief (≈1 hour) phonetic discrimination training increases FFR harmonic power, whereas passive listening does not. This demonstrates rapid, learning-dependent modulation of subcortical encoding.

Descending (Corticofugal) Pathways
The descending auditory pathway projects from the auditory cortex to the inferior colliculus, superior olivary complex, and cochlear nucleus.
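The phase-locking behavior described above (spikes clustering near waveform peaks without firing on every cycle) can be illustrated with a deliberately simple toy model. This is a sketch, not a physiological model: spike probability in each time step simply tracks the half-wave-rectified stimulus.

```python
import math
import random

random.seed(0)  # reproducible toy simulation

def phase_locked_spikes(freq_hz, duration_s=0.1, fs=100_000, gain=0.02):
    """Toy model of phase locking: the probability of a spike in each
    sample follows the half-wave-rectified stimulus, so spikes cluster
    near positive waveform peaks but do not occur on every cycle."""
    spikes = []
    for n in range(int(duration_s * fs)):
        t = n / fs
        drive = max(0.0, math.sin(2 * math.pi * freq_hz * t))
        if random.random() < gain * drive:
            spikes.append(t)
    return spikes

spikes = phase_locked_spikes(440.0)

# Every spike's stimulus phase falls in the positive half-cycle,
# even though only a fraction of the ~44 cycles produce a spike.
phases = [(t * 440.0) % 1.0 for t in spikes]
in_positive_half = sum(0.0 <= p < 0.5 for p in phases) / len(phases)
print(f"{len(spikes)} spikes, {in_positive_half:.0%} in the positive half-cycle")
```

Histogramming `phases` would show the period-histogram peak that characterizes phase-locked auditory nerve responses.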
These projections adjust subcortical gain, sharpen tuning, and enhance behaviorally relevant features. Thus, while sensory information travels upward, experience and attention travel downward, dynamically shaping early auditory coding.

Relevance to Speech and Language
The ascending auditory pathway supports language by encoding:
• formant structure and harmonic content
• rapid temporal cues essential for stop consonants and prosody
• pitch contours relevant to tone languages
• stable representations shaped by linguistic experience
Tonotopy is preserved into the auditory cortex, where neurons in the superior temporal gyrus and sulcus integrate this information into language-specific categories.
Clinical significance
Proper function of the auditory system is required to be able to sense, process, and localize a sound's origin in space, and to understand sound from the surroundings, even against background noise. Difficulty in sensing, processing, and understanding sound input can adversely impact an individual's ability to communicate, learn, and effectively complete routine daily tasks. In children, early diagnosis and treatment of impaired auditory system function is an important factor in ensuring that key social, academic, and speech/language developmental milestones are met. Impairment of the auditory system can include any of the following:
• Auditory brainstem response and ABR audiometry (tests for newborn hearing)
• Auditory processing disorder
• Hyperacusis
• Diplacusis
• Tinnitus
• Endaural phenomena