Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g., equalization, filtering, level compression, and echo and reverb removal or addition).
== Audio broadcasting ==
Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize
overmodulation, compensate for non-linear transmitters (a potential issue with
medium wave and
shortwave broadcasting), and adjust overall
loudness to the desired level.
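A real broadcast processor is a sophisticated multiband system, but the core requirement of keeping modulation within bounds can be loosely sketched as a peak limiter. The threshold and sample values below are purely illustrative, not broadcast standards:

```python
def peak_limit(samples, threshold=0.9):
    """Clamp each sample into [-threshold, threshold] so the
    signal can never overmodulate the transmitter."""
    return [max(-threshold, min(threshold, s)) for s in samples]

# Hypothetical samples, some exceeding full scale
audio = [0.2, 1.4, -0.5, -1.1, 0.95]
limited = peak_limit(audio)
print(limited)  # [0.2, 0.9, -0.5, -0.9, 0.9]
```

Production processors use lookahead and gentler gain reduction (level compression) rather than hard clipping, which adds audible distortion.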
== Active noise control ==
Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to
destructive interference.
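The cancellation principle itself is simple to demonstrate in code. This toy sketch (a hypothetical noise waveform; real systems must estimate the noise and compensate for acoustic delay) inverts the polarity of a signal and sums the two:

```python
import math

def anti_noise(noise):
    """Invert polarity: identical amplitude, opposite sign."""
    return [-s for s in noise]

# A hypothetical noise waveform: one cycle of a sine tone
noise = [math.sin(2 * math.pi * n / 8) for n in range(8)]

# Superposing noise and anti-noise yields destructive interference
residual = [n + a for n, a in zip(noise, anti_noise(noise))]
print(max(abs(r) for r in residual))  # 0.0 — complete cancellation
```

In practice, cancellation is imperfect because the anti-noise signal must be generated and emitted in real time, matched in amplitude and phase at the listener's position.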
== Audio synthesis ==
Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either
imitate sounds or generate new ones. Audio synthesis is also used to generate human
speech using
speech synthesis.
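The simplest synthesized sound is a pure sine tone. As a minimal sketch (the sample rate and duration below are arbitrary choices for illustration), an oscillator can be written as:

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=8000, amplitude=1.0):
    """Generate the samples of a pure sine tone at the given frequency."""
    n_samples = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

tone = sine_wave(440.0, 0.01)  # 10 ms of concert A (440 Hz)
print(len(tone))  # 80 samples at 8 kHz
```

Practical synthesizers combine such oscillators with filters, envelopes, and modulation to imitate acoustic instruments or create entirely new timbres.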
== Audio effects ==
Audio effects alter the sound of a musical instrument or other audio source. Common effects include
distortion, often used with electric guitar in
electric blues and
rock music;
dynamic effects such as
volume pedals and
compressors, which affect loudness;
filters such as
wah-wah pedals and
graphic equalizers, which modify frequency ranges;
modulation effects, such as
chorus,
flangers and
phasers;
pitch effects such as
pitch shifters; and time effects, such as
reverb and
delay, which create echoing sounds and emulate the sound of different spaces. Musicians,
audio engineers and record producers use effects units during live performances or in the
recording studio, typically with electric guitar, bass guitar,
electronic keyboard or
electric piano. While effects are most frequently used with
electric or
electronic instruments, they can be used with any audio source, such as
acoustic instruments, drums, and vocals.
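As a rough sketch of one of the time effects mentioned above, a feedback delay line mixes each sample with an attenuated copy of the signal from a fixed number of samples earlier, producing decaying echoes. The parameter names and test signal here are illustrative:

```python
def delay_effect(samples, delay_samples, feedback=0.5):
    """Add decaying echoes by feeding back a delayed, attenuated copy."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out

# A single impulse followed by silence makes the echoes easy to see
impulse = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(delay_effect(impulse, delay_samples=2, feedback=0.5))
# [1.0, 0.0, 0.5, 0.0, 0.25, 0.0] — echoes at half amplitude each repeat
```

Reverb effects work on the same principle but combine many such delay lines with different lengths and gains to emulate the dense reflections of a physical space.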
== Computer audition ==
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes these systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents." Inspired by models of
human audition, CA deals with questions of representation,
transduction, grouping, use of musical knowledge and general sound
semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically, this requires a combination of methods from the fields of
signal processing,
auditory modelling, music perception and
cognition,
pattern recognition, and
machine learning, as well as more traditional methods of
artificial intelligence for musical knowledge representation.

== See also ==