The most important qualities of a speech synthesis system are
naturalness and
intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible, and synthesis systems usually try to maximize both characteristics. The two primary technologies for generating synthetic speech waveforms are
concatenative synthesis and
formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.
Concatenative synthesis
Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.
Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual
phones,
diphones, half-phones,
syllables,
morphemes,
words,
phrases, and
sentences. Typically, the division into segments is done using a specially modified
speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the
waveform and
spectrogram. An
index of the units in the speech database is then created based on the segmentation and acoustic parameters like the
fundamental frequency (
pitch), duration, position in the syllable, and neighboring phones. At
run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted
decision tree. Unit selection provides the greatest naturalness, because it applies only a small amount of
digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the
gigabytes of recorded data, representing dozens of hours of speech. Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
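As an illustration of the selection step, the following is a minimal sketch (not any particular production system) that treats unit selection as a least-cost path search over candidate units. The `target_cost` and `join_cost` functions and the unit fields (`pitch`, `dur`) are simplified stand-ins for the weighted acoustic distances a real system would learn or hand-tune.

```python
# Minimal sketch of unit selection as a least-cost path search.

def target_cost(target_spec, unit):
    """Mismatch between the desired unit (pitch, duration, ...) and a candidate."""
    return abs(target_spec["pitch"] - unit["pitch"]) + abs(target_spec["dur"] - unit["dur"])

def join_cost(prev_unit, unit):
    """Acoustic discontinuity at the concatenation point (here: pitch jump only)."""
    return abs(prev_unit["pitch"] - unit["pitch"])

def select_units(targets, candidates):
    """Viterbi search: targets[i] is a spec, candidates[i] a list of database units."""
    # best[i][j] = (cumulative cost, back-pointer) for candidate j of target i
    best = [[(target_cost(targets[0], u), None) for u in candidates[0]]]
    for i in range(1, len(targets)):
        row = []
        for u in candidates[i]:
            tc = target_cost(targets[i], u)
            cost, back = min(
                (best[i - 1][k][0] + join_cost(p, u) + tc, k)
                for k, p in enumerate(candidates[i - 1])
            )
            row.append((cost, back))
        best.append(row)
    # Trace back the cheapest chain of units.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(len(targets) - 1, -1, -1):
        path.append(candidates[i][j])
        j = best[i][j][1]
    return list(reversed(path))
```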
Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the
diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the
phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target
prosody of a sentence is superimposed on these minimal units by means of
digital signal processing techniques such as
linear predictive coding,
PSOLA or
MBROLA, or more recent techniques such as pitch modification in the source domain using
discrete cosine transform. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because a number of freely available software implementations exist. An early example of diphone synthesis is a teaching robot,
Leachim, that was invented by
Michael J. Freeman. Leachim contained information regarding class curricula and certain biographical information about the students it was programmed to teach. It was tested in a fourth-grade classroom in
the Bronx, New York.
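Returning to the mechanics of diphone concatenation itself, the chaining step can be sketched as follows. This is a simplified illustration assuming a hypothetical `diphone_db` dictionary of recorded diphone waveforms, and it omits the prosody-imposing DSP (PSOLA and the like) that a real diphone synthesizer applies at runtime.

```python
import numpy as np

def concatenate_diphones(diphone_names, diphone_db, sr=16000, xfade_ms=5):
    """Chain one stored example of each diphone, crossfading at the joins.

    `diphone_db` is a hypothetical dict mapping names like "k-l" to 1-D
    float arrays (each longer than the crossfade window); a real system
    would also warp pitch and duration to match the target prosody.
    """
    n_fade = int(sr * xfade_ms / 1000)
    out = diphone_db[diphone_names[0]].copy()
    for name in diphone_names[1:]:
        seg = diphone_db[name]
        fade = np.linspace(0.0, 1.0, n_fade)
        # Overlap-add the tail of `out` with the head of the next segment.
        out[-n_fade:] = out[-n_fade:] * (1 - fade) + seg[:n_fade] * fade
        out = np.concatenate([out, seg[n_fade:]])
    return out
```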
Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings. Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. However, the blending of words within naturally spoken language can still cause problems unless the many variations are taken into account. For example, in
non-rhotic dialects of English the
"r" in words like
"clear" is usually only pronounced when the following word has a vowel as its first letter (e.g.
"clear out" is realized as ). Likewise in
French, many final consonants that are otherwise silent are pronounced when the following word begins with a vowel, an effect called
liaison. This
alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be
context-sensitive.
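A toy sketch of the kind of context-sensitivity described above follows. The clip names and the vowel-letter test are hypothetical simplifications, but they show how a word-concatenation system must pick among recorded variants of the same word based on what follows it.

```python
# Minimal sketch of a context-sensitive domain-specific concatenation step:
# pick the "linking-r" variant of a clip when the next word starts with a vowel.
# Clip file names are hypothetical.

VARIANTS = {"clear": {"plain": "clear.wav", "linking_r": "clear_r.wav"}}

def pick_clip(word, next_word):
    if word in VARIANTS:
        if next_word and next_word[0] in "aeiou":
            return VARIANTS[word]["linking_r"]   # e.g. "clear out"
        return VARIANTS[word]["plain"]           # e.g. "clear sky"
    return word + ".wav"

def build_playlist(words):
    return [pick_clip(w, words[i + 1] if i + 1 < len(words) else None)
            for i, w in enumerate(words)]

print(build_playlist(["clear", "out"]))  # ['clear_r.wav', 'out.wav']
```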
Formant synthesis
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using
additive synthesis and an acoustic model (
physical modelling synthesis). Parameters such as
fundamental frequency,
voicing, and
noise levels are varied over time to create a
waveform of artificial speech. This method is sometimes called
rules-based synthesis; however, many concatenative systems also have rules-based components. Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a
screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in
embedded systems, where
memory and
microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and
intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice. Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the
Texas Instruments toy
Speak & Spell, and in the early 1980s
Sega arcade machines and in many
Atari, Inc. arcade games using the
TMS5220 LPC Chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces. For tonal languages, such as Chinese or Taiwanese language, there are different levels of
tone sandhi required and sometimes the output of speech synthesizer may result in the mistakes of tone sandhi.
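The source-filter idea behind formant synthesis can be sketched in a few lines: a glottal impulse-train source filtered through second-order resonators placed at the formant frequencies. This is only a toy (assuming numpy and scipy), not a full Klatt-style synthesizer; the formant values below approximate an /a/-like vowel, and a real system varies fundamental frequency, voicing, and noise levels over time.

```python
import numpy as np
from scipy.signal import lfilter

def resonator(x, freq, bw, sr):
    """Second-order IIR resonator (Klatt-style) at `freq` Hz with bandwidth `bw`."""
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * freq / sr
    b2, b1 = -r * r, 2 * r * np.cos(theta)
    return lfilter([1 - b1 - b2], [1, -b1, -b2], x)

def synth_vowel(f0=110, formants=((730, 90), (1090, 110), (2440, 170)),
                dur=0.5, sr=16000):
    """Crude /a/-like vowel: glottal impulse train through cascaded resonators."""
    n = int(dur * sr)
    source = np.zeros(n)
    source[::sr // f0] = 1.0            # voiced source: impulse train at F0
    y = source
    for freq, bw in formants:           # cascade of formant filters
        y = resonator(y, freq, bw, sr)
    return y / np.max(np.abs(y))        # normalize

audio = synth_vowel()                    # 0.5 s of a static synthetic vowel
```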
Articulatory synthesis
Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the human
vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at
Haskins Laboratories in the mid-1970s by
Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at
Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the
NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the
University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by
Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as
gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model". More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.
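To illustrate the waveguide (transmission-line) idea mentioned above, here is a toy Kelly-Lochbaum tube model: the vocal tract is approximated as cylindrical sections, and waves scatter at each junction according to the area ratios. This is only a sketch of the general technique, not the gnuspeech or Lucero implementations; the area function and boundary reflection values are illustrative assumptions.

```python
import numpy as np

def kelly_lochbaum(source, areas, lip_reflect=-0.85, glottal_reflect=0.75):
    """Toy Kelly-Lochbaum waveguide: a lossless tube as N cylindrical sections."""
    areas = np.asarray(areas, dtype=float)
    k = (areas[1:] - areas[:-1]) / (areas[1:] + areas[:-1])  # junction reflections
    n_sec = len(areas)
    fwd = np.zeros(n_sec)   # right-going pressure waves (one per section)
    bwd = np.zeros(n_sec)   # left-going pressure waves
    out = np.zeros(len(source))
    for t, s in enumerate(source):
        fwd_new = np.empty(n_sec)
        bwd_new = np.empty(n_sec)
        fwd_new[0] = s + glottal_reflect * bwd[0]        # glottis end
        # Scattering at each junction between sections i and i+1:
        fwd_new[1:] = (1 + k) * fwd[:-1] - k * bwd[1:]
        bwd_new[:-1] = k * fwd[:-1] + (1 - k) * bwd[1:]
        bwd_new[-1] = lip_reflect * fwd[-1]              # lip end
        out[t] = fwd[-1] * (1 + lip_reflect)             # radiated output
        fwd, bwd = fwd_new, bwd_new
    return out

# e.g. drive the tube with an impulse train at ~110 Hz:
src = np.zeros(16000); src[::145] = 1.0
audio = kelly_lochbaum(src, areas=[1.0, 1.2, 2.0, 3.0, 2.0, 1.0, 0.8, 1.5])
```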
HMM-based synthesis
HMM-based synthesis is a synthesis method based on
hidden Markov models, also called statistical parametric synthesis. In this system, the
frequency spectrum (
vocal tract),
fundamental frequency (voice source), and duration (
prosody) of speech are modeled simultaneously by HMMs. Speech
waveforms are generated from HMMs themselves based on the
maximum likelihood criterion.
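In standard statistical parametric formulations, this generation step amounts to choosing the speech parameter sequence that maximizes its likelihood under the sentence HMM. In common textbook notation, with o the parameter sequence, q an HMM state sequence, w the input text, and λ the trained model:

```latex
\hat{o} = \arg\max_{o}\, p(o \mid w, \lambda)
        \approx \arg\max_{o}\, \max_{q}\, p(o \mid q, \lambda)\, P(q \mid w, \lambda)
```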
Sinewave synthesis
Sinewave synthesis is a technique for synthesizing speech by replacing the
formants (main bands of energy) with pure tone whistles.
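Sinewave synthesis is simple enough to sketch directly: each formant track is rendered as a time-varying pure tone, and the tones are summed. The array layout below is an assumption for illustration; in practice, the frequency and amplitude tracks are measured from a natural utterance.

```python
import numpy as np

def sinewave_speech(formant_tracks, amp_tracks, sr=16000):
    """Replace each formant with a pure tone that follows its frequency track.

    `formant_tracks` and `amp_tracks` are (n_formants, n_samples) arrays of
    per-sample frequencies (Hz) and amplitudes.
    """
    t_step = 1.0 / sr
    out = np.zeros(formant_tracks.shape[1])
    for freqs, amps in zip(formant_tracks, amp_tracks):
        phase = 2 * np.pi * np.cumsum(freqs) * t_step  # integrate frequency
        out += amps * np.sin(phase)
    return out / np.max(np.abs(out))

# e.g. three static "formants" held for one second:
tracks = np.tile([[700.0], [1200.0], [2500.0]], (1, 16000))
amps = np.ones_like(tracks) * [[1.0], [0.5], [0.25]]
audio = sinewave_speech(tracks, amps)
```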
Deep learning-based synthesis
Deep learning speech synthesis uses
deep neural networks (DNNs) to produce artificial speech from text (text-to-speech) or from a spectrum (vocoder). The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text.
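As a toy illustration of the text-to-spectrogram half of such a system (assuming PyTorch; this is nothing like a production model such as Tacotron or WaveNet), the sketch below maps character IDs to mel-spectrogram frames and shows one supervised training step against recorded-speech labels:

```python
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Toy text-to-spectrogram network: character embeddings -> BiLSTM ->
    per-step mel-spectrogram frames. Real systems add attention or duration
    modeling, plus a neural vocoder to turn spectrograms into waveforms."""
    def __init__(self, n_chars=40, n_mels=80, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.to_mel = nn.Linear(2 * hidden, n_mels)

    def forward(self, char_ids):              # (batch, seq_len) int tensor
        h, _ = self.rnn(self.embed(char_ids))
        return self.to_mel(h)                 # (batch, seq_len, n_mels)

# Training pairs text with recorded speech, represented by its mel-spectrogram:
model = TinyTTS()
chars = torch.randint(0, 40, (8, 32))          # stand-in character IDs
target_mels = torch.randn(8, 32, 80)           # stand-in spectrogram labels
loss = nn.functional.mse_loss(model(chars), target_mels)
loss.backward()                                # one gradient step of training
```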
Audio deepfakes
In 2023,
VICE reporter
Joseph Cox published findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank's
voice-authentication system.