Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering. Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group using the dimensions that are available, and then predicting the values of the missing dimensions, assuming they are the same as those of the group's centroid. For density estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
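To make the prediction step concrete, here is a minimal sketch in Python/NumPy. It assumes a set of centroids has already been learned (for example with k-means); the centroid values, the `predict_missing` helper and the sample data are purely illustrative.

```python
import numpy as np

# Illustrative centroids from a previously trained vector quantizer
# (e.g. learned with k-means); values are made up for this example.
centroids = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 1.0, 1.0],
    [2.0, 0.0, 2.0],
])

def predict_missing(sample, known_mask, centroids):
    """Fill in missing dimensions of `sample` (NaN where unknown) by
    finding the nearest centroid using only the known dimensions and
    copying that centroid's values into the missing dimensions."""
    known = np.asarray(known_mask, dtype=bool)
    # Distance to each centroid, computed over the known dimensions only.
    dists = np.linalg.norm(centroids[:, known] - sample[known], axis=1)
    nearest = centroids[np.argmin(dists)]
    completed = sample.copy()
    completed[~known] = nearest[~known]
    return completed

x = np.array([0.9, 1.1, np.nan])                  # third dimension is missing
print(predict_missing(x, [True, True, False], centroids))
# -> [0.9 1.1 1. ]: the missing value is taken from the nearest centroid
```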
== Use in data compression ==
Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in lossy data compression. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data has errors that are inversely proportional to density.

The transformation is usually done by projection or by using a codebook. In some cases, a codebook can also be used to entropy code the discrete value in the same step, by generating a prefix-coded variable-length encoded value as its output.

The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a k-dimensional vector [x_1, x_2, ..., x_k] of amplitude levels. It is compressed by choosing the nearest matching vector from a set of n-dimensional vectors [y_1, y_2, ..., y_n], with n < k. All possible combinations of the n-dimensional vector [y_1, y_2, ..., y_n] form the vector space to which all the quantized vectors belong. Only the index of the codeword in the codebook is sent instead of the quantized values. This conserves space and achieves more compression.
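As a rough illustration of codebook-based compression, the sketch below (Python/NumPy, with an illustrative hand-made codebook rather than a trained one) encodes each input vector as the index of its nearest codeword and reconstructs an approximation by looking that index up again; only the indices would need to be stored or transmitted.

```python
import numpy as np

# Minimal sketch of codebook-based compression, assuming the codebook has
# already been trained (e.g. with an LBG/k-means procedure). Names and
# values are illustrative, not from any particular library.
codebook = np.array([
    [0.0, 0.0],
    [0.0, 1.0],
    [1.0, 0.0],
    [1.0, 1.0],
])  # 4 codewords -> each vector is encoded by a 2-bit index

def encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword."""
    # Squared Euclidean distance from every vector to every codeword.
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """Reconstruct (approximate) vectors by looking up the codewords."""
    return codebook[indices]

data = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.9]])
indices = encode(data, codebook)   # only these indices are stored or sent
print(indices)                     # -> [0 3 1]
print(decode(indices, codebook))   # lossy reconstruction from the codebook
```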
Twin vector quantization (VQF) is part of the MPEG-4 standard dealing with time domain weighted interleaved vector quantization.
== Video codecs based on vector quantization ==
• Bink video
• Cinepak
• Daala is transform-based but uses pyramid vector quantization on transformed coefficients
• Digital Video Interactive: Production-Level Video and Real-Time Video
• Indeo
• Microsoft Video 1
• QuickTime: Apple Video (RPZA) and Graphics Codec (SMC)
• Sorenson SVQ1 and SVQ3
• Smacker video
• VQA format, used in many games

The usage of video codecs based on vector quantization has declined significantly in favor of those based on motion-compensated prediction combined with transform coding, e.g. those defined in MPEG standards, as the low decoding complexity of vector quantization has become less relevant.
== Audio codecs based on vector quantization ==
• AMR-WB+
• CELP
• CELT (now part of Opus) is transform-based but uses pyramid vector quantization on transformed coefficients
• Codec 2
• DTS
• G.729
• iLBC
• Ogg Vorbis
• TwinVQ

== Use in pattern recognition ==
VQ was also used in the eighties for speech and speaker recognition. Recently it has also been used for efficient nearest neighbor search and on-line signature recognition. In pattern recognition applications, one codebook is constructed for each class (each class being a user in biometric applications) using the acoustic vectors of that user. In the testing phase, the quantization distortion of the test signal is computed against the whole set of codebooks obtained in the training phase. The codebook that yields the smallest quantization distortion identifies the user. The main advantage of VQ in pattern recognition is its low computational burden compared with other techniques such as dynamic time warping (DTW) and hidden Markov models (HMM). The main drawback compared to DTW and HMM is that it does not take into account the temporal evolution of the signals (speech, signature, etc.) because all the vectors are mixed up. To overcome this problem, a multi-section codebook approach has been proposed, which models the signal with several sections (for instance, one codebook for the initial part, another one for the center and a last codebook for the ending part).
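A hedged sketch of this identification scheme, assuming feature vectors have already been extracted: one codebook is trained per class (here with a naive k-means for illustration; LBG-style training and the multi-section refinement are omitted), and the test signal is assigned to the class whose codebook gives the smallest average distortion.

```python
import numpy as np

def train_codebook(vectors, n_codewords, n_iter=20, seed=0):
    """Toy k-means codebook training; stands in for LBG-style training."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)]
    for _ in range(n_iter):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_codewords):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def distortion(vectors, codebook):
    """Average distance from each test vector to its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def identify(test_vectors, codebooks):
    """Return the class whose codebook quantizes the test signal best."""
    return min(codebooks, key=lambda c: distortion(test_vectors, codebooks[c]))

# Illustrative usage (training data per user is assumed to exist):
# codebooks = {user: train_codebook(train_vectors[user], 16) for user in users}
# predicted_user = identify(test_vectors_of_unknown_signal, codebooks)
```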
== Use as clustering algorithm ==
As VQ seeks centroids as density points of nearby-lying samples, it can also be used directly as a prototype-based clustering method: each centroid is then associated with one prototype. By aiming to minimize the expected squared quantization error and introducing a decreasing learning gain that fulfils the Robbins-Monro conditions, multiple iterations over the whole data set with a concrete but fixed number of prototypes converge to the solution of the k-means clustering algorithm in an incremental manner.
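A small sketch of that incremental procedure (Python/NumPy, with an illustrative 1/t gain schedule and random initialisation; neither is prescribed here beyond the Robbins-Monro conditions):

```python
import numpy as np

def online_vq(data, n_prototypes, n_epochs=50, seed=0):
    """Incremental VQ / online k-means: pull the winning prototype toward
    each sample with a decreasing gain (here 1/t, which satisfies the
    Robbins-Monro conditions: sum of gains diverges, sum of squares converges)."""
    rng = np.random.default_rng(seed)
    prototypes = data[rng.choice(len(data), n_prototypes, replace=False)].astype(float)
    t = 0
    for _ in range(n_epochs):
        for x in rng.permutation(data):
            t += 1
            gain = 1.0 / t                                         # decreasing learning gain
            winner = np.argmin(((prototypes - x) ** 2).sum(axis=1))
            prototypes[winner] += gain * (x - prototypes[winner])  # move winner toward sample
    return prototypes

# Illustrative data: three well-separated blobs; the prototypes converge
# toward the blob means, as incremental k-means would.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.1, size=(100, 2)) for m in (0.0, 1.0, 2.0)])
print(online_vq(data, n_prototypes=3))
```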
== Generative adversarial networks (GAN) ==
VQ has been used to quantize a feature representation layer in the discriminator of generative adversarial networks. The feature quantization (FQ) technique performs implicit feature matching. It improves GAN training and yields improved performance on a variety of popular GAN models: BigGAN for image generation, StyleGAN for face synthesis, and U-GAT-IT for unsupervised image-to-image translation.
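As a generic illustration only, quantizing a batch of feature vectors against a learned codebook could look like the sketch below; this shows just the basic nearest-codeword lookup, not the full FQ formulation, which also needs a gradient pass-through and a codebook update rule during training.

```python
import numpy as np

def quantize_features(features, codebook):
    """Replace each feature vector (batch, dim) by its nearest codeword
    from a learned codebook (n_codewords, dim)."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return codebook[d.argmin(axis=1)]

features = np.random.default_rng(0).normal(size=(4, 8))   # e.g. a discriminator layer's output
codebook = np.random.default_rng(1).normal(size=(32, 8))  # stand-in for a learned feature codebook
print(quantize_features(features, codebook).shape)        # (4, 8): same shape, snapped to codewords
```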
== See also ==