Several techniques are employed for model compression.
Pruning Pruning sparsifies a large model by setting some parameters to exactly zero. This effectively reduces the number of parameters. This allows the use of
sparse matrix operations, which are faster than dense matrix operations. Pruning criteria can be based on magnitudes of parameters, the statistical pattern of neural
activations,
Hessian values, etc.
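As a minimal sketch of magnitude-based pruning, PyTorch's torch.nn.utils.prune utilities can zero out the smallest-magnitude weights of each layer; the layer sizes and the 30% sparsity level below are illustrative choices, not prescribed values.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative toy model; the sizes are arbitrary.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

# Zero out the 30% of weights with the smallest absolute magnitude
# in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the zeros permanent

# Roughly 30% of entries in each weight matrix are now exactly zero.
sparsity = (model[0].weight == 0).float().mean()
print(f"Layer 0 sparsity: {sparsity:.2%}")
```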
Quantization Quantization reduces the numerical precision of weights and activations. For example, instead of storing weights as 32-bit floating-point numbers, they can be represented as 8-bit integers. Low-precision parameters take up less space and require less compute for arithmetic. It is also possible to quantize some parameters more aggressively than others: for example, a less important parameter can be stored at 8-bit precision while a more important one is kept at 16-bit precision. Inference with such models requires mixed-precision arithmetic. Quantization can also be applied during training (quantization-aware training), rather than only after it.
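As a minimal sketch, PyTorch's post-training dynamic quantization converts the Linear layers of a model to 8-bit integer weights; the toy model below is illustrative.

```python
import torch
import torch.nn as nn

# Illustrative toy model; the sizes are arbitrary.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Post-training dynamic quantization: weights of Linear layers are
# stored as 8-bit integers, and activations are quantized on the fly.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # same interface, smaller weights
```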
PyTorch implements automatic mixed precision (AMP), which combines autocasting with gradient scaling: the loss is scaled before backpropagation so that small gradients do not underflow in low-precision formats.
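A minimal AMP training-step sketch, assuming a CUDA device is available; the toy model, random data, and hyperparameters are placeholders for a real setup.

```python
import torch
import torch.nn as nn

device = "cuda"
model = nn.Linear(64, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(32, 64, device=device)
    y = torch.randn(32, 1, device=device)
    optimizer.zero_grad()
    # Autocast runs eligible ops in float16 while keeping
    # precision-sensitive ops in float32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), y)
    # Scale the loss so float16 gradients don't underflow; the scaler
    # unscales the gradients before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```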
Low-rank factorization Weight matrices can be approximated by low-rank matrices. Let W be a weight matrix of shape m \times n. A low-rank approximation is W \approx UV^T, where U and V are matrices of shapes m \times k and n \times k. When k is small, this both reduces the number of parameters needed to represent W approximately and accelerates multiplication by W. Low-rank approximations can be found by singular value decomposition (SVD). The choice of rank for each weight matrix is a hyperparameter, which can be jointly optimized as a mixed discrete-continuous optimization problem. The rank of weight matrices may also be pruned after training, taking into account the effect of activation functions like ReLU on the implicit rank of the weight matrices.
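A minimal sketch of this idea via truncated SVD; the matrix shape and the rank k are illustrative choices rather than recommended values.

```python
import torch

# Illustrative weight matrix and target rank.
m, n, k = 512, 256, 32
W = torch.randn(m, n)

# Truncated SVD: keep only the k largest singular values.
U_full, S, Vh = torch.linalg.svd(W, full_matrices=False)
U = U_full[:, :k] * S[:k]  # shape (m, k), singular values folded in
V = Vh[:k].T               # shape (n, k)

W_approx = U @ V.T
error = torch.linalg.norm(W - W_approx) / torch.linalg.norm(W)
print(f"Relative approximation error at rank {k}: {error:.3f}")

# Applying U then V^T to a vector costs O(k(m + n)) instead of O(mn),
# a saving when k << min(m, n).
```

== Training ==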