The matrix vectorization operation can be written in terms of a linear sum. Let \mathbf{X} be an m \times n matrix that we want to vectorize, and let \mathbf{e}_i be the i-th canonical basis vector for the n-dimensional space, that is, \mathbf{e}_i = \left[0,\dots,0,1,0,\dots,0\right]^\mathrm{T}. Let \mathbf{B}_i be an mn \times m block matrix defined as follows:

\mathbf{B}_i = \begin{bmatrix} \mathbf{0} \\ \vdots \\ \mathbf{0} \\ \mathbf{I}_m \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} = \mathbf{e}_i \otimes \mathbf{I}_m

\mathbf{B}_i consists of n block matrices of size m \times m, stacked column-wise, all of which are zero except for the i-th one, which is the m \times m identity matrix \mathbf{I}_m.
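As a concrete illustration, the following NumPy sketch builds \mathbf{B}_i directly from the Kronecker-product expression \mathbf{e}_i \otimes \mathbf{I}_m (the helper name make_B and 0-based indexing are assumptions chosen for this example, not part of the original text):

```python
import numpy as np

def make_B(i, m, n):
    """Build B_i = e_i (x) I_m: an (m*n) x m matrix made of n stacked
    m x m blocks, all zero except the i-th block, which is I_m.
    Uses 0-based indexing, so i runs from 0 to n-1."""
    e_i = np.zeros(n)
    e_i[i] = 1.0
    # Reshape e_i to a column so the Kronecker product stacks the blocks vertically.
    return np.kron(e_i[:, None], np.eye(m))   # shape: (m*n, m)
```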
Then the vectorized version of \mathbf{X} can be expressed as follows:

\operatorname{vec}(\mathbf{X}) = \sum_{i=1}^n \mathbf{B}_i \mathbf{X} \mathbf{e}_i

Multiplication of \mathbf{X} by \mathbf{e}_i extracts the i-th column, while multiplication by \mathbf{B}_i puts it into the desired position in the final vector.
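To make the identity concrete, here is a short NumPy check (a sketch under assumed example dimensions; the test matrix and variable names are illustrative only). It verifies the block-matrix sum above, as well as the equivalent Kronecker-product form given below:

```python
import numpy as np

m, n = 3, 4                                  # assumed example dimensions
X = np.arange(m * n, dtype=float).reshape(m, n)

# vec(X) stacks the columns of X: column-major (Fortran-order) flattening.
vec_X = X.reshape(-1, order="F")

acc = np.zeros(m * n)
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    B_i = np.kron(e_i[:, None], np.eye(m))   # B_i = e_i ⊗ I_m
    acc += B_i @ (X @ e_i)                   # X e_i extracts column i; B_i places it

assert np.allclose(acc, vec_X)

# Equivalent Kronecker form of the sum (see the expression below):
acc_kron = sum(np.kron(np.eye(n)[i], X @ np.eye(n)[i]) for i in range(n))
assert np.allclose(acc_kron, vec_X)
```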
Alternatively, the linear sum can be expressed using the Kronecker product:

\operatorname{vec}(\mathbf{X}) = \sum_{i=1}^n \mathbf{e}_i \otimes \mathbf{X} \mathbf{e}_i

==Half-vectorization==