== Relations to other matrix operations ==
{{ordered list The Kronecker product is a special case of the
tensor product, so it is
bilinear and
associative: : \begin{align} \mathbf{A} \otimes (\mathbf{B} + \mathbf{C}) &= \mathbf{A} \otimes \mathbf{B} + \mathbf{A} \otimes \mathbf{C}, \\ (\mathbf{B} + \mathbf{C}) \otimes \mathbf{A} &= \mathbf{B} \otimes \mathbf{A} + \mathbf{C} \otimes \mathbf{A}, \\ (k\mathbf{A}) \otimes \mathbf{B} &= \mathbf{A} \otimes (k\mathbf{B}) = k(\mathbf{A} \otimes \mathbf{B}), \\ (\mathbf{A} \otimes \mathbf{B}) \otimes \mathbf{C} &= \mathbf{A} \otimes (\mathbf{B} \otimes \mathbf{C}), \\ \mathbf{A} \otimes \mathbf{0} &= \mathbf{0} \otimes \mathbf{A} = \mathbf{0}, \end{align} where
A,
B and
C are matrices,
0 is a zero matrix, and
k is a scalar. In general, A ⊗ B and B ⊗ A are different matrices. However, A ⊗ B and B ⊗ A are permutation equivalent, meaning that there exist
permutation matrices P and
Q such that : \mathbf{B} \otimes \mathbf{A} = \mathbf{P} \, \left(\mathbf{A} \otimes \mathbf{B}\right) \, \mathbf{Q}. If
A and
B are square, then A ⊗ B and B ⊗ A are even permutation
similar, meaning that we can take P = Q<sup>T</sup>. The matrices P and Q are perfect shuffle matrices, called commutation matrices. The commutation matrix
Sp,
q can be constructed by taking slices of the
Ir identity matrix, where r=pq. : \mathbf{S}_{p,q} = \begin{bmatrix} \mathbf{I}_r(1:q:r,:) \\ \mathbf{I}_r(2:q:r,:) \\ \vdots \\ \mathbf{I}_r(q:q:r,:) \end{bmatrix}
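The slice-based construction above is easy to reproduce numerically. The following NumPy sketch builds S_{p,q} and checks the permutation-equivalence relation between A ⊗ B and B ⊗ A; the helper name `shuffle` and the matrix sizes are illustrative only:

```python
import numpy as np

def shuffle(p, q):
    # Perfect shuffle S_{p,q}: stack the row slices I_r(k:q:r, :) of the
    # r x r identity matrix, r = p*q (0-based equivalent of the MATLAB notation).
    I = np.eye(p * q)
    return np.vstack([I[k::q, :] for k in range(q)])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # m1 x n1
B = rng.standard_normal((4, 5))   # m2 x n2

S_rows = shuffle(2, 4)            # S_{m1, m2}
S_cols = shuffle(3, 5)            # S_{n1, n2}

# Permutation equivalence: B ⊗ A = S_{m1,m2} (A ⊗ B) S_{n1,n2}^T
assert np.allclose(np.kron(B, A), S_rows @ np.kron(A, B) @ S_cols.T)
```

Since S_{p,q} merely reorders rows of the identity, it is itself a permutation matrix, so its transpose is its inverse.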
In the displayed formula for S_{p,q}, MATLAB colon notation is used to indicate submatrices, and
Ir is the identity matrix. If \mathbf{A} \in \mathbb{R}^{m_1 \times n_1} and \mathbf{B} \in \mathbb{R}^{m_2 \times n_2}, then : \mathbf{B} \otimes \mathbf{A} = \mathbf{S}_{m_1,m_2} (\mathbf{A} \otimes \mathbf{B}) \mathbf{S}^\textsf{T}_{n_1,n_2} If
A,
B,
C and
D are matrices of such size that one can form the
matrix products
AC and
BD, then : (\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{AC}) \otimes (\mathbf{BD}). This is called the
mixed-product property, because it mixes the ordinary matrix product and the Kronecker product. As an immediate consequence (again, taking \mathbf{A} \in \mathbb{R}^{m_1 \times n_1} and \mathbf{B} \in \mathbb{R}^{m_2 \times n_2}), : \mathbf{A} \otimes \mathbf{B} = (\mathbf{I}_{m_1} \otimes \mathbf{B} )(\mathbf{A} \otimes \mathbf{I}_{n_2}) = (\mathbf{A} \otimes \mathbf{I}_{m_2} )(\mathbf{I}_{n_1} \otimes \mathbf{B}) . In particular, using the
transpose property from below, this means that if : \mathbf{A} = \mathbf{Q} \otimes \mathbf{U} and
Q and
U are
orthogonal (or
unitary), then
A is also orthogonal (resp., unitary). The mixed Kronecker matrix-vector product can be written as: : \left( \mathbf{A} \otimes \mathbf{B} \right) \operatorname{vec} \left( \mathbf{V} \right) = \operatorname{vec} (\mathbf{B} \mathbf{V} \mathbf{A}^T) where \operatorname{vec}(\mathbf{V}) is the
vectorization operator applied on \mathbf{V} (formed by reshaping the matrix). If \mathbf{A} and \mathbf{C} are square matrices of dimension m, and \mathbf{B} and \mathbf{D} are square matrices of dimension n, then the commutator satisfies : [\mathbf{A} \otimes \mathbf{B},\mathbf{C} \otimes \mathbf{D}]=[\mathbf{A},\mathbf{C}]\otimes (\mathbf{B}\mathbf{D}) + (\mathbf{C}\mathbf{A}) \otimes [\mathbf{B},\mathbf{D}], or equivalently : [\mathbf{A} \otimes \mathbf{B},\mathbf{C} \otimes \mathbf{D}]=[\mathbf{A},\mathbf{C}]\otimes (\mathbf{D}\mathbf{B}) + (\mathbf{A}\mathbf{C}) \otimes [\mathbf{B},\mathbf{D}]. The mixed-product property also works for the element-wise (Hadamard) product. If
A and
C are matrices of the same size, and
B and
D are matrices of the same size, then : (\mathbf{A} \otimes \mathbf{B}) \circ (\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A} \circ \mathbf{C}) \otimes (\mathbf{B} \circ \mathbf{D}). The Kronecker product also distributes over the Moore–Penrose pseudoinverse: : (\mathbf{A} \otimes \mathbf{B})^{+} = \mathbf{A}^{+} \otimes \mathbf{B}^{+}. In the language of
category theory, the mixed-product property of the Kronecker product (and more general tensor product) shows that the category
MatF of matrices over a
field F, is in fact a
monoidal category, whose objects are the natural numbers
n, whose morphisms are matrices with entries in
F, whose composition is given by matrix multiplication, whose
identity arrows are simply the
identity matrices In, and whose tensor product is given by the Kronecker product.
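The identities above are convenient to verify numerically. The following NumPy sketch checks the mixed-product property and the Kronecker matrix-vector identity; all shapes are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
A, C = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
B, D = rng.standard_normal((5, 6)), rng.standard_normal((6, 3))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D),
                   np.kron(A @ C, B @ D))

# Kronecker matrix-vector product: (A ⊗ B) vec(V) = vec(B V Aᵀ),
# where vec stacks columns (column-major / Fortran order).
vec = lambda M: M.reshape(-1, order="F")
V = rng.standard_normal((6, 4))   # rows match columns of B, columns match columns of A
assert np.allclose(np.kron(A, B) @ vec(V), vec(B @ V @ A.T))
```

Note that the vec identity depends on column-major vectorization; with row-major reshaping the roles of A and B swap.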
MatF is a concrete
skeleton category for the
equivalent category FinVectF of finite dimensional vector spaces over
F, whose objects are such finite dimensional vector spaces
V, arrows are
F-linear maps V → W, and identity arrows are the identity maps of the spaces. The equivalence of categories amounts to simultaneously
choosing a basis in every finite-dimensional vector space
V over
F; the entries of a matrix represent a linear map with respect to the chosen bases; and likewise the Kronecker product is the representation of the
tensor product in the chosen bases. Transposition and
conjugate transposition are distributive over the Kronecker product: : (\mathbf{A}\otimes \mathbf{B})^\textsf{T} = \mathbf{A}^\textsf{T} \otimes \mathbf{B}^\textsf{T} and (\mathbf{A}\otimes \mathbf{B})^* = \mathbf{A}^* \otimes \mathbf{B}^*. Let
A be an n × n matrix and let
B be an m × m matrix. Then : \left| \mathbf{A} \otimes \mathbf{B} \right| = \left| \mathbf{A} \right|^m \left| \mathbf{B} \right|^n . The exponent m in \left| \mathbf{A} \right|^m is the order of
B, and the exponent n in \left| \mathbf{B} \right|^n is the order of
A. If
A is n × n,
B is m × m, and
Ik denotes the k × k
identity matrix, then we can define what is sometimes called the
Kronecker sum of A and B by : \mathbf{A}\,\overline{\oplus}\,\mathbf{B} = \mathbf{A} \otimes \mathbf{I}_m + \mathbf{I}_n \otimes \mathbf{B} . This is
different from the
direct sum of two matrices. This operation is related to the tensor product on
Lie algebras, as detailed in the Abstract properties section below, in the point "Relation to the abstract
tensor product". We have the following formula for the
matrix exponential, which is useful in some numerical evaluations. : \exp({\mathbf{N}\,\overline{\oplus}\,\mathbf{M}}) = \exp(\mathbf{N}) \otimes \exp(\mathbf{M}) Kronecker sums appear naturally in
physics when considering ensembles of non-interacting
systems. If
Hr is the Hamiltonian of the
rth such system, then the total Hamiltonian of the ensemble is : H_{\operatorname{Tot}}=\overline{\bigoplus_r} H^r . Let A be an m\times n matrix and B a p \times q matrix. When the order of the Kronecker product and vectorization is interchanged, the two operations can be linked linearly through a function that involves the
commutation matrix, K_{qm} . That is, \operatorname{vec}(\operatorname{Kron}(A, B)) and \operatorname{Kron}(\operatorname{vec}A,\operatorname{vec}B) have the following relationship: : \operatorname{vec}(A\otimes B) = (I_n \otimes K_{qm}\otimes I_p)(\operatorname{vec}A \otimes \operatorname{vec}B). Furthermore, the above relation can be rearranged in terms of either \operatorname{vec}A or \operatorname{vec}B as follows: : \operatorname{vec}(A\otimes B)= (I_n \otimes G)\operatorname{vec}A = (H\otimes I_p)\operatorname{vec}B, where : G = (K_{qm}\otimes I_p)(I_m \otimes \operatorname{vec}B) \text{ and } H=(I_n\otimes K_{qm})(\operatorname{vec}A \otimes I_q). If x \in \mathbb{R}^n and y \in \mathbb{R}^m are arbitrary vectors, then the outer product between x and y is defined as xy^T. The Kronecker product is related to the outer product by: y\otimes x = \operatorname{vec}(xy^T). }}
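The last two relations can likewise be checked numerically. In the NumPy sketch below, the helper `commutation` (a name introduced here for illustration) builds the commutation matrix K_{q,m}, and vec is column-major as before:

```python
import numpy as np

def commutation(q, m):
    # K_{q,m}: permutation matrix with K @ vec(M) = vec(M.T)
    # for any q x m matrix M (column-major vec).
    K = np.zeros((q * m, q * m))
    for i in range(q):
        for j in range(m):
            K[i * m + j, j * q + i] = 1.0
    return K

rng = np.random.default_rng(3)
m, n, p, q = 2, 3, 2, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
vec = lambda M: M.reshape(-1, order="F")

# vec(A ⊗ B) = (I_n ⊗ K_{q,m} ⊗ I_p)(vec A ⊗ vec B)
lhs = vec(np.kron(A, B))
rhs = np.kron(np.eye(n), np.kron(commutation(q, m), np.eye(p))) @ np.kron(vec(A), vec(B))
assert np.allclose(lhs, rhs)

# Outer product: y ⊗ x = vec(x yᵀ)
x, y = rng.standard_normal(4), rng.standard_normal(5)
assert np.allclose(np.kron(y, x), vec(np.outer(x, y)))
```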
== Abstract properties ==
{{ordered list Suppose that
A and
B are square matrices of size
n and
m respectively. Let
λ1, ...,
λn be the
eigenvalues of
A and
μ1, ...,
μm be those of
B (listed according to
multiplicity). Then the
eigenvalues of A ⊗ B are : \lambda_i \mu_j, \qquad i=1,\ldots,n ,\, j=1,\ldots,m. It follows that the
trace and
determinant of a Kronecker product are given by : \operatorname{tr}(\mathbf{A} \otimes \mathbf{B}) = \operatorname{tr} \mathbf{A} \, \operatorname{tr} \mathbf{B} \quad\text{and}\quad \det(\mathbf{A} \otimes \mathbf{B}) = (\det \mathbf{A})^m (\det \mathbf{B})^n. If
A and
B are rectangular matrices, then one can consider their
singular values. Suppose that
A has
rA nonzero singular values, namely : \sigma_{\mathbf{A},i}, \qquad i = 1, \ldots, r_\mathbf{A}. Similarly, denote the nonzero singular values of
B by : \sigma_{\mathbf{B},i}, \qquad i = 1, \ldots, r_\mathbf{B}. Then the Kronecker product has
rArB nonzero singular values, namely : \sigma_{\mathbf{A},i} \sigma_{\mathbf{B},j}, \qquad i=1,\ldots,r_\mathbf{A} ,\, j=1,\ldots,r_\mathbf{B}. Since the
rank of a matrix equals the number of nonzero singular values, we find that : \operatorname{rank}(\mathbf{A} \otimes \mathbf{B}) = \operatorname{rank} \mathbf{A} \, \operatorname{rank} \mathbf{B}. The Kronecker product of matrices corresponds to the abstract tensor product of linear maps. Specifically, if the vector spaces
V,
W,
X, and
Y have bases {{nowrap|{
v1, ...,
vm},}} {{nowrap|{
w1, ...,
wn},}} {{nowrap|{
x1, ...,
xd},}} and {{nowrap|{
y1, ...,
ye},}} respectively, and if the matrices
A and
B represent the linear transformations S : V → X and T : W → Y, respectively, in the appropriate bases, then the matrix A ⊗ B represents the tensor product S ⊗ T of the two maps, with respect to the basis {{nowrap|{
v1 ⊗
w1,
v1 ⊗
w2, ...,
v2 ⊗
w1, ...,
vm ⊗
wn}}} of V ⊗ W and the similarly defined basis of X ⊗ Y, with the property that (A ⊗ B)(vi ⊗ wj) = (Avi) ⊗ (Bwj), where
i and
j are integers in the proper range. When
V and
W are
Lie algebras, and f : V → V and g : W → W are
Lie algebra homomorphisms, the Kronecker sum of
A and
B represents the induced Lie algebra homomorphism V ⊗ W → V ⊗ W. The Kronecker product of the
adjacency matrices of two
graphs is the adjacency matrix of the
tensor product graph. The
Kronecker sum of the adjacency matrices of two
graphs is the adjacency matrix of the
Cartesian product graph. }} == Matrix equations ==