In the following properties, the operator \bull denotes the face-splitting product (row-wise Kronecker product) and the operator \ast denotes the column-wise Kronecker product. Here \mathbf{A}, \mathbf{B}, \mathbf{C} and \mathbf{D} are matrices of compatible dimensions, k is a scalar, a is a row vector, c and d are vectors, and \mathbf{P} is a permutation matrix.

1. Transpose:
: \left(\mathbf{A} \bull \mathbf{B}\right)^\textsf{T} = \mathbf{A}^\textsf{T} \ast \mathbf{B}^\textsf{T},
2. Bilinearity and associativity:
: \begin{align} \mathbf{A} \bull (\mathbf{B} + \mathbf{C}) &= \mathbf{A} \bull \mathbf{B} + \mathbf{A} \bull \mathbf{C}, \\ (\mathbf{B} + \mathbf{C}) \bull \mathbf{A} &= \mathbf{B} \bull \mathbf{A} + \mathbf{C} \bull \mathbf{A}, \\ (k\mathbf{A}) \bull \mathbf{B} &= \mathbf{A} \bull (k\mathbf{B}) = k(\mathbf{A} \bull \mathbf{B}), \\ (\mathbf{A} \bull \mathbf{B}) \bull \mathbf{C} &= \mathbf{A} \bull (\mathbf{B} \bull \mathbf{C}) \end{align}
3. : a \bull \mathbf{B} = \mathbf{B} \bull a, where a is a row vector,
4. Mixed-product property:
: (\mathbf{A} \bull \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \bull (\mathbf{B}\mathbf{D}),
5. : (\mathbf{A} \bull \mathbf{B})(\mathbf{C} \ast \mathbf{D}) = (\mathbf{A}\mathbf{C}) \circ (\mathbf{B}\mathbf{D}), where \circ denotes the Hadamard product,
6. : \mathbf{P}(\mathbf{A} \bull \mathbf{B}) = (\mathbf{P}\mathbf{A}) \bull (\mathbf{P}\mathbf{B}), where \mathbf{P} is a permutation matrix,
7. : (\mathbf{A} \bull \mathbf{B})(c \otimes d) = (\mathbf{A}c) \circ (\mathbf{B}d), where c and d are vectors (a consequence of properties 4 and 5). Similarly:
: (\mathbf{A} \bull \mathbf{B})(\mathbf{M}\mathbf{N}c \otimes \mathbf{Q}\mathbf{P}d) = (\mathbf{A}\mathbf{M}\mathbf{N}c) \circ (\mathbf{B}\mathbf{Q}\mathbf{P}d),
8. : \mathcal F\left((C^{(1)}x) \star (C^{(2)}y)\right) = \left((\mathcal F C^{(1)}) \bull (\mathcal F C^{(2)})\right)(x \otimes y) = (\mathcal F C^{(1)}x) \circ (\mathcal F C^{(2)}y),
where \star is vector convolution, C^{(1)}, C^{(2)} are "count sketch" matrices, and \mathcal F is the Fourier transform matrix (this result is a development of count sketch properties). This can be generalized for appropriate matrices \mathbf{A}, \mathbf{B}:
: \mathcal F\left((\mathbf{A}x) \star (\mathbf{B}y)\right) = \left((\mathcal F \mathbf{A}) \bull (\mathcal F \mathbf{B})\right)(x \otimes y) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y),
because property 7 gives
: \left((\mathcal F \mathbf{A}) \bull (\mathcal F \mathbf{B})\right)(x \otimes y) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y)
and the convolution theorem gives
: \mathcal F\left((\mathbf{A}x) \star (\mathbf{B}y)\right) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y).
9. : \mathbf{A} \bull \mathbf{B} = \left(\mathbf{A} \otimes \mathbf{1_k}^\textsf{T}\right) \circ \left(\mathbf{1_c}^\textsf{T} \otimes \mathbf{B}\right),
where \mathbf{A} is an r \times c matrix, \mathbf{B} is an r \times k matrix, \mathbf{1_c} is a vector of ones of length c, and \mathbf{1_k} is a vector of ones of length k; equivalently,
: \mathbf{M} \bull \mathbf{M} = \left(\mathbf{M} \otimes \mathbf{1}^\textsf{T}\right) \circ \left(\mathbf{1}^\textsf{T} \otimes \mathbf{M}\right),
where \mathbf{M} is an r \times c matrix and \mathbf{1} is a vector of ones of length c, or
: \mathbf{M} \bull \mathbf{M} = \mathbf{M} [\circ] \left(\mathbf{M} \otimes \mathbf{1}^\textsf{T}\right),
where [\circ] denotes the penetrating face product of matrices. Similarly:
: \mathbf{P} \ast \mathbf{N} = (\mathbf{P} \otimes \mathbf{1_k}) \circ (\mathbf{1_c} \otimes \mathbf{N}),
where \mathbf{P} is a c \times r matrix and \mathbf{N} is a k \times r matrix.
10. : \mathbf{W_d}\mathbf{A} = \mathbf{w} \bull \mathbf{A},
: \operatorname{vec}((\mathbf{w}^\textsf{T} \ast \mathbf{A})\mathbf{B}) = (\mathbf{B}^\textsf{T} \ast \mathbf{A})\mathbf{w} = \operatorname{vec}(\mathbf{A}(\mathbf{w} \bull \mathbf{B})) = \operatorname{vec}(\mathbf{A}\mathbf{W_d}\mathbf{B}),
: \operatorname{vec}\left(\mathbf{A}^\textsf{T} \mathbf{W_d} \mathbf{A}\right) = \left(\mathbf{A} \bull \mathbf{A}\right)^\textsf{T} \mathbf{w},
where \mathbf{w} is the vector of the diagonal elements of the diagonal matrix \mathbf{W_d} and \operatorname{vec}(\mathbf{A}) stacks the columns of \mathbf{A} on top of each other to give a vector.
11. : (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{K} \ast \mathbf{T}) = (\mathbf{A}\mathbf{B}\cdots\mathbf{C}\mathbf{K}) \circ (\mathbf{L}\mathbf{M}\cdots\mathbf{S}\mathbf{T}). Similarly:
: \begin{align} (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(c \otimes d) &= (\mathbf{A}\mathbf{B} \cdots \mathbf{C}c) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}d), \\ (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{P}c \otimes \mathbf{Q}d) &= (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{P}c) \circ (\mathbf{L}\mathbf{M}\cdots\mathbf{S}\mathbf{Q}d) \end{align}
where c and d are vectors.
12. If \mathbf{B} is a diagonal matrix and \mathbf{b} is its main diagonal:
: \operatorname{vec}(\mathbf{A} \mathbf{B} \mathbf{C}) = (\mathbf{C}^\textsf{T} \ast \mathbf{A})\mathbf{b}.
Here \operatorname{vec}(\cdot) is the column-wise vectorization operator.
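Several of the identities above are easy to verify numerically. The following NumPy sketch checks the transpose property, the action on a Kronecker product of vectors, the representation via vectors of ones, and the vec/diagonal-weighting identity (the helper names `face_split` and `col_kr` are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def face_split(A, B):
    """Face-splitting (row-wise Kronecker) product: row i is kron(A[i], B[i])."""
    return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

def col_kr(A, B):
    """Column-wise Kronecker (Khatri-Rao) product: column j is kron(A[:, j], B[:, j])."""
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

r, c, k = 3, 4, 5
A = rng.standard_normal((r, c))
B = rng.standard_normal((r, k))

# Property 1: (A • B)^T = A^T * B^T
assert np.allclose(face_split(A, B).T, col_kr(A.T, B.T))

# Property 7: (A • B)(x ⊗ y) = (Ax) ∘ (By)
x, y = rng.standard_normal(c), rng.standard_normal(k)
assert np.allclose(face_split(A, B) @ np.kron(x, y), (A @ x) * (B @ y))

# Property 9: A • B = (A ⊗ 1_k^T) ∘ (1_c^T ⊗ B)
ones_c, ones_k = np.ones((1, c)), np.ones((1, k))
assert np.allclose(face_split(A, B), np.kron(A, ones_k) * np.kron(ones_c, B))

# Property 10: vec(A^T W_d A) = (A • A)^T w, with vec stacking columns (order='F')
w = rng.standard_normal(r)
lhs = (A.T @ np.diag(w) @ A).reshape(-1, order="F")
assert np.allclose(lhs, face_split(A, A).T @ w)
print("all identities verified")
```

The explicit Python loops keep the definitions readable; for large matrices the same products can be formed with a single `np.einsum` and a reshape.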
== Examples ==

: \begin{align} &\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bullet \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left( \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \right) \left( \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \\ \end{bmatrix} \otimes \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \\ \end{bmatrix} \right) \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ast \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={} &\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bullet \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left( \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \,\otimes\, \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \\ \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={} & \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \,\circ\, \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \\ \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} . \end{align}
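The Fourier-domain identity \mathcal F\left((\mathbf{A}x) \star (\mathbf{B}y)\right) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y) can also be checked numerically. A minimal sketch, assuming \star is circular convolution of equal-length vectors and \mathcal F is the DFT matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

def circ_conv(u, v):
    """Circular convolution: (u ⋆ v)[i] = sum_j u[j] * v[(i - j) mod n]."""
    n = len(u)
    return np.array([sum(u[j] * v[(i - j) % n] for j in range(n)) for i in range(n)])

n, p = 6, 4
A = rng.standard_normal((n, p))
B = rng.standard_normal((n, p))
x = rng.standard_normal(p)
y = rng.standard_normal(p)

# The DFT matrix is symmetric, so F @ u equals np.fft.fft(u).
F = np.fft.fft(np.eye(n))

# Convolution theorem: F((Ax) ⋆ (By)) = (F A x) ∘ (F B y)
lhs = F @ circ_conv(A @ x, B @ y)
rhs = (F @ A @ x) * (F @ B @ y)
assert np.allclose(lhs, rhs)

# Face-splitting route: ((F A) • (F B))(x ⊗ y) gives the same vector
FA, FB = F @ A, F @ B
face = np.vstack([np.kron(FA[i], FB[i]) for i in range(n)])
assert np.allclose(face @ np.kron(x, y), rhs)
print("convolution identity verified")
```

In the count-sketch setting, A and B would be sparse sign matrices; any real matrices of matching shape exercise the identity equally well.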
== Theorem ==

If M = T^{(1)} \bullet \dots \bullet T^{(c)}, where T^{(1)}, \dots, T^{(c)} are independent components of a random matrix T with independent identically distributed rows T_1, \dots, T_m \in \mathbb R^d, such that
: E\left[(T_1x)^2\right] = \left\|x\right\|_2^2 and E\left[(T_1 x)^p\right]^\frac{1}{p} \le \sqrt{ap}\|x\|_2,
then for any vector x
: \left| \left\|Mx\right\|_2 - \left\|x\right\|_2 \right| < \varepsilon \left\|x\right\|_2
with probability 1 - \delta if the number of rows satisfies
: m = (4a)^{2c} \varepsilon^{-2} \log 1/\delta + (2ae)\varepsilon^{-1}(\log 1/\delta)^c.
In particular, if the entries of T are \pm 1, one can get
: m = O\left(\varepsilon^{-2}\log 1/\delta + \varepsilon^{-1}\left(\frac{1}{c}\log 1/\delta\right)^c\right),
which matches the Johnson–Lindenstrauss lemma bound of m = O\left(\varepsilon^{-2}\log 1/\delta\right) when \varepsilon is small.

== Block face-splitting product ==