
Khatri–Rao product

In mathematics, the Khatri–Rao product or block Kronecker product of two partitioned matrices \mathbf{A} and \mathbf{B} is defined as

: \mathbf{A} \ast \mathbf{B} = \left(\mathbf{A}_{ij} \otimes \mathbf{B}_{ij}\right)_{ij},

in which the ij-th block is the m_i p_i \times n_j q_j sized Kronecker product of the corresponding blocks of \mathbf{A} and \mathbf{B}; here the m \times n matrix \mathbf{A} has blocks \mathbf{A}_{ij} of size m_i \times n_j, the p \times q matrix \mathbf{B} has blocks \mathbf{B}_{ij} of size p_i \times q_j, and the number of row and column partitions of the two matrices is assumed equal.
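As a concrete illustration of the definition, here is a minimal NumPy sketch that builds the product block by block; the helper name block_khatri_rao and the nested-list block layout are conventions of this example, not a standard API.

```python
import numpy as np

def block_khatri_rao(A_blocks, B_blocks):
    # A_blocks[i][j] and B_blocks[i][j] are the (i, j) blocks of two
    # conformally partitioned matrices; block (i, j) of the Khatri-Rao
    # product is the Kronecker product of the corresponding blocks.
    return np.vstack([
        np.hstack([np.kron(Aij, Bij) for Aij, Bij in zip(A_row, B_row)])
        for A_row, B_row in zip(A_blocks, B_blocks)
    ])

# Two 4 x 4 matrices, each split into a 2 x 2 grid of 2 x 2 blocks.
A = np.arange(16.0).reshape(4, 4)
B = np.ones((4, 4))
A_blocks = [[A[:2, :2], A[:2, 2:]], [A[2:, :2], A[2:, 2:]]]
B_blocks = [[B[:2, :2], B[:2, 2:]], [B[2:, :2], B[2:, 2:]]]
print(block_khatri_rao(A_blocks, B_blocks).shape)  # (8, 8): each block is 4 x 4
```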

Column-wise Kronecker product
The column-wise Kronecker product of two matrices is a special case of the Khatri–Rao product as defined above, and may also be called the Khatri–Rao product. This product assumes the partitions of the matrices are their columns. In this case m_1 = m, p_1 = p, n = q, and n_j = q_j = 1 for each j. The resulting product is an mp \times n matrix of which each column is the Kronecker product of the corresponding columns of A and B. For example, with the columns partitioned:

: \mathbf{C} = \left[ \begin{array}{c|c|c} \mathbf{C}_1 & \mathbf{C}_2 & \mathbf{C}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right], \quad \mathbf{D} = \left[ \begin{array}{c|c|c} \mathbf{D}_1 & \mathbf{D}_2 & \mathbf{D}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{array} \right],

so that:

: \mathbf{C} \ast \mathbf{D} = \left[ \begin{array}{c|c|c} \mathbf{C}_1 \otimes \mathbf{D}_1 & \mathbf{C}_2 \otimes \mathbf{D}_2 & \mathbf{C}_3 \otimes \mathbf{D}_3 \end{array} \right] = \left[ \begin{array}{c|c|c} 1 & 8 & 21 \\ 2 & 10 & 24 \\ 3 & 12 & 27 \\ 4 & 20 & 42 \\ 8 & 25 & 48 \\ 12 & 30 & 54 \\ 7 & 32 & 63 \\ 14 & 40 & 72 \\ 21 & 48 & 81 \end{array} \right].

This column-wise version of the Khatri–Rao product is useful in linear algebra approaches to data analytical processing and in optimizing the solution of inverse problems dealing with a diagonal matrix. In 1996 the column-wise Khatri–Rao product was proposed to estimate the angles of arrival (AOAs) and delays of multipath signals and four coordinates of signal sources at a digital antenna array.
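The 9 x 3 product above can be reproduced with a short NumPy sketch; the einsum-based helper below is one possible implementation (SciPy ships an equivalent routine as scipy.linalg.khatri_rao).

```python
import numpy as np

def khatri_rao(C, D):
    # Column j of the result is kron(C[:, j], D[:, j]).
    assert C.shape[1] == D.shape[1], "operands need the same number of columns"
    return np.einsum('ij,kj->ikj', C, D).reshape(-1, C.shape[1])

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = C.T
print(khatri_rao(C, D))  # the 9 x 3 matrix shown above
```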
Face-splitting product
An alternative concept of the matrix product, which uses row-wise splitting of matrices with a given quantity of rows, was proposed by V. Slyusar in 1996. This matrix operation was named the "face-splitting product" of matrices, or the "transposed Khatri–Rao product". It is based on row-by-row Kronecker products of two matrices. Using the matrices from the previous example with the rows partitioned:

: \mathbf{C} = \begin{bmatrix} \mathbf{C}_1 \\\hline \mathbf{C}_2 \\\hline \mathbf{C}_3 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\\hline 4 & 5 & 6 \\\hline 7 & 8 & 9 \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} \mathbf{D}_1 \\\hline \mathbf{D}_2 \\\hline \mathbf{D}_3 \end{bmatrix} = \begin{bmatrix} 1 & 4 & 7 \\\hline 2 & 5 & 8 \\\hline 3 & 6 & 9 \end{bmatrix},

the result is:

: \mathbf{C} \bull \mathbf{D} = \begin{bmatrix} \mathbf{C}_1 \otimes \mathbf{D}_1 \\\hline \mathbf{C}_2 \otimes \mathbf{D}_2 \\\hline \mathbf{C}_3 \otimes \mathbf{D}_3 \end{bmatrix} = \begin{bmatrix} 1 & 4 & 7 & 2 & 8 & 14 & 3 & 12 & 21 \\\hline 8 & 20 & 32 & 10 & 25 & 40 & 12 & 30 & 48 \\\hline 21 & 42 & 63 & 24 & 48 & 72 & 27 & 54 & 81 \end{bmatrix}.
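The same einsum device applied row-wise gives a minimal sketch of the face-splitting product; the helper is illustrative, not a library routine.

```python
import numpy as np

def face_splitting(C, D):
    # Row i of the result is kron(C[i, :], D[i, :]).
    assert C.shape[0] == D.shape[0], "operands need the same number of rows"
    return np.einsum('ij,ik->ijk', C, D).reshape(C.shape[0], -1)

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = C.T
R = face_splitting(C, D)
print(R)                                          # the 3 x 9 matrix shown above
assert np.array_equal(R[0], np.kron(C[0], D[0]))  # row-wise Kronecker check
```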
Main properties
In the following properties, the operator \bull denotes the row-wise Kronecker product (face-splitting product) and the operator \ast denotes the column-wise Kronecker product.

1. Transpose:

: \left(\mathbf{A} \bull \mathbf{B}\right)^\textsf{T} = \mathbf{A}^\textsf{T} \ast \mathbf{B}^\textsf{T}.

2. Bilinearity and associativity:

: \begin{align} \mathbf{A} \bull (\mathbf{B} + \mathbf{C}) &= \mathbf{A} \bull \mathbf{B} + \mathbf{A} \bull \mathbf{C}, \\ (\mathbf{B} + \mathbf{C}) \bull \mathbf{A} &= \mathbf{B} \bull \mathbf{A} + \mathbf{C} \bull \mathbf{A}, \\ (k\mathbf{A}) \bull \mathbf{B} &= \mathbf{A} \bull (k\mathbf{B}) = k(\mathbf{A} \bull \mathbf{B}), \\ (\mathbf{A} \bull \mathbf{B}) \bull \mathbf{C} &= \mathbf{A} \bull (\mathbf{B} \bull \mathbf{C}), \end{align}

where A, B and C are matrices, and k is a scalar.

3. Commutation with a row vector:

: a \bull \mathbf{B} = \mathbf{B} \bull a,

where a is a row vector.

4. Reordering of the factors:

: \mathbf{B} \ast \mathbf{A} = \mathbf{P}\left(\mathbf{A} \ast \mathbf{B}\right),

where \mathbf{P} is a permutation matrix (the same commutation matrix reorders every pair of corresponding columns).

5. The mixed-product property:

: (\mathbf{A} \bull \mathbf{B})(\mathbf{C} \ast \mathbf{D}) = (\mathbf{A}\mathbf{C}) \circ (\mathbf{B}\mathbf{D}),

where \circ denotes the Hadamard product. In particular, for vectors c and d (since c \ast d = c \otimes d, this is the vector case of the same property):

: (\mathbf{A} \bull \mathbf{B})(c \otimes d) = (\mathbf{A}c) \circ (\mathbf{B}d).

Similarly:

: (\mathbf{A} \bull \mathbf{B})(\mathbf{M}\mathbf{N}c \otimes \mathbf{Q}\mathbf{P}d) = (\mathbf{A}\mathbf{M}\mathbf{N}c) \circ (\mathbf{B}\mathbf{Q}\mathbf{P}d).

6. Count sketch and convolution:

: \mathcal F\left((C^{(1)}x) \star (C^{(2)}y)\right) = \left((\mathcal F C^{(1)}) \bull (\mathcal F C^{(2)})\right)(x \otimes y) = (\mathcal F C^{(1)}x) \circ (\mathcal F C^{(2)}y),

where \star is vector convolution, C^{(1)}, C^{(2)} are "count sketch" matrices, and \mathcal F is the Fourier transform matrix (this result is a development of count sketch properties). This can be generalized for appropriate matrices \mathbf{A}, \mathbf{B}:

: \mathcal F\left((\mathbf{A}x) \star (\mathbf{B}y)\right) = \left((\mathcal F \mathbf{A}) \bull (\mathcal F \mathbf{B})\right)(x \otimes y) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y),

because the vector form of property 5 gives

: \left((\mathcal F \mathbf{A}) \bull (\mathcal F \mathbf{B})\right)(x \otimes y) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y),

and the convolution theorem gives

: \mathcal F\left((\mathbf{A}x) \star (\mathbf{B}y)\right) = (\mathcal F \mathbf{A}x) \circ (\mathcal F \mathbf{B}y).

7. Expression through the Kronecker and Hadamard products:

: \mathbf{A} \bull \mathbf{B} = \left(\mathbf{A} \otimes \mathbf{1_k}^\textsf{T}\right) \circ \left(\mathbf{1_c}^\textsf{T} \otimes \mathbf{B}\right),

where \mathbf{A} is an r \times c matrix, \mathbf{B} is an r \times k matrix, \mathbf{1_c} is a vector of 1's of length c, and \mathbf{1_k} is a vector of 1's of length k, or

: \mathbf{M} \bull \mathbf{M} = \left(\mathbf{M} \otimes \mathbf{1}^\textsf{T}\right) \circ \left(\mathbf{1}^\textsf{T} \otimes \mathbf{M}\right),

where \mathbf{M} is an r \times c matrix and \mathbf{1} is a vector of 1's of length c, or

: \mathbf{M} \bull \mathbf{M} = \mathbf{M} [\circ] \left(\mathbf{M} \otimes \mathbf{1}^\textsf{T}\right),

where [\circ] denotes the penetrating face product of matrices. Similarly:

: \mathbf{P} \ast \mathbf{N} = (\mathbf{P} \otimes \mathbf{1_k}) \circ (\mathbf{1_c} \otimes \mathbf{N}),

where \mathbf{P} is a c \times r matrix and \mathbf{N} is a k \times r matrix.
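Identities like the mixed-product rule (property 5) are easy to check numerically. The sketch below uses random matrices of compatible sizes (the sizes are arbitrary test choices), repeating the illustrative einsum helpers from the earlier sketches so the block is self-contained.

```python
import numpy as np

def face_splitting(X, Y):   # row-wise Kronecker product
    return np.einsum('ij,ik->ijk', X, Y).reshape(X.shape[0], -1)

def khatri_rao(X, Y):       # column-wise Kronecker product
    return np.einsum('ij,kj->ikj', X, Y).reshape(-1, X.shape[1])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((3, 5))  # same row count
C, D = rng.standard_normal((4, 6)), rng.standard_normal((5, 6))  # same column count

# Property 5: (A . B)(C * D) = (AC) o (BD)
lhs = face_splitting(A, B) @ khatri_rao(C, D)
rhs = (A @ C) * (B @ D)     # Hadamard product of the ordinary matrix products
assert np.allclose(lhs, rhs)
```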
8. For a diagonal matrix \mathbf{W_d} whose main diagonal is the vector \mathbf{w}:

: \mathbf{W_d}\mathbf{A} = \mathbf{w} \bull \mathbf{A},
: \operatorname{vec}((\mathbf{w}^\textsf{T} \ast \mathbf{A})\mathbf{B}) = (\mathbf{B}^\textsf{T} \ast \mathbf{A})\mathbf{w} = \operatorname{vec}(\mathbf{A}(\mathbf{w} \bull \mathbf{B})) = \operatorname{vec}(\mathbf{A}\mathbf{W_d}\mathbf{B}),
: \operatorname{vec}\left(\mathbf{A}^\textsf{T} \mathbf{W_d} \mathbf{A}\right) = \left(\mathbf{A} \bull \mathbf{A}\right)^\textsf{T} \mathbf{w},

where \operatorname{vec}(\mathbf{A}) means stacking the columns of a matrix \mathbf{A} on top of each other to give a vector.

9. Chains of products:

: (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{K} \ast \mathbf{T}) = (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{K}) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}\mathbf{T}).

Similarly:

: \begin{align} (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(c \otimes d) &= (\mathbf{A}\mathbf{B} \cdots \mathbf{C}c) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}d), \\ (\mathbf{A} \bull \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{P}c \otimes \mathbf{Q}d) &= (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{P}c) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}\mathbf{Q}d), \end{align}

where c and d are vectors.

10. If \mathbf{B} is a diagonal matrix and \mathbf{b} is its main diagonal:

: \operatorname{vec}(\mathbf{A}\mathbf{B}\mathbf{C}) = (\mathbf{C}^\textsf{T} \ast \mathbf{A})\mathbf{b},

where \operatorname{vec}(\cdot) is the column-wise vectorization operator.

Examples

Combining these identities, for example by the chain rule of property 9:

: \begin{align} &\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bull \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left( \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \right) \left( \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \otimes \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \right) \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ast \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={} &\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \bull \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \left( \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \,\otimes\, \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \right) \\[5pt] {}={} &\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \,\circ\, \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}. \end{align}
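Property 10 can be verified the same way; the sketch below uses NumPy's Fortran-order flattening as the column-wise vec operator.

```python
import numpy as np

def khatri_rao(X, Y):       # column-wise Kronecker product
    return np.einsum('ij,kj->ikj', X, Y).reshape(-1, X.shape[1])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
b = rng.standard_normal(4)          # main diagonal of the matrix B
C = rng.standard_normal((4, 5))

# vec(A diag(b) C) = (C^T * A) b, with vec taken column by column
lhs = (A @ np.diag(b) @ C).flatten(order='F')
rhs = khatri_rao(C.T, A) @ b
assert np.allclose(lhs, rhs)
```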
Theorem

If M = T^{(1)} \bull \dots \bull T^{(c)}, where T^{(1)}, \dots, T^{(c)} are independent copies of a random matrix T with independent identically distributed rows T_1, \dots, T_m \in \mathbb R^d, such that

: E\left[(T_1 x)^2\right] = \left\|x\right\|_2^2 \quad \text{and} \quad E\left[(T_1 x)^p\right]^\frac{1}{p} \le \sqrt{ap}\|x\|_2,

then for any vector x

: \left| \left\|Mx\right\|_2 - \left\|x\right\|_2 \right| < \varepsilon \left\|x\right\|_2

with probability 1 - \delta if the quantity of rows

: m = (4a)^{2c} \varepsilon^{-2} \log 1/\delta + (2ae)\varepsilon^{-1}(\log 1/\delta)^c.

In particular, if the entries of T are \pm 1, one gets

: m = O\left(\varepsilon^{-2}\log 1/\delta + \varepsilon^{-1}\left(\frac{1}{c}\log 1/\delta\right)^c\right),

which matches the Johnson–Lindenstrauss bound of m = O\left(\varepsilon^{-2}\log 1/\delta\right) when \varepsilon is small.
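The \pm 1 case of the theorem can be probed empirically. The Monte Carlo sketch below (dimensions and trial count are arbitrary illustration choices) takes c = 2, builds M as the face-splitting product of two independent Rademacher matrices, and scales it by 1/\sqrt{m} so that E[\|Mx\|^2] = \|x\|^2; the observed distortion stays small and shrinks as m grows.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, trials = 30, 2000, 100
x = rng.standard_normal(d ** 2)
x /= np.linalg.norm(x)              # unit vector, so the target norm is 1

errs = []
for _ in range(trials):
    T1 = rng.choice([-1.0, 1.0], size=(m, d))
    T2 = rng.choice([-1.0, 1.0], size=(m, d))
    # Row-wise Kronecker product of the two Rademacher matrices,
    # normalized so that E[||Mx||^2] = ||x||^2.
    M = np.einsum('ij,ik->ijk', T1, T2).reshape(m, -1) / np.sqrt(m)
    errs.append(abs(np.linalg.norm(M @ x) - 1.0))

print(np.mean(errs), np.max(errs))  # small mean and worst-case distortion
```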
Block face-splitting product
[Figure: transposed block face-splitting product in the context of a multi-face radar model.]
Applications
The face-splitting product and the block face-splitting product are used in the tensor-matrix theory of digital antenna arrays. These operations are also used in:

• artificial intelligence and machine learning systems, to minimize convolution and tensor sketch operations,
• the generalized linear array model in statistics.