== Definition ==

The transpose of a matrix \mathbf{A}, denoted by \mathbf{A}^\text{T}, \mathbf{A}', \mathbf{A}^\text{tr}, or \mathbf{A}^\text{t}, may be constructed by any of the following methods:
• Reflect \mathbf{A} over its main diagonal (which runs from the top left to the bottom right) to obtain \mathbf{A}^\text{T}
• Write the rows of \mathbf{A} as the columns of \mathbf{A}^\text{T}
• Write the columns of \mathbf{A} as the rows of \mathbf{A}^\text{T}
Formally, the i-th row, j-th column element of \mathbf{A}^\text{T} is the j-th row, i-th column element of \mathbf{A}:
: \left[\mathbf{A}^\text{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.
If \mathbf{A} is an m \times n matrix, then \mathbf{A}^\text{T} is an n \times m matrix.
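As an illustrative sketch (ours, not part of the article), the defining relation \left[\mathbf{A}^\text{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji} translates directly into code; a minimal Python function for a matrix represented as a list of rows:

```python
def transpose(A):
    # Return the transpose of A: row j of the result is column j of A,
    # so that transpose(A)[j][i] == A[i][j].
    m, n = len(A), len(A[0])        # A is m x n; the result is n x m
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2],
     [3, 4],
     [5, 6]]                        # a 3 x 2 matrix
print(transpose(A))                 # [[1, 3, 5], [2, 4, 6]], a 2 x 3 matrix
```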
== Matrix definitions involving transposition ==

A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, \mathbf{A} is symmetric if
: \mathbf{A}^{\text{T}} = \mathbf{A}.
A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, \mathbf{A} is skew-symmetric if
: \mathbf{A}^{\text{T}} = -\mathbf{A}.
A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, \mathbf{A} is Hermitian if
: \mathbf{A}^{\text{T}} = \overline{\mathbf{A}}.
A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, \mathbf{A} is skew-Hermitian if
: \mathbf{A}^{\text{T}} = -\overline{\mathbf{A}}.
A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, \mathbf{A} is orthogonal if
: \mathbf{A}^{\text{T}} = \mathbf{A}^{-1}.
A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, \mathbf{A} is unitary if
: \mathbf{A}^{\text{T}} = \overline{\mathbf{A}^{-1}}.
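As a hedged illustration (assuming NumPy; the example matrices are ours, not the article's), these definitions can be checked numerically:

```python
import numpy as np

S = np.array([[1, 2],
              [2, 3]])                     # symmetric
K = np.array([[0, 2],
              [-2, 0]])                    # skew-symmetric
H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])                # Hermitian
t = 0.3                                    # any rotation matrix is orthogonal
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

assert np.array_equal(S.T, S)              # S^T == S
assert np.array_equal(K.T, -K)             # K^T == -K
assert np.array_equal(H.T, np.conj(H))     # H^T == conj(H)
assert np.allclose(Q.T, np.linalg.inv(Q))  # Q^T == Q^-1
```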
== Examples ==

• \begin{bmatrix} 1 & 2 \end{bmatrix}^{\text{T}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}
• \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{\text{T}} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}
• \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\text{T}} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}
== Properties ==

Let \mathbf{A} and \mathbf{B} be matrices and c be a scalar.
• \left(\mathbf{A}^\text{T} \right)^\text{T} = \mathbf{A}.
• : The operation of taking the transpose is an involution (self-inverse).
• \left(\mathbf{A} + \mathbf{B}\right)^\text{T} = \mathbf{A}^\text{T} + \mathbf{B}^\text{T}.
• : The transpose respects addition.
• \left(c \mathbf{A}\right)^\text{T} = c (\mathbf{A}^\text{T}).
• : The transpose of a scalar is the same scalar. Together with the preceding property, this implies that the transpose is a linear map from the space of m \times n matrices to the space of n \times m matrices.
• \left(\mathbf{A B}\right)^\text{T} = \mathbf{B}^\text{T} \mathbf{A}^\text{T}.
• : The order of the factors reverses (this and several other identities here are checked numerically in the sketch after this list). By induction, this result extends to the general case of multiple matrices, so
• :: \left(\mathbf{A}_1 \mathbf{A}_2 \cdots \mathbf{A}_k\right)^\text{T} = \mathbf{A}_k^\text{T} \cdots \mathbf{A}_2^\text{T} \mathbf{A}_1^\text{T}.
• \det \left(\mathbf{A}^\text{T}\right) = \det(\mathbf{A}).
• : The determinant of a square matrix is the same as the determinant of its transpose.
• The dot product of two column vectors \mathbf{a} and \mathbf{b} can be computed as the single entry of the matrix product
• : \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\text{T}} \mathbf{b}.
• If \mathbf{A} has only real entries, then \mathbf{A}^\text{T} \mathbf{A} is a positive-semidefinite matrix.
• \left(\mathbf{A}^\text{T} \right)^{-1} = \left(\mathbf{A}^{-1} \right)^\text{T}.
• : The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix. The notation \mathbf{A}^{-\text{T}} is sometimes used to represent either of these equivalent expressions.
• If \mathbf{A} is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose, since they share the same characteristic polynomial. This can also be seen directly; see Eigenvalues of Transpose.
• \left(\mathbf A\mathbf a\right) \cdot \mathbf b = \mathbf a \cdot \left(\mathbf A^\text{T}\mathbf b\right) for two column vectors \mathbf a, \mathbf b and the standard dot product.
• Over any field k, a square matrix \mathbf{A} is similar to \mathbf{A}^\text{T}.
• : This implies that \mathbf{A} and \mathbf{A}^\text{T} have the same invariant factors, which implies they share the same minimal polynomial, characteristic polynomial, and eigenvalues, among other properties.
• : A proof of this property uses the following two observations.
• :* Let \mathbf{A} and \mathbf{B} be n\times n matrices over some base field k and let L be a field extension of k. If \mathbf{A} and \mathbf{B} are similar as matrices over L, then they are similar over k. In particular this applies when L is the algebraic closure of k.
• :* If \mathbf{A} is a matrix over an algebraically closed field in Jordan normal form with respect to some basis, then \mathbf{A} is similar to \mathbf{A}^\text{T}. This further reduces to proving the same fact when \mathbf{A} is a single Jordan block, which is a straightforward exercise.
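As a hedged numerical sketch (assuming NumPy; the random matrices and tolerances are ours, not the article's), several of the identities above can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(3)

assert np.allclose((A + B).T, A.T + B.T)                    # respects addition
assert np.allclose((A @ B).T, B.T @ A.T)                    # factors reverse
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))     # same determinant
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)  # inverse of transpose
assert np.isclose((A @ a) @ b, a @ (A.T @ b))               # (Aa)·b == a·(A^T b)
assert np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12)        # A^T A is PSD
# A and A^T share a characteristic polynomial, hence the same eigenvalues:
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(A.T)))
```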
== Products ==

If \mathbf{A} is an m \times n matrix and \mathbf{A}^\text{T} is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: \mathbf{A} \mathbf{A}^\text{T} is m \times m and \mathbf{A}^\text{T} \mathbf{A} is n \times n. Furthermore, these products are symmetric matrices. Indeed, the matrix product \mathbf{A} \mathbf{A}^\text{T} has entries that are the inner product of a row of \mathbf{A} with a column of \mathbf{A}^\text{T}. But the columns of \mathbf{A}^\text{T} are the rows of \mathbf{A}, so each entry corresponds to the inner product of two rows of \mathbf{A}. If p_{ij} is the entry in row i and column j of the product, it is obtained from rows i and j of \mathbf{A}. The entry p_{ji} is also obtained from these rows, thus p_{ij} = p_{ji}, and the product matrix (p_{ij}) is symmetric. Similarly, the product \mathbf{A}^\text{T} \mathbf{A} is a symmetric matrix. A quick proof of the symmetry of \mathbf{A} \mathbf{A}^\text{T} results from the fact that it is its own transpose:
: \left(\mathbf{A} \mathbf{A}^\text{T}\right)^\text{T} = \left(\mathbf{A}^\text{T}\right)^\text{T} \mathbf{A}^\text{T} = \mathbf{A} \mathbf{A}^\text{T}.
== Implementation of matrix transposition on computers ==

On a
computer, one can often avoid explicitly transposing a matrix in
memory by simply accessing the same data in a different order. For example,
software libraries for
linear algebra, such as
BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
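As a minimal sketch of this idea (ours, not from the article): reading a row-major buffer with the index roles swapped yields a "transposed view" without moving any data, which is in spirit what a BLAS-style transpose flag requests:

```python
def get(a, n_cols, i, j, transposed=False):
    # Read entry (i, j) of a matrix stored row-major in the flat list a;
    # with transposed=True, the same buffer is read as the transpose,
    # since entry (i, j) of A^T is entry (j, i) of A.
    if transposed:
        i, j = j, i
    return a[i * n_cols + j]

a = [1, 2, 3,
     4, 5, 6]                            # 2 x 3 matrix A, row-major
print(get(a, 3, 1, 2))                   # A[1][2] == 6
print(get(a, 3, 2, 1, transposed=True))  # (A^T)[2][1] == A[1][2] == 6
```

NumPy's ndarray.T behaves analogously at the library level, returning a strided view of the same buffer rather than a copy.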
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a
fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing
memory locality. Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n \times m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n \neq m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
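The square case is easy (swap a_{ij} with a_{ji} across the main diagonal); the rectangular case is where the cited difficulty lies. The following is a minimal sketch (ours, not drawn from the research literature summarized above) of the classical cycle-following approach for a flat row-major buffer:

```python
def transpose_inplace(a, m, n):
    # In-place transpose of an m x n matrix stored row-major in the flat
    # list a, using O(1) extra storage: element i moves to (i*m) % (m*n - 1),
    # and each cycle of that permutation is rotated exactly once, starting
    # from its smallest index. Simple, but not the fastest known scheme.
    last = m * n - 1
    if last <= 1:
        return
    for start in range(1, last):
        # Process a cycle only from its smallest index, so it runs once.
        nxt = (start * m) % last
        while nxt > start:
            nxt = (nxt * m) % last
        if nxt < start:
            continue
        # Rotate the cycle, carrying each value to its transposed position.
        carried = a[start]
        nxt = (start * m) % last
        while nxt != start:
            a[nxt], carried = carried, a[nxt]
            nxt = (nxt * m) % last
        a[start] = carried

a = [1, 2, 3,
     4, 5, 6]               # 2 x 3 matrix, row-major
transpose_inplace(a, 2, 3)
print(a)                    # [1, 4, 2, 5, 3, 6] -- the 3 x 2 transpose
```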
== Transposes of linear maps and bilinear forms ==