=== Basic properties ===

• The sum and difference of two symmetric matrices are again symmetric.
• This is not always true for the product: given symmetric matrices A and B, the product AB is symmetric if and only if A and B commute, i.e., if AB=BA (see the sketch below).
• For any integer n, A^n is symmetric if A is symmetric (for negative n this requires A to be invertible).
• The rank of a symmetric matrix A is equal to the number of non-zero eigenvalues of A.
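As a quick numerical illustration of the product rule, here is a minimal NumPy sketch (the matrices below are arbitrary examples, not taken from the text): the product of two symmetric matrices is symmetric exactly when they commute.

```python
import numpy as np

def is_symmetric(M, tol=1e-12):
    """Check whether a matrix equals its transpose up to a tolerance."""
    return np.allclose(M, M.T, atol=tol)

# Two symmetric matrices that do NOT commute: AB is not symmetric.
A = np.array([[1.0, 2.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(is_symmetric(A), is_symmetric(B))            # True True
print(np.allclose(A @ B, B @ A))                   # False
print(is_symmetric(A @ B))                         # False

# A symmetric matrix that DOES commute with A (a polynomial in A always does):
C = A @ A + 3 * A
print(is_symmetric(C), np.allclose(A @ C, C @ A))  # True True
print(is_symmetric(A @ C))                         # True
```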
=== Decomposition into symmetric and skew-symmetric ===

Any square matrix can be written uniquely as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let \mathrm{Mat}_n denote the space of n \times n matrices. If \mathrm{Sym}_n denotes the space of n \times n symmetric matrices and \mathrm{Skew}_n the space of n \times n skew-symmetric matrices, then \mathrm{Mat}_n = \mathrm{Sym}_n + \mathrm{Skew}_n and \mathrm{Sym}_n \cap \mathrm{Skew}_n = \{0\}, i.e. \mathrm{Mat}_n = \mathrm{Sym}_n \oplus \mathrm{Skew}_n, where \oplus denotes the direct sum. If X \in \mathrm{Mat}_n, then X = \frac{1}{2}\left(X + X^\textsf{T}\right) + \frac{1}{2}\left(X - X^\textsf{T}\right). Notice that \frac{1}{2}\left(X + X^\textsf{T}\right) \in \mathrm{Sym}_n and \frac{1}{2}\left(X - X^\textsf{T}\right) \in \mathrm{Skew}_n. This is true for every square matrix X with entries from any field whose characteristic is different from 2.
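A minimal NumPy sketch of this decomposition (the matrix X below is an arbitrary example, not taken from the text):

```python
import numpy as np

X = np.array([[1.0, 7.0, 3.0],
              [2.0, 5.0, 8.0],
              [4.0, 6.0, 9.0]])     # an arbitrary square matrix

S = (X + X.T) / 2                   # symmetric part, in Sym_n
K = (X - X.T) / 2                   # skew-symmetric part, in Skew_n

assert np.allclose(S, S.T)          # S is symmetric
assert np.allclose(K, -K.T)         # K is skew-symmetric
assert np.allclose(S + K, X)        # X = S + K, and this splitting is unique
```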
A symmetric n \times n matrix is determined by \tfrac{1}{2}n(n+1) scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by \tfrac{1}{2}n(n-1) scalars (the number of entries above the main diagonal).
=== Matrix congruent to a symmetric matrix ===

Any matrix congruent to a symmetric matrix is again symmetric: if X is a symmetric matrix, then so is A X A^{\mathrm T} for any matrix A.
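Indeed, using \left(MN\right)^{\mathrm T} = N^{\mathrm T} M^{\mathrm T} and X^{\mathrm T} = X, one checks directly that \left(A X A^{\mathrm T}\right)^{\mathrm T} = \left(A^{\mathrm T}\right)^{\mathrm T} X^{\mathrm T} A^{\mathrm T} = A X A^{\mathrm T}.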
=== Symmetry implies normality ===

A (real-valued) symmetric matrix is necessarily a normal matrix, since A^{\mathrm T} = A gives A^{\mathrm T} A = A^2 = A A^{\mathrm T}.
=== Real symmetric matrices ===

Denote by \langle \cdot,\cdot \rangle the standard inner product on \mathbb{R}^n. The real n \times n matrix A is symmetric if and only if \langle Ax, y \rangle = \langle x, Ay \rangle \quad \forall x, y \in \mathbb{R}^n. Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry: each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.
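To see this equivalence, write the inner product in matrix form: \langle Ax, y \rangle = (Ax)^{\mathrm T} y = x^{\mathrm T} A^{\mathrm T} y, while \langle x, Ay \rangle = x^{\mathrm T} A y; these agree for all x, y \in \mathbb{R}^n exactly when A^{\mathrm T} = A.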
The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix A there exists a real orthogonal matrix Q such that D = Q^{\mathrm T} A Q is a diagonal matrix. Every real symmetric matrix is thus, up to a choice of orthonormal basis, a diagonal matrix. If A and B are n \times n real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix: there exists a basis of \mathbb{R}^n such that every element of the basis is an eigenvector for both A and B.

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix D above, and therefore D is uniquely determined by A up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.
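A minimal NumPy sketch of the spectral theorem in action (the matrix A below is an arbitrary example): numpy.linalg.eigh returns the real eigenvalues and an orthogonal matrix of eigenvectors of a real symmetric matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # a real symmetric matrix

eigenvalues, Q = np.linalg.eigh(A)     # Q is orthogonal; its columns are eigenvectors
D = Q.T @ A @ Q                        # conjugating by Q diagonalizes A

assert np.allclose(Q.T @ Q, np.eye(3))        # Q^T Q = I (orthogonality)
assert np.allclose(D, np.diag(eigenvalues))   # Q^T A Q = D is diagonal
assert np.all(np.isreal(eigenvalues))         # the eigenvalues are real
```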
=== Complex symmetric matrices ===

A complex symmetric matrix can be 'diagonalized' using a unitary matrix: if A is a complex symmetric matrix, there is a unitary matrix U such that U A U^{\mathrm T} is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.

In fact, the matrix B = A^{\dagger} A is Hermitian and positive semi-definite, so there is a unitary matrix V such that V^{\dagger} B V is diagonal with non-negative real entries. Thus C = V^{\mathrm T} A V is complex symmetric with C^{\dagger} C real. Writing C = X + iY with X and Y real symmetric matrices, C^{\dagger} C = X^2 + Y^2 + i(XY - YX); since C^{\dagger} C is real, XY = YX. Since X and Y commute, there is a real orthogonal matrix W such that both W X W^{\mathrm T} and W Y W^{\mathrm T} are diagonal. Setting U = W V^{\mathrm T} (a unitary matrix), the matrix U A U^{\mathrm T} is complex diagonal. Pre-multiplying U by a suitable diagonal unitary matrix (which preserves the unitarity of U), the diagonal entries of U A U^{\mathrm T} can be made real and non-negative as desired. To construct this matrix, write the diagonal matrix as U A U^{\mathrm T} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \dots, r_n e^{i\theta_n}). The matrix we seek is then D = \operatorname{diag}(e^{-i\theta_1/2}, e^{-i\theta_2/2}, \dots, e^{-i\theta_n/2}): clearly D U A U^{\mathrm T} D = \operatorname{diag}(r_1, r_2, \dots, r_n) as desired, so we take U' = DU. Since the squares of the non-negative diagonal entries r_i are the eigenvalues of A^{\dagger} A, they coincide with the singular values of A.
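The construction above translates almost line by line into code. The following is a minimal NumPy sketch rather than a robust implementation: it assumes a generic input, and it simultaneously diagonalizes the commuting matrices X and Y by eigendecomposing a random real combination of them, a trick that works for generic A but can fail in degenerate cases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (A + A.T) / 2                           # an arbitrary complex symmetric matrix

B = A.conj().T @ A                          # B = A^dagger A, Hermitian positive semi-definite
_, V = np.linalg.eigh(B)                    # unitary V with V^dagger B V diagonal

C = V.T @ A @ V                             # complex symmetric, C^dagger C real
X, Y = C.real, C.imag                       # real symmetric matrices with XY = YX

# Simultaneously diagonalize X and Y: for generic t, the eigenvectors of
# X + t*Y form a common orthonormal eigenbasis of X and Y.
t = rng.normal()
_, Weig = np.linalg.eigh(X + t * Y)
W = Weig.T                                  # W X W^T and W Y W^T are diagonal

U = W @ V.T                                 # unitary; U A U^T is complex diagonal
d = np.diagonal(U @ A @ U.T)
D = np.diag(np.exp(-1j * np.angle(d) / 2))  # phase correction, diagonal unitary
U = D @ U                                   # now U A U^T = diag(|d|), real and non-negative

assert np.allclose(U @ U.conj().T, np.eye(n))          # U is unitary
assert np.allclose(U @ A @ U.T, np.diag(np.abs(d)))    # real non-negative diagonal
assert np.allclose(np.sort(np.abs(d)),
                   np.sort(np.linalg.svd(A, compute_uv=False)))  # singular values of A
```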
(Note that, regarding the eigendecomposition of a complex symmetric matrix A, the Jordan normal form of A need not be diagonal, so A may not be diagonalizable by any similarity transformation.)

== Decomposition ==