== Induced partial ordering ==
For arbitrary square matrices M, N we write M \ge N if M - N \ge 0, i.e., M - N is positive semi-definite. This defines a partial ordering on the set of all square matrices. One can similarly define a strict partial ordering M > N. The ordering is called the Loewner order.
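The Loewner order can be checked numerically by testing whether the difference is positive semidefinite. A minimal NumPy sketch (the matrices here are arbitrary illustrative examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
N = A @ A.T                      # positive semidefinite by construction
M = N + np.eye(3)                # M - N = I > 0, so M > N in the Loewner order

def loewner_geq(M, N, tol=1e-10):
    """Return True if M - N is positive semidefinite, i.e. M >= N."""
    return bool(np.all(np.linalg.eigvalsh(M - N) >= -tol))

print(loewner_geq(M, N))  # True
print(loewner_geq(N, M))  # False: N - M = -I is negative definite
```

Note that the Loewner order is only partial: for generic M and N, neither M \ge N nor N \ge M need hold.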
== Inverse of positive definite matrix ==
Every positive definite matrix is invertible and its inverse is also positive definite. If M \geq N > 0 then N^{-1} \geq M^{-1} > 0. Moreover, by the min-max theorem, the k-th largest eigenvalue of M is greater than or equal to the k-th largest eigenvalue of N.
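Both consequences can be verified numerically. A sketch with NumPy (the matrices are illustrative; `eigvalsh` returns eigenvalues in ascending order, so a pairwise comparison checks the eigenvalue ordering):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
N = B @ B.T + np.eye(4)          # positive definite
M = N + B @ B.T                  # M - N = B B^T >= 0, hence M >= N > 0

# N^{-1} - M^{-1} is positive semidefinite:
diff = np.linalg.inv(N) - np.linalg.inv(M)
print(np.all(np.linalg.eigvalsh(diff) >= -1e-10))  # True

# Each eigenvalue of M dominates the corresponding eigenvalue of N:
print(np.all(np.linalg.eigvalsh(M) >= np.linalg.eigvalsh(N) - 1e-10))  # True
```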
== Scaling ==
If M is positive definite and r > 0 is a real number, then r M is positive definite.
== Addition ==
If M and N are positive-definite, then the sum M + N is also positive-definite.
== Trace ==
The diagonal entries m_{ii} of a positive-semidefinite matrix are real and non-negative. As a consequence the trace satisfies \operatorname{tr}(M) \ge 0. Furthermore, since every principal sub-matrix (in particular, 2-by-2) is positive semidefinite,
\left|m_{ij}\right| \leq \sqrt{m_{ii}m_{jj}} \quad \forall i, j
and thus, when n \ge 1,
\max_{i,j} \left|m_{ij}\right| \leq \max_i m_{ii}.
An n \times n Hermitian matrix M is positive definite if it satisfies the following trace inequalities:
\operatorname{tr}(M) > 0 \quad \mathrm{and} \quad \frac{(\operatorname{tr}(M))^2}{\operatorname{tr}(M^2)} > n-1 .
Another important result is that for any positive-semidefinite matrices M and N, \operatorname{tr}(MN) \ge 0. This follows by writing \operatorname{tr}(MN) = \operatorname{tr}(M^\frac{1}{2}N M^\frac{1}{2}). The matrix M^\frac{1}{2}N M^\frac{1}{2} is positive-semidefinite and thus has non-negative eigenvalues, whose sum, the trace, is therefore also non-negative.
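These trace facts are easy to verify numerically. A small NumPy sketch on arbitrary PSD matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3, 3))
M = A @ A.T                      # positive semidefinite
N = B @ B.T                      # positive semidefinite

print(np.trace(M) >= 0)          # True: non-negative diagonal entries

# |m_ij| <= sqrt(m_ii m_jj) for every pair of indices:
d = np.diag(M)
print(np.all(np.abs(M) <= np.sqrt(np.outer(d, d)) + 1e-10))  # True

# tr(MN) >= 0 for PSD M, N:
print(np.trace(M @ N) >= 0)      # True
```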
== Hadamard product ==
If M, N \geq 0, then although the ordinary product M N is not necessarily positive semidefinite, the Hadamard product is: M \circ N \geq 0 (this result is often called the Schur product theorem). Regarding the Hadamard product of two positive semidefinite matrices M = (m_{ij}) \geq 0, N \geq 0, there are two notable inequalities:
• Oppenheim's inequality: \det(M \circ N) \geq \det (N) \prod\nolimits_i m_{ii}.
• \det(M \circ N) \geq \det(M) \det(N).
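A quick numerical illustration of the Schur product theorem and Oppenheim's inequality (a sketch; the matrices are arbitrary random PSD examples):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((2, 3, 3))
M = A @ A.T                      # positive semidefinite
N = B @ B.T                      # positive semidefinite

H = M * N                        # Hadamard (entrywise) product
print(np.all(np.linalg.eigvalsh(H) >= -1e-10))       # True: H is PSD

# Oppenheim's inequality: det(M o N) >= det(N) * prod_i m_ii
print(np.linalg.det(H) >= np.linalg.det(N) * np.prod(np.diag(M)) - 1e-10)
```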
== Kronecker product ==
If M, N \geq 0, then although M N is not necessarily positive semidefinite, the Kronecker product is: M \otimes N \geq 0.
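This follows because the eigenvalues of M \otimes N are the pairwise products of the (non-negative) eigenvalues of M and N. A NumPy sketch on illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.standard_normal((2, 2, 2))
M = A @ A.T                      # positive semidefinite
N = B @ B.T                      # positive semidefinite

K = np.kron(M, N)                # 4x4 Kronecker product
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))  # True: K is PSD
```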
== Frobenius product ==
If M, N \geq 0, then although M N is not necessarily positive semidefinite, the Frobenius inner product is non-negative: M : N \geq 0 (Lancaster–Tismenetsky, The Theory of Matrices, p. 218).
== Convexity ==
The set of positive semidefinite symmetric matrices is convex. That is, if M and N are positive semidefinite, then for any \alpha between 0 and 1, \alpha M + \left(1 - \alpha\right) N is also positive semidefinite. For any vector \mathbf{x}:
\mathbf{x}^\mathsf{T} \left(\alpha M + \left(1 - \alpha\right)N\right)\mathbf{x} = \alpha \mathbf{x}^\mathsf{T} M\mathbf{x} + (1 - \alpha) \mathbf{x}^\mathsf{T} N\mathbf{x} \geq 0.
This property guarantees that semidefinite programming problems converge to a globally optimal solution.
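A numerical sketch of convexity, checking several convex combinations of two arbitrary PSD matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
A, B = rng.standard_normal((2, 3, 3))
M = A @ A.T                      # positive semidefinite
N = B @ B.T                      # positive semidefinite

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    C = alpha * M + (1 - alpha) * N
    assert np.all(np.linalg.eigvalsh(C) >= -1e-10)
print("all convex combinations are PSD")
```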
== Relation with cosine ==
The positive-definiteness of a matrix A expresses that the angle \theta between any vector \mathbf{x} and its image A \mathbf{x} is always acute, \theta < \pi / 2:
\cos\theta = \frac{ \mathbf{x}^\mathsf{T} A\mathbf{x} }{\lVert \mathbf{x} \rVert \lVert A\mathbf{x} \rVert} = \frac{\langle \mathbf{x}, A\mathbf{x} \rangle}{\lVert \mathbf{x} \rVert \lVert A\mathbf{x} \rVert} , \qquad \theta = \theta(\mathbf{x}, A \mathbf{x}) \equiv \widehat{\left(\mathbf{x},A\mathbf{x}\right)} \equiv \text{the angle between } \mathbf{x} \text{ and } A\mathbf{x}.
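A sketch of this geometric fact, sampling random vectors against an illustrative positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((3, 3))
A = B @ B.T + np.eye(3)          # positive definite

for _ in range(100):
    x = rng.standard_normal(3)
    cos_theta = (x @ A @ x) / (np.linalg.norm(x) * np.linalg.norm(A @ x))
    assert cos_theta > 0         # x^T A x > 0 forces theta < pi/2
print("angle is acute for every sample")
```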
== Further properties ==
• If M is a symmetric Toeplitz matrix, i.e. the entries m_{ij} are given as a function of their absolute index differences, m_{ij} = h(|i-j|), and the strict inequality h(0) > \sum_{j \neq 0} \left|h(j)\right| holds, then M is strictly positive definite.
• Let M > 0 and N Hermitian. If MN + NM \ge 0 (resp., MN + NM > 0) then N \ge 0 (resp., N > 0).
• If M > 0 is real, then there is a \delta > 0 such that M > \delta I, where I is the identity matrix.
• If M_k denotes the leading k \times k minor, \det\left(M_k\right)/\det\left(M_{k-1}\right) is the k-th pivot during LU decomposition.
• A matrix is negative definite if its k-th order leading principal minor is negative when k is odd, and positive when k is even.
• If M is a real positive definite matrix, then there exists a positive real number m such that for every vector \mathbf{v}, \mathbf{v}^\mathsf{T} M\mathbf{v} \geq m\|\mathbf{v}\|_2^{2}.
• A Hermitian matrix is positive semidefinite if and only if all of its principal minors are nonnegative. It is however not enough to consider the leading principal minors only, as can be checked on the diagonal matrix with entries 0 and -1.
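The last bullet can be demonstrated concretely: a diagonal matrix such as diag(0, −1) has non-negative leading principal minors (0 and 0) yet is not positive semidefinite. A NumPy sketch:

```python
import numpy as np

D = np.diag([0.0, -1.0])

# Leading principal minors: det of the top-left 1x1 and 2x2 blocks.
leading_minors = [np.linalg.det(D[:k, :k]) for k in (1, 2)]
print(all(m >= 0 for m in leading_minors))   # True: minors miss the problem
print(np.all(np.linalg.eigvalsh(D) >= 0))    # False: D is not PSD
```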
== Block matrices and submatrices ==
A positive 2n \times 2n matrix may also be defined by blocks:
M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}
where each block is n \times n. By applying the positivity condition, it immediately follows that A and D are Hermitian, and C = B^*. We have that \mathbf{z}^* M\mathbf{z} \ge 0 for all complex \mathbf{z}, and in particular for \mathbf{z} = [\mathbf{v}, 0]^\mathsf{T}. Then
\begin{bmatrix} \mathbf{v}^* & 0 \end{bmatrix} \begin{bmatrix} A & B \\ B^* & D \end{bmatrix} \begin{bmatrix} \mathbf{v} \\ 0 \end{bmatrix} = \mathbf{v}^* A\mathbf{v} \ge 0.
A similar argument can be applied to D, and thus we conclude that both A and D must be positive definite. The argument can be extended to show that any principal submatrix of M is itself positive definite. Converse results can be proved with stronger conditions on the blocks, for instance, using the Schur complement.
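The claim about principal submatrices is easy to check numerically: select the same subset of rows and columns of a PSD matrix and inspect the eigenvalues. A sketch on an illustrative matrix:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4))
M = A @ A.T                      # 4x4 positive semidefinite

idx = [0, 2]                     # arbitrary index subset
sub = M[np.ix_(idx, idx)]        # principal submatrix on rows/cols 0 and 2
print(np.all(np.linalg.eigvalsh(sub) >= -1e-10))  # True: submatrix is PSD
```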
== Local extrema ==
A general quadratic form f(\mathbf{x}) on n real variables x_1, \ldots, x_n can always be written as \mathbf{x}^\mathsf{T} M \mathbf{x} where \mathbf{x} is the column vector with those variables, and M is a symmetric real matrix. Therefore, the matrix being positive definite means that f has a unique minimum (zero) when \mathbf{x} is zero, and is strictly positive for any other \mathbf{x}. More generally, a twice-differentiable real function f on n real variables has a local minimum at arguments x_1, \ldots, x_n if its gradient is zero and its Hessian (the matrix of all second derivatives) is positive semi-definite at that point. Similar statements can be made for negative definite and semi-definite matrices.
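As a concrete instance, f(x, y) = x^2 + xy + y^2 has gradient zero at the origin and constant Hessian \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, which is positive definite, so the origin is a minimum. A sketch:

```python
import numpy as np

# Hessian of f(x, y) = x^2 + x*y + y^2 at its critical point (the origin).
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.all(np.linalg.eigvalsh(H) > 0))  # True: eigenvalues are 1 and 3

# Quadratic-form check: f(v) = 0.5 * v^T H v > 0 for sampled nonzero v.
rng = np.random.default_rng(9)
for _ in range(100):
    v = rng.standard_normal(2)
    assert v @ H @ v > 0
```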
== Covariance ==
In statistics, the covariance matrix of a multivariate probability distribution is always positive semi-definite; and it is positive definite unless one variable is an exact linear function of the others. Conversely, every positive semi-definite matrix is the covariance matrix of some multivariate distribution.

== Extension for non-Hermitian square matrices ==