Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors \mathbf x = \begin{bmatrix}1\\-3\\4\end{bmatrix}\quad\mbox{and}\quad \mathbf y = \begin{bmatrix}-20\\60\\-80\end{bmatrix}. These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar \lambda such that \mathbf x = \lambda \mathbf y. In this case, \lambda = -\frac{1}{20}.

Now consider the linear transformation of n-dimensional vectors defined by an n-by-n matrix A, A \mathbf v = \mathbf w, or \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \\ \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} where, for each row, w_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n = \sum_{j = 1}^n A_{ij} v_j. If it occurs that \mathbf v and \mathbf w are scalar multiples, that is if A \mathbf v = \mathbf w = \lambda \mathbf v, then \mathbf v is an eigenvector of the linear transformation A and the scale factor \lambda is the eigenvalue corresponding to that eigenvector. The equation A \mathbf v = \lambda \mathbf v is the eigenvalue equation for the matrix A. It can be stated equivalently as (A - \lambda I) \mathbf v = \mathbf 0, where I is the n-by-n identity matrix and \mathbf 0 is the zero vector.
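As a numerical illustration, the eigenvalue equation can be checked directly by comparing A\mathbf v with \lambda\mathbf v. The following is a minimal sketch using NumPy; the matrix is the 2-by-2 example discussed in the brief example later in this section, and the candidate vector is an illustrative choice.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])         # candidate eigenvector

w = A @ v                        # w = A v = [3, 3]
lam = w[0] / v[0]                # scale factor, assuming v[0] != 0
print(np.allclose(w, lam * v))   # True: A v = 3 v, so v is an eigenvector
```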
Eigenvalues and the characteristic polynomial

The equation (A - \lambda I) \mathbf v = \mathbf 0 has a nonzero solution \mathbf v if and only if the determinant of the matrix (A - \lambda I) is zero. Therefore, the eigenvalues of A are the values of \lambda that satisfy the equation \det(A - \lambda I) = 0. Using the Leibniz formula for determinants, the left-hand side of this equation is a polynomial function of the variable \lambda, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (-1)^n \lambda^n. This polynomial is called the characteristic polynomial of A, and the equation \det(A - \lambda I) = 0 is called the characteristic equation or the secular equation of A.

The characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, has at most n complex roots, which can be found by factoring the characteristic polynomial or numerically by root finding. The characteristic polynomial can be factored into the product of n linear terms, \det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda), where the complex numbers \lambda_1, \lambda_2, \ldots, \lambda_n, each of which is an eigenvalue, may not all be distinct. (The number of times an eigenvalue appears in this factorization is known as its algebraic multiplicity.)
As a brief example, which is described in more detail in the examples section later, consider the matrix A = \begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}. Taking the determinant of (A - \lambda I), the characteristic polynomial of A is \det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 3 - 4\lambda + \lambda^2. Setting the characteristic polynomial equal to zero, it has roots at \lambda = 1 and \lambda = 3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of \mathbf v in the equation (A - \lambda I)\mathbf v = \mathbf 0. In this example, the eigenvectors are any nonzero scalar multiples of \mathbf v_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad \mathbf v_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers, or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.

The non-real roots of a polynomial with real coefficients can be grouped into pairs of complex conjugates, the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and appear in complex conjugate pairs.
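This discussion can be reproduced numerically. The following sketch (assuming NumPy; the rotation matrix at the end is an illustrative choice) computes the coefficients of the characteristic polynomial with np.poly, recovers the eigenvalues as its roots, and shows a real matrix of even order whose eigenvalues form a complex conjugate pair.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(lambda*I - A), highest degree first:
coeffs = np.poly(A)
print(coeffs)                # [ 1. -4.  3.]  i.e. lambda^2 - 4*lambda + 3
print(np.roots(coeffs))      # [3. 1.]  the eigenvalues

# A real matrix of even order may have no real eigenvalues:
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree rotation
print(np.linalg.eigvals(R))  # [0.+1.j 0.-1.j]  a complex conjugate pair
```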
Spectrum of a matrix

The spectrum of a matrix is the list of its eigenvalues, repeated according to multiplicity; in an alternative notation, the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue, known as the spectral radius of the matrix.
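As a minimal sketch (assuming NumPy), the spectral radius can be computed directly from the eigenvalues:

```python
import numpy as np

def spectral_radius(A):
    """Maximum absolute value over the spectrum of A."""
    return max(abs(lam) for lam in np.linalg.eigvals(A))

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(spectral_radius(A))    # 3.0, the larger of |1| and |3|
```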
Algebraic multiplicity

Let \lambda_i be an eigenvalue of an n-by-n matrix A. The algebraic multiplicity \mu_A(\lambda_i) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (\lambda - \lambda_i)^k evenly divides that polynomial.

Suppose a matrix A has dimension n and d \le n distinct eigenvalues. Whereas the factorization above writes the characteristic polynomial of A as the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms, each corresponding to a distinct eigenvalue and raised to the power of its algebraic multiplicity, \det(A - \lambda I) = (\lambda_1 - \lambda)^{\mu_A(\lambda_1)}(\lambda_2 - \lambda)^{\mu_A(\lambda_2)} \cdots (\lambda_d - \lambda)^{\mu_A(\lambda_d)}. If d = n then the right-hand side is the product of n linear terms, and this is the same as the earlier factorization. The size of each eigenvalue's algebraic multiplicity is related to the dimension n as \begin{align} 1 &\leq \mu_A(\lambda_i) \leq n, \\ \mu_A &= \sum_{i=1}^d \mu_A\left(\lambda_i\right) = n. \end{align} If \mu_A(\lambda_i) = 1, then \lambda_i is said to be a simple eigenvalue. If \mu_A(\lambda_i) equals the geometric multiplicity of \lambda_i, \gamma_A(\lambda_i), defined in the next section, then \lambda_i is said to be a semisimple eigenvalue.
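Numerically, algebraic multiplicities can be read off by grouping the n roots returned by an eigenvalue solver. The following is a minimal sketch assuming NumPy; the 4-by-4 matrix is the repeated-eigenvalue example from the examples section later, and rounding is a crude way to group nearly equal floating-point roots.

```python
import numpy as np
from collections import Counter

# Lower triangular, so the eigenvalues are the diagonal entries.
A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])

# Algebraic multiplicity = number of repetitions among the n roots.
eigvals = np.linalg.eigvals(A)
multiplicities = Counter(np.round(eigvals.real, 8))
print(multiplicities)        # Counter({2.0: 2, 3.0: 2}): both are double roots
```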
Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue \lambda of the n-by-n matrix A, define the set E to be all vectors \mathbf v that satisfy the equation (A - \lambda I)\mathbf v = \mathbf 0, E = \left\{\mathbf{v} : \left(A - \lambda I\right) \mathbf{v} = \mathbf{0}\right\}. On one hand, this set is precisely the kernel or nullspace of the matrix (A - \lambda I). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with \lambda. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with \lambda, and E equals the nullspace of (A - \lambda I). E is called the eigenspace or characteristic space of A associated with \lambda. In general \lambda is a complex number and the eigenvectors are complex n-by-1 matrices (column vectors). Because every nullspace is a linear subspace of the domain, E is a linear subspace of \mathbb{C}^n.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors \mathbf u and \mathbf v belong to the set E, written \mathbf u, \mathbf v \in E, then (\mathbf u + \mathbf v) \in E, or equivalently A(\mathbf u + \mathbf v) = \lambda(\mathbf u + \mathbf v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if \mathbf v \in E and \alpha is a complex number, then (\alpha \mathbf v) \in E, or equivalently A(\alpha \mathbf v) = \lambda(\alpha \mathbf v). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as \mathbf u + \mathbf v and \alpha \mathbf v are not zero, they are also eigenvectors of A associated with \lambda.

The dimension of the eigenspace E associated with \lambda, or equivalently the maximum number of linearly independent eigenvectors associated with \lambda, is referred to as the eigenvalue's geometric multiplicity \gamma_A(\lambda).
Because E is also the nullspace of (A - \lambda I), the geometric multiplicity of \lambda is the dimension of the nullspace of (A - \lambda I), also called the nullity of (A - \lambda I), which relates to the dimension and rank of A as \gamma_A(\lambda) = n - \operatorname{rank}(A - \lambda I).

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. 1 \le \gamma_A(\lambda) \le \mu_A(\lambda) \le n

To prove the inequality \gamma_A(\lambda) \le \mu_A(\lambda), let \xi be an eigenvalue of A and set B = A - \xi I, so that the eigenspace associated with \xi is the nullspace of B. Let k = \gamma_A(\xi) be the dimension of that eigenspace; then \operatorname{rank}(B) = n - k and the last k rows of the reduced echelon form of B are zero. Thus, there is an invertible matrix E, coming from Gauss-Jordan reduction, such that EB=\begin{bmatrix}*&*\\\mathbf 0_{k\times (n-k)}&\mathbf 0_{k\times k}\end{bmatrix}. Since E(A - \lambda I) = EB + (\xi - \lambda)E, the last k rows of E(A - \lambda I) are (\xi - \lambda) times the last k rows of E. Therefore the polynomial (\xi - \lambda)^k evenly divides the polynomial \det(E(A - \lambda I)), because of basic properties of determinants (homogeneity in each row). On the other hand, \det(E(A - \lambda I)) = \det(E)\det(A - \lambda I), so (\xi - \lambda)^k divides \det(A - \lambda I), and so the algebraic multiplicity of \xi is at least k = \gamma_A(\xi).

Suppose A has d \le n distinct eigenvalues \lambda_1, \ldots, \lambda_d, where the geometric multiplicity of \lambda_i is \gamma_A(\lambda_i). The total geometric multiplicity of A, \gamma_A = \sum_{i=1}^d \gamma_A(\lambda_i), \quad d \le \gamma_A \le n, is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If \gamma_A = n, then (as the sketch after this list illustrates numerically):
• The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space \mathbb{C}^n.
• A basis of \mathbb{C}^n can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis.
• Any vector in \mathbb{C}^n can be written as a linear combination of eigenvectors of A.
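The following minimal sketch (assuming NumPy) illustrates the case \gamma_A = n for the 2-by-2 example matrix used earlier: the eigenvectors form an eigenbasis, and an arbitrary vector can be expressed in eigenvector coordinates by solving a linear system.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, Q = np.linalg.eig(A)          # columns of Q are eigenvectors

# gamma_A = n here, so the eigenvectors form a basis (an eigenbasis):
n = A.shape[0]
print(np.linalg.matrix_rank(Q) == n)   # True

# Any vector can be written as a linear combination of eigenvectors:
x = np.array([3.0, -1.0])
c = np.linalg.solve(Q, x)              # coordinates of x in the eigenbasis
print(np.allclose(Q @ c, x))           # True
```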
Additional properties

Let A be an arbitrary n-by-n matrix of complex numbers with eigenvalues \lambda_1, \ldots, \lambda_n. Each eigenvalue appears \mu_A(\lambda_i) times in this list, where \mu_A(\lambda_i) is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues (the first few are verified numerically in the sketch after this list):
• The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues, \operatorname{tr}(A) = \sum_{i=1}^n a_{ii} = \sum_{i=1}^n \lambda_i = \lambda_1 + \lambda_2 + \cdots + \lambda_n.
• The determinant of A is the product of all its eigenvalues, \det(A) = \prod_{i=1}^n \lambda_i = \lambda_1\lambda_2 \cdots \lambda_n.
• The eigenvalues of the kth power of A, i.e., the eigenvalues of A^k, for any positive integer k, are \lambda_1^k, \ldots, \lambda_n^k.
• The matrix A is invertible if and only if every eigenvalue is nonzero.
• If A is invertible, then the eigenvalues of A^{-1} are \frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_n}, and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original up to a scalar factor, the eigenvalues share the same algebraic multiplicity.
• If A is equal to its conjugate transpose A^*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
• If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
• If A is unitary, every eigenvalue has absolute value |\lambda_i| = 1.
• If A is an n \times n matrix and \lambda_1, \ldots, \lambda_n are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are \lambda_1 + 1, \ldots, \lambda_n + 1. Moreover, if \alpha \in \mathbb{C}, the eigenvalues of \alpha I + A are \lambda_1 + \alpha, \ldots, \lambda_n + \alpha. More generally, for a polynomial P the eigenvalues of the matrix P(A) are P(\lambda_1), \ldots, P(\lambda_n).
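A brief numerical check of the first few properties in this list (a sketch assuming NumPy; the matrix is again the 2-by-2 example from earlier):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), lam.sum()))         # trace = sum of eigenvalues
print(np.isclose(np.linalg.det(A), lam.prod()))   # determinant = product of eigenvalues

# Eigenvalues of A^2 are lam**2; eigenvalues of A^-1 are 1/lam:
print(np.allclose(np.sort(np.linalg.eigvals(A @ A)), np.sort(lam**2)))
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1 / lam)))
```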
Left and right eigenvectors

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right-multiplies the n-by-n matrix A in the defining equation, A \mathbf v = \lambda \mathbf v.

The eigenvalue and eigenvector problem can also be defined for row vectors that left-multiply the matrix A. In this formulation, the defining equation is \mathbf u A = \kappa \mathbf u, where \kappa is a scalar and \mathbf u is a 1-by-n matrix. Any row vector \mathbf u satisfying this equation is called a left eigenvector of A, and \kappa is its associated eigenvalue. Taking the transpose of this equation, A^\textsf{T} \mathbf u^\textsf{T} = \kappa \mathbf u^\textsf{T}. Comparing this equation to the defining equation for right eigenvectors, it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of A^\textsf{T}, with the same eigenvalue. Furthermore, since the characteristic polynomial of A^\textsf{T} is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
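NumPy's eig computes right eigenvectors; per the transpose relation above, left eigenvectors can be obtained as right eigenvectors of A^T. A minimal sketch (the non-symmetric matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

right_vals, V = np.linalg.eig(A)     # columns of V: right eigenvectors, A v = lambda v
left_vals, U = np.linalg.eig(A.T)    # columns of U: transposed left eigenvectors

print(np.sort(right_vals), np.sort(left_vals))   # identical spectra: [2. 3.] [2. 3.]

u = U[:, 0]                          # a left eigenvector, used as a row vector
print(np.allclose(u @ A, left_vals[0] * u))      # True: u A = kappa u
```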
Eigenvalues of the transpose

A matrix A has the same eigenvalues as its transpose, as can be seen directly as follows. Assume \lambda is an eigenvalue of an n \times n matrix A with eigenvector x. Then Ax=\lambda x, or (A-\lambda I) x=0. Thus the columns of (A-\lambda I) are linearly dependent, i.e., the rank of the matrix (A-\lambda I) is less than n. But as column rank equals row rank, the rows of (A-\lambda I) are not linearly independent either. Hence, there are numbers y_1,y_2,\ldots,y_n, not all zero, such that \sum_{i=1}^n y_iR_i=0, where the R_i are the rows of (A-\lambda I). Let y=(y_1,y_2,\ldots,y_n)^T; then y^T (A-\lambda I)=0, or taking the transpose, (A^T-\lambda I)y=0. Thus, \lambda is also an eigenvalue of A^T. Moreover, since \operatorname{rank}(A - \lambda I) = \operatorname{rank}(A^T - \lambda I), the nullities coincide, so the eigenvalue \lambda has the same geometric multiplicity for A and A^T.
Diagonalization and the eigendecomposition

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors \mathbf v_1, \mathbf v_2, \ldots, \mathbf v_n with associated eigenvalues \lambda_1, \lambda_2, \ldots, \lambda_n. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Q = \begin{bmatrix} \mathbf v_1 & \mathbf v_2 & \cdots & \mathbf v_n \end{bmatrix}. Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, AQ = \begin{bmatrix} \lambda_1 \mathbf v_1 & \lambda_2 \mathbf v_2 & \cdots & \lambda_n \mathbf v_n \end{bmatrix}. With this in mind, define a diagonal matrix \Lambda where each diagonal element \Lambda_{ii} is the eigenvalue associated with the ith column of Q. Then AQ = Q\Lambda. Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q^{-1}, A = Q\Lambda Q^{-1}, or by instead left multiplying both sides by Q^{-1}, Q^{-1}AQ = \Lambda.

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition, and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix \Lambda, or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and \Lambda represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as \Lambda.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P^{-1}AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
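A minimal numerical sketch of the eigendecomposition (assuming NumPy; the matrix is the 3-by-3 example from the examples section later):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

eigvals, Q = np.linalg.eig(A)    # columns of Q: linearly independent eigenvectors
Lam = np.diag(eigvals)           # diagonal matrix of eigenvalues

print(np.allclose(A @ Q, Q @ Lam))                   # A Q = Q Lambda
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))    # A = Q Lambda Q^-1
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))    # Q^-1 A Q = Lambda
```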
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors, and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.
Variational characterization

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form \mathbf x^\textsf{T} H \mathbf x / \mathbf x^\textsf{T} \mathbf x (the Rayleigh quotient). A value of \mathbf x that realizes that maximum is an eigenvector.
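A sketch of this characterization (assuming NumPy; the random symmetric matrix and sample size are illustrative choices): the Rayleigh quotient of random vectors never exceeds the largest eigenvalue, and it attains that value at a corresponding eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
H = (H + H.T) / 2                     # symmetrize: a real Hermitian matrix

w, V = np.linalg.eigh(H)              # ascending eigenvalues w, eigenvectors V
lam_max = w[-1]

# Rayleigh quotient x^T H x / x^T x, sampled at random nonzero vectors:
xs = rng.standard_normal((1000, 4))
rayleigh = np.einsum('ij,jk,ik->i', xs, H, xs) / np.einsum('ij,ij->i', xs, xs)
print(rayleigh.max() <= lam_max + 1e-12)          # True: never exceeds lam_max

v = V[:, -1]                          # eigenvector for the largest eigenvalue
print(np.isclose(v @ H @ v / (v @ v), lam_max))   # True: the maximum is attained
```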
Matrix examples

Two-dimensional matrix example

Consider the matrix A = \begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}. The eigenvectors of this transformation satisfy the eigenvalue equation, and the values of \lambda for which the determinant of the matrix (A - \lambda I) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A, \begin{align} \det(A - \lambda I) &= \left|\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right| = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} \\[6pt] &= 3 - 4\lambda + \lambda^2 \\[6pt] &= (\lambda - 3)(\lambda - 1). \end{align} Setting the characteristic polynomial equal to zero, it has roots at \lambda = 1 and \lambda = 3, which are the two eigenvalues of A.

For \lambda = 1, the eigenvalue equation becomes \begin{align} (A - I)\mathbf{v}_{\lambda=1} &= \begin{bmatrix} 1 & 1\\ 1 & 1\end{bmatrix} \begin{bmatrix}v_1 \\ v_2\end{bmatrix} = \begin{bmatrix}0 \\ 0\end{bmatrix} \\ 1v_1 + 1v_2 &= 0 \end{align} Any nonzero vector with v_2 = -v_1 solves this equation. Therefore, \mathbf{v}_{\lambda=1} = \begin{bmatrix} v_1 \\ -v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} is an eigenvector of A corresponding to \lambda = 1, as is any scalar multiple of this vector.

For \lambda = 3, the eigenvalue equation becomes \begin{align} (A - 3I)\mathbf{v}_{\lambda=3} &= \begin{bmatrix} -1 & \hphantom{-}1\\ \hphantom{-}1 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ -1v_1 + 1v_2 &= 0;\\ 1v_1 - 1v_2 &= 0 \end{align} Any nonzero vector with v_2 = v_1 solves this equation. Therefore, \mathbf v_{\lambda=3} = \begin{bmatrix} v_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} is an eigenvector of A corresponding to \lambda = 3, as is any scalar multiple of this vector.

Thus, the vectors \mathbf v_{\lambda=1} and \mathbf v_{\lambda=3} are eigenvectors of A associated with the eigenvalues \lambda = 1 and \lambda = 3, respectively.
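This worked example can be confirmed with a standard eigenvalue solver. A sketch assuming NumPy; note that np.linalg.eig returns unit-norm eigenvectors in an unspecified order, so only directions are compared (dividing by the first component, which is nonzero here).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                        # [3. 1.] (ordering not guaranteed)

for lam, v in zip(eigvals, eigvecs.T):
    print(lam, v / v[0])              # [1. 1.] for lam=3 and [1. -1.] for lam=1
```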
Three-dimensional matrix example

Consider the matrix A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}. The characteristic polynomial of A is \begin{align} \det(A - \lambda I) &= \left|\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\right| = \begin{vmatrix} 2 - \lambda & 0 & 0 \\ 0 & 3 - \lambda & 4 \\ 0 & 4 & 9 - \lambda \end{vmatrix}, \\[6pt] &= (2 - \lambda)\bigl[(3 - \lambda)(9 - \lambda) - 16\bigr] = -\lambda^3 + 14\lambda^2 - 35\lambda + 22. \end{align} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors \begin{bmatrix}1 & 0 & 0\end{bmatrix}^\textsf{T}, \begin{bmatrix}0 & 2 & -1\end{bmatrix}^\textsf{T}, and \begin{bmatrix}0 & 1 & 2\end{bmatrix}^\textsf{T}, or any nonzero multiple thereof.
Three-dimensional matrix example with complex eigenvalues

Consider the cyclic permutation matrix A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{bmatrix}. This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 - \lambda^3, whose roots are \begin{align} \lambda_1 &= 1 \\ \lambda_2 &= -\frac{1}{2} + i \frac{\sqrt{3}}{2} \\ \lambda_3 &= \lambda_2^* = -\frac{1}{2} - i \frac{\sqrt{3}}{2} \end{align} where i is the imaginary unit with i^2 = -1.

For the real eigenvalue \lambda_1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, A \begin{bmatrix} 5\\ 5\\ 5 \end{bmatrix} = \begin{bmatrix} 5\\ 5\\ 5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5\\ 5\\ 5 \end{bmatrix}. For the complex conjugate pair of eigenvalues, note that \lambda_2\lambda_3 = 1, \quad \lambda_2^2 = \lambda_3, \quad \lambda_3^2 = \lambda_2. Then A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_2 \\ \lambda_3 \\ 1 \end{bmatrix} = \lambda_2 \cdot \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix}, and A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \lambda_3 \\ \lambda_2 \\ 1 \end{bmatrix} = \lambda_3 \cdot \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}. Therefore, the other two eigenvectors of A are complex and are \mathbf v_{\lambda_2} = \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} and \mathbf v_{\lambda_3} = \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} with eigenvalues \lambda_2 and \lambda_3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, \mathbf v_{\lambda_2} = \mathbf v_{\lambda_3}^*.
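A numerical check of this example (a sketch assuming NumPy): the eigenvalues are the three cube roots of unity, and (1, \lambda_2, \lambda_3) is indeed an eigenvector for \lambda_2.

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

print(np.sort_complex(np.linalg.eigvals(A)))   # the three cube roots of unity

lam2 = -0.5 + 1j * np.sqrt(3) / 2
v2 = np.array([1.0, lam2, lam2**2])            # (1, lam2, lam3), since lam3 = lam2**2
print(np.allclose(A @ v2, lam2 * v2))          # True: A v2 = lam2 v2
```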
Diagonal matrix example

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix A = \begin{bmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3\end{bmatrix}. The characteristic polynomial of A is \det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda), which has the roots \lambda_1 = 1, \lambda_2 = 2, and \lambda_3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.

Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, \mathbf v_{\lambda_1} = \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix},\quad \mathbf v_{\lambda_2} = \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix},\quad \mathbf v_{\lambda_3} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}, respectively, as well as scalar multiples of these vectors.
Triangular matrix example

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix, A = \begin{bmatrix} 1 & 0 & 0\\ 1 & 2 & 0\\ 2 & 3 & 3 \end{bmatrix}. The characteristic polynomial of A is \det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda), which has the roots \lambda_1 = 1, \lambda_2 = 2, and \lambda_3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, \mathbf v_{\lambda_1} = \begin{bmatrix} 1\\ -1\\ \frac{1}{2}\end{bmatrix},\quad \mathbf v_{\lambda_2} = \begin{bmatrix} 0\\ 1\\ -3\end{bmatrix},\quad \mathbf v_{\lambda_3} = \begin{bmatrix} 0\\ 0\\ 1\end{bmatrix}, respectively, as well as scalar multiples of these vectors.
Matrix with repeated eigenvalues example

As in the previous example, the lower triangular matrix A = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix}, has a characteristic polynomial that is the product of its diagonal elements, \det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 0 & 0 & 0 \\ 1 & 2- \lambda & 0 & 0 \\ 0 & 1 & 3- \lambda & 0 \\ 0 & 0 & 1 & 3- \lambda \end{vmatrix} = (2 - \lambda)^2(3 - \lambda)^2. The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words, they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is \mu_A = 4 = n, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector \begin{bmatrix}0 & 1 & -1 & 1\end{bmatrix}^\textsf{T} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector \begin{bmatrix}0 & 0 & 0 & 1\end{bmatrix}^\textsf{T}. The total geometric multiplicity \gamma_A is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in an earlier section.
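The multiplicities in this example can be verified numerically via the rank formula \gamma_A(\lambda) = n - \operatorname{rank}(A - \lambda I) from the eigenspaces section (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]

for lam in (2.0, 3.0):
    # Geometric multiplicity: dimension of the nullspace of (A - lam I).
    gamma = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, gamma)     # 1 for each eigenvalue, although each is a double root
```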
Eigenvector-eigenvalue identity

For a Hermitian matrix A, the norm squared of the \alphath component of a normalized eigenvector \mathbf v_i can be calculated using only the eigenvalues of A and the eigenvalues of the corresponding minor matrix, |v_{i\alpha}|^2 = \frac{\prod_{k}{(\lambda_i(A)-\lambda_k(A_\alpha))}}{\prod_{k \neq i}{(\lambda_i(A)-\lambda_k(A))}}, where A_\alpha is the submatrix formed by removing the \alphath row and column from the original matrix. This identity also extends to diagonalizable matrices and has been rediscovered many times in the literature.
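The identity lends itself to a direct numerical check. A sketch assuming NumPy; the random Hermitian matrix and the index choices i and alpha are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                  # random Hermitian matrix

lam, V = np.linalg.eigh(A)                # eigenvalues lam[k], unit eigenvectors V[:, k]

i, alpha = 2, 0                           # arbitrary eigenvector/component indices
A_minor = np.delete(np.delete(A, alpha, axis=0), alpha, axis=1)
mu = np.linalg.eigvalsh(A_minor)          # eigenvalues of the minor A_alpha

lhs = abs(V[alpha, i])**2
rhs = np.prod(lam[i] - mu) / np.prod(lam[i] - np.delete(lam, i))
print(np.isclose(lhs, rhs))               # True
```

Eigenvalues and eigenfunctions of differential operators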