A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950), who used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on spectral theory.

Let v be a strictly positive eigenvector corresponding to r, and suppose w is another eigenvector with the same eigenvalue, not proportional to v. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Let α be the maximal value such that u = v − αw is non-negative; then one of the components of u is zero, otherwise α would not be maximal. Since w is not proportional to v, the vector u is a non-zero eigenvector. It is non-negative, hence by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, as above, at least one component of u is zero. The contradiction implies that no such w exists, so the eigenspace for r is one-dimensional.

Case: there are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r or to the other eigenvalues of the same absolute value. If there were such a Jordan block, then the infinity norm ‖(A/r)^k‖∞ would tend to infinity as k → ∞, but that contradicts the existence of the positive eigenvector.

Assume r = 1 (otherwise replace A by A/r). Let v be a Perron–Frobenius strictly positive eigenvector, so Av = v; then

: \|v\|_{\infty} = \|A^k v\|_{\infty} \ge \|A^k\|_{\infty} \min_i (v_i), ~~\Rightarrow~~ \|A^k\|_{\infty} \le \|v\|_{\infty}/\min_i (v_i),

so ‖A^k‖∞ is bounded for all k. This gives another proof that there are no eigenvalues of greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue of absolute value 1 (in particular for the Perron–Frobenius one), because the existence of a Jordan block implies that ‖A^k‖∞ is unbounded. For a two-by-two Jordan block:

: J^k = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}^k = \begin{pmatrix} \lambda^k & k\lambda^{k-1} \\ 0 & \lambda^k \end{pmatrix},

hence ‖J^k‖∞ = 1 + k (for |λ| = 1), which tends to infinity with k. Since J^k = C^{−1} A^k C, one has ‖A^k‖ ≥ ‖J^k‖ / (‖C^{−1}‖ ‖C‖), so ‖A^k‖ also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.

Combining the two claims above shows that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of non-primitive matrices there exist other eigenvalues with the same absolute value as r. The same claim is true for them, but requires more work.
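The unbounded growth of ‖J^k‖∞ for a Jordan block with |λ| = 1 can be checked numerically. Below is a minimal sketch in pure Python for the 2×2 case with λ = 1; the helper names `mat_mul` and `inf_norm` are illustrative, not from any particular library:

```python
# Sketch: powers of a 2x2 Jordan block J = [[lam, 1], [0, lam]] with |lam| = 1.
# Since J^k = [[lam^k, k*lam^(k-1)], [0, lam^k]], the maximum-row-sum
# (infinity) norm equals 1 + k, which is unbounded in k.

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def inf_norm(X):
    """Infinity norm: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in X)

lam = 1.0
J = [[lam, 1.0], [0.0, lam]]

Jk = [[1.0, 0.0], [0.0, 1.0]]  # start from the identity
norms = []
for k in range(1, 6):
    Jk = mat_mul(Jk, J)        # Jk now holds J^k
    norms.append(inf_norm(Jk))

print(norms)  # grows like 1 + k: [2.0, 3.0, 4.0, 5.0, 6.0]
```

The strictly increasing sequence of norms is exactly the behavior the proof rules out for a matrix possessing a positive eigenvector.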
===No other non-negative eigenvectors===
Given a positive (or, more generally, an irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A. Other eigenvectors must contain negative or complex components, since eigenvectors for different eigenvalues are orthogonal in some sense; two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, and the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.

More precisely, assume there exists an eigenpair (λ, y) for A such that the vector y is non-negative and non-zero, and let (r, x) be the left Perron–Frobenius eigenpair for A (i.e. x is an eigenvector for A^T). Then

: r x^T y = (x^T A) y = x^T (A y) = \lambda x^T y,

and since x is strictly positive, x^T y > 0, so r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.
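A small numerical illustration of the claim, assuming the symmetric test matrix A = [[2, 1], [1, 2]] (eigenvalues 3 and 1; the matrix and helper name are chosen for this sketch only):

```python
# Sketch: for A = [[2, 1], [1, 2]] the Perron-Frobenius eigenvalue is r = 3
# with strictly positive eigenvector v = (1, 1).  The other eigenvalue, 1,
# has eigenvector w = (1, -1), which necessarily has a negative component.

A = [[2.0, 1.0], [1.0, 2.0]]

def mat_vec(M, x):
    """2x2 matrix-vector product."""
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

v = [1.0, 1.0]    # Perron-Frobenius eigenvector, strictly positive
w = [1.0, -1.0]   # eigenvector for the other eigenvalue

assert mat_vec(A, v) == [3.0 * vi for vi in v]   # A v = 3 v
assert mat_vec(A, w) == [1.0 * wi for wi in w]   # A w = 1 w
assert min(w) < 0  # an eigenvector for a different eigenvalue changes sign
```

Any eigenvector not proportional to v here must change sign, in line with the uniqueness statement above.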
===Collatz–Wielandt formula===
Given a positive (or, more generally, an irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r.

For the proof, denote the maximum of f by R; the task is to show R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, consider an arbitrary non-negative non-zero vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now let w be the positive left eigenvector for A for the Perron–Frobenius eigenvalue r. Then

: \xi w^T x = w^T \xi x \le w^T (Ax) = (w^T A)x = r w^T x.

Since w is positive and x is non-negative and non-zero, w^T x > 0; hence f(x) = ξ ≤ r, which implies R ≤ r.
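The Collatz–Wielandt function is easy to evaluate directly. A minimal sketch, assuming the test matrix A = [[2, 1], [1, 2]] with Perron–Frobenius eigenvalue r = 3 (the function name `f` follows the text; everything else is illustrative):

```python
# Sketch of the Collatz-Wielandt function f(x) = min_{i: x_i != 0} [Ax]_i / x_i
# for A = [[2, 1], [1, 2]], whose Perron-Frobenius eigenvalue is r = 3.

A = [[2.0, 1.0], [1.0, 2.0]]

def f(x):
    """Collatz-Wielandt function for a non-negative, non-zero vector x."""
    Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    return min(Ax[i] / x[i] for i in range(2) if x[i] != 0)

r = 3.0
assert f([1.0, 1.0]) == r    # maximum attained at the Perron-Frobenius vector
assert f([1.0, 0.0]) <= r    # f(x) <= r for every admissible x
assert f([2.0, 1.0]) <= r
```

The maximum over all admissible x is attained exactly at the Perron–Frobenius eigenvector, as the formula asserts.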
===Perron projection as a limit: A^k/r^k===
Let A be a positive (or, more generally, a primitive) matrix, and let r be its Perron–Frobenius eigenvalue.

# The limit of A^k/r^k for k → ∞ exists; denote it by P.
# P is a projection operator: P^2 = P, which commutes with A: AP = PA.
# The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for P^T, by the Perron–Frobenius eigenvector w for A^T).
# P = vw^T, where v, w are normalized such that w^T v = 1.
# Hence P is a positive operator.

Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. Actually the claims above (except claim 5) are valid for any matrix
M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices, as above.)

If M is diagonalizable, then M is conjugate to a diagonal matrix with eigenvalues r_1, ..., r_n on the diagonal (denote r_1 = r). The matrix M^k/r^k is then conjugate to the diagonal matrix with entries 1, (r_2/r)^k, ..., (r_n/r)^k, which tends to the diagonal matrix with entries (1, 0, 0, ..., 0) as k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).

The projection and commutativity properties are elementary corollaries of the definition: M M^k/r^k = (M^k/r^k) M, and P^2 = lim M^{2k}/r^{2k} = P. The third fact is also elementary: M(Pu) = M lim (M^k/r^k) u = r lim (M^{k+1}/r^{k+1}) u, so taking the limit yields M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.

Denote by v the r-eigenvector for M (and by w the one for M^T). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of w^T. So P takes the form avw^T for some a, and hence its trace equals aw^T v. The trace of a projector equals the dimension of its image; it was proved above that the image of P is at most one-dimensional, and from the definition one sees that P acts identically on the r-eigenvector for M, so the image is exactly one-dimensional and the trace of P equals 1. Choosing the normalization w^T v = 1 then implies P = vw^T.
===Inequalities for Perron–Frobenius eigenvalue===
For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality

: r \; \le \; \max_i \sum_j a_{ij}.

This is not specific to non-negative matrices: for any matrix A with an eigenvalue \lambda it is true that |\lambda| \le \max_i \sum_j |a_{ij}|. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: any matrix induced norm satisfies \|A\| \ge |\lambda| for any eigenvalue \lambda, because if x is a corresponding eigenvector then \|A\| \ge \|Ax\|/\|x\| = \|\lambda x\|/\|x\| = |\lambda|. The infinity norm of a matrix is the maximum of its row sums:

: \left \| A \right \| _\infty = \max_{1 \leq i \leq m} \sum_{j=1}^n | a_{ij} |.

Hence the desired inequality is exactly \|A\|_\infty \ge |\lambda| applied to the non-negative matrix A.

Another inequality is:

: \min_i \sum_j a_{ij} \; \le \; r.

This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say w_i) is 1. Then r = (Aw)_i ≥ the sum of the entries in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.
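Both row-sum bounds can be checked numerically. A minimal sketch, assuming the positive test matrix A = [[1, 2], [3, 4]] (row sums 3 and 7, exact eigenvalue r = (5 + √33)/2 ≈ 5.372) and a hand-rolled power iteration with a fixed, illustrative iteration count:

```python
# Sketch: row-sum bounds  min_i sum_j a_ij  <=  r  <=  max_i sum_j a_ij,
# checked on the positive matrix A = [[1, 2], [3, 4]] via power iteration.

A = [[1.0, 2.0], [3.0, 4.0]]

row_sums = [sum(row) for row in A]  # [3.0, 7.0]

# Estimate r by power iteration (fixed iteration count for illustration).
x = [1.0, 1.0]
for _ in range(100):
    x = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    s = max(x)
    x = [xi / s for xi in x]        # renormalize so max component is 1

# After convergence, (Ax)_i / x_i is (nearly) the same for every i.
r_est = max(sum(A[i][j] * x[j] for j in range(2)) / x[i] for i in range(2))

print(min(row_sums) <= r_est <= max(row_sums))  # True
```

The estimate lands strictly between the minimum and maximum row sums, as the two inequalities require.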
===Further proofs===

====Perron projection====
The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection, and it enjoys the following property: the Perron projection of an irreducible non-negative square matrix is a positive matrix.

Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix, then the algebraic and geometric multiplicities of its Perron root are both one. Also, if P is its Perron projection, then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1, then it can be decomposed as P ⊕ (1 − P)A, so that A^n = P + (1 − P)A^n. As n increases, the second of these terms decays to zero, leaving P as the limit of A^n as n → ∞.

The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates, then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form: here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless they retain their mutual orthogonality, which is what facilitates the decomposition.
====Peripheral projection====
The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive, but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A from decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.
====Cyclicity====
Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection, then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R^2, ..., R^{h−1} represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula

: h^{-1}\sum^h_{k=1}\lambda^{-k}R^k.

All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some routine work is still needed to establish the cyclic properties (6)–(8), but it is essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^n and R^n is A^n − R^n = (1 − P)A^n, representing the transients of A^n, which eventually decay to zero. P may be computed as the limit of A^{nh} as n → ∞.

==Counterexamples==