The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to that document. PageRank works on the assumption that a page is important if many other important pages link to it. This means the more quality backlinks a page has, the higher its PageRank score.
==Simplified algorithm==

Assume a small universe of four web pages: A, B, C, and D. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25.

The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links.

If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.

:PR(A) = PR(B) + PR(C) + PR(D).

Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value (0.125) to page A and the other half (0.125) to page C. Page C would transfer all of its existing value (0.25) to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458.

:PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}.

In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L( ).

:PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}.

In the general case, the PageRank value for any page u can be expressed as:

:PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)},

i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set B_u (the set containing all pages linking to page u), divided by the number L(v) of links from page v.
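The hand calculation above can be checked with a short script (a sketch; the link structure is the example's, with B linking to A and C, C linking to A, and D linking to all three):

```python
# One PageRank iteration for the example graph. Each page starts at 0.25;
# a page passes its value split evenly over its outbound links.
links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
pr = {page: 0.25 for page in links}

new_pr = {page: 0.0 for page in links}
for page, outlinks in links.items():
    for target in outlinks:
        new_pr[target] += pr[page] / len(outlinks)

print(round(new_pr["A"], 3))  # 0.25/2 + 0.25/1 + 0.25/3 ≈ 0.458
```

This reproduces the 0.458 figure from the text: 0.125 from B, 0.25 from C, and roughly 0.083 from D.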
==Damping factor==

The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factor d. The probability that they instead jump to any random page is 1 - d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.

With damping, the PageRank of a page such as A becomes

:PR(A) = \frac{1-d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right),

where N is the total number of pages. A second variant omits the division by N:

:PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right).

Google's founders, in their original paper, support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages. In practice, the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations.

Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
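The damped update PR(A) = (1-d)/N + d·(PR(B)/L(B) + PR(C)/L(C) + PR(D)/L(D)) can be sketched for the four-page example (the starting values 0.25 and outlink counts are those of the example; the variable names are illustrative):

```python
d = 0.85  # damping factor
N = 4     # number of pages in the example

# Current PageRank of the pages linking to A, with their outlink counts:
# B has 2 outlinks, C has 1, D has 3.
pr = {"B": 0.25, "C": 0.25, "D": 0.25}
backlinks = [("B", 2), ("C", 1), ("D", 3)]

pr_a = (1 - d) / N + d * sum(pr[p] / l for p, l in backlinks)
print(pr_a)  # the undamped 0.458 is pulled toward the uniform value 1/N
```

Compared with the undamped iteration, part of A's score now comes from the "random jump" term (1-d)/N rather than from its backlinks.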
==Computation==

PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method; the basic mathematical operations performed in the two methods are identical.
===Iterative===

At t = 0, an initial probability distribution is assumed, usually

:PR(p_i; 0) = \frac{1}{N},

where N is the total number of pages, and PR(p_i; 0) is the PageRank of page p_i at time 0. At each time step, the computation, as detailed above, yields

:PR(p_i; t+1) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j; t)}{L(p_j)},

where d is the damping factor and M(p_i) is the set of pages linking to p_i, or in matrix notation

{{NumBlk|:|\mathbf{R}(t+1) = d \mathcal{M}\mathbf{R}(t) + \frac{1-d}{N} \mathbf{1},|}}

where \mathbf{R}_i(t) = PR(p_i; t) and \mathbf{1} is the column vector of length N containing only ones. The matrix \mathcal{M} is defined as

:\mathcal{M}_{ij} = \begin{cases} 1/L(p_j), & \mbox{if }j\mbox{ links to }i\ \\ 0, & \mbox{otherwise} \end{cases}

i.e.,

:\mathcal{M} := (K^{-1} A)^T,

where A denotes the adjacency matrix of the graph and K is the diagonal matrix with the outdegrees on the diagonal. The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when, for some small \epsilon,

:|\mathbf{R}(t+1) - \mathbf{R}(t)| < \epsilon,

i.e., when convergence is assumed.
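The construction \mathcal{M} = (K^{-1} A)^T can be carried out directly, e.g. with NumPy (a sketch; the three-page graph and node ordering are assumptions for illustration, and every node is assumed to have at least one outlink so that K is invertible):

```python
import numpy as np

# Adjacency matrix: A[i, j] = 1 if page i links to page j.
# Assumed example graph: 0 -> 1, 1 -> 0 and 2, 2 -> 0.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [1, 0, 0]], dtype=float)

K = np.diag(A.sum(axis=1))    # diagonal matrix of outdegrees
M = (np.linalg.inv(K) @ A).T  # M[i, j] = 1/L(p_j) if j links to i, else 0

print(M)  # each column sums to 1: a page distributes its full PageRank
```

Because each non-dangling column of \mathcal{M} sums to 1, multiplying \mathcal{M} by a PageRank vector redistributes scores without creating or destroying probability mass.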
===Power method===

If the matrix \mathcal{M} is a transition probability, i.e., column-stochastic, and \mathbf{R} is a probability distribution (i.e., |\mathbf{R}|=1, \mathbf{E}\mathbf{R}=\mathbf{1}, where \mathbf{E} is the matrix of all ones), then equation () is equivalent to

{{NumBlk|:|\mathbf{R} = \left( d \mathcal{M} + \frac{1-d}{N} \mathbf{E} \right)\mathbf{R} =: \widehat{\mathcal{M}} \mathbf{R}.|}}

Hence PageRank \mathbf{R} is the principal eigenvector of \widehat{\mathcal{M}}. A fast and easy way to compute this is using the power method: starting with an arbitrary vector x(0), the operator \widehat{\mathcal{M}} is applied in succession, i.e.,

:x(t+1) = \widehat{\mathcal{M}} x(t),

until

:|x(t+1) - x(t)| < \epsilon.

Note that in equation () the matrix on the right-hand side in the parenthesis can be interpreted as

:\frac{1-d}{N} \mathbf{E} = (1-d)\mathbf{P} \mathbf{1}^t,

where \mathbf{P} is an initial probability distribution. In the current case

:\mathbf{P} := \frac{1}{N} \mathbf{1}.

Finally, if \mathcal{M} has columns with only zero values, they should be replaced with the initial probability vector \mathbf{P}. In other words,

:\mathcal{M}^\prime := \mathcal{M} + \mathcal{D},

where the matrix \mathcal{D} is defined as

:\mathcal{D} := \mathbf{P} \mathbf{D}^t,

with

:\mathbf{D}_i = \begin{cases} 1, & \mbox{if }L(p_i)=0\ \\ 0, & \mbox{otherwise} \end{cases}

In this case, the above two computations using \mathcal{M} only give the same PageRank if their results are normalized:

:\mathbf{R}_{\textrm{power}} = \frac{\mathbf{R}_{\textrm{iterative}}}{|\mathbf{R}_{\textrm{iterative}}|} = \frac{\mathbf{R}_{\textrm{algebraic}}}{|\mathbf{R}_{\textrm{algebraic}}|}.
=== Implementation ===

==== Python ====

import numpy as np

def pagerank(M, d: float = 0.85):
    """PageRank algorithm, iterated to a convergence threshold.

    Returns ranking of nodes (pages) in the adjacency matrix.

    Parameters
    ----------
    M : numpy array
        adjacency matrix where M_i,j represents the link from 'j' to 'i',
        such that for all 'j' sum(i, M_i,j) = 1
    d : float, optional
        damping factor, by default 0.85

    Returns
    -------
    numpy array
        a vector of ranks such that v_i is the i-th rank from [0, 1]
    """
    N = M.shape[1]
    w = np.ones(N) / N
    M_hat = d * M
    v = M_hat @ w + (1 - d) / N
    while np.linalg.norm(w - v) >= 1e-10:
        w = v
        v = M_hat @ w + (1 - d) / N
    return v

M = np.array([[0, 0,   0, 0.25],
              [0, 0,   0, 0.50],
              [1, 0.5, 0, 0.25],
              [0, 0.5, 1, 0]])
v = pagerank(M, 0.85)

==Variations==