Algorithms for determining reachability fall into two classes: those that require
preprocessing and those that do not. If you have only one (or a few) queries to make, it may be more efficient to forgo the use of more complex data structures and compute the reachability of the desired pair directly. This can be accomplished in
linear time using algorithms such as
breadth first search or
iterative deepening depth-first search. If you will be making many queries, then a more sophisticated method may be used; the exact choice of method depends on the nature of the graph being analysed. In exchange for preprocessing time and some extra storage space, we can create a
data structure which can then answer reachability queries on any pair of vertices in as little as O(1) time. Three different algorithms and data structures for three different, increasingly specialized situations are outlined below.
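For the single-query case described above, a plain breadth-first search already suffices. A minimal sketch in Python (the function name and graph representation are illustrative, not from any particular library):

```python
from collections import deque

def reachable(adj, source, target):
    """Return True if `target` is reachable from `source` in a directed
    graph given as an adjacency-list dict {vertex: [out-neighbours]}."""
    seen = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == target:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```

Each vertex and edge is visited at most once, giving the linear O(|V| + |E|) time mentioned above, but the work is repeated from scratch for every query.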
=== Floyd–Warshall algorithm ===
The Floyd–Warshall algorithm can be used to compute the transitive closure of any directed graph, which gives rise to the reachability relation as in the definition above. The algorithm requires O(|V|^3) time and O(|V|^2) space in the worst case. This algorithm is not solely interested in reachability, as it also computes the shortest-path distance between all pairs of vertices. For graphs containing negative cycles, shortest paths may be undefined, but reachability between pairs can still be noted.
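When only reachability is needed, the algorithm simplifies to its boolean (transitive-closure) form, sketched below in Python (identifier names are illustrative):

```python
def transitive_closure(n, edges):
    """Boolean Floyd–Warshall: returns an n-by-n matrix where
    reach[u][v] is True iff v is reachable from u (every vertex
    is considered reachable from itself).  O(n^3) time, O(n^2) space."""
    reach = [[u == v for v in range(n)] for u in range(n)]
    for u, v in edges:
        reach[u][v] = True
    # If u reaches k and k reaches v, then u reaches v.
    for k in range(n):
        for u in range(n):
            if reach[u][k]:
                for v in range(n):
                    if reach[k][v]:
                        reach[u][v] = True
    return reach
```

After the O(|V|^3) preprocessing, any pairwise query is a single O(1) matrix lookup.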
=== Thorup's algorithm ===
For planar digraphs, a much faster method is available, as described by Mikkel Thorup in 2004. This method can answer reachability queries on a planar graph in O(1) time after spending O(n \log{n}) preprocessing time to create a data structure of O(n \log{n}) size. This algorithm can also supply approximate shortest-path distances, as well as route information.

The overall approach is to associate with each vertex a relatively small set of so-called separator paths such that any path from a vertex v to any other vertex w must go through at least one of the separators associated with v or w. An outline of the reachability-related sections follows.

Given a graph G, the algorithm begins by organizing the vertices into layers starting from an arbitrary vertex v_0. The layers are built in alternating steps by first considering all vertices reachable
from the previous step (starting with just v_0) and then all vertices which reach
to the previous step, until all vertices have been assigned to a layer. By construction of the layers, every vertex appears in at most two layers, and every directed path, or dipath, in G is contained within two adjacent layers L_i and L_{i+1}. Let k be the last layer created, that is, the lowest value of k such that \bigcup_{i=0}^{k} L_i = V.

The graph is then re-expressed as a series of digraphs G_0, G_1, \ldots, G_{k-1}, where each G_i = r_i \cup L_i \cup L_{i+1} and where r_i is the contraction of all previous levels L_0 \ldots L_{i-1} into a single vertex. Because every dipath appears in at most two consecutive layers, and because each G_i is formed by two consecutive layers, every dipath in G appears in its entirety in at least one G_i (and in no more than two consecutive such graphs).

For each G_i, three separators are identified which, when removed, break the graph into three components which each contain at most 1/2 the vertices of the original. As G_i is built from two layers of opposed dipaths, each separator may consist of up to two dipaths, for a total of up to six dipaths over all of the separators. Let S be this set of dipaths. The proof that such separators can always be found is related to the
Planar Separator Theorem of Lipton and Tarjan, and these separators can be located in linear time. For each Q \in S, the directed nature of Q provides for a natural indexing of its vertices from the start to the end of the path. For each vertex v in G_i, we locate the first vertex in Q reachable by v, and the last vertex in Q that reaches to v. That is, we are looking at how early into Q we can get from v, and how far we can stay in Q and still get back to v. This information is stored with each v. Then for any pair of vertices u and w, u can reach w
via Q if u connects to Q earlier than w connects from Q. Every vertex is labelled as above for each step of the recursion which builds G_0, \ldots, G_{k-1}. As this recursion has logarithmic depth, a total of O(\log{n}) extra information is stored per vertex. From this point, a logarithmic-time query for reachability is as simple as looking over each pair of labels for a common, suitable Q. The original paper then works to tune the query time down to O(1).

In summarizing the analysis of this method, first consider that the layering approach partitions the vertices so that each vertex is considered only O(1) times. The separator phase of the algorithm breaks the graph into components which are at most 1/2 the size of the original graph, resulting in a logarithmic recursion depth. At each level of the recursion, only linear work is needed to identify the separators as well as the connections possible between vertices. The overall result is O(n \log{n}) preprocessing time with only O(\log{n}) additional information stored for each vertex.
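The alternating layer construction at the start of the method can be sketched as follows. This is a simplified illustration of the layering idea only, not Thorup's actual implementation: `radj` is the reversed adjacency list, the search is restricted to still-unassigned vertices, and all names are mine.

```python
from collections import deque

def build_layers(adj, radj, v0):
    """Assign vertices to layers, alternating between 'all unassigned
    vertices reachable FROM the previous layer' (forward search) and
    'all unassigned vertices that reach TO it' (backward search)."""
    assigned = {v0}
    layers = [[v0]]
    frontier = [v0]
    forward = True  # next layer follows edges forward from the frontier
    while True:
        g = adj if forward else radj
        layer, queue, seen = [], deque(frontier), set(frontier)
        while queue:
            v = queue.popleft()
            for w in g.get(v, ()):
                if w not in assigned and w not in seen:
                    seen.add(w)
                    layer.append(w)
                    queue.append(w)
        if not layer:
            # No unassigned vertex touches the layering in this direction;
            # in this simplified sketch we simply stop here.
            break
        assigned.update(layer)
        layers.append(layer)
        frontier = layer
        forward = not forward
    return layers
```

Each vertex is discovered and enqueued at most once over the whole construction, which is what makes the layering phase linear overall.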
=== Kameda's algorithm ===
An even faster method for pre-processing, due to T. Kameda in 1975, can be used if the graph is planar, acyclic, and also exhibits the following additional properties: all 0-indegree and all 0-outdegree vertices appear on the same face (often assumed to be the outer face), and it is possible to partition the boundary of that face into two parts such that all 0-indegree vertices appear on one part, and all 0-outdegree vertices appear on the other (i.e. the two types of vertices do not alternate). If G exhibits these properties, then we can preprocess the graph in only O(n) time, and store only O(\log{n}) extra bits per vertex, answering reachability queries for any pair of vertices in O(1) time with a simple comparison.

Preprocessing performs the following steps. We add a new vertex s which has an edge to each 0-indegree vertex, and another new vertex t with edges from each 0-outdegree vertex. Note that the properties of G allow us to do so while maintaining planarity; that is, there will still be no edge crossings after these additions. For each vertex we store the list of adjacencies (out-edges) in order of the planarity of the graph (for example, clockwise with respect to the graph's embedding). We then initialize a counter i = n + 1 and begin a depth-first traversal from s. During this traversal, the
adjacency list of each vertex is visited from left to right as needed. As vertices are popped from the traversal's stack, they are labelled with the value i, and i is then decremented. Note that t is always labelled with the value n + 1 and s is always labelled with the value 0. The depth-first traversal is then repeated, but this time the adjacency list of each vertex is visited from right to left.

When completed, s and t, and their incident edges, are removed. Each remaining vertex stores a 2-dimensional label with values from 1 to n. Given two vertices u and v, and their labels L(u) = (a_1, a_2) and L(v) = (b_1, b_2), we say that L(u) < L(v) if and only if a_1 \leq b_1, a_2 \leq b_2, and there exists at least one component a_1 or a_2 which is strictly less than b_1 or b_2, respectively. The main result of this method then states that v is reachable from u if and only if L(u) < L(v), which is easily calculated in O(1) time.

== Related problems ==