By implementation
; Recursion : A recursive algorithm invokes itself repeatedly until a termination condition is met, and is a common method in functional programming.
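As a minimal illustration (the use of factorial here is our choice, not from the text), a recursive implementation in Python that invokes itself until it reaches its termination condition:

```python
def factorial(n: int) -> int:
    """Recursively compute n! by reducing the problem to a smaller instance."""
    if n <= 1:                       # termination condition: the base case stops the recursion
        return 1
    return n * factorial(n - 1)      # the algorithm invokes itself on a smaller input
```

An equivalent iterative version would replace the self-invocation with a loop that accumulates the product.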
Iterative algorithms use repetitions such as
loops or data structures like
stacks to solve problems. Problems may be suited for one implementation or the other. The
Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. ; Serial, parallel or distributed : Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike
parallel or
distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems. ; Deterministic or non-deterministic :
Deterministic algorithms solve the problem with exact decisions at every step, whereas
non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of
heuristics. ; Exact or approximate : While many algorithms reach an exact solution,
approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, in the knapsack problem there is a set of items, and the goal is to pack the knapsack so as to maximize the total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of the items as well as their values. ; Quantum algorithm :
Quantum algorithms run on a realistic model of
quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of
quantum computing, such as
quantum superposition or
quantum entanglement.
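To make the exact-versus-approximate distinction above concrete, here is a sketch of a greedy heuristic for the knapsack problem; the function name and item data are hypothetical, chosen only for illustration:

```python
def greedy_knapsack(items, capacity):
    """Approximate the knapsack problem greedily: consider (weight, value)
    items in decreasing value-to-weight ratio, taking each one that fits."""
    total_value = total_weight = 0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if total_weight + weight <= capacity:   # item still fits under the weight limit
            total_weight += weight
            total_value += value
    return total_value

# With items (weight, value) = (10, 60), (20, 100), (30, 120) and capacity 50,
# this greedy sketch packs a value of 160, while the exact optimum is 220:
# close to, but not equal to, the true solution.
```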
By design paradigm
Another way of classifying algorithms is by their design methodology or
paradigm. Some common paradigms are: ;
Brute-force or exhaustive search : Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords. ; Divide and conquer : A
divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually
recursively) until the instances are small enough to solve easily.
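This pattern can be sketched in Python with a merge sort (a minimal, illustrative implementation):

```python
def merge_sort(values):
    """Divide and conquer: split the list, sort each half the same way, then merge."""
    if len(values) <= 1:               # small enough to solve directly
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])    # divide into smaller instances
    right = merge_sort(values[mid:])
    merged = []                        # merge the two sorted halves back together
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```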
Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant of divide and conquer, called prune and search or decrease and conquer, solves one smaller instance of the problem and does not require a merge step. An example of a prune and search algorithm is the
binary search algorithm. ; Search and enumeration : Many problems (such as playing
chess) can be modelled as problems on
graphs. A
graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes
search algorithms,
branch and bound enumeration, and
backtracking. ;
Randomized algorithm : Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some
randomness. Whether randomized algorithms with
polynomial time complexity can be the fastest algorithm for some problems is an open question known as the
P versus NP problem. There are two large classes of such algorithms:
• Monte Carlo algorithms return a correct answer with high probability; e.g., RP is the subclass of these that run in polynomial time.
• Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded; e.g., ZPP. ;
Reduction of complexity : This technique transforms difficult problems into better-known problems solvable with (hopefully)
asymptotically optimal algorithms. The goal is to find a reducing algorithm whose
complexity is not dominated by that of the resulting reduced algorithm. For example, one
selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as
transform and conquer. ;
Backtracking : In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
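A sketch of this idea in Python, using the classic N-queens puzzle as the example (our choice; the function names are hypothetical):

```python
def n_queens(n):
    """Backtracking: extend a partial placement row by row and abandon any
    branch as soon as it cannot lead to a valid full solution."""
    solutions = []

    def place(cols):                       # cols[r] = column of the queen in row r
        row = len(cols)
        if row == n:                       # a valid full solution has been built
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # prune: skip columns attacked by an already-placed queen
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)           # extend the partial solution
                place(cols)
                cols.pop()                 # abandon the branch and try the next column

    place([])
    return solutions
```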
Optimization problems
For
optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: ;
Linear programming : When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular
simplex algorithm. Problems that can be solved with linear programming include the
maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be
integers, then it is classified in
integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. ;
Dynamic programming : When a problem shows optimal substructures—meaning the optimal solution can be constructed from optimal solutions to subproblems—and
overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called
dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and
memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial. ; The greedy method :
Greedy algorithms, similar to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems, they always find the optimal solution but for others they may stop at
local optima. The most popular use of greedy algorithms is finding minimum spanning trees of graphs without negative cycles. Huffman tree construction and Kruskal's, Prim's, and Sollin's algorithms are greedy algorithms that can solve this optimization problem. ; The heuristic method : In
optimization problems,
heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include
local search,
tabu search,
simulated annealing, and
genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an
approximation algorithm.
== Examples ==