==P versus NP problem==
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the
Cobham–Edmonds thesis. The complexity class
NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the
Boolean satisfiability problem, the
Hamiltonian path problem and the
vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of
integer programming problems in
operations research, many problems in
logistics,
protein structure prediction in
biology, and the ability to find formal proofs of
pure mathematics theorems. The P versus NP problem is one of the
Millennium Prize Problems proposed by the
Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.
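The gap between finding and checking a solution can be illustrated with the vertex cover problem mentioned above: verifying a proposed cover takes polynomial time (which is what places the problem in NP), while the obvious way to find a smallest cover tries exponentially many subsets. The following sketch is illustrative only; the graph instance and function names are invented for the example.

```python
from itertools import combinations

def is_vertex_cover(edges, cover):
    """Polynomial-time verification: does `cover` touch every edge?"""
    return all(u in cover or v in cover for u, v in edges)

def smallest_vertex_cover(vertices, edges):
    """Brute-force search: tries every subset of vertices,
    so the running time is exponential in the number of vertices."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_vertex_cover(edges, set(subset)):
                return set(subset)

# A square on vertices 1-4 plus the diagonal (2, 4); {2, 4} covers all edges.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
cover = smallest_vertex_cover({1, 2, 3, 4}, edges)
print(is_vertex_cover(edges, cover), len(cover))  # prints: True 2
```

The verifier runs in time proportional to the number of edges, but the search examines up to 2^n subsets; no known algorithm avoids this exponential behavior in the worst case unless P = NP.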
==Problems in NP not known to be in P or NP-complete==
It was shown by Ladner that if \textsf{P} \neq \textsf{NP} then there exist problems in \textsf{NP} that are neither in \textsf{P} nor \textsf{NP}-complete; such problems are called NP-intermediate. The graph isomorphism problem is a well-known candidate. If graph isomorphism is NP-complete, the
polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to
László Babai and
Eugene Luks, runs in time 2^{O(\sqrt{n \log n})} for graphs with n vertices; more recent work by Babai, who announced a quasi-polynomial-time algorithm for the problem, may improve on this substantially. The
integer factorization problem is the computational problem of determining the
prime factorization of a given integer. Phrased as a decision problem, it asks, given an integer n and a bound k, whether n has a prime factor smaller than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the
RSA algorithm. The integer factorization problem is in \textsf{NP} and in \textsf{co-NP} (and even in UP and co-UP). If the problem is \textsf{NP}-complete, the polynomial time hierarchy will collapse to its first level (i.e., \textsf{NP} will equal \textsf{co-NP}). The best known algorithm for integer factorization is the
general number field sieve, which takes time O\left(e^{\sqrt[3]{\frac{64}{9}}\,(\log n)^{1/3}(\log \log n)^{2/3}}\right) to factor an odd integer n. However, the best known
quantum algorithm for this problem,
Shor's algorithm, does run in polynomial time, although this fact says little about where the problem lies with respect to non-quantum complexity classes.
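The decision formulation above can be made concrete with a short sketch. The routine below is a hypothetical illustration using trial division; its running time is O(\sqrt{n}) steps, which is exponential in the bit length of n, so it does not count as efficient in the complexity-theoretic sense.

```python
def smallest_prime_factor(n):
    """Trial division: the first divisor found is necessarily prime.
    Takes O(sqrt(n)) steps, i.e. exponential in the bit length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

def has_prime_factor_below(n, k):
    """Decision version of integer factorization:
    does n have a prime factor smaller than k?"""
    return smallest_prime_factor(n) < k

# 91 = 7 * 13, so its smallest prime factor is 7
print(has_prime_factor_below(91, 10))  # prints: True
print(has_prime_factor_below(91, 7))   # prints: False (7 is not < 7)
print(has_prime_factor_below(97, 50))  # 97 is prime, so prints: False
```

Note that an efficient algorithm for this decision problem would yield efficient factorization: binary search on k recovers the smallest prime factor, which can then be divided out and the process repeated.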
==Separations between other complexity classes==
Many known complexity classes are suspected to be unequal, but this has not been proved. For instance \textsf{P} \subseteq \textsf{NP} \subseteq \textsf{PP} \subseteq \textsf{PSPACE}, but it is possible that \textsf{P} = \textsf{PSPACE}. If \textsf{P} is not equal to \textsf{NP}, then \textsf{P} is not equal to \textsf{PSPACE} either. Since there are many known complexity classes between \textsf{P} and \textsf{PSPACE}, such as \textsf{RP}, \textsf{BPP}, \textsf{PP}, \textsf{BQP}, \textsf{MA}, and \textsf{PH}, it is possible that all of these classes collapse to a single class. Proving that any two of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, \textsf{co-NP} is the class containing the
complement problems (i.e. problems with the
yes/
no answers reversed) of \textsf{NP} problems. It is believed that \textsf{NP} is not equal to \textsf{co-NP}, but this has not yet been proven. It is clear that if these two complexity classes are not equal then \textsf{P} is not equal to \textsf{NP}, since \textsf{P} = \textsf{co-P}: if \textsf{P} = \textsf{NP} we would have \textsf{co-P} = \textsf{co-NP}, whence \textsf{NP} = \textsf{P} = \textsf{co-P} = \textsf{co-NP}. Similarly, it is not known if \textsf{L} (the set of all problems that can be solved in logarithmic space) is strictly contained in \textsf{P} or equal to \textsf{P}. Again, there are many complexity classes between the two, such as \textsf{NL} and \textsf{NC}, and it is not known if they are distinct or equal classes. It is suspected that \textsf{P} and \textsf{BPP} are equal. However, it is currently open whether \textsf{BPP} = \textsf{NEXP}.

==Intractability==
A problem that can be solved in theory but requires an impractically large amount of resources (e.g., time) to solve in practice is known as an '''intractable problem'''. Conversely, a problem that can be solved in practice is called a '''tractable problem''', literally "a problem that can be handled". The term ''infeasible'' (literally "cannot be done") is sometimes used interchangeably with ''intractable'', though this risks confusion with a
feasible solution in
mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions (\textsf{P}, \textsf{PTIME}); this is known as the
Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are
EXPTIME-hard. If \textsf{NP} is not the same as \textsf{P}, then
NP-hard problems are also intractable in this sense. However, this identification is inexact: a polynomial-time solution with large degree or a large leading coefficient grows quickly and may be impractical for problems of realistic size; conversely, an exponential-time solution that grows slowly may be practical on realistic inputs, and a solution that takes a long time in the worst case may take a short time in most cases or on average, and thus still be practical. Saying that a problem is not in \textsf{P} does not imply that all large instances of the problem are hard, or even that most of them are. For example, the decision problem in
Presburger arithmetic has been shown not to be in \textsf{P}, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete
knapsack problem over a wide range of sizes in less than quadratic time and
SAT solvers routinely handle large instances of the NP-complete
Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that performs 2^n operations before halting. For moderately large n, say n = 100, and assuming for the sake of example that the computer performs 10^{12} operations each second, the program would run for about 4 \times 10^{10} years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large. Similarly, a polynomial-time algorithm is not always practical: if its running time is, say, n^{15}, it is unreasonable to consider it efficient, and it is still useless except on small instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic problem sizes.

==Continuous complexity theory==