
PLS (complexity)

In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The defining characteristics of problems in PLS are that the cost of a solution can be calculated in polynomial time and that the neighborhood of a solution can be searched in polynomial time. It is therefore possible to verify in polynomial time whether or not a solution is a local optimum. Furthermore, depending on the problem and the algorithm used to solve it, finding a local optimum may be faster than finding a global optimum.

Description
When searching for a local optimum, there are two interesting issues to deal with: first, how to find a local optimum, and second, how long it takes to find one. For many local search algorithms it is not known whether they can find a local optimum in polynomial time or not. To answer the question of how long it takes to find a local optimum, Johnson, Papadimitriou and Yannakakis introduced the complexity class PLS in their paper "How easy is local search?". It contains local search problems for which local optimality can be verified in polynomial time.

A local search problem is in PLS if the following properties are satisfied:
• The size of every solution is polynomially bounded in the size of the instance I.
• It is possible to find some solution of a problem instance in polynomial time.
• It is possible to calculate the cost of each solution in polynomial time.
• It is possible to find all neighbors of each solution in polynomial time.

With these properties, it is possible to find for each solution s the best neighboring solution, or, if there is no better neighboring solution, to state that s is a local optimum.

Example
Consider the following instance I of the Max-2Sat problem: (x_{1} \vee x_{2}) \wedge (\neg x_{1} \vee x_{3}) \wedge (\neg x_{2} \vee x_{3}). The aim is to find an assignment that maximizes the number of satisfied clauses. A solution s for this instance is a bit string that assigns every x_{i} the value 0 or 1. In this case, a solution consists of 3 bits, for example s = 000, which assigns x_{1} through x_{3} the value 0. The set of solutions F_{L}(I) is the set of all possible assignments of x_{1}, x_{2} and x_{3}. The cost of each solution is the number of satisfied clauses, so c_{L}(I, s=000) = 2, because the second and third clauses are satisfied.
The Flip-neighbors of a solution s are reached by flipping one bit of the bit string s, so the neighbors of s = 000 are N(I, 000) = \{100, 010, 001\} with the following costs:
c_{L}(I, 100) = 2
c_{L}(I, 010) = 2
c_{L}(I, 001) = 2
If we are looking for a solution with maximum cost, there are no neighbors with better cost than s. Even though s is not a global optimum (which, for example, would be the solution s' = 111 that satisfies all clauses, with c_{L}(I, s') = 3), s is a local optimum, because none of its neighbors has better cost.

Intuitively, this problem lies in PLS because:
• It is possible to find a solution to an instance in polynomial time, for example by setting all bits to 0.
• It is possible to calculate the cost of a solution in polynomial time, by going once through the whole instance and counting the clauses that are satisfied.
• It is possible to find all neighbors of a solution in polynomial time, by taking the set of solutions that differ from s in exactly one bit.

If we are simply counting the number of satisfied clauses, the problem can be solved in polynomial time, since the number of possible costs is polynomial. However, if we assign each clause a positive integer weight (and seek to locally maximize the sum of the weights of satisfied clauses), the problem becomes PLS-complete (see below).
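To make the intuition concrete, the cost function and Flip neighborhood for this instance can be sketched in Python; the clause encoding and helper names below are illustrative choices, not part of any standard formulation.

```python
# Sketch of the Max-2Sat instance above, assuming the clause encoding:
# a positive integer i denotes the literal x_i, a negative -i denotes ¬x_i.
CLAUSES = [(1, 2), (-1, 3), (-2, 3)]   # (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ x3)

def cost(assignment):
    """c_L(I, s): the number of satisfied clauses (assignment maps i -> 0/1)."""
    def sat(lit):
        value = assignment[abs(lit)]
        return value == 1 if lit > 0 else value == 0
    return sum(1 for a, b in CLAUSES if sat(a) or sat(b))

def flip_neighbors(assignment):
    """N(I, s): all assignments differing from s in exactly one bit."""
    for i in assignment:
        neighbor = dict(assignment)
        neighbor[i] = 1 - neighbor[i]
        yield neighbor

def is_local_optimum(assignment):
    """s is a local optimum iff no Flip-neighbor has strictly better cost."""
    return all(cost(n) <= cost(assignment) for n in flip_neighbors(assignment))

s = {1: 0, 2: 0, 3: 0}                  # the solution s = 000
assert cost(s) == 2                     # second and third clause satisfied
assert is_local_optimum(s)              # no neighbor does better
assert cost({1: 1, 2: 1, 3: 1}) == 3    # the global optimum s' = 111
```

Note that `is_local_optimum` performs only polynomially many cost evaluations, mirroring the argument that local optimality is verifiable in polynomial time.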
Formal Definition
A local search problem L has a set D_L of instances which are encoded using strings over a finite alphabet \Sigma. For each instance I there exists a finite solution set F_L(I). Let R be the relation that models L. The relation R \subseteq D_L \times F_L(I) := \{(I, s) \mid I \in D_{L}, s \in F_{L}(I)\} is in PLS if:
• The size of every solution s \in F_{L}(I) is polynomially bounded in the size of I
• Problem instances I \in D_{L} and solutions s \in F_{L}(I) are polynomial-time verifiable
• There is a polynomial-time computable function A: D_{L} \rightarrow F_{L}(I) that returns for each instance I \in D_{L} some solution s \in F_{L}(I)
• There is a polynomial-time computable function B: D_L \times F_L(I) \rightarrow \mathbb{R}^{+} that returns for each solution s \in F_{L}(I) of an instance I its cost c_L(I, s)
• There is a polynomial-time computable function N that returns for each solution s \in F_{L}(I) the set of neighbors N(I, s)

These functions induce, for each instance I, a directed transition graph, the vertices being the solutions, with two solutions s, s' \in F_L(I) connected by a directed arc iff s' \in N(I, s). A local optimum is a solution s that has no neighbor with better cost. In the transition graph, a local optimum is a sink. A neighborhood where every local optimum is a global optimum, which is a solution with the best possible cost, is called an exact neighborhood.

Alternative definition
PLS can equivalently be defined as the class of all problems that are polynomial-time reducible to the problem Sink-of-DAG (also called Local-Opt): Given two integers n and m and two Boolean circuits S: \{0,1\}^n \rightarrow \{0,1\}^n such that S(0^n) \neq 0^n and V: \{0,1\}^n \rightarrow \{0,1, .., 2^m-1\}, find a vertex x \in \{0,1\}^n such that S(x) \neq x and either S(S(x)) = S(x) or V(S(x)) \leq V(x).

Example neighborhood structures
Example neighborhood structures for problems with Boolean variables (or bit strings) as solutions:
• Flip - The neighborhood of a solution s is the set of solutions reached by flipping (negating) one bit of the bit string, as in the Max-2Sat example above.
• Fiduccia-Mattheyses - This neighborhood is similar to the Kernighan-Lin neighborhood structure; it is a greedy sequence of swaps, except that each swap happens in two steps. First the node p_0 \in P_0 with the most gain of cost, or the least loss of cost, is swapped to P_1, then the node p_1 \in P_1 with the most gain of cost, or the least loss of cost, is swapped to P_0 to balance the partitions again.
Experiments have shown that Fiduccia-Mattheyses has a smaller run time in each iteration of the standard algorithm, though it sometimes finds a local optimum of inferior quality.
• FM-Swap - This neighborhood structure is based on the Fiduccia-Mattheyses neighborhood structure. Each solution s = (P_0, P_1) has only one neighbor, the partition obtained after the first swap of the Fiduccia-Mattheyses neighborhood.
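The Local-Opt (Sink-of-DAG) formulation above can also be made concrete. In this sketch the circuits S and V are stood in for by ordinary Python functions on n-bit strings; the particular toy DAG, where each step sets the leftmost 0 bit, is purely an illustrative assumption.

```python
# Hedged sketch of Local-Opt: S is the successor circuit, V the potential.
n = 3

def S(x):
    """Successor: set the leftmost 0 bit; '111' is a fixed point (a toy DAG)."""
    i = x.find('0')
    return x if i == -1 else x[:i] + '1' + x[i + 1:]

def V(x):
    """Potential: number of 1-bits, a value in {0, ..., 2^m - 1}."""
    return x.count('1')

def is_solution(x):
    """x answers Local-Opt iff S(x) != x and either the successor is a
    fixed point of S or stepping does not improve the potential."""
    return S(x) != x and (S(S(x)) == S(x) or V(S(x)) <= V(x))

assert is_solution('110')       # successor '111' is a fixed point of S
assert not is_solution('111')   # S(x) == x is excluded by the definition
assert not is_solution('000')   # the step to '100' still improves V
```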
The standard Algorithm
Consider the following computational problem: ''Given some instance I of a PLS problem L, find a locally optimal solution s \in F_L(I) such that c_L(I, s') \geq c_L(I, s) for all s' \in N(I, s)''. Every local search problem can be solved using the following iterative improvement algorithm, called the standard algorithm:
• Use the function A to find an initial solution s
• Use the function N to search the neighborhood N(I, s) for a solution s' with better cost than s
• If such a neighbor s' exists, set s := s' and repeat the previous step; otherwise return s as a local optimum

The space the standard algorithm needs is only polynomial: it only has to store the current solution s, which is polynomially bounded by definition.
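The steps above can be sketched as follows, here for a maximization problem; `initial`, `cost_fn` and `neighbors` play the roles of the functions A, B and N from the formal definition, and the generic signature is an illustrative choice rather than a standard interface.

```python
def standard_algorithm(instance, initial, cost_fn, neighbors):
    """Iterative improvement: move to the best strictly better neighbor
    until none exists; the returned s is then a local optimum."""
    s = initial(instance)
    while True:
        best = max(neighbors(instance, s),
                   key=lambda t: cost_fn(instance, t),
                   default=s)
        if cost_fn(instance, best) <= cost_fn(instance, s):
            return s   # no improving neighbor: s is a local optimum
        s = best

# Toy instance: a cost table over all bit strings of length 2,
# searched with the Flip neighborhood.
costs = {'00': 1, '01': 3, '10': 2, '11': 0}

def flip(instance, s):
    return [s[:i] + str(1 - int(s[i])) + s[i + 1:] for i in range(len(s))]

result = standard_algorithm(costs, lambda inst: '00',
                            lambda inst, s: inst[s], flip)
assert result == '01'   # 00 -> 01 (cost 3); its neighbors 11 and 00 are worse
```

Only the current solution s is stored at any point, which matches the observation that the algorithm needs only polynomial space.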
Reductions
A reduction of one problem to another may be used to show that the second problem is at least as difficult as the first. In particular, a PLS-reduction is used to prove that a local search problem that lies in PLS is also PLS-complete, by reducing a known PLS-complete problem to the one that shall be proven to be PLS-complete.

PLS-reduction
A local search problem L_1 is PLS-reducible to a local search problem L_2 if there are two polynomial-time computable functions f and g such that
• f maps every instance I_1 of L_1 to an instance f(I_1) of L_2
• g maps every pair (I_1, s), where s is a solution of f(I_1), to a solution g(I_1, s) of I_1
• if s is a local optimum of f(I_1), then g(I_1, s) is a local optimum of I_1

Tight PLS-reduction
A PLS-reduction (f, g) from L_1 to L_2 is tight if for any instance I_1 of L_1, a subset R of solutions of the instance I_2 = f(I_1) of L_2 can be chosen, so that the following properties are satisfied:
• R contains, among other solutions, all local optima of I_2
• For every solution p of I_1, a solution q \in R of I_2 = f(I_1) can be constructed in polynomial time, so that g(I_1, q) = p
• If the transition graph T_{f(I_1)} of f(I_1) contains a directed path from q to q_0 with q, q_0 \in R, but all internal path vertices outside R, then for the corresponding solutions p = g(I_1, q) and p_0 = g(I_1, q_0), either p = p_0 holds or T_{I_1} contains an edge from p to p_0
Relationship to other complexity classes
PLS lies between the functional versions of P and NP: FP ⊆ PLS ⊆ FNP. PLS is also a subclass of TFNP, the class that describes computational problems in which a solution is guaranteed to exist and can be recognized in polynomial time. For a problem in PLS, a solution is guaranteed to exist because the minimum-cost vertex of the entire transition graph is a valid solution, and the validity of a solution can be checked by computing its neighbors and comparing its cost to each of theirs. It has also been proven that if a PLS problem is NP-hard, then NP = co-NP.
PLS-completeness
Definition
A local search problem L is PLS-complete if L is in PLS and every problem in PLS can be PLS-reduced to L. The first problem shown to be PLS-complete was Circuit/Flip (also called Max-circuit/Flip), proven by Johnson, Papadimitriou and Yannakakis; many further problems have since been shown to be PLS-complete, for example:
• Max-4Sat-B/Flip (or CNF-SAT) has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-B/Flip.
• Max-4Sat-(B=3)/Flip has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-(B=3)/Flip.
• Max-Uniform-Graph-Partitioning/Swap has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-Uniform-Graph-Partitioning/Swap.
• Set-Cover/k-change has been proven to be PLS-complete for each k ≥ 2 via a tight PLS-reduction from (3, 2, r)-Max-Constraint-Assignment/Change to Set-Cover/k-change.
• Metric-TSP/k-Change has been proven to be PLS-complete via a PLS-reduction from Max-4Sat-B/Flip to Metric-TSP/k-Change.
• Local-Multi-Processor-Scheduling/k-change has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to Local-Multi-Processor-Scheduling/(2p+q)-change, where (2p + q) ≥ 8.
• Selfish-Multi-Processor-Scheduling/k-change-with-property-t has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to (2p+q)-Selfish-Multi-Processor-Scheduling/k-change-with-property-t, where (2p + q) ≥ 8.
• Finding a pure Nash equilibrium in a symmetric General-Congestion-Game/Change has been proven to be PLS-complete via a tight PLS-reduction from an asymmetric General-Congestion-Game/Change to symmetric General-Congestion-Game/Change.
• Finding a pure Nash equilibrium in a 2-Threshold-Game/Change has been proven to be PLS-complete via a tight reduction from Max-Cut/Flip to 2-Threshold-Game/Change.
• Finding a pure Nash equilibrium in Market-Sharing-Game/Change with polynomially bounded costs has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games/Change to Market-Sharing-Game/Change.
• (2, 3, 6)-Max-Constraint-Assignment-2-partite/Change has been proven to be PLS-complete via a tight PLS-reduction from Circuit/Flip to (2, 3, 6)-Max-Constraint-Assignment-2-partite/Change.
Relations to other complexity classes
Fearnley, Goldberg, Hollender and Savani proved that a complexity class called CLS (Continuous Local Search) is equal to the intersection of PPAD and PLS.