Smoothed analysis

In theoretical computer science, smoothed analysis is a way of measuring the complexity of an algorithm: it measures the expected performance of the algorithm under slight random perturbations of worst-case inputs. Since its introduction in 2001, smoothed analysis has been used as a basis for considerable research, on problems in mathematical programming, numerical analysis, machine learning, and data mining. It can give a more realistic account of an algorithm's practical performance than worst-case or average-case analysis.

History
ACM and the European Association for Theoretical Computer Science awarded the 2008 Gödel Prize to Daniel Spielman and Shang-Hua Teng for developing smoothed analysis. The name "smoothed analysis" was coined by Alan Edelman. In 2010 Spielman received the Nevanlinna Prize for developing smoothed analysis. Spielman and Teng's JACM paper "Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time" was also one of the three winners of the 2009 Fulkerson Prize, sponsored jointly by the Mathematical Programming Society (MPS) and the American Mathematical Society (AMS).
Examples
Simplex algorithm for linear programming

The simplex algorithm is a very efficient algorithm in practice, and it is one of the dominant algorithms for linear programming. On practical problems, the number of steps taken by the algorithm is linear in the number of variables and constraints, yet in the theoretical worst case it can take exponentially many steps. This gap was one of the main motivations for developing smoothed analysis.

In the perturbation model, the data \bar{\mathbf A}, \bar{\mathbf b} of a linear program \max \mathbf c^\top \mathbf x subject to \mathbf A \mathbf x \le \mathbf b (with n constraints and d variables) is perturbed by Gaussian noise \hat{\mathbf A}, \hat{\mathbf b} whose entries are independent with mean 0 and standard deviation \sigma. Letting T(\mathbf A, \mathbf b, \mathbf c) denote the number of steps taken by the simplex algorithm, Spielman and Teng showed that the smoothed complexity is polynomial:

C_s(n,d,\sigma) := \max_{\bar{\mathbf A}, \bar{\mathbf b}, \mathbf c} \mathbb{E}_{\hat{\mathbf A},\hat{\mathbf b}}[T(\bar{\mathbf A} + \hat{\mathbf A}, \bar{\mathbf b} + \hat{\mathbf b}, \mathbf c)] = {\rm poly}(d,\log n, \sigma^{-1}).

This bound holds for a specific pivot rule called the shadow vertex rule. The shadow vertex rule is slower than more commonly used pivot rules such as Dantzig's rule or the steepest edge rule, but it has properties that make it well suited to probabilistic analysis. (A simulation sketch of this definition appears at the end of this section.)

Local search for combinatorial optimization

Many local search algorithms have bad worst-case running times but perform well in practice. One example is the 2-opt heuristic for the traveling salesman problem. It can take exponentially many iterations until it finds a locally optimal solution, although in practice the running time is subquadratic in the number of vertices. The approximation ratio, the ratio between the length of the tour output by the algorithm and the length of an optimal tour, tends to be good in practice but can be bad in the theoretical worst case.

One class of problem instances is given by n points in the box [0,1]^d, where their pairwise distances come from a norm. Already in two dimensions, the 2-opt heuristic might take exponentially many iterations until finding a local optimum. In this setting, one can analyze the perturbation model where the vertices v_1,\dots,v_n are independently sampled according to probability distributions with probability density functions f_1,\dots,f_n : [0,1]^d \rightarrow [0,\theta]. For \theta = 1, the points are uniformly distributed; as \theta grows, the adversary gains more power to increase the likelihood of hard problem instances. In this perturbation model, the expected number of iterations of the 2-opt heuristic, as well as the approximation ratio of the resulting output, are bounded by polynomial functions of n and \theta (see the sketch at the end of this section).

Another local search algorithm for which smoothed analysis was successful is the k-means method. Given n points in [0,1]^d, it is NP-hard to find a partition into clusters with small pairwise distances between points in the same cluster. Lloyd's algorithm is widely used and very fast in practice, although it can take e^{\Omega(n)} iterations in the worst case to find a locally optimal solution. However, assuming that the points have independent Gaussian distributions, each with expectation in [0,1]^d and standard deviation \sigma, the expected number of iterations of the algorithm is bounded by a polynomial in n, d and 1/\sigma (a simulation sketch also follows below).
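The smoothed-complexity definition for the simplex method can be explored empirically. The following Python sketch estimates \mathbb{E}[T(\bar{\mathbf A} + \hat{\mathbf A}, \bar{\mathbf b} + \hat{\mathbf b}, \mathbf c)] by Monte Carlo, using scipy's HiGHS dual-simplex solver. Note the assumptions: this solver does not use the shadow vertex rule, and the base instance, \sigma, and trial count are arbitrary illustration choices, so this is a rough simulation of the definition rather than of the analyzed algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def expected_simplex_steps(A_bar, b_bar, c, sigma, trials=100, seed=None):
    """Monte Carlo estimate of E[T(A_bar + A_hat, b_bar + b_hat, c)],
    where A_hat and b_hat have i.i.d. N(0, sigma^2) entries."""
    rng = np.random.default_rng(seed)
    steps = []
    for _ in range(trials):
        A = A_bar + sigma * rng.standard_normal(A_bar.shape)
        b = b_bar + sigma * rng.standard_normal(b_bar.shape)
        # linprog minimizes, so negate c to maximize c^T x subject to Ax <= b.
        res = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None),
                      method="highs-ds")  # HiGHS dual simplex
        if res.status == 0:  # count only instances solved to optimality
            steps.append(res.nit)
    return np.mean(steps) if steps else float("nan")

# A small random base instance, standing in for the adversary's worst case.
rng = np.random.default_rng(0)
n, d = 30, 5
A_bar = rng.standard_normal((n, d))
b_bar = np.ones(n)          # the origin is feasible for the unperturbed LP
c = rng.standard_normal(d)
print(expected_simplex_steps(A_bar, b_bar, c, sigma=0.1, seed=1))
```

In the definition, the adversary maximizes over the base data; the sketch only evaluates the inner expectation for one fixed base instance.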
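As a concrete illustration of the 2-opt model, here is a minimal Python sketch of the heuristic on Euclidean instances, counting improving exchanges until a local optimum is reached. The point count and the uniform distribution (the \theta = 1 case of the perturbation model) are illustration choices.

```python
import numpy as np

def two_opt(points):
    """Run the 2-opt heuristic to a local optimum on a Euclidean TSP
    instance. Returns the final tour and the number of improving exchanges."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    tour = list(range(n))
    improvements = 0
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:  # the two edges share a vertex; skip
                    continue
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter.
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improvements += 1
                    improved = True
    return tour, improvements

# theta = 1 in the perturbation model: points uniform in [0,1]^2.
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
tour, its = two_opt(pts)
print(f"2-opt reached a local optimum after {its} improving exchanges")
```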
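The smoothed model for k-means can be simulated in the same spirit: the sketch below runs Lloyd's algorithm on points whose means (standing in for the adversary's choice) lie in [0,1]^d and are perturbed by Gaussian noise of standard deviation \sigma, counting iterations until the cluster assignment stabilizes. The initialization scheme, instance sizes, and \sigma are arbitrary choices.

```python
import numpy as np

def lloyd_iterations(X, k, seed=None, max_iter=10**6):
    """Run Lloyd's algorithm and return the number of iterations
    until the cluster assignment stops changing."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = None
    for it in range(1, max_iter + 1):
        # Assign each point to its nearest center (squared distances).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_labels = d2.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return it  # assignment is stable: local optimum reached
        labels = new_labels
        # Move each center to the mean of its cluster.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):  # keep the old center if the cluster is empty
                centers[j] = pts.mean(axis=0)
    return max_iter

# Smoothed instance: adversarial means in [0,1]^d plus Gaussian noise.
rng = np.random.default_rng(0)
n, d, sigma = 200, 2, 0.05
means = rng.random((n, d))                       # adversary's point placement
X = means + sigma * rng.standard_normal((n, d))  # Gaussian perturbation
print(lloyd_iterations(X, k=5, seed=1))
```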
Counter-examples
There are problems that provably cannot be solved in smoothed polynomial time. A prominent example is the computation of a Nash equilibrium (NE):

• Chen, Deng and Teng proved that no algorithm that is polynomial in n and 1/ε can compute an ε-approximate Nash equilibrium in a two-player game with n actions per player, unless PPAD ⊆ P. In particular, this means that there is probably no FPTAS for NE. They also proved that no algorithm for computing NE in a two-player game has smoothed complexity polynomial in n and 1/s, where s is the size of the input perturbation, unless PPAD ⊆ RP. In particular, the smoothed complexity of the Lemke-Howson algorithm is probably not polynomial.

• Boodaghians, Brakensiek, Hopkins and Rubinstein proved that computing NE in a two-player game is PPAD-hard (under randomized reductions) even when the input is smoothed with noise of constant magnitude.