== Simplex algorithm for linear programming ==

The simplex algorithm is very efficient in practice, and it is one of the dominant algorithms for linear programming. On practical problems, the number of steps taken by the algorithm is typically linear in the number of variables and constraints. Let T(\mathbf A, \mathbf b, \mathbf c) denote the number of steps the algorithm takes on the linear program of maximizing \mathbf c^\top \mathbf x subject to \mathbf A \mathbf x \le \mathbf b, with d variables and n constraints, and let \hat{\mathbf A}, \hat{\mathbf b} be perturbations whose entries are independent Gaussians with standard deviation \sigma. Then the smoothed complexity satisfies

::C_s(n,d,\sigma) := \max_{\bar{\mathbf A}, \bar{\mathbf b}, \mathbf c} ~ \mathbb{E}_{\hat{\mathbf A},\hat{\mathbf b}}\left[T(\bar{\mathbf A} + \hat{\mathbf A}, \bar{\mathbf b} + \hat{\mathbf b}, \mathbf c)\right] = \mathrm{poly}(d,\log n, \sigma^{-1}),

where the maximum is over adversarially chosen inputs \bar{\mathbf A}, \bar{\mathbf b}, \mathbf c. This bound holds for a specific pivot rule called the shadow vertex rule. The shadow vertex rule is slower in practice than more commonly used pivot rules such as Dantzig's rule or the steepest edge rule.
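To make the perturbation model concrete, the following minimal sketch (assuming NumPy and SciPy are available) fixes one instance (\bar{\mathbf A}, \bar{\mathbf b}, \mathbf c) and averages the simplex iteration count over Gaussian perturbations. SciPy's linprog with the HiGHS dual simplex backend stands in for T; it does not implement the shadow vertex rule, so this only illustrates the experimental setup, not the theorem.

<syntaxhighlight lang="python">
# Empirical sketch of the smoothed perturbation model for the simplex method.
# The instance (A_bar, b_bar, c) is fixed; only the Gaussian noise varies.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, sigma = 50, 10, 0.1            # constraints, variables, noise level

A_bar = rng.uniform(-1, 1, (n, d))   # fixed input (here arbitrary, not worst-case)
b_bar = np.ones(n)
c = rng.uniform(-1, 1, d)

iters = []
for _ in range(20):                  # average T over the Gaussian perturbation
    A = A_bar + sigma * rng.standard_normal((n, d))
    b = b_bar + sigma * rng.standard_normal(n)
    # linprog minimizes, so negate c to maximize c^T x; res.nit is the
    # iteration count of the HiGHS dual simplex solver.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None), method="highs-ds")
    if res.status == 0:              # skip unbounded/infeasible perturbations
        iters.append(res.nit)

print(f"mean simplex iterations over perturbations: {np.mean(iters):.1f}")
</syntaxhighlight>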
== Local search for combinatorial optimization ==

One example is the 2-opt heuristic for the
traveling salesman problem. It can take exponentially many iterations until it finds a locally optimal solution, although in practice the running time is subquadratic in the number of vertices. The
approximation ratio, which is the ratio between the length of the output of the algorithm and the length of the optimal solution, tends to be close to 1 in practice but can be large in the theoretical worst case. One class of problem instances is given by n points in the box [0,1]^d, whose pairwise distances are measured according to a
norm. Already in two dimensions, the 2-opt heuristic might take exponentially many iterations until finding a
local optimum. In this setting, one can analyze the perturbation model in which the vertices v_1,\dots,v_n are sampled independently according to probability distributions with density functions f_1,\dots,f_n : [0,1]^d \rightarrow [0,\theta]. For \theta = 1, the points are uniformly distributed; the larger \theta > 1 is, the more the adversary can concentrate the distributions and thus increase the likelihood of hard problem instances. In this perturbation model, the expected number of iterations of the 2-opt heuristic, as well as the approximation ratio of the resulting output, are bounded by polynomial functions of n and \theta.
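To illustrate, here is a minimal sketch of the 2-opt heuristic itself (assuming only NumPy; the function name and termination test are illustrative, not from a specific library), run on uniform points in [0,1]^2, i.e. the \theta = 1 case of the model above, counting the improving exchanges until a local optimum is reached:

<syntaxhighlight lang="python">
import numpy as np

def two_opt(points):
    """Run 2-opt to a local optimum; return the tour and the number of
    improving exchanges (the 'iterations' bounded in the smoothed model)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    tour = list(range(n))
    steps = 0
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # ranges chosen so that edges (i,i+1) and (j,j+1) share no vertex
            for j in range(i + 2, n - 1 if i == 0 else n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # exchange edges (a,b),(c,d) for (a,c),(b,d) if that is shorter
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                    steps += 1
                    improved = True
    return tour, steps

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(100, 2))  # f_i = 1 on [0,1]^2, i.e. theta = 1
tour, steps = two_opt(points)
print(f"local optimum reached after {steps} improving exchanges")
</syntaxhighlight>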
Another local search algorithm for which smoothed analysis was successful is the
k-means method. Given n points in [0,1]^d, it is NP-hard to find a partition into clusters that minimizes the k-means objective, i.e., the sum of squared distances from each point to the centroid of its cluster.
Lloyd's algorithm is widely used and very fast in practice, although it can take e^{\Omega(n)} iterations in the worst case to find a locally optimal solution.
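For concreteness, the following is a minimal sketch of Lloyd's algorithm (assuming NumPy; the names and the stopping rule are illustrative choices, not a canonical interface), counting iterations until the cluster assignment stabilizes:

<syntaxhighlight lang="python">
import numpy as np

def lloyd(points, k, rng):
    """Return the number of iterations until the clustering stops changing."""
    # initialize centers at k distinct input points (fancy indexing copies)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = None
    for iteration in range(1, 10_000):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        new_labels = d2.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return iteration
        labels = new_labels
        # update step: move each center to the centroid of its cluster
        for j in range(k):
            members = points[labels == j]
            if len(members) > 0:     # keep the old center if a cluster empties
                centers[j] = members.mean(axis=0)
    return -1                        # did not converge within the cap

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(500, 2))   # n = 500 points in [0,1]^2
print("Lloyd iterations:", lloyd(pts, k=10, rng=rng))
</syntaxhighlight>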
However, assuming that the points are drawn from independent Gaussian distributions, each with mean in [0,1]^d and standard deviation \sigma, the expected number of iterations of the algorithm is bounded by a polynomial in n, d and \sigma^{-1}.

== Counter-examples ==