Seidel gave an algorithm for low-dimensional linear programming that may be adapted to the LP-type problem framework. Seidel's algorithm takes as input the set and a separate set (initially empty) of elements known to belong to the optimal basis. It then considers the remaining elements one-by-one in a random order, performing violation tests for each one and, depending on the result, performing a recursive call to the same algorithm with a larger set of known basis elements. It may be expressed with the following
pseudocode:
 function seidel(S, f, X) is
     R := empty set
     B := X
     for x in a random permutation of S:
         if f(B) ≠ f(B ∪ {x}):
             B := seidel(R, f, X ∪ {x})
         R := R ∪ {x}
     return B

In a problem with combinatorial dimension d, the violation test in the ith iteration of the algorithm fails only when x is one of the d remaining basis elements, which happens with probability at most d/i. Based on this calculation, it can be shown that overall the expected number of violation tests performed by the algorithm is O(d! n), linear in n but worse than exponential in d.
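The pseudocode can be transcribed almost directly into Python. The following sketch (an illustration, not code from the original source) instantiates it for a toy LP-type problem of combinatorial dimension 2, the smallest interval enclosing a set of numbers, where the violation test is just a comparison of interval endpoints:

```python
import random

def f(A):
    # Objective for the toy LP-type problem: the smallest interval
    # enclosing the set A, represented by its two endpoints.
    return (min(A), max(A)) if A else None

def seidel(S, X=frozenset()):
    # S: the element set; X: elements known to lie in the optimal basis.
    R = frozenset()                              # elements already processed
    B = X                                        # current candidate basis
    for x in random.sample(sorted(S), len(S)):   # random permutation of S
        if f(B) != f(B | {x}):                   # violation test
            # x must belong to the basis: recurse on the already-processed
            # elements with x moved into the known-basis set.
            B = seidel(R, X | {x})
        R = R | {x}
    return B
```

For example, `seidel(frozenset([3, 1, 4, 1, 5, 9, 2, 6]))` returns a basis whose value under `f` equals that of the whole set, namely the interval (1, 9).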
Clarkson defines two algorithms, a recursive algorithm and an iterative algorithm, for linear programming based on random sampling techniques, and suggests a combination of the two that calls the iterative algorithm from the recursive algorithm. The recursive algorithm repeatedly chooses random samples whose size is approximately the square root of the input size, solves the sampled problem recursively, and then uses violation tests to find a subset of the remaining elements that must include at least one basis element:
 function recursive(S, f) is
     X := empty set
     repeat
         R := a random subset of S with size d√n
         B := basis for R ∪ X, computed recursively
         V := {x | f(B) ≠ f(B ∪ {x})}
         X := X ∪ V
     until V is empty
     return B

In each iteration, the expected size of V is O(√n), and whenever V is nonempty it includes at least one new element of the eventual basis of S. Therefore, the algorithm performs at most d iterations, each of which performs n violation tests and makes a single recursive call to a subproblem of size O(d√n).

Clarkson's iterative algorithm assigns weights to each element of S, initially all of them equal. It then chooses a set R of 9d² elements from S at random, and computes the sets B and V as in the previous algorithm. If the total weight of V is at most 2/(9d − 1) times the total weight of S (as happens with constant probability) then the algorithm doubles the weights of every element of V, and as before it repeats this process until V becomes empty. In each iteration, the weight of the optimal basis can be shown to increase at a greater rate than the total weight of S, from which it follows that the algorithm must terminate within O(d log n) iterations.

By using the recursive algorithm to solve a given problem, switching to the iterative algorithm for its recursive calls, and then switching again to Seidel's algorithm for the calls made by the iterative algorithm, it is possible to solve a given LP-type problem using O(d²n + d⁴√n log n + d^(d/2 + O(1)) log n) violation tests. When applied to a linear program, this algorithm can be interpreted as being a dual
simplex method. With certain additional computational primitives beyond the violation test and basis computation primitives, this method can be made deterministic.
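To make the reweighting idea concrete, the following Python sketch (an illustration, not Clarkson's original code) instantiates the iterative algorithm for the same toy problem, the smallest interval enclosing a set of numbers (d = 2). The helper `solve_small` is a stand-in for the recursive basis computation, which for this toy problem can be solved directly, and the weighted sample is drawn with replacement for simplicity:

```python
import random

def f(A):
    # Objective: the smallest interval enclosing the set A.
    return (min(A), max(A)) if A else None

def solve_small(R):
    # Stand-in for the recursive basis computation on the sample:
    # for interval enclosure the basis is just the two extremes.
    return frozenset({min(R), max(R)})

def clarkson_iterative(S, d=2, seed=0):
    rng = random.Random(seed)
    w = {x: 1 for x in S}                  # initially equal weights
    r = 9 * d * d                          # sample size 9d² from the analysis
    pool = sorted(S)
    while True:
        # weighted random sample (with replacement, for simplicity)
        R = set(rng.choices(pool, weights=[w[x] for x in pool], k=r))
        B = solve_small(R)
        V = [x for x in S if f(B) != f(B | {x})]   # violators of the sample's basis
        if not V:
            return B
        # reweight only when the violators carry little total weight,
        # which happens with constant probability
        if sum(w[x] for x in V) <= 2 * sum(w.values()) / (9 * d - 1):
            for x in V:
                w[x] *= 2                  # double the weight of each violator
```

Each doubling step boosts at least one element of the eventual basis, so the extreme elements quickly come to dominate the sampling distribution and the loop terminates.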
Matoušek, Sharir, and Welzl describe an algorithm that uses an additional property of linear programs that is not always held by other LP-type problems: that all bases have the same cardinality as each other. If an LP-type problem does not have this property, it can be made to have it by adding d new dummy elements and by modifying the function f to return the ordered pair of its old value and of the number min(d, |A|), ordered lexicographically. Rather than adding elements of S one at a time, or finding samples of the elements, their algorithm removes elements one at a time. At each step it maintains a basis C that may initially be the set of dummy elements. It may be described with the following pseudocode:
 function msw(S, f, C) is
     if S = C then
         return C
     choose a random element x of S \ C
     B := msw(S \ x, f, C)
     if f(B) ≠ f(B ∪ {x}) then
         B := basis(B ∪ {x})
         B := msw(S, f, B)
     return B

In most of the recursive calls of the algorithm, the violation test succeeds and the if statement is skipped. However, with a small probability the violation test fails and the algorithm makes an additional basis computation and then an additional recursive call. As the authors show, the expected time for the algorithm is linear in n and exponential in the square root of d log n. By combining this method with Clarkson's recursive and iterative procedures, these two forms of time dependence can be separated out from each other, resulting in an algorithm that performs O(dn) violation tests in the outer recursive algorithm and a number of violation tests that is exponential in the square root of d log d in the lower levels of the algorithm.

==Variations==