The alternating direction method of multipliers (ADMM) is a variant of the augmented Lagrangian scheme that uses partial updates for the dual variables. The method is often applied to problems of the form

: \min_x f(x) + g(Mx).

This is equivalent to the constrained problem

: \min_{x,y} f(x) + g(y), \quad \text{subject to}\quad Mx = y.

Though this change may seem trivial, the problem can now be attacked using methods of constrained optimization (in particular, the augmented Lagrangian method), and the objective function is separable in x and y. The dual update requires solving a proximity function in x and y simultaneously; the ADMM technique allows this problem to be solved approximately by first solving for x with y fixed, and then solving for y with x fixed. Rather than iterating this process until convergence (as in the Jacobi method), the ADMM algorithm proceeds directly to updating the dual variable and then repeats the process. This is not equivalent to exact minimization, but the method still converges to the correct solution under some assumptions. Because it does not minimize, or even approximately minimize, the augmented Lagrangian at each step, the algorithm is distinct from the ordinary augmented Lagrangian method. ADMM can be viewed as an application of the Douglas–Rachford splitting algorithm, which is in turn an instance of the proximal point algorithm; details can be found in ref.

There are several modern software packages, including YALL1 (2009), SpaRSA (2009), and SALSA (2009), that solve basis pursuit and its variants using ADMM. There are also packages that use ADMM to solve more general problems, some of which can exploit multiple computing cores (e.g., SNAPVX (2015), parADMM (2016)).

== Stochastic optimization ==
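Before turning to stochastic variants, the alternating scheme described in the previous section can be made concrete. The following is a minimal sketch, not drawn from any of the packages above, of ADMM applied to the lasso problem, i.e. f(x) = ½‖Ax − b‖², g(y) = λ‖y‖₁ with M = I; it uses NumPy, and all names (`admm_lasso`, `soft_threshold`, the penalty parameter `rho`, the iteration count) are illustrative choices:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (here M = I)."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # The x-step solves the linear system (A^T A + rho I) x = A^T b + rho (y - u),
    # so the matrix can be formed once outside the loop.
    Q = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(Q, Atb + rho * (y - u))  # minimize over x with y fixed
        y = soft_threshold(x + u, lam / rho)         # minimize over y with x fixed
        u = u + x - y                                # dual update, then repeat
    return y  # y carries the exact zeros produced by soft thresholding

# Small usage example on a noiseless sparse recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

Note that, as the section describes, neither inner minimization is iterated to convergence: each pass performs one x-step, one y-step, and one dual update.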