Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form

\left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b)\right) \right] + \lambda \|\mathbf{w}\|^2.

We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value for \lambda yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing to a quadratic programming problem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed.
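As a concrete point of reference, the objective can be evaluated in a few lines of NumPy. This is a minimal sketch, not part of any library; the function name and argument layout are chosen here purely for illustration.

```python
import numpy as np

def svm_objective(w, b, X, y, lam):
    """Soft-margin SVM objective: mean hinge loss plus lam * ||w||^2.

    X is an (n, d) array of inputs; y is an (n,) array of +/-1 labels.
    """
    margins = y * (X @ w - b)               # y_i (w^T x_i - b)
    hinge = np.maximum(0.0, 1.0 - margins)  # max(0, 1 - y_i (w^T x_i - b))
    return hinge.mean() + lam * np.dot(w, w)
```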
=== Primal ===

Minimizing the expression above can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. For each i \in \{1,\,\ldots,\,n\} we introduce a variable \zeta_i = \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b)\right). Note that \zeta_i is the smallest nonnegative number satisfying y_i(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b) \geq 1 - \zeta_i. Thus we can rewrite the optimization problem as follows:

\begin{align}
&\text{minimize } \frac 1 n \sum_{i=1}^n \zeta_i + \lambda \|\mathbf{w}\|^2 \\[0.5ex]
&\text{subject to } y_i\left(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b\right) \geq 1 - \zeta_i \, \text{ and } \, \zeta_i \geq 0,\, \text{for all } i.
\end{align}

This is called the primal problem.
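The primal can be handed directly to an off-the-shelf solver. The sketch below assumes the CVXPY modeling library; any quadratic programming package would serve equally well, and the toy data is generated on the spot.

```python
import numpy as np
import cvxpy as cp

# Toy data: n points in d dimensions with labels in {-1, +1}.
rng = np.random.default_rng(0)
n, d, lam = 40, 2, 0.1
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=n) > 0, 1.0, -1.0)

# Variables of the primal problem: w, b, and the slacks zeta_i.
w, b, zeta = cp.Variable(d), cp.Variable(), cp.Variable(n)
objective = cp.Minimize(cp.sum(zeta) / n + lam * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w - b) >= 1 - zeta,  # y_i (w^T x_i - b) >= 1 - zeta_i
               zeta >= 0]
cp.Problem(objective, constraints).solve()
print(w.value, b.value)
```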
=== Dual ===

By solving for the Lagrangian dual of the above problem, one obtains the simplified problem

\begin{align}
&\text{maximize}\,\, f(c_1 \ldots c_n) = \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_i c_i(\mathbf{x}_i^\mathsf{T} \mathbf{x}_j)y_j c_j, \\
&\text{subject to } \sum_{i=1}^n c_iy_i = 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.
\end{align}

This is called the
dual problem. Since the dual maximization problem is a quadratic function of the c_i subject to linear constraints, it is efficiently solvable by
quadratic programming algorithms. Here, the variables c_i are defined such that \mathbf{w} = \sum_{i=1}^n c_iy_i \mathbf{x}_i. Moreover, c_i = 0 exactly when \mathbf{x}_i lies on the correct side of the margin, and 0 < c_i < \frac{1}{2n\lambda} when \mathbf{x}_i lies on the margin's boundary. It follows that \mathbf{w} can be written as a linear combination of the support vectors. The offset, b, can be recovered by finding an \mathbf{x}_i on the margin's boundary and solving y_i(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b) = 1 \iff b = \mathbf{w}^\mathsf{T} \mathbf{x}_i - y_i. (Note that y_i^{-1}=y_i since y_i=\pm 1.)
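Given dual coefficients c_i from any quadratic programming solver, this recovery is mechanical. A minimal sketch, assuming the c_i have already been computed; the tolerance guards against floating-point noise when testing the strict inequalities.

```python
import numpy as np

def recover_w_and_b(c, X, y, lam, tol=1e-8):
    """Recover (w, b) from solved dual coefficients c.

    w = sum_i c_i y_i x_i, and b is computed from any index i with
    0 < c_i < 1/(2 n lam), whose point lies on the margin's boundary.
    """
    n = len(y)
    w = (c * y) @ X                          # w = sum_i c_i y_i x_i
    upper = 1.0 / (2 * n * lam)
    on_margin = (c > tol) & (c < upper - tol)
    i = np.flatnonzero(on_margin)[0]         # any such index works
    b = X[i] @ w - y[i]                      # from y_i (w^T x_i - b) = 1
    return w, b
```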
=== Kernel trick ===

Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points \varphi(\mathbf{x}_i). Moreover, we are given a kernel function k which satisfies k(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}_j). We know the classification vector \mathbf{w} in the transformed space satisfies \mathbf{w} = \sum_{i=1}^n c_iy_i\varphi(\mathbf{x}_i), where the c_i are obtained by solving the optimization problem

\begin{align}
\text{maximize}\,\, f(c_1 \ldots c_n) &= \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_i(\varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}_j))y_jc_j \\
&= \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_ik(\mathbf{x}_i, \mathbf{x}_j)y_jc_j \\
\text{subject to } \sum_{i=1}^n c_i y_i &= 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.
\end{align}

The coefficients c_i can be solved for using quadratic programming, as before. Again, we can find some index i such that 0 < c_i < \frac{1}{2n\lambda}, so that \varphi(\mathbf{x}_i) lies on the boundary of the margin in the transformed space, and then solve

\begin{align}
b = \mathbf{w}^\mathsf{T} \varphi(\mathbf{x}_i) - y_i &= \left[\sum_{j=1}^n c_jy_j\varphi(\mathbf{x}_j) \cdot \varphi(\mathbf{x}_i)\right] - y_i \\
&= \left[\sum_{j=1}^n c_jy_jk(\mathbf{x}_j, \mathbf{x}_i)\right] - y_i.
\end{align}

Finally, new points are classified by computing

\mathbf{z} \mapsto \sgn(\mathbf{w}^\mathsf{T} \varphi(\mathbf{z}) - b) = \sgn \left(\left[\sum_{i=1}^n c_iy_ik(\mathbf{x}_i, \mathbf{z})\right] - b\right).
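Note that neither the offset nor the decision rule ever touches \varphi directly; everything goes through k. A sketch of these two computations, with a Gaussian kernel as one common choice (all names here are illustrative):

```python
import numpy as np

def gaussian_kernel(a, b, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), one common choice of kernel.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_offset(c, X, y, i, kernel):
    # b = [sum_j c_j y_j k(x_j, x_i)] - y_i, where i is any index
    # with 0 < c_i < 1/(2 n lam).
    return sum(cj * yj * kernel(xj, X[i]) for cj, yj, xj in zip(c, y, X)) - y[i]

def make_classifier(c, X, y, b, kernel):
    """Return the decision rule z -> sgn([sum_i c_i y_i k(x_i, z)] - b)."""
    def classify(z):
        s = sum(ci * yi * kernel(xi, z) for ci, yi, xi in zip(c, y, X))
        return np.sign(s - b)
    return classify
```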
=== Modern methods ===

Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets: sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high.
==== Sub-gradient descent ====

Sub-gradient descent algorithms for the SVM work directly with the expression

f(\mathbf{w}, b) = \left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(\mathbf{w}^\mathsf{T} \mathbf{x}_i - b)\right) \right] + \lambda \|\mathbf{w}\|^2.

Note that f is a convex function of \mathbf{w} and b. As such, traditional gradient descent (or SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's sub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale with n, the number of data points.
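A minimal batch-mode sketch follows; the step-size schedule is an illustrative choice, and stochastic variants sample a single term of the sum per iteration rather than using the full sub-gradient.

```python
import numpy as np

def svm_subgradient_descent(X, y, lam, steps=1000, eta0=1.0):
    """Sub-gradient descent on f(w, b).

    Where the hinge is not differentiable, the zero branch is taken,
    which is a valid element of the sub-gradient.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(1, steps + 1):
        margins = y * (X @ w - b)
        active = margins < 1                   # terms with nonzero hinge slope
        g_w = 2 * lam * w - X[active].T @ y[active] / n
        g_b = y[active].sum() / n
        eta = eta0 / t                         # diminishing step size
        w -= eta * g_w
        b -= eta * g_b
    return w, b
```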
==== Coordinate descent ====

Coordinate descent algorithms for the SVM work from the dual problem

\begin{align}
&\text{maximize}\,\, f(c_1 \ldots c_n) = \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_i c_i(\mathbf{x}_i^\mathsf{T} \mathbf{x}_j)y_j c_j,\\
&\text{subject to } \sum_{i=1}^n c_iy_i = 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.
\end{align}

For each i \in \{1,\, \ldots,\, n\}, iteratively, the coefficient c_i is adjusted in the direction of \partial f/ \partial c_i. Then, the resulting vector of coefficients (c_1',\,\ldots,\,c_n') is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven; a sketch of one such update scheme is given below.
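This sketch makes one simplifying assumption not stated above: the equality constraint \sum_i c_iy_i = 0 is dropped, which corresponds to fixing the offset b = 0, so the projection reduces to clipping each coordinate to the box [0, \frac{1}{2n\lambda}]. Each coordinate is moved by an exact one-dimensional Newton step rather than a fixed-size gradient step.

```python
import numpy as np

def dual_coordinate_ascent(X, y, lam, sweeps=50):
    """Coordinate ascent on the dual (simplified: offset b fixed to 0).

    Maintains w = sum_i c_i y_i x_i incrementally, so each coordinate
    update costs O(d) rather than O(n d).
    """
    n, d = X.shape
    C = 1.0 / (2 * n * lam)                 # upper bound on each c_i
    c, w = np.zeros(n), np.zeros(d)
    for _ in range(sweeps):
        for i in range(n):
            grad = 1.0 - y[i] * (X[i] @ w)           # partial f / partial c_i
            step = grad / (X[i] @ X[i] + 1e-12)      # exact 1-D Newton step
            new_ci = np.clip(c[i] + step, 0.0, C)    # project onto the box
            w += (new_ci - c[i]) * y[i] * X[i]
            c[i] = new_ci
    return c, w
```

== Empirical risk minimization ==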