== Initial motivation ==

A linear programming problem is one in which we wish to maximize or minimize a linear objective function of real variables over a polytope. In semidefinite programming, we instead use real-valued vectors and are allowed to take the dot product of vectors; nonnegativity constraints on real variables in LP (linear programming) are replaced by semidefiniteness constraints on matrix variables in SDP (semidefinite programming). Specifically, a general semidefinite programming problem can be defined as any mathematical programming problem of the form

: \begin{array}{rl} {\displaystyle \min_{x^1, \ldots, x^n \in \mathbb{R}^n}} & {\displaystyle \sum_{i,j \in [n]} c_{i,j} (x^i \cdot x^j)} \\ \text{subject to} & {\displaystyle \sum_{i,j \in [n]} a_{i,j,k} (x^i \cdot x^j) \leq b_k} \text{ for all } k \end{array}

where the c_{i,j}, the a_{i,j,k}, and the b_k are real numbers and x^i \cdot x^j is the dot product of x^i and x^j.
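To make the vector formulation concrete, here is a minimal numerical sketch in Python using NumPy. All problem data — the vectors x^i, the coefficients c_{i,j} and a_{i,j,k}, and the bound b_1 — are made-up values for illustration; the snippet simply evaluates the objective and a single constraint at one fixed candidate point.

```python
import numpy as np

# Made-up data for n = 2: two vectors x^1, x^2 in R^2, objective
# coefficients c_{i,j}, and a single constraint (k = 1).
x = np.array([[1.0, 2.0],    # x^1
              [0.5, -1.0]])  # x^2
c = np.array([[1.0, 0.0],
              [2.0, 3.0]])
a1 = np.ones((2, 2))         # a_{i,j,1}
b1 = 10.0

# gram[i, j] = x^i . x^j, so sums of the form
# sum_{i,j} c_{i,j} (x^i . x^j) become elementwise
# products with the Gram matrix, summed over all entries.
gram = x @ x.T

objective = np.sum(c * gram)         # sum_{i,j} c_{i,j} (x^i . x^j)
feasible = np.sum(a1 * gram) <= b1   # sum_{i,j} a_{i,j,1} (x^i . x^j) <= b_1

print(objective, feasible)           # → 5.75 True
```

A Gram matrix such as `gram` is always positive semidefinite (all of its eigenvalues are non-negative), which is exactly the equivalence exploited by the matrix formulation in the next section; `np.linalg.eigvalsh(gram)` can be used to check this numerically.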
== Equivalent formulations ==

An n \times n matrix M is said to be positive semidefinite if it is the Gram matrix of some vectors (i.e. if there exist vectors x^1, \ldots, x^n such that m_{i,j} = x^i \cdot x^j for all i,j). If this is the case, we denote this as M \succeq 0. Note that there are several other equivalent definitions of being positive semidefinite; for example, positive semidefinite matrices are self-adjoint matrices that have only non-negative eigenvalues.

Denote by \mathbb{S}^n the space of all n \times n real symmetric matrices. The space is equipped with the inner product (where {\rm trace} denotes the trace)

: \langle A,B\rangle := {\rm trace}(A^T B) = \sum_{i,j=1}^n A_{ij}B_{ij}.

We can rewrite the mathematical program given in the previous section equivalently as

: \begin{array}{rl} {\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle \\ \text{subject to} & \langle A_k, X \rangle \leq b_k, \quad k = 1,\ldots,m \\ & X \succeq 0, \end{array}

where entry i,j of C is given by \frac{c_{i,j} + c_{j,i}}{2} from the previous section and A_k is the symmetric n \times n matrix whose i,jth entry is \frac{a_{i,j,k}+a_{j,i,k}}{2} from the previous section. Thus, the matrices C and A_k are symmetric and the above inner products are well-defined.

Note that if we add
slack variables appropriately, this SDP can be converted to an equational form:

: \begin{array}{rl} {\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle \\ \text{subject to} & \langle A_k, X \rangle = b_k, \quad k = 1,\ldots,m \\ & X \succeq 0. \end{array}

For convenience, an SDP may be specified in a slightly different, but equivalent, form. For example, linear expressions involving nonnegative scalar variables may be added to the program specification. This remains an SDP because each variable can be incorporated into the matrix X as a diagonal entry (X_{ii} for some i). To ensure that X_{ii} \geq 0, constraints X_{ij} = 0 can be added for all j \neq i. As another example, note that for any positive semidefinite matrix X, there exists a set of vectors \{ v_i \} such that the i,j entry of X is X_{ij} = (v_i, v_j), the scalar product of v_i and v_j. Therefore, SDPs are often formulated in terms of linear expressions on scalar products of vectors. Given the solution to the SDP in the standard form, the vectors \{ v_i \} can be recovered in O(n^3) time (e.g., by using an incomplete Cholesky decomposition of X).

== Relations to other optimization problems ==