One of the variants of the matrix completion problem is to find the lowest-rank matrix X which matches the matrix M, which we wish to recover, on all entries in the set E of observed entries. The mathematical formulation of this problem is as follows:

\begin{align}
& \underset{X}{\text{min}} & \operatorname{rank}(X) \\
& \text{subject to} & X_{ij} = M_{ij} & \;\; \forall (i,j) \in E
\end{align}

Candès and Recht proved that, under assumptions on the sampling of the observed entries and with sufficiently many sampled entries, this problem has a unique solution with high probability. An equivalent formulation, given that the matrix M to be recovered is known to be of rank r, is to find the matrix X of rank r that satisfies X_{ij} = M_{ij} \;\; \forall (i,j) \in E.
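The rank-r feasibility formulation above can be sketched in a few lines of numpy. The matrix M, the observed-entry mask E, and the helper `is_feasible` below are hypothetical illustrations, not part of the standard treatment:

```python
import numpy as np

# Hypothetical example: a rank-1 matrix M we wish to recover.
M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])

# E: the set of observed entries, represented here as a boolean mask.
E = np.array([[True, True, False],
              [True, False, True],
              [False, True, True]])

def is_feasible(X, M, E, r, tol=1e-9):
    """Check the equivalent formulation: X has rank r and
    matches M on every observed entry (i, j) in E."""
    matches = np.allclose(X[E], M[E], atol=tol)
    rank_ok = np.linalg.matrix_rank(X, tol=tol) == r
    return matches and rank_ok

# The true matrix is trivially a feasible completion of itself.
print(is_feasible(M, M, E, r=1))  # True
```

Any matrix passing this check is a candidate solution; the theory above concerns when such a solution is unique.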
== Assumptions ==
A number of assumptions on the sampling of the observed entries and on the number of sampled entries are frequently made to simplify the analysis and to ensure that the problem is not underdetermined.
=== Uniform sampling of observed entries ===
To make the analysis tractable, it is often assumed that the set E of observed entries, of fixed cardinality, is sampled uniformly at random from the collection of all subsets of entries of cardinality |E|. To further simplify the analysis, it is instead assumed that E is constructed by Bernoulli sampling, i.e. that each entry is observed independently with probability p. If p is set to \frac{N}{mn}, where N is the desired expected cardinality of E and m, \; n are the dimensions of the matrix (with m \leq n without loss of generality), then |E| is within O(n \log n) of N with high probability; thus Bernoulli sampling is a good approximation of uniform sampling. Another simplification is to assume that entries are sampled independently and with replacement.
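This concentration of |E| around N can be illustrated with a small numpy simulation; the dimensions m, n and the target N below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix dimensions (m <= n) and desired expected number of observations N.
m, n = 200, 500
N = 10_000
p = N / (m * n)  # per-entry observation probability

# Bernoulli sampling: each entry is observed independently with probability p.
mask = rng.random((m, n)) < p
observed = int(mask.sum())

# |E| concentrates around N; the deviation is small relative to N.
print(observed, abs(observed - N))
```

The standard deviation of |E| is \sqrt{Np(1-p)/p} \approx \sqrt{N}, so the observed count typically differs from N by only about 1% here.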
=== Lower bound on number of observed entries ===
Suppose the m by n matrix M (with m \leq n) we are trying to recover has rank r. There is an information-theoretic lower bound on how many entries must be observed before M can be uniquely reconstructed. The set of m by n matrices with rank less than or equal to r is an algebraic variety in {\mathbb C}^{m \times n} with dimension (n+m)r - r^2. Using this result, one can show that at least 4nr - 4r^2 entries must be observed for matrix completion in {\mathbb C}^{n \times n} to have a unique solution when r \leq n/2. Secondly, there must be at least one observed entry per row and column of M. The singular value decomposition of M is given by U \Sigma V^\dagger. If column i is unobserved, it is easy to see that the i^{\text{th}} right singular vector of M, v_i, can be changed to some arbitrary value and still yield a matrix matching M over the set of observed entries. Similarly, if row j is unobserved, the j^{\text{th}} left singular vector of M, u_j, can be arbitrary. If we assume Bernoulli sampling of the set of observed entries, the coupon collector effect implies that on the order of O(n \log n) entries must be observed to ensure that there is an observation from each row and column with high probability. Combining the necessary conditions and assuming that r \ll m, n (a valid assumption for many practical applications), the lower bound on the number of observed entries required to prevent the problem of matrix completion from being underdetermined is on the order of nr \log n.
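The coupon collector effect can be illustrated with a small simulation, using sampling with replacement as in the simplification above; the size n = 500 and the constant factor 3 are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def all_rows_cols_hit(n, num_samples, rng):
    """Sample entries of an n-by-n matrix uniformly with replacement and
    check that every row and every column receives at least one sample."""
    rows = rng.integers(0, n, size=num_samples)
    cols = rng.integers(0, n, size=num_samples)
    return len(np.unique(rows)) == n and len(np.unique(cols)) == n

n = 500
# On the order of n log n samples: every row and column is hit
# with high probability (coupon collector).
enough = int(3 * n * np.log(n))
# Far fewer samples: some row or column is almost surely missed.
too_few = n

print(all_rows_cols_hit(n, enough, rng))
print(all_rows_cols_hit(n, too_few, rng))
```

With 3n \log n samples the probability that a given row is missed is (1 - 1/n)^{3 n \log n} \approx n^{-3}, so a union bound over all rows and columns leaves only a vanishing failure probability.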
== Incoherence ==
The concept of incoherence arose in compressed sensing. It is introduced in the context of matrix completion to ensure that the singular vectors of M are not too "sparse", in the sense that all coordinates of each singular vector are of comparable magnitude rather than just a few coordinates having significantly larger magnitudes. The standard basis vectors are then undesirable as singular vectors, while the vector \frac{1}{\sqrt{n}} \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} in \mathbb{R}^n is desirable. As an example of what could go wrong if the singular vectors are sufficiently "sparse", consider the m by n matrix \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} with singular value decomposition I_m \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} I_n. Almost all the entries of M must be sampled before it can be reconstructed.

Candès and Recht define the coherence of a matrix U whose column space is an r-dimensional subspace of \mathbb{R}^n as \mu(U) = \frac{n}{r} \max_{1 \leq i \leq n} \lVert P_U e_i \rVert^2, where P_U is the orthogonal projection onto U. Incoherence then asserts that, given the singular value decomposition U \Sigma V^\dagger of the m by n matrix M,

* \mu(U), \; \mu(V) \leq \mu_0
* the entries of \sum_k u_k v_k^\dagger have magnitudes upper bounded by \mu_1 \sqrt{\frac{r}{mn}}

for some \mu_0, \; \mu_1.

== Low rank matrix completion with noise ==