== Motivating example: computing a basis from a linearly dependent set ==

Suppose we have a vector space V over a field F and we want to express an arbitrary element \mathbf{v} \in V as a linear combination of the vectors \{\mathbf{e}_k\} \subset V, that is, we want to find coefficients \{c_k\} \subset F such that

: \mathbf{v} = \sum_k c_k \mathbf{e}_k.

If the set \{\mathbf{e}_k\} does not span V, then such coefficients do not exist for every \mathbf{v}. If \{\mathbf{e}_k\} spans V and is also linearly independent, this set forms a basis of V, and the coefficients c_k are uniquely determined by \mathbf{v}. If, however, \{\mathbf{e}_k\} spans V but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if V is infinite-dimensional.

Given that \{\mathbf{e}_k\} spans V and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:
* Removing arbitrary vectors from the set may cause it to be unable to span V before it becomes linearly independent.
* Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite.
* In some applications, it may be an advantage to use more vectors than necessary to represent \mathbf{v}.

This means that we want to find the coefficients c_k without removing elements from \{\mathbf{e}_k\}. The coefficients c_k will no longer be uniquely determined by \mathbf{v}. Therefore, the vector \mathbf{v} can be represented as a linear combination of \{\mathbf{e}_k\} in more than one way.
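This non-uniqueness can be illustrated with a small numerical sketch (NumPy; the redundant set \{(1,0),\,(0,1),\,(1,1)\} in \mathbb{R}^2 is a hypothetical example chosen here, not one from the text):

```python
import numpy as np

# Hypothetical redundant spanning set of R^2: three vectors, linearly dependent.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # rows are e_1, e_2, e_3

v = np.array([2.0, 3.0])

# Two different coefficient vectors that represent the same v:
c1 = np.array([2.0, 3.0, 0.0])
c2 = np.array([1.0, 2.0, 1.0])
assert np.allclose(c1 @ E, v) and np.allclose(c2 @ E, v)

# One common way to single out a representative: the Moore-Penrose
# pseudoinverse yields the coefficients of minimum Euclidean norm.
c_min = np.linalg.pinv(E.T) @ v  # solves sum_k c_k e_k = v with least ||c||
assert np.allclose(c_min @ E, v)
```

The minimum-norm choice made by the pseudoinverse is only one of infinitely many valid coefficient vectors.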
== Definition ==

Let V be an inner product space and \{\mathbf{e}_k\}_{k \in \mathbb{N}} be a set of vectors in V. The set \{\mathbf{e}_k\}_{k \in \mathbb{N}} is a frame of V if it satisfies the so-called frame condition: there exist two constants 0 < A \leq B < \infty such that

: A \left\| \mathbf{v} \right\|^2 \leq \sum_{k \in \mathbb{N}} \left| \langle \mathbf{v}, \mathbf{e}_k \rangle \right|^2 \leq B \left\| \mathbf{v} \right\|^2, \quad \forall \mathbf{v} \in V.

A frame is called overcomplete (or redundant) if it is not a Riesz basis for the vector space. The redundancy of the frame is measured by the lower and upper frame bounds (or redundancy factors) A and B, respectively. That is, a frame of K \geq N normalized vectors (\|\mathbf{e}_k\| = 1) in an N-dimensional space V has frame bounds which satisfy

: A \leq \frac{K}{N} \leq B.

If the frame is a Riesz basis and is therefore linearly independent, then A \leq 1 \leq B.

The frame bounds are not unique, because numbers less than A and greater than B are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds.
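For a finite frame these quantities can be computed directly. The sketch below (NumPy; the so-called Mercedes-Benz frame of three equiangular unit vectors in \mathbb{R}^2 is an assumed example, not taken from the text) finds the optimal frame bounds and checks the bound A \leq K/N \leq B for normalized vectors:

```python
import numpy as np

# Hypothetical example: the "Mercedes-Benz" frame of three unit vectors in R^2,
# spaced 120 degrees apart. Rows of E are the frame vectors e_k.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
E = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2), unit rows

# The optimal frame bounds are the extreme eigenvalues of E^T E
# (the matrix of the frame operator, introduced below in the article).
S = E.T @ E
A, B = np.linalg.eigvalsh(S)[[0, -1]]

# For K = 3 normalized vectors in N = 2 dimensions, A <= K/N <= B must hold.
K, N = E.shape
assert A - 1e-12 <= K / N <= B + 1e-12

# This particular frame is tight: A = B = 3/2, i.e. S = (3/2) * Identity.
assert np.allclose(S, 1.5 * np.eye(2))
```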
== Analysis operator ==

If the frame condition is satisfied, then the linear operator defined as

: \mathbf{T}: V \to \ell^2, \quad \mathbf{v} \mapsto \mathbf{T}\mathbf{v} = \{\langle \mathbf{v}, \mathbf{e}_k \rangle\}_{k \in \mathbb{N}},

mapping \mathbf{v} \in V to the sequence of frame coefficients c_k = \langle \mathbf{v}, \mathbf{e}_k \rangle, is called the analysis operator. Using this definition, the frame condition can be rewritten as

: A \left\| \mathbf{v} \right\|^2 \leq \left\| \mathbf{T}\mathbf{v} \right\|^2 = \sum_k \left| \langle \mathbf{v}, \mathbf{e}_k \rangle \right|^2 \leq B \left\| \mathbf{v} \right\|^2.
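In finite dimensions the analysis operator is simply the matrix whose rows are the frame vectors. A minimal sketch (NumPy; the random 5-vector frame in \mathbb{R}^3 is an assumed example) verifying the rewritten frame condition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite frame of R^3: rows of E are the frame vectors e_k,
# so the analysis operator acts by the matrix-vector product T v = E v.
E = rng.standard_normal((5, 3))

# Optimal frame bounds: extreme eigenvalues of E^T E.
eig = np.linalg.eigvalsh(E.T @ E)
A, B = eig[0], eig[-1]

# Check A ||v||^2 <= ||T v||^2 <= B ||v||^2 on random test vectors.
for _ in range(100):
    v = rng.standard_normal(3)
    norm_sq = v @ v                      # ||v||^2
    coeff_energy = np.sum((E @ v) ** 2)  # ||T v||^2
    assert A * norm_sq - 1e-9 <= coeff_energy <= B * norm_sq + 1e-9
```

A random Gaussian matrix of this shape has full column rank almost surely, which is what guarantees A > 0 here.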
== Synthesis operator ==

The adjoint of the analysis operator is called the synthesis operator of the frame and is defined as

: \mathbf{T}^*: \ell^2 \to V, \quad \{c_k\}_{k \in \mathbb{N}} \mapsto \sum_k c_k \mathbf{e}_k.
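In finite dimensions the synthesis operator is the transpose of the analysis matrix, and the defining adjoint relation \langle \mathbf{T}\mathbf{v}, c \rangle_{\ell^2} = \langle \mathbf{v}, \mathbf{T}^* c \rangle_V can be checked directly (a sketch with an assumed random frame in \mathbb{R}^2):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((4, 2))  # hypothetical frame: rows are e_k in R^2


def analysis(v):
    """T: v -> sequence of frame coefficients <v, e_k>."""
    return E @ v


def synthesis(c):
    """T*: c -> sum_k c_k e_k."""
    return E.T @ c


# Adjoint relation: <T v, c> in l^2 equals <v, T* c> in V.
v = rng.standard_normal(2)
c = rng.standard_normal(4)
assert np.isclose(analysis(v) @ c, v @ synthesis(c))
```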
== Frame operator ==

The composition of the analysis operator and the synthesis operator leads to the frame operator, defined as

: \mathbf{S}: V \to V, \quad \mathbf{v} \mapsto \mathbf{S}\mathbf{v} = \mathbf{T}^*\mathbf{T}\mathbf{v} = \sum_k \langle \mathbf{v}, \mathbf{e}_k \rangle \mathbf{e}_k.

From this definition and linearity in the first argument of the inner product, the frame condition now yields

: A \left\| \mathbf{v} \right\|^2 \leq \left\| \mathbf{T}\mathbf{v} \right\|^2 = \langle \mathbf{S}\mathbf{v}, \mathbf{v} \rangle \leq B \left\| \mathbf{v} \right\|^2.

If the frame condition is satisfied, then the frame operator \mathbf{S} exists, as does its inverse \mathbf{S}^{-1} (the lower frame bound A > 0 guarantees invertibility). Both \mathbf{S} and \mathbf{S}^{-1} are positive definite, bounded self-adjoint operators, so the optimal bounds A and B are the infimum and supremum of the spectrum of \mathbf{S}. In finite dimensions, the frame operator is automatically trace-class, with A and B corresponding to the smallest and largest eigenvalues of \mathbf{S} or, equivalently, the squares of the smallest and largest singular values of \mathbf{T}.
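The finite-dimensional correspondence between the eigenvalues of \mathbf{S} and the singular values of \mathbf{T} can be verified numerically (a sketch with an assumed random frame of six vectors in \mathbb{R}^3):

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((6, 3))  # hypothetical frame: rows are e_k in R^3

S = E.T @ E                              # frame operator S = T* T as a matrix
eig = np.linalg.eigvalsh(S)              # eigenvalues of S, ascending
sv = np.linalg.svd(E, compute_uv=False)  # singular values of T, descending

# Optimal frame bounds: extreme eigenvalues of S, i.e. the squared
# extreme singular values of T.
A, B = eig[0], eig[-1]
assert np.isclose(A, sv[-1] ** 2) and np.isclose(B, sv[0] ** 2)

# S is symmetric positive definite, hence invertible.
assert np.all(eig > 0)
assert np.allclose(S @ np.linalg.inv(S), np.eye(3))
```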
== Relation to bases ==

The frame condition is a generalization of Parseval's identity that maintains norm equivalence between a signal in V and its sequence of coefficients in \ell^2.

If the set \{\mathbf{e}_k\} is a frame of V, it spans V. Otherwise there would exist at least one non-zero \mathbf{v} \in V orthogonal to all \mathbf{e}_k, so that

: A \left\| \mathbf{v} \right\|^2 \leq 0 \leq B \left\| \mathbf{v} \right\|^2,

contradicting either the frame condition (since A > 0) or the assumption that \mathbf{v} \neq 0.

However, a spanning set of V is not necessarily a frame. For example, consider V = \mathbb{R}^2 with the dot product, and the infinite set \{\mathbf{e}_k\} given by

: \left\{ (1,0), \, (0,1), \, \left(0, \tfrac{1}{\sqrt{2}}\right), \, \left(0, \tfrac{1}{\sqrt{3}}\right), \dotsc \right\}.

This set spans V, but since

: \sum_k \left| \langle \mathbf{e}_k, (0,1) \rangle \right|^2 = 0 + 1 + \tfrac{1}{2} + \tfrac{1}{3} + \dotsb = \infty,

we cannot choose a finite upper frame bound B. Consequently, the set \{\mathbf{e}_k\} is not a frame.

== Dual frames ==