== The zero vector ==

If one or more vectors in a sequence of vectors \mathbf{v}_1, \dots, \mathbf{v}_k is the zero vector \mathbf{0}, then the vectors \mathbf{v}_1, \dots, \mathbf{v}_k are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that i is an index (i.e. an element of \{ 1, \ldots, k \}) such that \mathbf{v}_i = \mathbf{0}. Let a_{i} := 1 (any other non-zero scalar also works) and let a_{j} := 0 for every other index j \neq i, so that a_{j} \mathbf{v}_j = 0 \mathbf{v}_j = \mathbf{0}. Simplifying a_1 \mathbf{v}_1 + \cdots + a_k\mathbf{v}_k gives:
:a_1 \mathbf{v}_1 + \cdots + a_k\mathbf{v}_k = \mathbf{0} + \cdots + \mathbf{0} + a_i \mathbf{v}_i + \mathbf{0} + \cdots + \mathbf{0} = a_i \mathbf{v}_i = a_i \mathbf{0} = \mathbf{0}.
Because not all of the scalars are zero (in particular, a_{i} \neq 0), this proves that the vectors \mathbf{v}_1, \dots, \mathbf{v}_k are linearly dependent. Consequently, the zero vector cannot belong to any collection of vectors that is linearly independent.

Now consider the special case where the sequence \mathbf{v}_1, \dots, \mathbf{v}_k has length 1 (i.e. the case where k = 1). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if \mathbf{v}_1 is any vector, then the sequence \mathbf{v}_1 (which is a sequence of length 1) is linearly dependent if and only if {{nowrap|\mathbf{v}_1 = \mathbf{0};}} equivalently, the collection \mathbf{v}_1 is linearly independent if and only if \mathbf{v}_1 \neq \mathbf{0}.
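The construction above can be sketched in a few lines of code. This is a minimal illustration; the vectors chosen are arbitrary examples, not taken from the text.

```python
# Sketch of the zero-vector argument: if some v_i is the zero vector, the
# coefficient list (0, ..., 1, ..., 0), with the 1 in position i, is a
# nontrivial combination that nevertheless sums to the zero vector.
# The vectors below are arbitrary illustrative choices.

def linear_combination(coeffs, vectors):
    """Return the vector sum of a_j * v_j, with vectors given as tuples."""
    dim = len(vectors[0])
    return tuple(sum(a * v[k] for a, v in zip(coeffs, vectors))
                 for k in range(dim))

vectors = [(1, 2), (0, 0), (3, 5)]        # v_2 is the zero vector
i = vectors.index((0, 0))
coeffs = [1 if j == i else 0 for j in range(len(vectors))]

assert any(a != 0 for a in coeffs)                    # nontrivial coefficients
assert linear_combination(coeffs, vectors) == (0, 0)  # ...yet the sum is 0
```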
== Linear dependence and independence of two vectors ==

This example considers the special case where there are exactly two vectors \mathbf{u} and \mathbf{v} from some real or complex vector space. The vectors \mathbf{u} and \mathbf{v} are linearly dependent
if and only if at least one of the following is true:
# \mathbf{u} is a scalar multiple of \mathbf{v} (explicitly, this means that there exists a scalar c such that \mathbf{u} = c \mathbf{v}), or
# \mathbf{v} is a scalar multiple of \mathbf{u} (explicitly, this means that there exists a scalar c such that \mathbf{v} = c \mathbf{u}).
If \mathbf{u} = \mathbf{0} then by setting c := 0 we have c \mathbf{v} = 0 \mathbf{v} = \mathbf{0} = \mathbf{u} (this equality holds no matter what the value of \mathbf{v} is), which shows that (1) is true in this particular case. Similarly, if \mathbf{v} = \mathbf{0} then (2) is true because \mathbf{v} = 0 \mathbf{u}. If \mathbf{u} = \mathbf{v} (for instance, if they are both equal to the zero vector \mathbf{0}) then
both (1) and (2) are true (by using c := 1 for both). If \mathbf{u} = c \mathbf{v} then \mathbf{u} \neq \mathbf{0} is only possible if c \neq 0
and \mathbf{v} \neq \mathbf{0}; in this case, it is possible to multiply both sides by \frac{1}{c} to conclude \mathbf{v} = \frac{1}{c} \mathbf{u}. This shows that if \mathbf{u} \neq \mathbf{0} and \mathbf{v} \neq \mathbf{0} then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly
independent). If \mathbf{u} = c \mathbf{v} and \mathbf{u} = \mathbf{0}, then c \mathbf{v} = \mathbf{0}, so at least one of c and \mathbf{v} must be zero. Moreover, if exactly one of \mathbf{u} and \mathbf{v} is \mathbf{0} (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). The vectors \mathbf{u} and \mathbf{v} are linearly
independent if and only if \mathbf{u} is not a scalar multiple of \mathbf{v}
and \mathbf{v} is not a scalar multiple of \mathbf{u}.
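For concreteness, in \R^2 the scalar-multiple criterion reduces to the familiar determinant test u_1 v_2 - u_2 v_1 = 0. A minimal sketch (the function name is an illustrative choice):

```python
# Two vectors in R^2 are linearly dependent exactly when one is a scalar
# multiple of the other, which happens exactly when u_1*v_2 - u_2*v_1 == 0.
def dependent_2d(u, v):
    return u[0] * v[1] - u[1] * v[0] == 0

assert dependent_2d((2, 4), (1, 2))       # (1, 2) = (1/2)(2, 4): dependent
assert dependent_2d((0, 0), (5, 7))       # the zero vector: always dependent
assert not dependent_2d((1, 1), (-3, 2))  # neither is a multiple of the other
```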
== Vectors in R2 ==

'''Three vectors:''' Consider the set of vectors \mathbf{v}_1 = (1, 1), \mathbf{v}_2 = (-3, 2), and \mathbf{v}_3 = (2, 4). The condition for linear dependence is the existence of scalars a_1, a_2, a_3, not all zero, such that
:a_1 \begin{bmatrix} 1\\1\end{bmatrix} + a_2 \begin{bmatrix} -3\\2\end{bmatrix} + a_3 \begin{bmatrix} 2\\4\end{bmatrix} =\begin{bmatrix} 0\\0\end{bmatrix},
or
:\begin{bmatrix} 1 & -3 & 2 \\ 1 & 2 & 4 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}.
Row reduce this matrix equation by subtracting the first row from the second to obtain
:\begin{bmatrix} 1 & -3 & 2 \\ 0 & 5 & 2 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}.
Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying it by 3 and adding it to the first row, that is
:\begin{bmatrix} 1 & 0 & 16/5 \\ 0 & 1 & 2/5 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}.
Rearranging this equation gives
:\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} a_1\\ a_2 \end{bmatrix}=-a_3\begin{bmatrix} 16/5\\2/5\end{bmatrix},
which shows that non-zero scalars a_i exist such that \mathbf{v}_3 = (2, 4) can be written as a linear combination of \mathbf{v}_1 = (1, 1) and \mathbf{v}_2 = (-3, 2). Thus, the three vectors are linearly dependent.
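The relation found by the row reduction can be checked numerically. Choosing a_3 = 5 clears the denominators, giving the integer coefficients (-16, -2, 5):

```python
# Verify the dependence relation a_1 = -(16/5) a_3, a_2 = -(2/5) a_3
# with the choice a_3 = 5, which gives integer coefficients.
v1, v2, v3 = (1, 1), (-3, 2), (2, 4)
a1, a2, a3 = -16, -2, 5

combo = tuple(a1 * x + a2 * y + a3 * z for x, y, z in zip(v1, v2, v3))
assert combo == (0, 0)            # the combination sums to the zero vector
assert (a1, a2, a3) != (0, 0, 0)  # and is nontrivial: the vectors are dependent
```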
'''Two vectors:''' Now consider the linear dependence of the two vectors \mathbf{v}_1 = (1, 1) and \mathbf{v}_2 = (-3, 2), and check
:a_1 \begin{bmatrix} 1\\1\end{bmatrix} + a_2 \begin{bmatrix} -3\\2\end{bmatrix} =\begin{bmatrix} 0\\0\end{bmatrix},
or
:\begin{bmatrix} 1 & -3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}.
The same row reduction presented above yields
:\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}.
This shows that a_1 = a_2 = 0, which means that the vectors \mathbf{v}_1 = (1, 1) and \mathbf{v}_2 = (-3, 2) are linearly independent.
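The row-reduction steps for this two-vector system can be traced explicitly. A sketch using exact rational arithmetic (the row operations mirror those used in the three-vector example):

```python
# Row reduce the system a1*(1,1) + a2*(-3,2) = (0,0):
# the rows below are the two equations a1 - 3*a2 = 0 and a1 + 2*a2 = 0.
from fractions import Fraction

r1 = [Fraction(1), Fraction(-3)]
r2 = [Fraction(1), Fraction(2)]

r2 = [x - y for x, y in zip(r2, r1)]      # R2 <- R2 - R1    -> [0, 5]
pivot = r2[1]
r2 = [x / pivot for x in r2]              # R2 <- R2 / 5     -> [0, 1]
r1 = [x + 3 * y for x, y in zip(r1, r2)]  # R1 <- R1 + 3*R2  -> [1, 0]

# The identity matrix: the only solution is a1 = a2 = 0 (independent).
assert r1 == [1, 0] and r2 == [0, 1]
```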
== Vectors in R4 ==

To determine whether the three vectors in \mathbb{R}^4,
:\mathbf{v}_1= \begin{bmatrix}1\\4\\2\\-3\end{bmatrix}, \mathbf{v}_2=\begin{bmatrix}7\\10\\-4\\-1\end{bmatrix}, \mathbf{v}_3=\begin{bmatrix}-2\\1\\5\\-4\end{bmatrix},
are linearly dependent, form the matrix equation
:\begin{bmatrix}1&7&-2\\4& 10& 1\\2&-4&5\\-3&-1&-4\end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.
Row reduce this equation to obtain
:\begin{bmatrix} 1& 7 & -2 \\ 0& -18& 9\\ 0 & 0 & 0\\ 0& 0& 0\end{bmatrix} \begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.
Rearrange to solve for a_1 and a_2 in terms of a_3 and obtain
:\begin{bmatrix} 1& 7 \\ 0& -18 \end{bmatrix} \begin{bmatrix} a_1\\ a_2 \end{bmatrix} = -a_3\begin{bmatrix}-2\\9\end{bmatrix}.
This equation is easily solved to define non-zero a_i:
:a_1 = -3 a_3 /2, \quad a_2 = a_3/2,
where a_3 can be chosen arbitrarily (and non-zero). Thus, the vectors \mathbf{v}_1, \mathbf{v}_2, and \mathbf{v}_3 are linearly dependent.
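This solution can also be verified numerically. Choosing a_3 = 2 gives the integer coefficients (-3, 1, 2):

```python
# Verify the solution a_1 = -3*a_3/2, a_2 = a_3/2 with the choice a_3 = 2.
v1 = (1, 4, 2, -3)
v2 = (7, 10, -4, -1)
v3 = (-2, 1, 5, -4)
a1, a2, a3 = -3, 1, 2

combo = tuple(a1 * x + a2 * y + a3 * z for x, y, z in zip(v1, v2, v3))
assert combo == (0, 0, 0, 0)  # a nontrivial combination sums to zero
```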
== Alternative method using determinants ==

An alternative method relies on the fact that n vectors in \mathbb{R}^n are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In this case, the matrix formed by the vectors is
:A = \begin{bmatrix}1&-3\\1&2\end{bmatrix} .
We may write a linear combination of the columns as
:A \Lambda = \begin{bmatrix}1&-3\\1&2\end{bmatrix} \begin{bmatrix}\lambda_1 \\ \lambda_2 \end{bmatrix} .
We are interested in whether A \Lambda = \mathbf{0} for some nonzero vector Λ. This depends on the determinant of A, which is
:\det A = 1\cdot2 - 1\cdot(-3) = 5 \ne 0.
Since the determinant is non-zero, the vectors (1, 1) and (-3, 2) are linearly independent.

Otherwise, suppose we have m vectors of n coordinates, with m < n. Then A is an n×m matrix and Λ is a column vector with m entries, and we are again interested in AΛ = 0. As we saw previously, this is equivalent to a list of n equations. Consider the first m rows of A, the first m equations; any solution of the full list of equations must also be true of the reduced list. In fact, if \lang i_1,\dots,i_m \rang is any list of m rows, then the equation must be true for those rows:
:A_{\lang i_1,\dots,i_m \rang} \Lambda = \mathbf{0} .
Furthermore, the reverse is true. That is, we can test whether the m vectors are linearly dependent by testing whether
:\det A_{\lang i_1,\dots,i_m \rang} = 0
for all possible lists of m rows. (In case m = n, this requires only one determinant, as above. If m > n, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
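The submatrix-determinant test can be sketched directly. This is a brute-force illustration (the helper names `det` and `independent` are illustrative choices); as noted above, row reduction is far more efficient in practice:

```python
# Test m vectors in R^n (m <= n) for independence: they are independent
# exactly when some m x m submatrix built from m of the n rows of A
# (the matrix whose columns are the vectors) has non-zero determinant.
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def independent(vectors):
    """Test vectors given as tuples of n coordinates, taken as columns of A."""
    m, n = len(vectors), len(vectors[0])
    if m > n:
        return False  # more vectors than dimensions: always dependent
    rows = list(zip(*vectors))  # the n rows of the n-by-m matrix A
    return any(det([list(r) for r in sub]) != 0
               for sub in combinations(rows, m))

# m = n = 2: a single determinant (= 5), as in the example above.
assert independent([(1, 1), (-3, 2)])
# The three dependent vectors from the R^4 example: all four 3x3 minors vanish.
assert not independent([(1, 4, 2, -3), (7, 10, -4, -1), (-2, 1, 5, -4)])
```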
== More vectors than dimensions ==

If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in \R^2.

== Natural basis vectors ==