Lesson: Solving Span and Linear Independence Problems


The fact that every \(n\)-dimensional vector space is isomorphic to \(\mathbb{R}^n\) is a very powerful tool, since it allows us to apply our results from \(\mathbb{R}^n\) to all finite-dimensional vector spaces. Let's take a look at how we can simplify our lives a bit. For starters, let's consider the following question:

Is \(p(x)=3+x+6x^2-4x^3\) in the span of \(A=\{1+2x+x^2-x^3,5x-5x^2+4x^3,2+4x+x^3\}\)? If it is, write \(p(x)\) as a linear combination of the polynomials in \(A\).

Using traditional methods, we start by noting that for \(p(x)\) to be in the span of \(A\), there must be scalars \(a\), \(b\), and \(c\) such that\[ a(1+2x+x^2-x^3)+b(5x-5x^2+4x^3)+c(2+4x+x^3)=3+x+6x^2-4x^3 \]which means we need\[ (a+2c)+(2a+5b+4c)x+(a-5b)x^2+(-a+4b+c)x^3=3+x+6x^2-4x^3 \]

Setting the coefficients equal to each other, we see that this is equivalent to the following system of linear equations:\[ \begin{array}{rrrl} a& &+2c&=3 \\ 2a&+5b&+4c& = 1 \\ a& -5b& &=6 \\ -a&+4b&+c&=-4 \end{array} \]

To solve this system, we need to row reduce its augmented matrix:\[\left[\begin{array}{rrr|r} 1&0&2&3 \\ 2&5&4&1 \\ 1&-5&0&6 \\ -1&4&1&-4 \end{array}\right] \begin{array}{l} \\ R_2-2R_1 \\ R_3-R_1 \\ R_4+R_1 \end{array} \sim \left[\begin{array}{rrr|r} 1&0&2&3 \\ 0&5&0&-5 \\ 0&-5&-2&3 \\ 0&4&3&-1 \end{array}\right] \begin{array}{l} \\ (1/5)R_2 \\ \\ \\ \end{array} \sim\]\[ \left[\begin{array}{rrr|r} 1&0&2&3 \\ 0&1&0&-1 \\ 0&-5&-2&3 \\ 0&4&3&-1 \end{array}\right] \begin{array}{l} \\ \\ R_3+5R_2 \\ R_4-4R_2 \end{array} \sim\left[\begin{array}{rrr|r} 1&0&2&3 \\ 0&1&0&-1 \\ 0&0&-2&-2 \\ 0&0&3&3 \end{array}\right] \begin{array}{l} \\ \\ (-1/2)R_3 \\ \\ \end{array} \sim\]\[\left[\begin{array}{rrr|r} 1&0&2&3 \\ 0&1&0&-1 \\ 0&0&1&1 \\ 0&0&3&3 \end{array}\right] \begin{array}{l} R_1-2R_3 \\ \\ \\ R_4-3R_3 \end{array} \sim \left[\begin{array}{rrr|r} 1&0&0&1 \\ 0&1&0&-1 \\ 0&0&1&1 \\ 0&0&0&0 \end{array}\right]\]

The last matrix is in reduced row echelon form. As such, we see that there are no bad rows, which means that our system does have a solution, and this means that \(p(x)\) is in the span of \(A\). Reading off the solution from the RREF matrix, we see that \(a=1\), \(b=-1\), and \(c=1\), so we have\[ 3+x+6x^2-4x^3 = (1+2x+x^2-x^3)-(5x-5x^2+4x^3)+(2+4x+x^3) \]
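Incidentally, if you ever want to sanity-check a combination like this, a computer algebra system makes it quick. Here is a minimal sketch using SymPy (my tool of choice here, not something the course requires) that expands the combination and subtracts \(p(x)\):

```python
from sympy import symbols, expand

x = symbols('x')
p1 = 1 + 2*x + x**2 - x**3
p2 = 5*x - 5*x**2 + 4*x**3
p3 = 2 + 4*x + x**3
p  = 3 + x + 6*x**2 - 4*x**3

# a = 1, b = -1, c = 1, read off from the RREF above
print(expand(1*p1 - 1*p2 + 1*p3 - p))  # prints 0, so the combination checks out
```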

So, what changes can we make to make this shorter? (Notice I didn't say “easier”, just shorter.) Well, the first thing I would do is turn the problem into a question in \(\mathbb{R}^4\), by noting that the question “is \(p(x)\) in the span of \(A\)?” is equivalent to asking whether the vector \(\left[\begin{array}{r}3\\1\\6\\-4 \end{array}\right]\) is in the span of \(B=\left\{\left[\begin{array}{r}1\\2\\1\\-1 \end{array}\right],\left[\begin{array}{r}0\\5\\-5\\4 \end{array}\right],\left[\begin{array}{r}2\\4\\0\\1 \end{array}\right]\right\}\). This means we are looking for scalars \(a\), \(b\), and \(c\) such that\[ a\left[\begin{array}{r}1\\2\\1\\-1 \end{array}\right]+b\left[\begin{array}{r}0\\5\\-5\\4 \end{array}\right]+c\left[\begin{array}{r}2\\4\\0\\1 \end{array}\right] = \left[\begin{array}{r}3\\1\\6\\-4 \end{array}\right] \]

Here's where my next shortcut comes in. Instead of actually computing the linear combination on the left, recall that we already know that in order to solve any vector equation of the form\[ a_1\vec{x}_1+\cdots+a_n\vec{x}_n =\vec{b} \]we row reduce the augmented matrix \(\left[\begin{array}{ccc|c} \vec{x}_1 & \cdots & \vec{x}_n & \vec{b} \end{array}\right]\). So, once we have our vector equation, we can jump straight to row reducing the augmented matrix. Note that we do, in fact, get the same augmented matrix as before and so, of course, the row reduction is exactly the same. After the matrix is row reduced, we'd proceed exactly as we'd done before.
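To see the shortcut end to end, here is a small SymPy sketch (again, one possible tool among many) that goes straight from the coordinate vectors to the RREF of the augmented matrix:

```python
from sympy import Matrix

# Columns: the coordinate vectors of the polynomials in A, then of p(x)
augmented = Matrix([[1, 0, 2, 3],
                    [2, 5, 4, 1],
                    [1, -5, 0, 6],
                    [-1, 4, 1, -4]])

R, pivots = augmented.rref()
print(R)       # last column reads a = 1, b = -1, c = 1
print(pivots)  # (0, 1, 2): no pivot in the last column, so the system is consistent
```

The pivot tuple is a convenient consistency check: a pivot appearing in the last column would have signaled a bad row, that is, an inconsistent system.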

Let's take a look at how we can solve a few more classic problems.

Example:

Is the set\[ A=\left\{\left[\begin{array}{rr} 1&-2 \\ -1&-3 \end{array}\right],\left[\begin{array}{rr} 3&-5\\0&-4 \end{array}\right],\left[\begin{array}{rr} 2&-4\\-2&-6 \end{array}\right],\left[\begin{array}{rr} -1&4 \\ 2&3 \end{array}\right],\left[\begin{array}{rr} -6&-1 \\ -8&3 \end{array}\right]\right\} \]linearly independent?

No, because the vectors are contained in \(M(2,2)\), which is a 4-dimensional space, but the set \(A\) contains more than 4 vectors, and any set of more than \(n\) vectors in an \(n\)-dimensional space must be linearly dependent.
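If you'd like to see that argument computationally, here is a short SymPy sketch. It jumps ahead to the coordinate vectors that the next example works out in detail: a \(4\times 5\) matrix has rank at most 4, so its five columns can never be linearly independent.

```python
from sympy import Matrix

# Columns: the coordinate vectors of the five matrices in A
# (worked out explicitly in the next example)
M = Matrix([[1, 3, 2, -1, -6],
            [-2, -5, -4, 4, -1],
            [-1, 0, -2, 2, -8],
            [-3, -4, -6, 3, 3]])

print(M.rank())           # 3, but independence would require rank 5
print(M.rank() < M.cols)  # True: the columns are linearly dependent
```

Okay, let's try something along the same lines, but one that actually requires some work!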

Example:

Let \(A\) be as in the previous example. Find a basis for \(\mbox{Span }A\).

With this question, the fact that we already know that \(A\) is linearly dependent doesn't help us, because we also need to determine which members are dependent so that we can remove them. As such, we solve this question as follows.

This is the same as asking us to find a basis for the span of \(B\), where \(B=\left\{\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right],\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right],\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right],\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right],\left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right]\right\}\). We already know that \(B\) is a spanning set for \(\mbox{Span }B\). We also already know that \(B\) is linearly dependent, but to find our dependent members, we will look for solutions to the equation\[ a\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]+b\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]+c\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]+d\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]+e\left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right] = \left[\begin{array}{r}0\\0\\0\\0 \end{array}\right] \]To do this, we row reduce the coefficient matrix:\[\left[\begin{array}{rrrrr} 1&3&2&-1&-6 \\ -2&-5&-4&4&-1 \\ -1&0&-2&2&-8 \\ -3&-4&-6&3&3 \end{array}\right] \sim \left[\begin{array}{rrrrr} 1&3&2&-1&-6 \\ 0&1&0&2&-13 \\ 0&3&0&1&-14 \\ 0&5&0&0&-15 \end{array}\right] \sim\]\[ \left[\begin{array}{rrrrr} 1&3&2&-1&-6 \\ 0&1&0&2&-13 \\ 0&0&0&-5&25 \\ 0&0&0&-10&50 \end{array}\right] \sim \left[\begin{array}{rrrrr} 1&3&2&-1&-6 \\ 0&1&0&2&-13 \\ 0&0&0&1&-5 \\ 0&0&0&0&0 \end{array}\right] \sim\]\[\left[\begin{array}{rrrrr} 1&3&2&0&-11 \\ 0&1&0&0&-3 \\ 0&0&0&1&-5 \\ 0&0&0&0&0 \end{array}\right] \sim \left[\begin{array}{rrrrr} 1&0&2&0&-2 \\ 0&1&0&0&-3 \\ 0&0&0&1&-5 \\ 0&0&0&0&0 \end{array}\right]\]
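As a quick check of this reduction, SymPy's rref() will both confirm the RREF and report the pivot columns, which (as we are about to see) are exactly the columns that survive into a basis. A sketch, using the same coefficient matrix:

```python
from sympy import Matrix

M = Matrix([[1, 3, 2, -1, -6],
            [-2, -5, -4, 4, -1],
            [-1, 0, -2, 2, -8],
            [-3, -4, -6, 3, 3]])

R, pivots = M.rref()
print(R)       # matches the RREF computed above
print(pivots)  # (0, 1, 3): columns 3 and 5 (1-indexed) have no pivot
```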

Note: Here's where I want to start changing things up. First of all, it is actually easy to read off the dependent members from a reduced row echelon form matrix. To see this, notice that our RREF matrix is equivalent to the system\[ \begin{array}{rrrrrl} a & &+2c & & -2e & =0 \\ & b & & &-3e& =0 \\ & & & d&-5e & =0 \end{array} \]which we can rewrite as\[ \begin{array}{rl} a&=-2c+2e \\ b&=3e \\ d&=5e \end{array} \]
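This general solution is precisely what a null space computation packages up: each basis vector of the null space of the coefficient matrix encodes one dependency among the columns. A sketch in SymPy, where the two vectors printed correspond to the choices \((c,e)=(1,0)\) and \((c,e)=(0,1)\) that we make next:

```python
from sympy import Matrix

M = Matrix([[1, 3, 2, -1, -6],
            [-2, -5, -4, 4, -1],
            [-1, 0, -2, 2, -8],
            [-3, -4, -6, 3, 3]])

for v in M.nullspace():
    print(v.T)  # [-2, 0, 1, 0, 0] and [2, 3, 0, 5, 1]
```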

If we set \(c=1\) and \(e=0\), we get the solution \(a=-2\), \(b=0\), \(c=1\), \(d=0\), and \(e=0\), which means\[ -2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]+0\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]+1\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]+0\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]+0\left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right] = \left[\begin{array}{r}0\\0\\0\\0 \end{array}\right] \]or\[ \left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]=2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right] \]and so we see that we can write the vector \(\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]\) as a linear combination of the other vectors in \(B\).

Of more interest to me, though, is the fact that we can pretty much jump straight to this fact from the RREF matrix. If we look at the third column (which is the one for \(\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]\)), we see that it has a \(2\) in the first row — the row that also has a leading entry in the first column (which is the column for \(\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]\)). And if you go through the calculations we did, you can see that this is not a coincidence: we multiplied all the other vectors by zero, essentially saying “ignore those other columns,” and were left with the equation \(a=-2c\), which became \(-2c\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]+c\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]=\left[\begin{array}{r}0\\0\\0\\0 \end{array}\right]\), which becomes \(2c\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]=c\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]\) (where setting \(c=1\) gives us \(2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]=\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]\)). So the \(2\) in the third column really is the same \(2\) as in our dependency equation.

This becomes easier to see when we look at the dependency equation for our fifth vector. Going back to our system of equations, we now set \(c=0\) and \(e=1\). Then we have\[ a=2 \quad b=3 \quad c=0 \quad d=5 \quad e=1 \]which means that\[ 2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]+3\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]+0\left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]+5\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]+1\left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right] = \left[\begin{array}{r}0\\0\\0\\0 \end{array}\right] \]which gets us\[ \left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right] = -2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]-3\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]-5\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right] \]

Again, if you track our calculations, the \(-2\) we put with \(\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]\) is the same as the \(-2\) in the first row of the fifth column (it switches to a “\(2\)” and then back to a “\(-2\)”), and the \(-3\) we put with \(\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]\) is the same as the \(-3\) in the second row of the fifth column (noting that the second row has its leading one in the second column), and the \(-5\) we put with \(\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]\) is the same as the \(-5\) in the third row of the fifth column (noting that the third row has its leading one in the fourth column, so we put it with the fourth vector).

Reading the dependency equations off of the RREF may take a bit of practice, so you may prefer to go ahead and find the general solution and work from there — it doesn't take that much more time. The real time saver is the fact that we can be done as soon as we remove these dependent vectors. First of all, I showed at the time that we can go ahead and remove both of these vectors at once, as they are both linear combinations of the vectors in \(B_1=\left\{\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right],\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right],\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]\right\}\), and so we have that \(\mbox{Span }B=\mbox{Span }B_1\). All that remains is to see that \(B_1\) is linearly independent. To do that, we would look for solutions to the equation\[ a\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]+b\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]+c\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right] = \left[\begin{array}{r}0\\0\\0\\0 \end{array}\right] \]and to look for solutions to this, we would row reduce the matrix \(\left[\begin{array}{rrr} 1&3&-1 \\ -2&-5&4 \\ -1&0&2 \\ -3&-4&3 \end{array}\right]\).

This is the same as the matrix we have already row reduced, except that the third and fifth columns have been removed. But, because row operations act within columns (that is, the value in column \(i\) will not affect the value in column \(j\) if \(i\neq j\)), we would end up doing the exact same row reduction steps, and in fact we can simply copy our previous row reduction, slashing through the third and fifth columns all the way. In this case, we will end up with \(\left[\begin{array}{rrr} 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{array}\right]\).

But, in the general case, we know that we will always end up with a matrix of this shape. Because we removed the columns that do not have a leading \(1\), we are now only left with such columns. And the rules for being in reduced row echelon form mean that these columns will have all zeros both above and below the leading \(1\). The format also requires that the positions of the leading \(1\)s move from left to right, and that any rows of all zeros are at the bottom. We can't even have a column of all zeros somewhere in the middle (such a column could only have come from the zero vector, which is clearly a dependent vector as well, so we would already have removed it). The only possible result is something that looks like the identity matrix, maybe with some rows of zeros tacked on to the bottom. But it will definitely be a matrix whose rank is the same as its number of columns, and as such, we know that the starting vectors are linearly independent. And since we know this will ALWAYS happen, there is no need to verify it every time.
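For this particular example, here is a quick SymPy check of that claim (a side sketch, not part of the written solution):

```python
from sympy import Matrix

# Columns: the three vectors of B1
B1 = Matrix([[1, 3, -1],
             [-2, -5, 4],
             [-1, 0, 2],
             [-3, -4, 3]])

R, pivots = B1.rref()
print(R)                       # the 3x3 identity with a row of zeros below it
print(len(pivots) == B1.cols)  # True: rank equals column count, so B1 is independent
```

So, I've talked a bit here, but now I'll show how I would proceed from the point where we row reduced the matrix: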

Our final matrix is in reduced row echelon form, and from this form we see that\[ \left[\begin{array}{r}2\\-4\\-2\\-6 \end{array}\right]=2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right] \quad \mbox{and} \quad \left[\begin{array}{r}-6\\-1 \\ -8\\3\end{array}\right] = -2\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right]-3\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right]-5\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right] \]and thus the set
\(B_1=\left\{\left[\begin{array}{r}1\\-2 \\ -1\\-3 \end{array}\right],\left[\begin{array}{r}3\\-5\\0\\-4 \end{array}\right],\left[\begin{array}{r}-1\\4 \\ 2\\3 \end{array}\right]\right\}\) that we get from \(B\) by removing its dependent members is a basis for \(\mbox{Span }B\).
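In fact, everything in this example condenses into a few lines of code. Here is a small SymPy helper (the name basis_for_span is mine, purely for illustration) that returns the pivot columns of a matrix, which form a basis for its column space:

```python
from sympy import Matrix

def basis_for_span(M):
    """Return the pivot columns of M, a basis for its column space."""
    _, pivots = M.rref()
    return [M.col(j) for j in pivots]

M = Matrix([[1, 3, 2, -1, -6],
            [-2, -5, -4, 4, -1],
            [-1, 0, -2, 2, -8],
            [-3, -4, -6, 3, 3]])

for v in basis_for_span(M):
    print(v.T)  # the three coordinate vectors of B1, in order
```

Translating those coordinate vectors back into \(2\times 2\) matrices then gives a basis for \(\mbox{Span }A\) in the original space \(M(2,2)\).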

Before we move on to the next module, I want to point out that, while I have endorsed some shortcut moves (e.g., going straight from the vector equation to the matrix, going straight from removing dependent members to stating that the resulting set is a basis), I have not removed all the words from our solutions. I prefer to think of it as providing you with a couple more theorems that we can use. But you still need to write down some of the thought process that goes into your solution. Write out the definition of span or linear independence as it relates to the question, and point out when it is satisfied or why it isn't at the end. Write out the dependency relation that lets us know we can remove a vector. There are still a few steps that we don't want to cut out of our proofs!
