# Lesson: Obtaining a Basis from an Arbitrary Finite Spanning Set


## Transcript

Now that we know how to check whether a set is a basis, we are left with the problem of actually finding a set that might be a basis. In general, there are two ways to go about this. Either we start with a linearly independent set and add vectors, preserving independence, until the set spans the vector space, or we start with a spanning set and remove vectors until we end up with a linearly independent set.

We will focus our attention first on this latter method since we frequently define a vector space as the span of a set of vectors, which means that the very definition of our vector space provides us with a spanning set. And if that set is already linearly independent, then we already have a basis. But if not, then one of the vectors in our spanning set can be written as a linear combination of the other members.

To see this, suppose that the set $\{v_1, \ldots, v_k\}$ is linearly dependent. Then there is a non-trivial solution to the equation $t_1v_1 + \cdots + t_kv_k = 0$. This means that there is a solution with at least one $t_i \neq 0$, and that means that $v_i$ can be written as a linear combination of the other vectors, as follows. We have
$$t_1v_1 + \cdots + t_{i-1}v_{i-1} + t_{i+1}v_{i+1} + \cdots + t_kv_k = -t_iv_i,$$
and since $t_i \neq 0$, we can divide by it to get
$$v_i = -\frac{t_1}{t_i}v_1 - \cdots - \frac{t_{i-1}}{t_i}v_{i-1} - \frac{t_{i+1}}{t_i}v_{i+1} - \cdots - \frac{t_k}{t_i}v_k.$$
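As a quick numerical illustration of this rearrangement (using made-up vectors, not ones from the lesson), we can check that a non-trivial solution with $t_3 \neq 0$ really does let us solve for $v_3$:

```python
import numpy as np

# Hypothetical example: v3 is deliberately built as 2*v1 - v2,
# so the set {v1, v2, v3} is linearly dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 2 * v1 - v2

# One non-trivial solution of t1*v1 + t2*v2 + t3*v3 = 0 is (2, -1, -1).
t = np.array([2.0, -1.0, -1.0])
assert np.allclose(t[0] * v1 + t[1] * v2 + t[2] * v3, np.zeros(3))

# Since t3 = -1 is non-zero, divide through by it to solve for v3:
# v3 = -(t1/t3)*v1 - (t2/t3)*v2
v3_rebuilt = -(t[0] / t[2]) * v1 - (t[1] / t[2]) * v2
assert np.allclose(v3_rebuilt, v3)
```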

As a bit of notation, when one element of a set can be written as a linear combination of the other members of the set, we say that this element is a dependent member of the set.

Okay, so now we see that if our spanning set is linearly dependent, then it has a dependent member. To get our basis, we will remove the dependent members from our spanning set. But we need to verify that we still have a spanning set for our vector space afterwards, and that’s what the following theorem will show.

Theorem a: If the set of vectors $T = \{v_1, \ldots, v_k\}$ is a spanning set for a non-trivial vector space $V$, and if there is some $v_i$ in $T$ that can be written as a linear combination of the other $v_j$ (that is, if there are scalars $t_1, \ldots, t_{i-1}, t_{i+1}, \ldots, t_k$ such that $v_i = t_1v_1 + \cdots + t_{i-1}v_{i-1} + t_{i+1}v_{i+1} + \cdots + t_kv_k$), then the set $T \setminus \{v_i\}$ is also a spanning set for $V$.

Here’s the proof of our theorem. Again, let $T$ be a spanning set for a non-trivial vector space $V$, and let our scalars be such that $v_i$ can be written as a linear combination of the other $v$’s. Now let $x$ be an element of $V$. Since $T$ is a spanning set for $V$, we know that there must be scalars $a_1, \ldots, a_k$ such that
$$x = a_1v_1 + \cdots + a_kv_k.$$
Now we simply plug in our known value for $v_i$, so we see that
$$x = a_1v_1 + \cdots + a_{i-1}v_{i-1} + a_i\left(t_1v_1 + \cdots + t_{i-1}v_{i-1} + t_{i+1}v_{i+1} + \cdots + t_kv_k\right) + a_{i+1}v_{i+1} + \cdots + a_kv_k.$$
But by simply grouping things properly, we can now write
$$x = (a_1 + a_it_1)v_1 + \cdots + (a_{i-1} + a_it_{i-1})v_{i-1} + (a_{i+1} + a_it_{i+1})v_{i+1} + \cdots + (a_k + a_it_k)v_k.$$
And so we’ve written $x$ as a linear combination of the elements of $T \setminus \{v_i\}$. Since we can do this for every $x$ in $V$, we see that $T \setminus \{v_i\}$ is still a spanning set for $V$.
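The theorem can also be sanity-checked numerically: if one vector in a set is a combination of the others, deleting it leaves the span (and hence the rank of the matrix whose columns are the vectors) unchanged. A small sketch with hypothetical vectors:

```python
import numpy as np

# Columns of A form a hypothetical spanning set; the last column
# is a linear combination of the first two, so it is a dependent member.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 3 * v1 + 2 * v2

A = np.column_stack([v1, v2, v3])
A_without_v3 = np.column_stack([v1, v2])

# Removing the dependent member leaves the column space unchanged,
# so the ranks agree.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_without_v3) == 2
```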

Okay, so how do we use this fact to find a basis? Well, we start with our spanning set $T$, and if it is linearly independent, then we already have a basis, so we are done. If not, we find a dependent member $v_i$ of $T$, and then we check whether $T \setminus \{v_i\}$ is linearly independent. If it is, then by Theorem a, we know that it is also a spanning set, so it is the basis we are looking for. If it is still linearly dependent, then we again find a dependent member $v_j$, and we check whether our new set $T \setminus \{v_i, v_j\}$ is linearly independent. If it is, then by a second use of Theorem a, we know it is still a spanning set, and so it is the basis we are looking for. If it isn’t, then we continue the process, and so we eventually end up with a linearly independent set. $T$ only started with $k$ elements, so at the extreme, we would eventually end up with a single-element set containing a non-zero vector (since it still spans the non-trivial space $V$), which is, of course, linearly independent. As such, we know that our process will definitely produce a linearly independent set eventually.
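The whole casting-out process described above can be sketched as a short loop. This is a hypothetical helper, not code from the lesson; it detects dependent members by a rank test (removing a dependent member must leave the rank unchanged, per Theorem a):

```python
import numpy as np

def basis_from_spanning_set(vectors):
    """Remove dependent members from a list of 1-D arrays until the
    remaining vectors are linearly independent (a basis of the span)."""
    remaining = list(vectors)
    while len(remaining) > 1:
        A = np.column_stack(remaining)
        rank = np.linalg.matrix_rank(A)
        if rank == len(remaining):        # independent: we have a basis
            break
        # Find a dependent member: removing it must not drop the rank.
        for i in range(len(remaining)):
            rest = remaining[:i] + remaining[i + 1:]
            if np.linalg.matrix_rank(np.column_stack(rest)) == rank:
                del remaining[i]          # by Theorem a, still a spanning set
                break
    return remaining

# Hypothetical usage: five vectors in R^3 whose span is all of R^3
# reduce to a three-element basis.
T = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]),
     np.array([1.0, 1.0, 0]), np.array([0, 0, 1.0]),
     np.array([1.0, 1.0, 1.0])]
B = basis_from_spanning_set(T)
assert len(B) == 3
```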

Let’s see the process in action. Let $T$ be this set of 2-by-2 matrices, and let’s determine a subset of $T$ that is a basis for the span of $T$. We already know that $T$ is a spanning set for the span of $T$, so we want to check whether $T$ is linearly independent. To that end, we’re looking for solutions to this equation. If I do the calculations on the right, then I get this matrix equality. So setting our entries equal, we see that this is equivalent to looking for solutions to this system. Of course, to find the solutions to this homogeneous system, we row reduce the coefficient matrix, as seen here.

All right, here’s where the fun starts. Our last matrix is in reduced row echelon form, and from it, we see that the rank of the coefficient matrix is 3. Since this is less than the number of variables, we know that our equation has an infinite number of solutions, which means that T is linearly dependent. That is to say, T is not a basis. This means that we need to remove at least one dependent member of T. To find such a dependent member, we will find the general solution to our equation.
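The actual matrices in $T$ appear on the slides rather than in the transcript, but the rank test itself is easy to reproduce: flatten each 2-by-2 matrix into a column of a 4-by-5 coefficient matrix and compare the rank with the number of unknowns. A sketch with hypothetical stand-in matrices:

```python
import numpy as np

# Hypothetical 2-by-2 matrices standing in for the slide's set T.
T = [np.array([[1.0, 0], [0, 1]]),
     np.array([[0, 1.0], [1, 0]]),
     np.array([[1.0, 1], [1, 1]]),   # = T[0] + T[1], a dependent member
     np.array([[1.0, 0], [0, 0]]),
     np.array([[2.0, 1], [1, 2]])]   # = 2*T[0] + T[1], also dependent

# Each matrix contributes one column: t1*M1 + ... + t5*M5 = 0 becomes A @ t = 0.
A = np.column_stack([M.flatten() for M in T])
rank = np.linalg.matrix_rank(A)

# rank < number of unknowns means non-trivial solutions exist,
# so this T is linearly dependent and is not a basis for its span.
assert rank == 3 and rank < len(T)
```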

From our reduced row echelon form matrix, we see that our system is equivalent to the following. Now if we replace the free variable $t_3$ with the parameter $s$ and the free variable $t_5$ with the parameter $t$, we get that the general solution to our equation is
$$(t_1, t_2, t_3, t_4, t_5) = (-5s + t,\ -2s - 3t,\ s,\ -3t,\ t).$$
If I set $s = -1$ and $t = 0$, then I’ve learned that the zero matrix equals 5 times my first matrix plus 2 times my second matrix minus the third matrix, plus 0 times the fourth and fifth matrices. Well, this means I can write my third matrix as a combination of our first matrix and our second matrix, so our third matrix is a dependent member of $T$. But notice also that if I set $t = -1$ and $s = 0$, then I find that the zero matrix equals $-1$ times the first matrix plus 3 times the second matrix plus 0 times the third matrix plus 3 times the fourth matrix minus the fifth matrix, and this means that I can write our fifth matrix as a linear combination of our first, second, and fourth matrices. So our fifth matrix is also a dependent member of $T$.
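Both dependence relations come straight from substituting parameter values into the general solution. Using the solution vector from the lesson, the two substitutions can be checked directly:

```python
import numpy as np

def general_solution(s, t):
    # General solution (t1, ..., t5) from the lesson's reduced system.
    return np.array([-5 * s + t, -2 * s - 3 * t, s, -3 * t, t])

# s = -1, t = 0 exposes the third matrix as a dependent member:
# 0 = 5*M1 + 2*M2 - M3, i.e. M3 = 5*M1 + 2*M2.
assert np.array_equal(general_solution(-1, 0), np.array([5, 2, -1, 0, 0]))

# s = 0, t = -1 exposes the fifth matrix as a dependent member:
# 0 = -M1 + 3*M2 + 3*M4 - M5, i.e. M5 = -M1 + 3*M2 + 3*M4.
assert np.array_equal(general_solution(0, -1), np.array([-1, 3, 0, 3, -1]))
```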

So which one should we remove? Actually, both. The easy way to see this is to realize that we can remove either one first, so let’s go ahead and remove our third matrix, leaving us with a new set $T_1$ consisting of our first, second, fourth, and fifth matrices. Now by Theorem a, we know that $T_1$ is a spanning set for the span of $T$. But we already know that the fifth original matrix, now the fourth element of $T_1$, can be written as a linear combination of the first three matrices in $T_1$, so $T_1$ is not linearly independent, and we can remove this dependent matrix from $T_1$ to get our new set $T_2$, which consists of the first, second, and fourth matrices from our original set $T$.

Again by Theorem a, we know that $T_2$ is still a spanning set for the span of $T$. Now that we’ve removed both the third and fifth matrices from our original set $T$, let’s check whether $T_2$ is linearly independent. To that end, we will look for solutions to this equation. Performing the calculations on the right, we see that we are looking at this matrix equality. Setting the entries equal, we see that this is equivalent to looking for solutions to this system. To find the solutions to the system, we row reduce the coefficient matrix, as seen here. This last matrix is in row echelon form, and so we see that the rank of the coefficient matrix is 3. Since this is the same as the number of variables, we know that our system has only the trivial solution, and so we’ve shown that $T_2$ is linearly independent. As such, $T_2$ is a spanning set for the span of $T$ that is linearly independent, so $T_2$ is a basis for the span of $T$.