# Lesson: Bases

## Transcript

The next step after defining a spanning set and linear independence is to look at a basis, a set that is both at the same time. We did not pay much attention to bases in Linear Algebra I, but they will play a much greater role in this class, and the following theorem is the reason why.

Theorem 4.3.1, the Unique Representation Theorem: Let B = {v1 through vn} be a spanning set for a vector space V. Then every vector in V can be expressed in a unique way as a linear combination of the vectors of B if and only if the set B is linearly independent.

Here’s the proof. So we’ll let B = {v1 through vn} be a spanning set for a vector space V. Our if-and-only-if proof consists of two parts. Part 1 is to show that if every vector in V can be expressed as a unique linear combination of the vectors in B, then B is linearly independent. To see this, we note that if every vector in V can be expressed as a unique linear combination of the vectors in B, then we specifically know that the 0 vector can be expressed as a unique linear combination of the vectors in B. This means that we know there is only one collection of scalars t1 through tn such that the sum t1v1 + all the way through to tnvn = our 0 vector. And since we know that t1 through tn = 0 is such a collection, we have that this is the only collection, and so by definition, our set is linearly independent.

Part 2 of our proof is to show that if B is linearly independent, then every vector in V can be expressed as a unique linear combination of the vectors in B. Like all “unique” theorems, we will assume that we have two expressions for some x, and show that they are, in fact, the same. To that end, let a1 through an and b1 through bn be real numbers such that x = the sum of a1v1 through anvn and x = the sum of b1v1 through bnvn. From this, we’ll see that 0, which is equal to x – x, equals (the sum of a1v1 through anvn) minus (the sum of b1v1 through bnvn), which equals (a1 – b1)v1 + dot dot dot + (an – bn)vn. So now we’ve written 0 as a linear combination of our vectors v1 through vn. But since B is linearly independent, the only solution to our equation is the trivial solution ai – bi = 0 for all i from 1 to n. This means that ai = bi for all i from 1 to n, and so there is only one way to write x as a linear combination of the vectors in B.
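As a quick numerical sketch of what the theorem buys us (using a made-up linearly independent spanning set of R2, not one from the lecture): putting the vectors into the columns of a matrix and solving the resulting linear system recovers the one and only set of coefficients.

```python
import numpy as np

# Hypothetical spanning set for R^2 (an assumption for illustration,
# not a set from the lecture): v1 and v2 are linearly independent.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
B = np.column_stack([v1, v2])  # columns are the basis vectors

x = np.array([3.0, 5.0])

# Because B's columns are linearly independent, this system has exactly one
# solution: the unique coefficients a1, a2 with a1*v1 + a2*v2 = x.
a = np.linalg.solve(B, x)
print(a)  # [-2.  5.]
```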

And so now, we define a basis. A set B of vectors in a vector space V is a basis for V if it is a linearly independent spanning set for V. Now note that, using this definition, the vector space O consisting of just the 0 vector cannot have a basis, since the only nonempty set of vectors from O is {0} itself. But since the set containing only the 0 vector contains the 0 vector, it is not linearly independent. However, we would like every vector space to have a basis. Therefore, we define the basis of O to be the empty set.

Now, the Unique Representation Theorem will be a very powerful tool for us to use, but in order to use it, we will need to have a basis for our vector space. The first step in this process will be making sure that we can identify a basis.

For example, let’s show that this set A of 2-by-2 matrices is a basis for M(2, 2). To show that A is a basis for M(2, 2), we need to show that it is a spanning set for M(2, 2) and that it is linearly independent. To see that it is a spanning set for M(2, 2), we need to see that there is a solution to the equation t1 times our first matrix + t2 times our second matrix + t3 times our third matrix + t4 times our fourth matrix equal to the matrix [a, b; c, d], a generic matrix from M(2, 2). Performing our calculations on the left, we get the following matrix equality, and then we set the entries equal to each other to get the equivalent system of linear equations. Now, to see if this system has solutions, we need to row reduce its augmented matrix, as seen here. The last matrix is in row echelon form, and since it does not have any bad rows, we see that our system does have a solution, which means that A is a spanning set for M(2, 2).

Now we want to verify that A is linearly independent. To do this, we need to see how many solutions there are to the similar equation t1 times our first matrix + t2 times our second matrix + t3 times our third matrix + t4 times our fourth matrix, but this time, set equal to the 0 matrix. So this is just a special case of our earlier equation, now setting a, b, c, and d equal to 0, and so we can use the exact same system, simply setting the right side equal to 0. To find the solutions of this homogeneous system, we would row reduce the coefficient matrix. Again, we’ll use the exact same steps as before to see that the row echelon form of the coefficient matrix is the following. From this, we see that the rank of the coefficient matrix is 4, which is the same as the number of columns, so the system has a unique solution. And from this, we know that A is linearly independent. And so, having shown that A is a linearly independent spanning set for M(2, 2), we have shown that A is a basis for M(2, 2).
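The actual matrices in A appear on the slides, so here is a stand-in set of four 2-by-2 matrices (an assumption, not the lecture's A) checked the same way: flatten each matrix into a column of the coefficient matrix, then compare its rank to the number of rows and columns.

```python
import numpy as np

# Stand-in set of four 2x2 matrices (hypothetical, not the lecture's A).
A = [
    np.array([[1, 0], [0, 0]]),
    np.array([[1, 1], [0, 0]]),
    np.array([[1, 1], [1, 0]]),
    np.array([[1, 1], [1, 1]]),
]

# Flatten each matrix into a column; the result is the coefficient matrix
# of the system t1*A1 + t2*A2 + t3*A3 + t4*A4 = [a, b; c, d].
M = np.column_stack([m.flatten() for m in A])

rank = np.linalg.matrix_rank(M)
print(rank)  # 4: rank equals the number of columns, so the set is linearly
             # independent, and it also equals the number of rows, so the
             # system is consistent for every a, b, c, d and the set spans M(2,2)
```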

What would our row reduction look like if something was not a basis? Well, let’s show that the set B = {1, 1 + x, 1 + x-squared, x + x-squared} is not a basis for P2. Well, if it’s not a basis, then it’s either not a spanning set or it’s not linearly independent. Let’s look at the question of span first. To see if B is a spanning set for P2, we need to see if this equation has a solution for every possible p0, p1, and p2. If we go ahead and do the calculation on the left, we see that we are looking at setting this polynomial equal to p0 + p1x + p2(x-squared). So we set the coefficients equal to each other and see that this is equivalent to the following system of linear equations: that t1 + t2 + t3 = p0, that t2 + t4 = p1, and that t3 + t4 = p2. To solve this system, we look at its augmented matrix. Lucky for us, this matrix is already in row echelon form, so we see that there are no bad rows. This means that there is a solution, which means that B is a spanning set for P2.

Hmm, so it must be that B is linearly dependent. To show this, we need to show that there are non-trivial solutions to this equation. As before, this is simply a specific example of the equation we were looking at before, with p0 = p1 = p2 set equal to 0. As such, we know that the solution to this equation can be found by looking at the following matrix. Again, our matrix is already in row echelon form, and of interest to us now is that the rank of the coefficient matrix is 3. Since this is less than the number of columns in the coefficient matrix, there are parameters in the general solution, which means that the solution is not unique. So, our set is not linearly independent, and therefore is not a basis.
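Since B = {1, 1 + x, 1 + x², x + x²} is given explicitly, we can sketch the same rank check in code: each polynomial becomes a column of coefficients, mirroring the system t1 + t2 + t3 = p0, t2 + t4 = p1, t3 + t4 = p2 from the transcript.

```python
import numpy as np

# Each polynomial in B = {1, 1 + x, 1 + x^2, x + x^2} becomes a column of
# coefficients (constant, x, x^2).
M = np.array([
    [1, 1, 1, 0],   # constant terms
    [0, 1, 0, 1],   # x coefficients
    [0, 0, 1, 1],   # x^2 coefficients
])

rank = np.linalg.matrix_rank(M)
print(rank)                 # 3
print(rank == M.shape[0])   # True: rank equals the number of rows, so the
                            # system is consistent for every p0, p1, p2 and
                            # B spans P2
print(rank == M.shape[1])   # False: rank is less than the 4 columns, so the
                            # homogeneous system has non-trivial solutions
                            # and B is linearly dependent
```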

Now, one of the unusual properties of the polynomial spaces is that any given polynomial is a member of not just one space, but of every space large enough to contain it. So we cannot tell, from just looking at a polynomial, which space we are talking about. This never happens in Rn or M(m, n), though, where we are very specific about the number of entries in each member. But these polynomials do have different properties depending on the space they are in.

Consider the following example. Show that the set C = {1 + x + x-squared, 1 – x – 2(x-squared), 4x} is a basis for P2 but not a basis for P3. So first we want to show that the span of C equals P2 and that C is linearly independent. To show span, we need to see that the equation t1 times our first polynomial + t2 times our second polynomial + t3 times our third polynomial equals a generic polynomial p0 + p1x + p2(x-squared) from P2. Doing our calculation on the left, we see we are looking at the following, and by setting the coefficients equal to each other, we see that this is equivalent to the following system of linear equations. To solve this system, we will row reduce its augmented matrix. Our augmented matrix is now in row echelon form, and since there are no bad rows, we know that our system has a solution, and thus, that C is a spanning set for P2.

Now we need to show that C is linearly independent. That means we need to look for solutions to this equation. As before, this is just a special case of the equation we were looking at for span, so we can simply plug in that p0 = p1 = p2 = 0 into our previous work, and we end up looking at the following row reduced coefficient matrix. Now, since the rank of this matrix is 3, which equals the number of columns in our coefficient matrix, our system has a unique solution, and this means that our set is linearly independent. As such, we have seen that C is a basis for P2.

But what changes when we consider C as a subset of P3? Well, we need to add a “+ 0(x-cubed)” to every polynomial and a corresponding “+ p3(x-cubed)” to our general polynomial. So, when we consider whether or not C is a spanning set for P3, we are now looking for solutions to this equation, and this turns our system of equations into the following. Now normally, we wouldn’t bother adding all the 0 entries in the last row, but I’ve put them in for emphasis. Now, we don’t even need to put this into matrix form to see that the last row will give us problems. And so we see that C is not a basis for P3 because it is not a spanning set for P3.

But what about linear independence? We don’t need to check for this since we’ve already shown that C is not a basis for P3, but it is interesting to note that C is still linearly independent in P3. To see this, note that we are simply looking for solutions in the case when p0 = p1 = p2 = p3 = 0. So we are looking for solutions to this system. Now, by setting p3 = 0, we no longer have any problems with our last row, and to find solutions to this system, we simply row reduce the coefficient matrix. Using the same steps as we did in P2, we now end up with this matrix in row echelon form. The extra row of zeroes does not change the fact that the rank of this matrix is 3, which is still the same as the number of columns, and so there is still a unique solution to our equation, and this means that C is linearly independent in P3 as well.
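Because C = {1 + x + x², 1 − x − 2x², 4x} is given explicitly, the whole comparison can be sketched numerically: moving from P2 to P3 adds one row of zeros (the x³ coefficients) to the coefficient matrix, which leaves the rank, and hence independence, unchanged, but breaks the spanning property.

```python
import numpy as np

# Coefficient columns of C = {1 + x + x^2, 1 - x - 2x^2, 4x} in P2.
M2 = np.array([
    [1,  1, 0],   # constant terms
    [1, -1, 4],   # x coefficients
    [1, -2, 0],   # x^2 coefficients
])

# In P3, every polynomial gains a "+ 0x^3" term: a fourth row of zeros.
M3 = np.vstack([M2, np.zeros((1, 3), dtype=int)])

rank2 = np.linalg.matrix_rank(M2)
rank3 = np.linalg.matrix_rank(M3)

print(rank2)  # 3: equals both the rows and the columns, so C is a basis for P2
print(rank3)  # 3: still equals the 3 columns (C stays independent in P3),
              # but is less than the 4 rows, so C cannot span P3
```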

Before we finish this lecture, I want to point out that each of our common vector spaces has a well-known basis, which we will refer to as the standard basis. We’ve already talked about the standard basis for Rn, which is the set of vectors {e1 through en}, where ei is the vector with a 1 in the ith entry and a 0 in all other entries. We can let Ei,j be an m-by-n matrix with a 1 in the ijth entry and a 0 in all other entries, and then our set of matrices Ei,j, where i goes from 1 to m and j goes from 1 to n, is the standard basis for M(m, n). For example, this set would be the standard basis for M(2, 3). The standard basis for Pn is simply the set {1, x, x-squared, dot dot dot, x-to-the-n}.
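The M(2, 3) construction above can be sketched directly: build each Ei,j, and note that flattening the six matrices into columns produces the 6-by-6 identity matrix, which is visibly a linearly independent spanning set.

```python
import numpy as np

m, n = 2, 3

# E(i, j) is the m-by-n matrix with a 1 in the (i, j) entry and 0 elsewhere.
def E(i, j):
    mat = np.zeros((m, n), dtype=int)
    mat[i, j] = 1
    return mat

# The standard basis for M(2, 3): all six matrices E(i, j).
basis = [E(i, j) for i in range(m) for j in range(n)]

# Flattened into columns, the six matrices form the 6x6 identity matrix,
# which has rank 6: a linearly independent spanning set for M(2, 3).
M = np.column_stack([b.flatten() for b in basis])
print(np.array_equal(M, np.eye(6, dtype=int)))  # True
```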