## Transcript — Introduction

In the last lecture, we saw that the columns of a matrix A corresponding to the columns of its reduced row echelon form that contain leading ones form a basis for the columnspace of A. In this lecture, we continue to look at finding bases for the fundamental subspaces.

Theorem 7.1.3: Let A be an m-by-n matrix. The set of all non-zero rows in the reduced row echelon form of A forms a basis for the rowspace of A. Hence, the dimension of the rowspace of A equals the rank of A.

Observe that, unlike our method for finding a basis for the columnspace of A, the basis vectors for the rowspace are taken directly from the reduced row echelon form. This makes the proof quite a bit easier.

Proof: Let R be the reduced row echelon form of A. By definition of the reduced row echelon form, the set of all non-zero rows of R forms a basis for the rowspace of R. Moreover, observe that if E is the product of elementary matrices that row reduces A to R, then since E is invertible, we get that A = (E-inverse)R. Then the rowspace of A, by definition, equals the set of all vectors (A-transpose)x such that x is in Rm. This is equal to the set of all (((E-inverse)R)-transpose)x such that x is in Rm. By properties of transposes, this equals the set of all (R-transpose)((E-inverse)-transpose)x such that x is in Rm. And this can be shown to be equal to the set of all (R-transpose)y such that y is in Rm. But this is just the definition of the rowspace of R. And so, the rowspace of R equals the rowspace of A, and hence, the set of all non-zero rows of R also forms a basis for the rowspace of A, as required. This completes the proof.
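For reference, the chain of set equalities in the proof can be summarized in symbols (here we write Row(A) for the rowspace of A; this shorthand is ours, not from the lecture):

```latex
\begin{align*}
\operatorname{Row}(A) &= \{\, A^T x : x \in \mathbb{R}^m \,\} && \text{(definition of rowspace)} \\
&= \{\, (E^{-1}R)^T x : x \in \mathbb{R}^m \,\} && \text{(since } A = E^{-1}R\text{)} \\
&= \{\, R^T (E^{-1})^T x : x \in \mathbb{R}^m \,\} && \text{(property of transposes)} \\
&= \{\, R^T y : y \in \mathbb{R}^m \,\} && \text{(since } (E^{-1})^T \text{ is invertible)} \\
&= \operatorname{Row}(R). && \text{(definition of rowspace)}
\end{align*}
```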

Take a minute to read over the proof and make sure that you can justify each step. All but one of the justifications are by definition. However, the justification for moving from the third line to the fourth line definitely requires proof. Make sure that you can give a proof that these two lines are equivalent. Note that on an assignment or test, we would require you to fill in the justification for a step like this.

Theorem 7.1.2 and Theorem 7.1.3 give us a useful corollary. Corollary 7.1.4: For any m-by-n matrix A, we have the rank of A equals the rank of A-transpose.

The two previous theorems make it very easy to find a basis for the rowspace and columnspace of a matrix. We demonstrate this with an example.

Example: Find a basis for the rowspace and columnspace of A = [2, -1, -3, 0; -2, 1, 1, -4; 4, -2, -4, 4]. By the theorems, to find these bases, we first row reduce A to its reduced row echelon form. We get R = [1, -1/2, 0, 3; 0, 0, 1, 2; 0, 0, 0, 0]. By Theorem 7.1.2, a basis for the columnspace of A consists of the columns from the original matrix A which correspond to the columns in the reduced row echelon form of A with leading ones. That is, since the first and third columns of R have leading ones, the first and third columns of the original matrix A form a basis for the columnspace of A. Thus, a basis for the columnspace of A is {[2; -2; 4], [-3; 1; -4]}. By Theorem 7.1.3, a basis for the rowspace of A is the set of non-zero rows from the reduced row echelon form. Hence, a basis for the rowspace of A is {[1; -1/2; 0; 3], [0; 0; 1; 2]}.
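As a sanity check, this row reduction can be done by machine. Here is a minimal sketch in pure Python (the `rref` helper is our own, not from any library) applied to the matrix A from the example:

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix and its pivot columns."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), (len(M[0]) if M else 0)
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue                          # no leading one in this column
        M[r], M[piv] = M[piv], M[r]           # swap the pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]    # scale so the pivot entry is 1
        for i in range(m):
            if i != r and M[i][c] != 0:       # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[2, -1, -3, 0], [-2, 1, 1, -4], [4, -2, -4, 4]]
R, pivots = rref(A)
assert pivots == [0, 2]  # leading ones in columns 1 and 3 (0-indexed here)
assert R[0] == [1, Fraction(-1, 2), 0, 3] and R[1] == [0, 0, 1, 2]
```

Using exact `Fraction` arithmetic avoids the floating-point roundoff that would otherwise make it unreliable to test entries against zero.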

We know how to find a basis for the nullspace of a matrix from our work on finding bases of eigenspaces in Linear Algebra I. You may recall that Theorem 2.2.5 from Linear Algebra I guaranteed that the standard procedure for finding a spanning set of a nullspace always led to a linearly independent set. To refresh your memory, we will do one example completely.

Example: Find a basis for the nullspace of A = [2, -1, -3, 0; -2, 1, 1, -4; 4, -2, -4, 4]. To solve the homogeneous system Ax = 0, we row reduce the coefficient matrix A. We get reduced row echelon form [1, -1/2, 0, 3; 0, 0, 1, 2; 0, 0, 0, 0]. Rewriting the reduced row echelon form back as a homogeneous system gives x1 – (1/2)x2 + 3x4 = 0, x3 + 2x4 = 0. Hence, moving the free variables to the other side, we get x1 = (1/2)x2 – 3x4, and x3 = -2x4. Thus, every vector x in the nullspace of A satisfies x = [x1; x2; x3; x4], which equals [(1/2)x2 – 3x4; x2; -2x4; x4]. Using operations on vectors, we can rewrite this as (x2)[1/2; 1; 0; 0] + (x4)[-3; 0; -2; 1]. Thus, by Theorem 2.2.5, the set B = {[1/2; 1; 0; 0], [-3; 0; -2; 1]} is a basis for the nullspace of A.
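As a quick check of this answer (a sketch in pure Python, no libraries), we can multiply A by each proposed basis vector and confirm that the result is the zero vector:

```python
from fractions import Fraction

def matvec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, -1, -3, 0], [-2, 1, 1, -4], [4, -2, -4, 4]]
v1 = [Fraction(1, 2), 1, 0, 0]
v2 = [-3, 0, -2, 1]

assert matvec(A, v1) == [0, 0, 0]  # v1 is in the nullspace of A
assert matvec(A, v2) == [0, 0, 0]  # v2 is in the nullspace of A
```

Linear independence of {v1, v2} is visible directly from the entries in the free-variable positions: v1 has a 1 where v2 has a 0, and vice versa.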

Notice, we could determine a basis for the rowspace and columnspace of A by just looking at the reduced row echelon form of A. We can actually do the same thing for the nullspace. If you have not already noticed how to do this, you are greatly encouraged to figure out how to do this. On tests, we will expect that you can instantaneously go from the reduced row echelon form of the matrix to the basis for the nullspace of the matrix.

Let’s now do another example in which we find a basis for all four fundamental subspaces of the matrix. Example: Let A = [1, 1, 3; 2, 1, 5; 1, 3, 5; 1, 1, 3]. Find a basis for the four fundamental subspaces. Solution: Row reducing A, we get reduced row echelon form R = [1, 0, 2; 0, 1, 1; 0, 0, 0; 0, 0, 0]. Thus, a basis for the rowspace of A is the set of non-zero rows of R—that is, {[1; 0; 2], [0; 1; 1]}. A basis for the columnspace of A is the columns of A which correspond to the columns of R which contain leading ones. Hence, a basis for the columnspace of A is {[1; 2; 1; 1], [1; 1; 3; 1]}. From the reduced row echelon form, we can see that a basis for the nullspace of A is {[-2; -1; 1]}. To find a basis for the left nullspace of A, we can just use our method for finding the nullspace of a matrix on A-transpose. That is, A-transpose = [1, 2, 1, 1; 1, 1, 3, 1; 3, 5, 5, 3] row reduces to the reduced row echelon form [1, 0, 5, 1; 0, 1, -2, 0; 0, 0, 0, 0]. Thus, from the reduced row echelon form, we can see that a basis for the left nullspace of A is {[-5; 2; 1; 0], [-1; 0; 0; 1]}.
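The left nullspace answer can be checked directly: a vector w is in the left nullspace of A exactly when w-transpose times A is the zero row. A minimal sketch in pure Python:

```python
def vecmat(w, A):
    """Compute w^T A for a coefficient vector w and a matrix A (list of rows)."""
    return [sum(w[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]

A = [[1, 1, 3], [2, 1, 5], [1, 3, 5], [1, 1, 3]]
w1 = [-5, 2, 1, 0]
w2 = [-1, 0, 0, 1]

assert vecmat(w1, A) == [0, 0, 0]  # w1^T A = 0, so w1 is in the left nullspace
assert vecmat(w2, A) == [0, 0, 0]  # likewise for w2
```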

Take a minute to look over the example, and, in particular, the answers. Do you notice anything about the bases for the four fundamental subspaces?

You may have noticed that if we combine the bases for the columnspace and left nullspace, then we get a basis for all of R4. Also, if we combine the bases for the rowspace and nullspace, we get a basis for R3. There is one more thing to notice about these bases, but we will leave that for later in the course.
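This observation can be verified numerically: four vectors form a basis for R4 exactly when the 4-by-4 matrix having them as columns has nonzero determinant, and similarly for R3. A sketch with a small hand-rolled determinant (the `det` helper is ours, fine for matrices this size):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Columns: the columnspace basis of A, then the left-nullspace basis of A.
P4 = [[1, 1, -5, -1],
      [2, 1,  2,  0],
      [1, 3,  1,  0],
      [1, 1,  0,  1]]
# Columns: the rowspace basis of A, then the nullspace basis of A.
P3 = [[1, 0, -2],
      [0, 1, -1],
      [2, 1,  1]]

assert det(P4) != 0  # the combined vectors form a basis for R^4
assert det(P3) != 0  # the combined vectors form a basis for R^3
```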

Note that it is not actually necessary to row reduce A-transpose to find a basis for the left nullspace of A. We can use the fact that EA = R, where E is the product of elementary matrices used to bring A to its reduced row echelon form R, to find a basis for the left nullspace. This is left as an exercise.

## Dimension Theorem

We have now seen that the dimension of the columnspace of a matrix equals the rank, and the dimension of the nullspace of a matrix equals the number of free variables—which is the number of columns which do not have leading ones. Hence, we get the following theorem. The Dimension Theorem: If A is an m-by-n matrix, then the rank of A + the dimension of the nullspace of A = n.
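Applied to the two matrices from the earlier examples, the theorem reads as follows (writing Null(A) for the nullspace of A, a shorthand we introduce here):

```latex
\underbrace{2}_{\operatorname{rank}(A)} + \underbrace{2}_{\dim \operatorname{Null}(A)} = 4 = n
\qquad\text{and}\qquad
\underbrace{2}_{\operatorname{rank}(A)} + \underbrace{1}_{\dim \operatorname{Null}(A)} = 3 = n.
```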

At first glance, this theorem may not seem very profound. However, it is very important. Moreover, the extension of this result to general linear mappings, which we will do soon, will also be extremely useful. This theorem follows immediately from Theorem 2.2.5 and Theorem 7.1.2. However, I will provide an alternate proof of the theorem for two reasons—first, because we didn’t prove Theorem 2.2.5 back in Linear Algebra I, and second, because it is a good example to help you learn to make proofs.

Let’s begin thinking about how to prove it. To prove that rank(A) + the dimension of the nullspace of A = n, we can just prove that the dimension of the nullspace of A = n – the rank of A. However, this way turns out to be notationally ugly, which is why we skipped the proof of Theorem 2.2.5. So, alternatively, we will prove that the rank of A = n – the dimension of the nullspace of A.

So let’s assume that the dimension of the nullspace of A equals k. Then by definition of dimension, there exists a basis B, {v1 to vk}, for the nullspace of A. Since Theorem 7.1.2 says that the dimension of the columnspace of A equals the rank of A, we need to ask ourselves how to prove that the dimension of the columnspace of A = n – k. By definition of dimension, we need to find a basis for the columnspace of A that has n – k vectors in it. How do we construct this basis? First, we see that if we extend our basis B to a basis for all of Rn—say, {v1 to vk, v(k+1) to vn}—then we have added n – k vectors to the set.

But these vectors certainly do not have to be in the columnspace. So how can we transfer these vectors into the columnspace of A? The definition of the columnspace is the set of all vectors of the form Ax. Thus, the set C = {Av(k+1) to Avn} is a set of n – k vectors in the columnspace of A. If we can prove this is a basis, then we will be done with the proof.

Let’s start with linear independence. Consider a linear combination c(k+1)Av(k+1) + up to cnAvn which equals the 0 vector. Factoring out the matrix A, we get the 0 vector = A(c(k+1)v(k+1) + up to cnvn). Thus, by definition, c(k+1)v(k+1) + up to cnvn is in the nullspace of A. Since it is in the nullspace of A, we can write this vector as a linear combination of the basis vectors for the nullspace—that is, a linear combination of the vectors in B—say, c(k+1)v(k+1) + up to cnvn is equal to the linear combination d1v1 + up to dkvk. Moving the vectors on the right to the left, we get –d1v1 – up to – dkvk + c(k+1)v(k+1) + up to cnvn = the 0 vector. But {v1 to vn} is a basis for Rn, and hence is a linearly independent set. So, the only solution to this equation must be where all the coefficients are 0—that is, d1 = up to dk = c(k+1) = up to cn = 0. Thus, the only solution to the linear combination c(k+1)Av(k+1) + up to cnAvn = the 0 vector is where all of the coefficients c(k+1) up to cn are 0, and so by definition, C is linearly independent.

We now need to prove that C also spans the columnspace of A. That is, we need to prove that if we pick any vector in the columnspace, then we can write it as a linear combination of the vectors in C. So we let b be any vector in the columnspace of A. Then, by definition of the columnspace of A, there exists a vector x in Rn such that b = Ax. Since x is in Rn, we can write it as a linear combination of the basis vectors {v1 to vn}—say, x = x1v1 + up to xnvn. Thus we have b = A(x1v1 + up to xnvn). Multiplying the A into the brackets, we get b = x1Av1 + up to xkAvk + x(k+1)Av(k+1) + up to xnAvn. But v1 to vk are in the nullspace of A, so A times all of these vectors is the 0 vector. Thus we have that b is a linear combination of {Av(k+1) up to Avn} as required.

Hence, the dimension of the columnspace = n – k, which is n – the dimension of the nullspace as required. This completes the proof.
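As a concrete illustration of the construction in the proof, here is a sketch using the 3-by-4 matrix A from the earlier examples, with a particular extension of its nullspace basis chosen by hand (the choice of e1 and e3 is ours; one can check the extended set is linearly independent):

```python
from fractions import Fraction

def matvec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, -1, -3, 0], [-2, 1, 1, -4], [4, -2, -4, 4]]

# Nullspace basis B = {v1, v2}, extended to a basis of R^4 by e1 and e3.
v1 = [Fraction(1, 2), 1, 0, 0]
v2 = [-3, 0, -2, 1]
e1 = [1, 0, 0, 0]
e3 = [0, 0, 1, 0]

assert matvec(A, v1) == [0, 0, 0] and matvec(A, v2) == [0, 0, 0]

# Applying A to the added vectors produces vectors in the columnspace.
# Here A e1 and A e3 are exactly the first and third columns of A --
# the very columnspace basis found in the earlier example.
assert matvec(A, e1) == [2, -2, 4]
assert matvec(A, e3) == [-3, 1, -4]
```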

This ends this lecture.