## Transcript — Introduction

In this lecture, we will review the definition of the four fundamental subspaces of a matrix, and prove a theorem which gives us an easy way to find a basis for the columnspace of a matrix.

Definition: Let A be an m by n matrix. The four fundamental subspaces of A are

- The columnspace of A, which is the set of all possible linear combinations of the columns of A. Recall that our second definition of matrix-vector multiplication is that a matrix A times a vector x is a linear combination of the columns of the matrix. Thus, we can represent the columnspace of A in set notation as {Ax | x in Rn}.
- The rowspace of A, which is the set of all possible linear combinations of the rows of A. Since taking the transpose of a matrix turns its rows into columns, we can write this as the set of all vectors {(A-transpose)x | x in Rm}. Observe that this is equivalent to the columnspace of A-transpose.
- The nullspace of A is the set of all vectors {x in Rn | Ax = 0}.
- The left nullspace of A is the set of all vectors {x in Rm | (A-transpose)x = 0}. Notice that this is also the nullspace of A-transpose.
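These definitions can be explored on a concrete matrix. Here is a minimal sketch using the SymPy library (the 3 by 4 matrix is just a sample chosen for illustration); each method returns a list of basis vectors, so the lengths are the dimensions of the subspaces:

```python
from sympy import Matrix

# A sample 3 by 4 matrix, so m = 3 and n = 4.
A = Matrix([[1, 1, 3, -1],
            [2, 2, 1, 3],
            [1, 1, -1, 3]])

col = A.columnspace()    # basis for Col(A), a subspace of R^3 (that is, R^m)
row = A.rowspace()       # basis for Row(A), a subspace of R^4 (that is, R^n)
null = A.nullspace()     # basis for Null(A), a subspace of R^4
left = A.T.nullspace()   # left nullspace = Null(A-transpose), a subspace of R^3
```

Note that the basis sizes are consistent with the ambient spaces in Theorem 7.1.1: the columnspace and left nullspace bases live in R^m, while the rowspace and nullspace bases live in R^n.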

We can easily prove the following theorem using the Subspace Test. Theorem 7.1.1: If A is an m by n matrix, then the columnspace of A and the left nullspace of A are subspaces of Rm, and the rowspace of A and the nullspace of A are subspaces of Rn.

In Linear Algebra I, we learned how to find bases for the four fundamental subspaces of a matrix. In particular, we had a lot of practice finding bases for nullspaces when we were finding bases for eigenspaces. Our goal now, though, is to find an easier way to determine a basis for the columnspace and rowspace of a matrix.

The next theorem shows us that finding the basis of the columnspace of a matrix is very, very easy. However, proving that the method works is a little difficult, and uses notation that some students will find tricky. I will refer to an example at the same time that I do the beginning of the proof to help you understand the notation. Again, I will remind you that you should move very slowly through the proof, and review all of the elements from Linear Algebra I that you don’t remember before proceeding in the proof. Some students may find that it takes them more than an hour to go through this proof. Do not worry if it does take you more than an hour. If you put that kind of dedication into this course, you should do really well.

## Theorem 7.1.2

Theorem 7.1.2: Let A be an m by n matrix. The columns of A which correspond to leading ones in the reduced row echelon form of A form a basis for the columnspace of A. Moreover, the dimension of the columnspace of A equals the rank of A.

Proof: We first observe that if A = 0(m,n), then the result is trivial. Hence, we can assume that rank(A) = r is greater than 0. To prove that the columns of A which correspond to the columns with leading ones in the reduced row echelon form R of A form a basis for Col(A), we will need to prove that these columns form a linearly independent spanning set for Col(A).

At first, this seems like it might be a little tricky, since we know absolutely nothing about the columns of A. However, by definition of the reduced row echelon form, we know lots about the columns of R. Hence, the key to the proof is to relate the columns of A to the columns of the reduced row echelon form of A.

To do this effectively, we need to invent some notation. Denote the columns of the reduced row echelon form R of A by r1 up to rn. Since rank(A) = r, the reduced row echelon form contains r leading ones. Let t1 to tr denote the indices of the columns of R which contain leading ones. To help you understand the notation, let’s demonstrate this with a numerical example. Consider the example A = [1, 1, 3, -1; 2, 2, 1, 3; 1, 1, -1, 3]. The matrix A has reduced row echelon form R = [1, 1, 0, 2; 0, 0, 1, -1; 0, 0, 0, 0]. In this case, we are denoting the columns of R by r1 = [1; 0; 0], r2 = [1; 0; 0], r3 = [0; 1; 0], and r4 = [2; -1; 0]. The first and third columns of R contain leading ones; hence, rank(A) = r = 2. Thus, in this case, we have that t1 = 1 and t2 = 3, as these are the indices of the columns of R containing leading ones.
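The example above can be checked by machine. SymPy's `rref` method returns both R and the tuple of pivot column indices; note that SymPy indexes columns from 0, so t1 = 1 and t2 = 3 appear as 0 and 2. A quick sketch:

```python
from sympy import Matrix

A = Matrix([[1, 1, 3, -1],
            [2, 2, 1, 3],
            [1, 1, -1, 3]])

# rref() returns (R, pivot_columns); the pivot columns are the
# 0-based indices of the columns of R containing leading ones.
R, pivots = A.rref()
```

Running this confirms that R is the matrix stated above and that the leading ones sit in the first and third columns, so rank(A) = 2.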

Our first goal in the proof is to show that the set B of vectors {rt1 up to rtr} is a basis for Col(R). In the case of our example, we want to prove that B = {r1, r3} = {[1; 0; 0], [0; 1; 0]} is a basis for Col(R). It should be clear that this is indeed the case. The example also indicates how we can prove this in general. By definition of the reduced row echelon form, the vectors rt1 up to rtr are standard basis vectors of Rm, and hence form a linearly independent set. Moreover, by definition of the reduced row echelon form, every column of R which does not contain a leading 1 can be written as a linear combination of the columns which do contain leading ones. So, Span B = Col(R). Therefore, B is a basis for Col(R) as claimed.

We now want to use this fact to show that the corresponding columns in A form a basis for Col(A). To do this, we need notation for the appropriate columns of A. Denote the columns of A by a1 up to an. So, we want to show that C = {at1 up to atr} is a basis for Col(A) by using the fact that B is a basis for Col(R). In terms of the example, we are denoting the columns of A by a1 = [1; 2; 1], a2 = [1; 2; 1], a3 = [3; 1; -1], and a4 = [-1; 3; 3]. Since t1 = 1, by a-subscript-t1, we mean a1. Since t2 = 3, by a-subscript-t2, we mean a3. That is, we want to prove that the set {a1, a3} is a basis for Col(A).

To do this, we first need to find a relationship between the vectors in B and C. That is, we need to find a relationship between columns in A and columns in R. Of course, these are related through the elementary row operations used to row reduce A to R. For purposes of proving things, we represent elementary row operations with elementary matrices. Since R is the reduced row echelon form of A, we know that there exists a sequence of elementary matrices E1 to Ek such that Ek times up to E1A = R. For simplicity, let E = Ek times up to E1. Recall that every elementary matrix is invertible; hence, E-inverse = (E1-inverse) times up to (Ek-inverse) exists. Then we have that R = EA = [Ea1 up to Ean] by definition of matrix-matrix multiplication. Upon comparing columns of each side, we see that we have ri = Eai, or we can rewrite this as ai = (E-inverse)ri.
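The relationship ai = (E-inverse)ri can be verified numerically for the running example. One way to obtain an invertible E with EA = R is to row reduce the augmented matrix [A | I]: every elementary operation acts on both blocks at once, so the result is [EA | E] = [R | E], with E recording the accumulated product of elementary matrices. A sketch of this (the construction via the augmented matrix is my own illustration, not part of the proof):

```python
from sympy import Matrix, eye

A = Matrix([[1, 1, 3, -1],
            [2, 2, 1, 3],
            [1, 1, -1, 3]])

# Row reduce [A | I]. The left block becomes R = rref(A); the right
# block becomes an invertible matrix E satisfying E*A = R.
aug = A.row_join(eye(3)).rref()[0]
R, E = aug[:, :4], aug[:, 4:]

# Column by column this says E*a_i = r_i, equivalently a_i = E^(-1)*r_i.
checks = [E * A.col(j) == R.col(j) for j in range(4)]
```

Because E is a product of elementary matrices, it is invertible, which is what lets us pass back and forth between columns of A and columns of R in the rest of the proof.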

We now use the relationship to prove that C is a basis. To prove that C is linearly independent, as usual, we use the definition of linear independence. Consider c1(at1) + up to cr(atr) = 0. Recall, to prove that the set is linearly independent, we need to prove the only solution to this equation is where all of the coefficients c1 to cr are 0. So, multiplying both sides by E, and using the above relationship, we get that c1(rt1) + up to cr(rtr) = 0. Thus, all of the coefficients c1 up to cr are 0 since we have already proved that {rt1 up to rtr} is linearly independent. Hence, we have proven that C is linearly independent.

To show that C spans Col(A), we need to show that if we pick any vector b in Col(A), then we can write it as a linear combination of the vectors {at1 up to atr}. Pick any vector b in Col(A). Then by definition of the columnspace, there exists a vector x in Rn such that b = Ax, which is (E-inverse)Rx. Multiplying by E on the left of both sides gives Eb = Rx. Notice that, by definition, Rx is in Col(R). Therefore, Eb is in Col(R), and hence can be written as a linear combination of the basis vectors {rt1 up to rtr}. Hence, we have Eb = d1(rt1) + up to dr(rtr). Multiplying again on the left of both sides by E-inverse gives b = d1(E-inverse)(rt1) + up to dr(E-inverse)(rtr), and hence, b = d1(at1) + up to dr(atr) as required. Thus, C spans Col(A), and therefore C is a basis for Col(A).
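The spanning argument can also be seen concretely in the example: the coefficients that express the non-pivot columns of R in terms of {r1, r3} are exactly the coefficients that express the non-pivot columns of A in terms of {a1, a3}. A small SymPy check:

```python
from sympy import Matrix

A = Matrix([[1, 1, 3, -1],
            [2, 2, 1, 3],
            [1, 1, -1, 3]])
a1, a2, a3, a4 = [A.col(j) for j in range(4)]

R = A.rref()[0]
r1, r2, r3, r4 = [R.col(j) for j in range(4)]

# Reading off R: r2 = 1*r1 and r4 = 2*r1 - 1*r3. The same linear
# combinations hold for the corresponding columns of A.
same_coeffs = (r2 == r1) and (r4 == 2*r1 - r3) \
    and (a2 == a1) and (a4 == 2*a1 - a3)
```

This is the content of the relationship ai = (E-inverse)ri: multiplying by the invertible matrix E-inverse preserves linear combinations, so dependencies among the columns of R carry over to the columns of A.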

Finally, recall that the dimension of a vector space is the number of vectors in any basis for it, and thus the dimension of Col(A) = r = rank(A). This completes the proof.

I will again remind you that it is not expected that you can immediately follow this proof. It will take you time to truly understand all of it. You may find it helpful to follow along with a numerical example of your own. If the elementary matrix part at the end is causing you some confusion, you may want to follow along with that part in your numerical example as well.

In the next lecture, we will see how to easily find a basis for the rowspace of a matrix, and we will prove another very important theorem.