Lesson: The Fundamental Theorem of Linear Algebra

Transcript — Introduction

We have now seen that if W is a subspace of an inner product space V, then every vector v in V can be written as a sum of a vector in W and a vector in the orthogonal complement of W. In this lecture, we start by inventing some notation for this idea. We will then use our work with orthogonal complements and fundamental subspaces to prove the Fundamental Theorem of Linear Algebra.

Direct Sums

Definition: Let U and W be subspaces of a vector space V such that the intersection of U and W contains only the 0 vector. We define the direct sum of U and W to be the set of all vectors u + w such that u is in U and w is in W. It is important to notice that the condition in the definition of the direct sum is that U and W must have trivial intersection. This means that before you can take the direct sum of two subspaces, you must check that their intersection is trivial. On the other hand, whenever we write down U direct sum W, we are implying that the intersection of U and W contains only the 0 vector.
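Written in standard notation, the definition above reads:

```latex
% Direct sum of subspaces U and W of V (defined only when the intersection is trivial)
U \oplus W = \{\, u + w : u \in U,\ w \in W \,\}, \qquad U \cap W = \{\mathbf{0}\}.
```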

As usual, this is best demonstrated with an example. Example: Let U be the subspace of R3 spanned by {[1; 0; 0]} and W be the subspace of R3 spanned by {[0; 1; 2]}. What is the direct sum of U and W? Solution: First note that the intersection of U and W contains only the 0 vector, since [1; 0; 0] and [0; 1; 2] are not scalar multiples of each other, so the direct sum is defined. By definition, we have that U direct sum W is equal to the set of all (u + w) such that u is in U and w is in W. But every vector u in U has the form s[1; 0; 0], and every vector w in W has the form t[0; 1; 2]. Thus, we get that U direct sum W is the subspace spanned by {[1; 0; 0], [0; 1; 2]}.
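If you want to check this kind of condition by machine, a minimal sketch in Python with NumPy (my own verification, not part of the lecture) is to confirm that the two spanning vectors are linearly independent, so the intersection is trivial and the direct sum is two-dimensional:

```python
import numpy as np

# Spanning vectors for U and W from the example.
u = np.array([1.0, 0.0, 0.0])   # spans U
w = np.array([0.0, 1.0, 2.0])   # spans W

# U and W intersect only in the 0 vector exactly when {u, w} is linearly
# independent, i.e. the 3x2 matrix with columns u and w has rank 2.
M = np.column_stack([u, w])
print(np.linalg.matrix_rank(M) == 2)   # True, so the direct sum is defined and 2-dimensional
```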

Even though this is a simple example, by analyzing it, we can conjecture an important property of the direct sum—in particular, that a basis for the direct sum of U and W should be the union of a basis for U and a basis for W. Of course, to be sure, we need to prove this. Theorem 9.5.1: If U and W are subspaces of a vector space V, then the direct sum of U and W is a subspace of V. Moreover, if {u1 to uk} is a basis for U, and {w1 to wl} is a basis for W, then the set {u1 to uk, w1 to wl} is a basis for the direct sum of U and W. I will leave the proof of this as an exercise.
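One consequence of Theorem 9.5.1 worth recording (it follows immediately by counting the basis vectors) is the dimension formula for a direct sum:

```latex
% Counting the basis vectors in {u_1, ..., u_k, w_1, ..., w_l}
\dim(U \oplus W) = \dim U + \dim W = k + \ell.
```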

This idea of combining the bases to be a basis for the direct sum reminds us a lot of our work with orthogonal complements. Recall that if W is a subspace of an inner product space V, then if we combine bases for W and its orthogonal complement, we get a basis for all of V. Hence, using the notation for the direct sum, we get the following theorem. Theorem 9.5.2: If V is a finite-dimensional inner product space, and W is a subspace of V, then the direct sum of W and its orthogonal complement equals all of V.
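In the direct sum notation, Theorem 9.5.2 is simply:

```latex
% Theorem 9.5.2
V = W \oplus W^{\perp}.
```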

Let’s consider what this shows us in picture form. Observe that we can represent the inner product space V like a plane with axes W and the orthogonal complement of W. That is, every vector v in V can be represented by a point in the plane, (w, x), where v is equal to w + x, and w is a vector in W, and x is a vector in the orthogonal complement of W. Notice that with this interpretation, the projection and perpendicular look just like the projection and perpendicular of a vector in R2 onto the standard coordinate axes.

It turns out that this result is particularly important when applied to the four fundamental subspaces of a matrix. Let’s consider an example.

Example: Find a basis for each of the four fundamental subspaces of the matrix A = [1, 2, 0, -1; 3, 6, 1, -1; -2, -4, 2, 6]. Solution: We solve this just as we did back in Module 7. So first, we row reduce A to its reduced row echelon form R = [1, 2, 0, -1; 0, 0, 1, 2; 0, 0, 0, 0]. Then we know a basis for the rowspace of A is all of the non-zero rows from the reduced row echelon form, and so a basis is {[1, 2, 0, -1], [0, 0, 1, 2]}. A basis for the columnspace of A is the columns from the original matrix A that correspond to the columns of R which contain leading ones, and hence, a basis for the columnspace of A is {[1; 3; -2], [0; 1; 2]}. And, as mentioned before, we should be able to determine a basis for the nullspace of A by inspection from the reduced row echelon form R, and so doing this, we find a basis for the nullspace of A is {[-2; 1; 0; 0], [1; 0; -2; 1]}. Finally, to get a basis for the left nullspace of A, we row reduce A-transpose to get [1, 0, -8; 0, 1, 2; 0, 0, 0; 0, 0, 0]. And so, again by inspection, a basis for the left nullspace of A is {[8; -2; 1]}.
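The computations in this example are easy to check by machine; here is a minimal verification script using SymPy (my own sketch, not something from the lecture):

```python
import sympy as sp

# The matrix from the example.
A = sp.Matrix([[ 1,  2, 0, -1],
               [ 3,  6, 1, -1],
               [-2, -4, 2,  6]])

R, pivots = A.rref()                  # reduced row echelon form and pivot columns
print(R)                              # nonzero rows of R: basis for the rowspace
print([A.col(j) for j in pivots])     # pivot columns of A: basis for the columnspace
print(A.nullspace())                  # basis for the nullspace
print(A.T.nullspace())                # basis for the left nullspace (nullspace of A-transpose)
```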

Now as usual, whenever we do an example or solve a problem, we should compare the result with our theory. So, how does this solution compare with what we have been doing? We first notice that the direct sum of the rowspace and the nullspace is all of R4, and the direct sum of the columnspace and the left nullspace is R3. But there is one more thing to notice: every vector in the basis for the rowspace is orthogonal to each of the vectors in the basis for the nullspace, and the basis vector for the left nullspace is orthogonal to the basis vectors for the columnspace. That is, the orthogonal complement of the rowspace of the matrix is the nullspace, and the orthogonal complement of the columnspace is the left nullspace.
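These orthogonality claims are also easy to verify numerically; a quick check (again my own, using the basis vectors found above):

```python
import numpy as np

row_basis  = [np.array([1, 2, 0, -1]), np.array([0, 0, 1, 2])]
null_basis = [np.array([-2, 1, 0, 0]), np.array([1, 0, -2, 1])]
col_basis  = [np.array([1, 3, -2]), np.array([0, 1, 2])]
left_null  = [np.array([8, -2, 1])]

# Every rowspace basis vector is orthogonal to every nullspace basis vector ...
print(all(r @ n == 0 for r in row_basis for n in null_basis))   # True
# ... and the left-nullspace basis vector is orthogonal to the columnspace basis.
print(all(c @ y == 0 for c in col_basis for y in left_null))    # True
```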

The Fundamental Theorem of Linear Algebra

This is an extremely important result, and so we call it the Fundamental Theorem of Linear Algebra. Let A be an m-by-n matrix. Then the orthogonal complement of the columnspace is the left nullspace, and the orthogonal complement of the rowspace is the nullspace. In particular, using our direct sum notation, we have that Rn is the direct sum of the rowspace and the nullspace, and Rm is the direct sum of the columnspace and the left nullspace.
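Written in symbols, the statement is:

```latex
% Fundamental Theorem of Linear Algebra, for an m-by-n matrix A
(\operatorname{Col} A)^{\perp} = \operatorname{Null}(A^{T}), \qquad
(\operatorname{Row} A)^{\perp} = \operatorname{Null} A,
% and, using the direct sum notation,
\mathbb{R}^{n} = \operatorname{Row} A \oplus \operatorname{Null} A, \qquad
\mathbb{R}^{m} = \operatorname{Col} A \oplus \operatorname{Null}(A^{T}).
```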

Of course, we need to prove this. We will start by proving that the orthogonal complement of the rowspace is the nullspace. Since we need to work with the rows of the matrix, let’s denote A by its rows, [v1-transpose to vm-transpose]. Let x be any vector in the orthogonal complement of the rowspace. Then, by definition of the orthogonal complement, x is orthogonal to every row of A. That is, the dot product of vi and x equals 0 for all i from 1 to m. We want to show that x is in the nullspace of A, and so we consider Ax, which, by definition of matrix-vector multiplication, is [v1 dot product x; down to vm dot product x], but all of those dot products are equal to 0, and so Ax is the 0 vector, and hence, x is in the nullspace of A. Therefore, we have shown that the orthogonal complement of the rowspace of A is a subset of the nullspace of A. On the other hand, let’s let y be any vector in the nullspace of A. Then Ay is the 0 vector, and so the dot product of y and any of the vectors v1 to vm is 0. Consequently, y is orthogonal to all of the rows of A, and so by Theorem 9.4.1, y is in the orthogonal complement of the rowspace of A. Therefore, the nullspace is also a subset of the orthogonal complement of the rowspace, and so they are equal.
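The key computation in the first half of this argument, written out:

```latex
% A written in terms of its rows; x is orthogonal to each row
A = \begin{bmatrix} \mathbf{v}_1^{T} \\ \vdots \\ \mathbf{v}_m^{T} \end{bmatrix}
\quad\Longrightarrow\quad
A\mathbf{x} =
\begin{bmatrix} \mathbf{v}_1 \cdot \mathbf{x} \\ \vdots \\ \mathbf{v}_m \cdot \mathbf{x} \end{bmatrix}
= \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} = \mathbf{0}.
```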

We could prove that the orthogonal complement of the columnspace is the left nullspace by using a similar method with the columns of A. However, we can prove it much more quickly by substituting A-transpose into what we have just proven. In particular, the orthogonal complement of the columnspace of A is equal to the orthogonal complement of the rowspace of A-transpose, which, by what we have just proven, is equal to the nullspace of A-transpose, which is just the left nullspace.
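In symbols, this chain of equalities is:

```latex
(\operatorname{Col} A)^{\perp} = (\operatorname{Row} A^{T})^{\perp} = \operatorname{Null}(A^{T}).
```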

The remaining part of the Fundamental Theorem follows from Theorem 9.5.2.

Take a minute to look over the proof, and make sure you understand all of the steps.

I personally don’t think that the Fundamental Theorem of Linear Algebra is the most important theorem in linear algebra. However, what makes it special is that it summarizes a lot of linear algebra theory into one nice statement. First, observe that since the orthogonal complement of the rowspace is the nullspace, we have, from our work with orthogonal complements, that the dimension of the nullspace is n minus the dimension of the rowspace. That is, the dimension of the nullspace is n minus the rank of the matrix. In other words, the Fundamental Theorem of Linear Algebra implies the Dimension Theorem.
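The dimension count just described, in symbols:

```latex
% Since (Row A)^perp = Null A and dim W + dim W^perp = dim V, applied in R^n:
\dim(\operatorname{Null} A) = n - \dim(\operatorname{Row} A) = n - \operatorname{rank} A.
```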

We next refer back to the diagram we had earlier. Our work with direct sums and the Fundamental Theorem of Linear Algebra shows us that we can divide Rn into the rowspace and nullspace, and Rm into the columnspace and left nullspace. Thus, if x is any vector in Rn, then it can be written as x = r + n, where r is a vector in the rowspace and n is a vector in the nullspace. If we multiply this by A, we get Ax = A(r + n), which is Ar + An, which equals Ar + the 0 vector since n is in the nullspace of A, which equals Ar. Therefore, every vector in Rn of the form r + n with n in the nullspace (that is, every vector on the translate of the nullspace through r) is mapped to the same vector Ar in the columnspace of A. This implies that a consistent system Ax = b can have a unique solution only if the nullspace of A only contains the 0 vector.
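A short numerical illustration of this decomposition, using the matrix from the earlier example and an arbitrarily chosen vector x (my own sketch; the projection onto the rowspace is done with the pseudoinverse):

```python
import numpy as np

A = np.array([[ 1,  2, 0, -1],
              [ 3,  6, 1, -1],
              [-2, -4, 2,  6]], dtype=float)

x = np.array([1.0, 2.0, 3.0, 4.0])      # an arbitrary vector in R^4

# pinv(A) @ A is the orthogonal projector onto the rowspace of A,
# so r is the rowspace part of x and n = x - r is the nullspace part.
r = np.linalg.pinv(A) @ (A @ x)
n = x - r

print(np.allclose(A @ n, 0))            # True: n lies in the nullspace of A
print(np.allclose(A @ x, A @ r))        # True: x and r are mapped to the same vector
```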

This is only a small sample of everything that the Fundamental Theorem implies. In this course, we will use the Fundamental Theorem in a couple of proofs, including one in the next lecture, so it is important that you understand it and remember it. This ends this lecture.
