## Transcript — Introduction

The goal of this lecture is to extend the definitions of range and kernel to general linear mappings, and to prove the very important Rank-Nullity Theorem.

## Subspaces of a Linear Mapping

We begin by making some familiar definitions. Definition: Let L from V to W be a linear mapping. We define the kernel of L to be the set of all vectors v in V such that L(v) = the 0 vector in W. Definition: Let L from V to W be a linear mapping. We define the range of L to be the set of all L(v) such that v is in V. These definitions are the same as what we had for linear mappings from Rn to Rm, so of course, the procedure for finding bases for the range and kernel of general linear mappings is also the same.

Example: Find a basis for the range and kernel of the linear mapping L from R3 to P2(R) defined by L(a, b, c) = a + (a + b + c)(x-squared). First, observe that every vector in the range of L has the form a + (a + b + c)(x-squared). Factoring out the variables, we can write this as a(1 + x-squared) + (b + c)(x-squared). Hence, the set {1 + x-squared, x-squared} spans the range of L, and is clearly linearly independent, so it is a basis for the range. Now let’s find a basis for the kernel. Let [a; b; c] be any vector in the kernel of L. Then, by definition of the kernel, we have L(a, b, c) is the 0 polynomial, 0 + 0x + 0x-squared. But, by definition of the mapping, L(a, b, c) = a + (a + b + c)(x-squared). Comparing like powers of x, this implies that a = 0 and a + b + c = 0, which gives a = 0 and b = -c. Thus, every vector in the kernel has the form [0; -c; c], which is c[0; -1; 1]. Hence, {[0; -1; 1]} is a basis for the kernel of L.
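As a quick check of this example, we can encode L as a matrix with respect to the standard basis of R3 and the basis {1, x, x-squared} of P2(R), and let a computer algebra system find the kernel and range. This is an illustrative SymPy sketch, not part of the lecture:

```python
import sympy as sp

# Matrix of L with respect to the standard bases: rows are the
# coefficients of 1, x, x^2; columns are the images of e1, e2, e3.
A = sp.Matrix([
    [1, 0, 0],   # constant term: a
    [0, 0, 0],   # x term: 0
    [1, 1, 1],   # x^2 term: a + b + c
])

null_basis = A.nullspace()   # basis for the kernel of L
print(null_basis[0].T)       # proportional to (0, -1, 1), as computed above
print(A.rank())              # 2, matching the two basis vectors for the range
```

The column space of this matrix consists of the coordinate vectors of the range, so its rank is the dimension of the range of L.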

As mentioned before, the concepts in linear algebra are closely connected. If we look over this example carefully, we should see some similarity with what we were doing with matrices—in particular, the Dimension Theorem. Observe that the dimension of the range plus the dimension of the kernel equals the dimension of the domain. Our goal now is to show that we have an equivalent to the Dimension Theorem for general linear mappings.

To make this look even more like the Dimension Theorem, we first make a couple of definitions. Definition: Let L from V to W be a linear mapping. We define the rank of L to be the dimension of the range of L, and we define the nullity of L to be the dimension of the kernel of L.

Theorem 8.2.1: Let L from V to W be a linear mapping. Then the kernel of L is a subspace of V, and the range of L is a subspace of W.

## The Rank-Nullity Theorem

We then get the very useful Rank-Nullity Theorem. Theorem 8.2.2, the Rank-Nullity Theorem: Let V be an n-dimensional vector space, and let W be a vector space. If L from V to W is linear, then the rank of L plus the nullity of L equals the dimension of the domain, which is n. Since this theorem is so similar to the Dimension Theorem, it should not be surprising that the proof is essentially the same as the proof of the Dimension Theorem. If you have previously taken the time to understand the proof of the Dimension Theorem, which you should have, then this proof should be very easy.
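Before the proof, it can be instructive to check the theorem on a concrete matrix, since any m-by-n matrix defines a linear mapping from Rn to Rm. A small SymPy sketch (the matrix here is an arbitrary example, not from the lecture):

```python
import sympy as sp

# An arbitrary linear mapping L: R^4 -> R^3, given by a matrix.
A = sp.Matrix([
    [1, 2, 0, 1],
    [0, 1, 1, 1],
    [1, 3, 1, 2],
])

rank = A.rank()                # dimension of the range of L
nullity = len(A.nullspace())   # dimension of the kernel of L
print(rank, nullity)           # 2 2 -- and 2 + 2 = 4, the dimension of the domain
```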

Proof: Assume that the nullity of L equals k. Then there exists a basis {v1 to vk} for the kernel of L. Since the kernel of L is a subspace of V, we know that we can extend the basis for the kernel of L to a basis {v1 to vk, v(k+1) to vn} for V. We notice that the set C = {L(v(k+1)) up to L(vn)} is a set of n – k vectors in the range of L. Let’s prove that this is a basis for the range of L. To do this, we need to prove it is a linearly independent spanning set.

First, consider c(k+1)L(v(k+1)) + up to cnL(vn) = the 0 vector. Since L is linear, we get that this is L(c(k+1)v(k+1) + up to cnvn) = the 0 vector. But by definition, this implies that the vector c(k+1)v(k+1) + up to cnvn is in the kernel of L. Since this is in the kernel of L, we can write it as a linear combination of the basis vectors for the kernel. That is, there exist d1 to dk such that c(k+1)v(k+1) + up to cnvn = d1v1 + up to dkvk. Moving all the vectors to the left side gives –d1v1 – up to – dkvk + c(k+1)v(k+1) + up to cnvn = the 0 vector. Hence, all of the coefficients d1 up to dk, c(k+1) up to cn are equal to 0, since the set {v1 to vk, v(k+1) up to vn} is linearly independent, as it is a basis for V. And thus, the set C is linearly independent.

For spanning, we let y be any vector in the range of L. Then, by definition, there exists some vector v in V such that y = L(v). Since v is in V, it can be written as a linear combination of the basis vectors: v = c1v1 + up to cnvn. Then, using the fact that L is linear, we get that y = L(v) = c1L(v1) + up to ckL(vk) + c(k+1)L(v(k+1)) + up to cnL(vn). Now observe that the first k vectors are all the 0 vector, since v1 to vk are in the kernel of L. And so, we have written y as a linear combination of the vectors in C, and so C also spans the range of L. Therefore, we have shown that C is a basis for the range of L. Hence, by definition, the rank of L equals the dimension of the range of L, which is n – k, which equals n minus the nullity of L. This completes the proof.
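The construction in the proof can also be carried out computationally: start with a basis for the kernel, extend it to a basis for the domain, and check that the images of the added vectors form a basis for the range. A SymPy sketch of the proof's idea, using the matrix of the earlier example from R3 to P2(R) (an illustration, not part of the lecture):

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0], [0, 0, 0], [1, 1, 1]])  # the earlier example

ker = A.nullspace()   # {v1, ..., vk}, a basis for the kernel
n = A.cols

# Extend the kernel basis to a basis of the domain by appending standard
# basis vectors that keep the set linearly independent.
basis = list(ker)
for j in range(n):
    e = sp.eye(n).col(j)
    if sp.Matrix.hstack(*basis, e).rank() > len(basis):
        basis.append(e)

# The images of the appended vectors should form a basis for the range.
images = [A * v for v in basis[len(ker):]]
M = sp.Matrix.hstack(*images)
print(M.rank() == A.rank() == len(images))  # True: the images are a basis
```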

Take a minute to read over the proof, and compare it to the proof of the Dimension Theorem. Notice that they are essentially identical. This is not the only time we will see proofs essentially repeated. Thus, studying proofs as we do them can have real long-term benefits. Also, notice that the theorem and the proof further demonstrate the close connection between matrices and linear mappings.

## Examples

Example: Let U be the subspace of M(2-by-2)(R) of upper triangular matrices, and let L from U to M(2-by-2)(R) be the linear mapping defined by L([a, b; 0, c]) = [a – b, a + c; b + c, 0]. Find the rank and nullity of L. Notice that we can either find the rank by finding a basis for the range of L and then use the Rank-Nullity Theorem to find the nullity, or find the nullity by finding a basis for the kernel and then use the Rank-Nullity Theorem to find the rank. In this example, we will first find the rank.

Every matrix B in the range of L has the form [a – b, a + c; b + c, 0]. Factoring out the variables, we get this equals a[1, 1; 0, 0] + b[-1, 0; 1, 0] + c[0, 1; 1, 0]. Hence, the set {[1, 1; 0, 0], [-1, 0; 1, 0], [0, 1; 1, 0]} is a spanning set for the range. Is it a basis? No. It is linearly dependent. In particular, we see the sum of the first two vectors is the third vector. Hence, we have that the range of L is spanned by {[1, 1; 0, 0], [-1, 0; 1, 0]}. Since this set is clearly linearly independent, it is a basis for the range of L. Thus, the rank of L is 2. To find the nullity, we just use the Rank-Nullity Theorem. We get the nullity of L is equal to the dimension of the domain minus the rank. The dimension of U is 3, and hence the nullity of L is 1.
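We can check this example by writing the matrix of L with respect to the basis {[1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 0, 1]} of U and the standard basis of M(2-by-2)(R). An illustrative SymPy sketch, not part of the lecture:

```python
import sympy as sp

# Columns are the images of the three basis matrices of U, written as
# coordinate vectors (entry order: top-left, top-right, bottom-left, bottom-right).
A = sp.Matrix([
    [1, -1, 0],   # a - b
    [1,  0, 1],   # a + c
    [0,  1, 1],   # b + c
    [0,  0, 0],   # 0
])

print(A.rank())            # 2: the rank of L
print(len(A.nullspace()))  # 1: the nullity of L, agreeing with 3 - 2
```

Note that the dependence found above shows up here as the third column being the sum of the first two.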

Let’s do one more example. Let L from P2(R) to M(2-by-3)(R) be the linear mapping defined by L(a + bx + cx-squared) = [a, a, c; a, a, c]. Find the rank and nullity of L. This time, let’s find a basis for the kernel of L. Let a + bx + cx-squared be any polynomial in the kernel of L. Then, by definition, we have that the 0 matrix equals L(a + bx + cx-squared), which, by definition of the mapping, equals [a, a, c; a, a, c]. Equating corresponding entries, this implies that a = 0 and c = 0. Thus, every vector in the kernel has the form a + bx + cx-squared where a = 0 and c = 0—i.e., every vector in the kernel has the form bx. Hence, a basis for the kernel of L is the set containing x, so the nullity of L is the dimension of the kernel, which is 1. Then, by the Rank-Nullity Theorem, we get that the rank of L equals the dimension of the domain minus the nullity, which is 2.
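This last example can be checked the same way, using the basis {1, x, x-squared} of P2(R) and the standard basis of M(2-by-3)(R). An illustrative SymPy sketch, not part of the lecture:

```python
import sympy as sp

# Columns are the images of 1, x, x^2 under L, flattened row by row
# into coordinate vectors in R^6.
A = sp.Matrix([
    [1, 0, 0],   # (1,1) entry: a
    [1, 0, 0],   # (1,2) entry: a
    [0, 0, 1],   # (1,3) entry: c
    [1, 0, 0],   # (2,1) entry: a
    [1, 0, 0],   # (2,2) entry: a
    [0, 0, 1],   # (2,3) entry: c
])

nullity = len(A.nullspace())   # 1: the kernel is spanned by the coordinates of x
rank = A.rank()                # 2, agreeing with 3 - 1 by the Rank-Nullity Theorem
print(nullity, rank)
```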

This ends this lecture. In the next lecture, we will begin to look at matrix representations for general linear mappings.