## Transcript

Now that we are studying vector spaces in general, and with them linear mappings between general vector spaces, we can delve deeper into our study of linear mappings. We will, of course, start with some new definitions.

A mapping L is said to be one-to-one if L(u1) = L(u2) implies that u1 = u2.

So for example, the mapping L from R4 to R2 defined by L([a; b; c; d]) = [a; d] is not one-to-one. One counterexample is that L([1; 2; 1; 2]) = [1; 2] and L([1; -1; -2; 2]) also equals [1; 2]. So L([1; 2; 1; 2]) = L([1; -1; -2; 2]) even though [1; 2; 1; 2] does not equal [1; -1; -2; 2]. But the mapping L from R2 to R4 defined by L([a; b]) = [a; b; a; b] is one-to-one. To prove this, we assume that L([a1; b1]) = L([a2; b2]). Well, that means that [a1; b1; a1; b1] = [a2; b2; a2; b2]. From this, we clearly have that a1 = a2 and b1 = b2, which means that the vector [a1; b1] = [a2; b2]. So we’ve shown that L([a1; b1]) = L([a2; b2]) implies that [a1; b1] = [a2; b2].
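Both of these examples can be checked numerically. Below is a minimal sketch, assuming we represent each mapping by its standard matrix (the matrices `L1` and `L2` are my own illustration, not notation from the lecture); for a matrix mapping, one-to-one is equivalent to the rank equaling the number of columns.

```python
import numpy as np

# Assumed matrix representations of the two example mappings.
# L1 : R^4 -> R^2, L1([a, b, c, d]) = [a, d]
L1 = np.array([[1, 0, 0, 0],
               [0, 0, 0, 1]])

# L2 : R^2 -> R^4, L2([a, b]) = [a, b, a, b]
L2 = np.array([[1, 0],
               [0, 1],
               [1, 0],
               [0, 1]])

# The counterexample from the transcript: two different inputs, same output.
u1 = np.array([1, 2, 1, 2])
u2 = np.array([1, -1, -2, 2])
assert np.array_equal(L1 @ u1, L1 @ u2)          # L1 is not one-to-one
assert not np.array_equal(u1, u2)

# One-to-one for a matrix mapping: rank equals the number of columns.
assert np.linalg.matrix_rank(L1) < L1.shape[1]   # L1 fails the test
assert np.linalg.matrix_rank(L2) == L2.shape[1]  # L2 is one-to-one
```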

The mapping L from R3 to P2 defined by L([a; b; c]) = a + (a + b)x + (a – c)(x-squared) is also one-to-one. To prove this, we assume that L([a1; b1; c1]) = L([a2; b2; c2]). That means that a1 + (a1 + b1)x + (a1 – c1)(x-squared) = a2 + (a2 + b2)x + (a2 – c2)(x-squared). Well, two polynomials are equal only if their coefficients are equal. So from our x-to-the-0 terms, we see that we have a1 = a2. From our x terms, we see that a1 + b1 = a2 + b2. And from our x-squared terms, we see that a1 – c1 = a2 – c2. But plugging in our first fact, that a1 = a2, into the other two equations, we quickly see that b1 will equal b2, and c1 will equal c2. And since a1 = a2, b1 = b2, and c1 = c2, we have that the vector [a1; b1; c1] equals the vector [a2; b2; c2], and thus, we’ve shown that L is one-to-one.

Another way to show that L is one-to-one is to use the following lemma. L is one-to-one if and only if the nullspace of L contains only the 0 vector.
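The lemma gives a quick computational check as well. Here is a sketch for the R3 to P2 example, assuming (for illustration only) that we record the polynomial p0 + p1x + p2(x-squared) as the coefficient vector [p0; p1; p2], so that L becomes a 3-by-3 matrix; the nullspace contains only the 0 vector exactly when that matrix has full column rank.

```python
import numpy as np

# Assumed coefficient-vector representation of
# L : R^3 -> P2, L([a, b, c]) = a + (a + b)x + (a - c)x^2.
A = np.array([[1, 0,  0],   # x^0 coefficient: a
              [1, 1,  0],   # x^1 coefficient: a + b
              [1, 0, -1]])  # x^2 coefficient: a - c

# The nullspace of A is trivial exactly when rank(A) equals the number of
# columns, so by the lemma, L is one-to-one.
assert np.linalg.matrix_rank(A) == 3
```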

To prove this, we need to prove both directions of this if-and-only-if statement. First, let’s show that if L is one-to-one, then the nullspace of L only contains the 0 vector. To that end, we’ll assume that L is one-to-one, and we will let x be in the nullspace of L. Well, by the definition of being in the nullspace, we know that L(x) = 0. But we also know that L(0) = 0. Since L is one-to-one and L(x) = L(0), we have that x = 0. As such, we’ve seen that 0 is the only element of the nullspace.

So now let’s prove the other direction of the if-and-only-if statement. We’ll assume that the nullspace of L contains only the 0 vector, and prove that L is one-to-one. So suppose that the nullspace of L contains only the 0 vector, and we’ll further suppose that we have two vectors u1 and u2 such that L(u1) = L(u2). Well, then L(u1 – u2) must equal L(u1) – L(u2), which equals the 0 vector. Well, this means that (u1 – u2) is in the nullspace of L, but 0 is the only element of the nullspace, so we have that u1 – u2 = 0. And by the uniqueness of the additive inverse, this means that u1 = u2, and this shows that L is one-to-one.

There’s another feature we often look for in a linear mapping, and here is its definition. A mapping L from U to V is said to be onto if, for every v in V, there exists some u in U such that L(u) = v. That is, L is onto if the range of L equals V.

So for example, our mapping L from R4 to R2 defined by L([a; b; c; d]) = [a; d] is onto. To prove this, let’s let [s1; s2] be some element of R2. Well, then the vector [s1; 0; 0; s2] in R4 is such that L([s1; 0; 0; s2]) = [s1; s2]. Now note that I could have also used [s1; 1; 1; s2] or [s1; s1; s2; s2] or countless other vectors from R4. We need to be able to find at least one vector u such that L(u) = v, not exactly one.
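We can verify all three of those preimages at once. A small sketch, again assuming the matrix representation of L([a; b; c; d]) = [a; d]:

```python
import numpy as np

# Assumed matrix for L : R^4 -> R^2, L([a, b, c, d]) = [a, d].
A = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1]])

s = np.array([7.0, -4.0])                    # an arbitrary target in R^2
# The three preimages named in the transcript, with s1 = 7 and s2 = -4.
for u in ([s[0], 0, 0, s[1]],
          [s[0], 1, 1, s[1]],
          [s[0], s[0], s[1], s[1]]):
    assert np.allclose(A @ np.array(u), s)   # each one maps to [s1, s2]
```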

To continue with our example, we’ll note that the mapping L from R2 to R4 defined by L([a; b]) = [a; b; a; b] is not onto. One counterexample is that there is no [a; b] in R2 such that L([a; b]) = [1; 2; 3; 4] since we would need to have a = 1 and a = 3, which is not possible.

The mapping L from R3 to P2 defined by L([a; b; c]) = a + (a + b)x + (a – c)(x-squared) is onto. To prove this, let’s let s0 + s1x + s2(x-squared) be any old element of P2. To show that L is onto, we need to find an [a; b; c] in R3 such that L([a; b; c]) = s0 + s1x + s2(x-squared). That is to say, we need to find a, b, and c in R such that a + (a + b)x + (a – c)(x-squared) = s0 + s1x + s2(x-squared). Setting the coefficients equal to each other, we see that we need a to equal s0, a + b to equal s1, and a – c to equal s2. Plugging a = s0 into the other two equations, we get that s0 + b = s1, so b = s1 – s0, and s0 – c = s2, so c = s0 – s2. And so we have found that [s0; s1 – s0; s0 – s2] is an element of R3 such that L([s0; s1 – s0; s0 – s2]) = s0 + s1x + s2(x-squared), and this shows that L is onto.
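The preimage formula [s0; s1 – s0; s0 – s2] derived above can also be checked numerically. A sketch, assuming we record polynomials as coefficient vectors [p0; p1; p2] so that L becomes a matrix:

```python
import numpy as np

# Assumed coefficient-vector matrix for
# L : R^3 -> P2, L([a, b, c]) = a + (a + b)x + (a - c)x^2.
A = np.array([[1, 0,  0],
              [1, 1,  0],
              [1, 0, -1]])

# An arbitrary target polynomial s0 + s1*x + s2*x^2.
s0, s1, s2 = 5.0, -2.0, 3.0

# The preimage from the transcript's computation: [s0, s1 - s0, s0 - s2].
u = np.array([s0, s1 - s0, s0 - s2])
assert np.allclose(A @ u, [s0, s1, s2])   # L(u) recovers the target
```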

A linear mapping that is either one-to-one or onto can provide us with many benefits, but if a linear mapping is both, then we find some truly amazing results.

First, we’ll note that the linear mapping L from U to V has an inverse mapping L-inverse from V to U if and only if L is one-to-one and onto. To prove this, let’s first assume that L has an inverse mapping. Then to see that L is one-to-one, we assume that L(u1) = L(u2). Well, then u1 = (L-inverse)(L(u1)). But this equals (L-inverse)(L(u2)), which equals u2. And so we’ve shown that L is one-to-one. To see that L is onto, for any v in V, the element (L-inverse)(v) is in U, and is such that L((L-inverse)(v)) = v, and so we see that L is onto.

To prove the other direction of our if-and-only-if theorem, we’ll now suppose that L is one-to-one and onto, and we’ll try to define a mapping M from V to U that will turn out to be L-inverse. Well, there are two things we need to do to define any mapping from V to U. The first is to make sure that we define M for every element of V. So let’s let v be in V. Well, since L is onto, we know that there is some u in U such that L(u) = v. So if we say that M(v) = u such that L(u) = v, then we are defining M for every element of V. The other thing we have to do when we define a mapping is to make sure that we don’t send our vector v to two different vectors in U. However, because L is one-to-one, we know that if L(u1) = v and L(u2) = v, well then we have that L(u1) = L(u2), so u1 = u2. As such, we can define M(v) to be the unique vector u in U such that L(u) = v. And of course, this mapping M satisfies that M(L(u)) = M(v), which equals u, and that L(M(v)) will equal L(u), which equals v. And so we have that M = L-inverse.
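For a matrix mapping, this inverse is just the matrix inverse. A sketch for the R3 to P2 example, again under the assumption that polynomials are recorded as coefficient vectors; since L is one-to-one and onto, the matrix is invertible, and its inverse plays the role of the mapping M constructed above:

```python
import numpy as np

# Assumed coefficient-vector matrix for the R^3 -> P2 example.
A = np.array([[1, 0,  0],
              [1, 1,  0],
              [1, 0, -1]])

# L is one-to-one and onto, so A is invertible; A^{-1} acts as L-inverse.
A_inv = np.linalg.inv(A)

v = np.array([5.0, -2.0, 3.0])            # an arbitrary element of P2
u = A_inv @ v                             # M(v): the unique preimage of v
assert np.allclose(A @ u, v)              # L(M(v)) = v
assert np.allclose(A_inv @ (A @ u), u)    # M(L(u)) = u
```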

We are so fond of mappings that are both one-to-one and onto that we give them their own name. If U and V are vector spaces over R, and if L from U to V is a linear, one-to-one, and onto mapping, then L is called an isomorphism, or a vector space isomorphism, and U and V are said to be isomorphic.

Note that the reason we include the alternate name “vector space isomorphism” is that there are lots of different definitions for an isomorphism in the world of mathematics. You may have already encountered a definition that only requires a function that is one-to-one and onto, but not linear.

The word “isomorphism” comes from Greek words meaning “same form”, so the definition of an isomorphism depends on what form you want to have be the same. In linear algebra, linear combinations are an important part of the form of a vector space, so we add the requirement that our function preserve linear combinations. You’ll find that different areas of study have a different concept of form. You get used to it.

For now, let’s look at an example. I’d like to point out that we’ve already seen that the linear mapping L from R3 to P2 defined by L([a; b; c]) = a + (a + b)x + (a – c)(x-squared) is both one-to-one and onto, and so L is an isomorphism, and R3 and P2 are isomorphic.

Such isomorphisms lead to an extremely powerful result in linear algebra. Suppose that U and V are finite-dimensional vector spaces over R. Then U and V are isomorphic if and only if they have the same dimension.

Again, this is an if-and-only-if statement, so we will prove both directions. Let’s first prove that if U and V are isomorphic, then they have the same dimension. So we’ll suppose that U and V are isomorphic vector spaces, and we’ll let L be an isomorphism from U to V. Furthermore, we’ll let B = {u1 through un} be a basis for U. Then I claim that the set C = {L(u1) through L(un)} is a basis for V.

First, let’s show that C is linearly independent. To that end, we’ll have scalars t1 through tn in R be such that t1(L(u1)) + through to tn(L(un)) equals our 0 vector. Well, by the linearity of L, we know that L(t1u1 + through to tnun) must equal the 0 vector. So this means that t1u1 + through to tnun is in the nullspace of L. But since L is an isomorphism, L is one-to-one, which means that we know that the nullspace of L contains only the 0 vector. So this means we’ve shown that t1u1 + through to tnun equals the 0 vector. Now, since our vectors {u1 through un} are a basis for U, they are linearly independent, so we know that t1 = 0, t2 = 0, all the way through to tn = 0 is the only possible solution. As such, this means we’ve shown that the vectors {L(u1) through to L(un)} are also linearly independent.

Well, now we need to show that these vectors are a spanning set for V. That is, given any v in V, we need to find s1 through sn in R such that s1(L(u1)) + through to sn(L(un)) = v. Well, since L is an isomorphism, we know that L is onto, so there is some u in U such that L(u) = v. And since {u1 through un} is a basis for U, we know that there are scalars r1 through rn in R such that r1u1 + through to rnun = u. Well, this means that v, which equals L(u), equals L(r1u1 + through to rnun), which equals r1(L(u1)) + through to rn(L(un)), and so we’ve seen that v is in the span of C. And since we’ve shown that C is a linearly independent spanning set for V, we know that it is a basis for V. And this means that the dimension of V is n, which is the same as the dimension of U.
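The claim that an isomorphism carries a basis to a basis can be checked concretely. A sketch using the R3 to P2 example, assuming (as before, purely for illustration) its coefficient-vector matrix: the images of the standard basis vectors are the columns of the matrix, and n vectors in an n-dimensional space form a basis exactly when the matrix with those vectors as columns has rank n.

```python
import numpy as np

# Assumed coefficient-vector matrix for the R^3 -> P2 isomorphism.
A = np.array([[1, 0,  0],
              [1, 1,  0],
              [1, 0, -1]])

# Images of the standard basis of R^3 under L are the columns of A.
C = [A[:, j] for j in range(3)]

# Three vectors in a 3-dimensional space form a basis exactly when the
# matrix having them as columns has rank 3.
assert np.linalg.matrix_rank(np.column_stack(C)) == 3
```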

Now we will show the other direction of our if-and-only-if statement, which is that if U and V have the same dimension, then they are isomorphic. So let’s assume that U and V are vector spaces that both have dimension n. We can let B = {u1 through un} be a basis for U, and C = {v1 through vn} be a basis for V. Then I claim that the linear mapping L from U to V defined by L(t1u1 + through to tnun) = t1v1 + through to tnvn is an isomorphism. I will take for granted that it is a linear mapping, and instead focus on showing that it is one-to-one and onto.

To see that L is one-to-one, suppose that L(r1u1 + through to rnun) = L(s1u1 + through to snun). Well, this means that r1v1 + through to rnvn = s1v1 + through to snvn. Bringing everything to one side, we see that this means that (r1 – s1)v1 + through to (rn – sn)vn equals the 0 vector. Well, since {v1 through vn} is a basis for V, it is linearly independent, so we must have that ri – si = 0 for all i, or that ri = si for all i. Well, this means, of course, r1u1 + through to rnun = s1u1 + through to snun, and so L is one-to-one.

Now let’s see that L is onto. To that end, let’s let v be a vector in V. Well, then there must be some s1 through sn in R such that v = s1v1 + through to snvn. Well, then we have that L(s1u1 + through to snun) = s1v1 + through to snvn, which equals v. And so we’ve shown that L is onto.
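The basis-to-basis construction above can be sketched in code. Here the bases `B` and `C` are hypothetical choices of my own (columns u1, u2, u3 and v1, v2, v3), and the mapping recovers a vector's coordinates in B and rebuilds with the same coordinates in C:

```python
import numpy as np

# Hypothetical bases for this sketch: B for U = R^3, and C recording a
# basis of a 3-dimensional V as coefficient vectors.
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)   # columns u1, u2, u3
C = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)   # columns v1, v2, v3

def L(u):
    """Send t1*u1 + ... + tn*un to t1*v1 + ... + tn*vn."""
    t = np.linalg.solve(B, u)   # coordinates of u in the basis B
    return C @ t                # same coordinates rebuilt in the basis C

# One-to-one and onto: the standard matrix C @ B^{-1} of L has full rank.
M = C @ np.linalg.inv(B)
assert np.linalg.matrix_rank(M) == 3
assert np.allclose(L(B[:, 0]), C[:, 0])   # L(u1) = v1, as the definition says
```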

I find the following statement to be quite important in the study of linear algebra, so I’ll list it as a corollary to Theorem 4.7.3, which is that if U is a vector space with dimension n, then U is isomorphic to Rn. So that is, every finite-dimensional vector space is really just the same as Rn, which is fabulous because we know a lot of things about Rn, and Rn is easy to work with.
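The isomorphism behind this corollary is the coordinate map. A sketch for P2, where (as an assumed encoding) a polynomial is stored as a dict from powers to coefficients, and the coordinate map sends p0 + p1x + p2(x-squared) to [p0; p1; p2] in R3; the key point is that it preserves linear combinations.

```python
import numpy as np

def coords(p):
    """Coordinate map P2 -> R^3: p0 + p1*x + p2*x^2 becomes [p0, p1, p2].

    Assumed encoding for this sketch: p is a dict {power: coefficient}.
    """
    return np.array([p.get(0, 0.0), p.get(1, 0.0), p.get(2, 0.0)])

p = {0: 1.0, 1: 4.0, 2: -2.0}   # 1 + 4x - 2x^2
q = {0: 3.0, 2: 5.0}            # 3 + 5x^2

# The isomorphism preserves linear combinations:
# coords of 2p + q equal 2*coords(p) + coords(q).
assert np.allclose(2 * coords(p) + coords(q), [5.0, 8.0, 1.0])
```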

I’ll discuss the wonders of this new revelation later, but for now, we finish with one last theorem. If U and V are n-dimensional vector spaces over R, then a linear mapping L from U to V is one-to-one if and only if it is onto.
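For matrix mappings this theorem is the familiar fact that row rank equals column rank: an n-by-n matrix has trivial nullspace (full column rank) exactly when it is onto (full row rank). A small sketch checking the equivalence on a few random square matrices (the helper functions are my own illustration):

```python
import numpy as np

def is_one_to_one(A):
    # Trivial nullspace: rank equals the number of columns.
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_onto(A):
    # Range is all of the codomain: rank equals the number of rows.
    return np.linalg.matrix_rank(A) == A.shape[0]

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.integers(-3, 4, size=(4, 4)).astype(float)
    # For square matrices the two properties always coincide.
    assert is_one_to_one(A) == is_onto(A)
```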

To prove this, let’s let U and V be n-dimensional vector spaces over R, and let B = {u1 through un} be a basis for U, and C = {v1 through vn} be a basis for V. Now suppose that L is one-to-one. To show that L is onto, I’m first going to show that the set C1 = {L(u1) through L(un)} is a basis for V. To do that, I first want to show that C1 is linearly independent. To that end, suppose that t1 through tn in R are such that t1(L(u1)) + through to tn(L(un)) = 0. Using the linearity of L, this means that L(t1u1 + through to tnun) = 0, and so we see that (t1u1 + through to tnun) is in the nullspace of L. But since we’ve assumed that L is one-to-one, we know that the nullspace of L contains only the 0 vector. So this means we’ve shown that t1u1 + through to tnun equals the 0 vector. At this point, we use the fact that {u1 through un} is a basis for U, and therefore is linearly independent, to see that t1 through to tn must all equal 0. Well, this means that our set C1 is also linearly independent.

We also know that C1 contains n vectors. This is actually not immediately obvious since, in general, we could have L(ui) = L(uj) for some i not equal to j, but because L is one-to-one, we know that L(ui) does not equal L(uj) whenever i does not equal j because the vectors ui are all distinct. An even easier way to see this is to note that the set C1 is linearly independent, and a linearly independent set cannot contain duplicate vectors. Either way, we have found that C1 is a linearly independent set containing dimension(V) number of elements, so by the two-out-of-three rule (a.k.a. Theorem 4.3.4, part 3), C1 is also a spanning set for V, and therefore is a basis for V. Of course, what I actually need is the fact that C1 is a spanning set for V because now, given any v in V, there are scalars s1 through sn such that s1(L(u1)) + through to sn(L(un)) = v. And so this means that s1u1 + through to snun is an element of U such that L(s1u1 + through to snun) = s1(L(u1)) + through to sn(L(un)), which equals v, and at long last, we have shown that L is onto.

Now we turn our attention to the other direction of our proof. So let us assume that L is onto, and show that L is also one-to-one. The first step in that process will be to note that, because L is onto, for every vector vi in our basis C, there is a vector wi in U such that L(wi) = vi. I claim that the set B1 equal to these {w1 through wn} is a basis for U. To prove this claim, I will first show that B1 is linearly independent. To that end, let’s let t1 through tn in R be such that t1w1 + through to tnwn = 0. Well, then we have that L(t1w1 + through to tnwn) = L(0), which must equal 0. And so this means that t1(L(w1)) + through to tn(L(wn)) = 0, which is to say that t1v1 + through to tnvn = 0. Since {v1 through vn} is linearly independent, we must have that t1 through tn all equal 0. And so we have shown that our set B1 is also linearly independent. As noted earlier, this means that B1 does not contain any repeated entries, which means that B1 has n (which equals the dimension of U) elements. Again using the two-out-of-three rule, B1 must be a spanning set for U, and thus is a basis for U. Although again, it is the fact that B1 is a spanning set for U that I want to use.

Now, to show that L is one-to-one, suppose that L(u) = L(w). Since B1 is a spanning set for U, we know that there are scalars a1 through an and b1 through bn such that u must equal a1w1 + through to anwn, and w must equal b1w1 + through to bnwn. Well, then L(u), which equals L(a1w1 + through to anwn), equals a1(L(w1)) + through to an(L(wn)), which equals a1v1 + through to anvn by our choice of our w’s. And similarly, L(w) must equal L(b1w1 + through to bnwn), which equals b1(L(w1)) + through to bn(L(wn)), which equals b1v1 + through to bnvn. So if L(u) = L(w), we must have that a1v1 + through to anvn = b1v1 + through to bnvn. Bringing everything to one side, we get that (a1 – b1)v1 + through to (an – bn)vn equals the 0 vector. But since our set {v1 through vn} is linearly independent, we must have that (a1 – b1) through to (an – bn) all equal 0, which is to say that ai = bi for all i from 1 to n. Well, this clearly means that u = w, and so we have shown that L is one-to-one.