Lesson: Isomorphisms

Transcript — Introduction

In Linear Algebra I, we saw that vectors in Rn satisfy ten properties. We then saw that matrices, linear mappings, polynomials, and in fact lots of sets with operations of addition and scalar multiplication satisfy these same ten properties, and thus we were motivated to define the abstract concept of a vector space. We now recall that the ten vector space axioms define a structure. That is, they define how addition and scalar multiplication must work in the vector space. Thus, all n-dimensional vector spaces should have the same structure. In this lecture, we will develop the tools to mathematically prove when two vector spaces are essentially the same vector space.

One-to-One and Onto

We start by making some familiar definitions. Definition: Let V and W be vector spaces, and let L from V to W be a linear mapping. If L(v) = L(u) implies v = u, then L is said to be one-to-one. If, for every w in W, there exists a vector v in V such that L(v) = w, then L is said to be onto. Note that many people use the terms “injective” and “surjective” instead of “one-to-one” and “onto”. You should be familiar with these concepts from previous courses.

Lemma 8.4.1: Let L from V to W be a linear mapping. L is one-to-one if and only if the kernel of L only contains the 0 vector.
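In symbols, writing Ker(L) for the kernel of L and a bold 0 for the zero vector, the lemma says:

\[
L \text{ is one-to-one} \iff \operatorname{Ker}(L) = \{\mathbf{0}\}.
\]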

Let’s figure out the proof for this lemma. When proving things, it is always important to take the time to make sure that you know what you are supposed to prove, and to figure out a strategy for how you are going to prove it. Note that sometimes your first strategy may not work out. If that happens, you need to try a second, third, fourth, or fifth strategy, and so on, until one finally works. The more practice you get in proving things, the easier it becomes to pick the right strategy earlier.

In our case, since it’s an if-and-only-if statement, we need to prove both directions. So first, let’s assume that L is one-to-one, and then we need to prove that the kernel of L only contains the 0 vector. So how do we show that the kernel of L only contains the 0 vector? Spend a minute thinking about a few strategies for how to do this. Note that there is more than one way that will work. The method that I will use is to pick any vector in the kernel, and then show that that vector must be the 0 vector. So let x be any vector in the kernel. Then, by definition of the kernel, L(x) = the 0 vector. But we also know that L(the 0 vector) = the 0 vector, since L is linear. Hence, we have L(x) = L(the 0 vector), but since L is one-to-one, this implies that x = the 0 vector. Therefore, the only vector in the kernel is the 0 vector.
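For reference, the argument just given can be compressed into one line:

\[
\mathbf{x} \in \operatorname{Ker}(L) \;\Rightarrow\; L(\mathbf{x}) = \mathbf{0} = L(\mathbf{0}) \;\Rightarrow\; \mathbf{x} = \mathbf{0},
\]

where the last implication uses that L is one-to-one.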

Now let’s prove the other direction. Assume the kernel of L only contains the 0 vector. We need to prove that L is one-to-one. Right now, the only thing we know about one-to-one is the definition, so let’s use the definition. We consider L(u) = L(v). We need to relate this somehow to the kernel. So, let’s get a 0 vector on one side. We can do this by moving L(v) to the other side to get L(u) – L(v) = the 0 vector. Since L is linear, this gives L(u – v) = the 0 vector. But this implies that u – v is in the kernel of L, and the only vector in the kernel of L is the 0 vector. Hence, we have that u – v = the 0 vector, and so u = v. Thus, L is one-to-one. This completes the proof.
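This direction likewise compresses to one line:

\[
L(\mathbf{u}) = L(\mathbf{v}) \;\Rightarrow\; L(\mathbf{u} - \mathbf{v}) = \mathbf{0} \;\Rightarrow\; \mathbf{u} - \mathbf{v} \in \operatorname{Ker}(L) = \{\mathbf{0}\} \;\Rightarrow\; \mathbf{u} = \mathbf{v}.
\]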

Take a minute to read over the proof. Remember, when reading over the proofs from the lectures, in the course notes, or in assignment solutions, it is important not only to make sure you understand all the steps in the proof, but also to think about how the author of the proof came up with those steps.

Isomorphisms

We now want to define what it means for two vector spaces to be “the same”. For two vector spaces to be “the same”, we want, for each vector in one vector space, there to be a unique corresponding vector in the other vector space, and linear combinations of corresponding vectors to result in corresponding vectors. If, for each vector in one vector space, there is a unique corresponding vector in the other vector space, then there should be a one-to-one and onto mapping between the two vector spaces. And if linear combinations are going to be preserved, then this mapping needs to be linear. And hence, we make the following definition.

Definition: Let V and W be vector spaces. We say that V and W are isomorphic if there exists a linear mapping L from V to W that is one-to-one and onto. Such a mapping L is called an isomorphism. Two vector spaces being isomorphic means that the two vector spaces are essentially the same vector space.

Let’s demonstrate this with an example. We will soon prove that P3(R), M(2-by-2)(R), and R4 are all isomorphic, so addition of the corresponding vectors in these vector spaces will give exactly the same results. Observe that if we add (1 + 2x + 3x-squared + 4x-cubed) + (5 + 6x + 7x-squared + 8x-cubed), we get 6 + 8x + 10x-squared + 12x-cubed. And if we add the corresponding matrices [1, 2; 3, 4] and [5, 6; 7, 8], we get the corresponding result [6, 8; 10, 12]. Note that the isomorphism only applies to the structure in terms of a vector space. It is quite possible that isomorphic vector spaces have additional operations that are not common to each other. For example, the factorization of polynomials in P3(R) will not necessarily have an analog in M(2-by-2)(R).
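Written out in symbols, the two corresponding sums from this example are:

\[
(1 + 2x + 3x^2 + 4x^3) + (5 + 6x + 7x^2 + 8x^3) = 6 + 8x + 10x^2 + 12x^3,
\]
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}.
\]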

Let’s now demonstrate the procedure for proving two vector spaces are isomorphic with a couple of examples. Example: Prove that P3(R) and M(2-by-2)(R) are isomorphic. To prove that they are isomorphic, we need to prove that there exists an isomorphism between them. Usually, the hard part is defining the isomorphism. Once we have correctly defined an isomorphism, proving it is linear, one-to-one, and onto is usually quite easy.

So, we need to figure out how to define our isomorphism between P3(R) and M(2-by-2)(R). We need to define it so that we can show how we are relating the vectors in one vector space to the vectors in the other. As we saw in the previous example, we can relate a polynomial a + bx + cx-squared + dx-cubed to the matrix [a, b; c, d]. That is, we will define our isomorphism L by L(a + bx + cx-squared + dx-cubed) = the matrix [a, b; c, d]. To prove this is an isomorphism, we need to prove it’s linear, one-to-one, and onto.
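In symbols, the proposed mapping L from P3(R) to M(2-by-2)(R) is:

\[
L(a + bx + cx^2 + dx^3) = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.
\]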

Linear: Since you should be an expert at proving mappings are linear, I will leave this as an exercise. Note that on tests, you are not allowed to leave parts of the solution of a problem as an exercise for the marker.

One-to-one: We have two ways of doing this. We can use the definition, or we can use Lemma 8.4.1. In this example, I will use the definition; in the next example, I will use the lemma. Observe that L(a + bx + cx-squared + dx-cubed) = L(e + fx + gx-squared + hx-cubed) gives, by definition of the mapping, [a, b; c, d] = [e, f; g, h]. Hence, comparing entries, we get a = e, b = f, c = g, and d = h. Thus, a + bx + cx-squared + dx-cubed = e + fx + gx-squared + hx-cubed, so L is one-to-one by definition.
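In symbols, the one-to-one argument is:

\[
L(a + bx + cx^2 + dx^3) = L(e + fx + gx^2 + hx^3)
\;\Rightarrow\;
\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} e & f \\ g & h \end{bmatrix}
\;\Rightarrow\;
a = e,\; b = f,\; c = g,\; d = h.
\]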

To prove onto, we need to show that every vector in the codomain can be mapped to by something in the domain. So, we pick any vector in the codomain, and need to find the vector that maps to it. We pick any matrix [a, b; c, d] in M(2-by-2)(R). We need to find the polynomial that maps to this matrix. This is easy. We see that L(a + bx + cx-squared + dx-cubed) is [a, b; c, d], and so L is onto.

Hence, L is an isomorphism, and so P3(R) and M(2-by-2)(R) are isomorphic.

Note that if V and W are isomorphic, it does not mean that every linear mapping L from V to W must be an isomorphism. For example, the mapping T from P3(R) to M(2-by-2)(R) defined by T(a + bx + cx-squared + dx-cubed) = the 0 matrix is neither one-to-one nor onto.

Take a minute to look over this example again. Think carefully about how we figured out how to define the isomorphism. It is important to understand why we picked it in that way so we can figure out how to define an isomorphism in a more complicated case. We have defined the isomorphism by matching coordinates of the vectors with respect to the standard bases. That is, we have defined the isomorphism by making it map the standard basis vectors of P3(R) to the standard basis vectors of M(2-by-2)(R).

Example: Prove that P2(R) and the vector space V = {[x1; x2; x3; x4] in R4 such that x1 + x2 + x3 + x4 = 0} are isomorphic. As we saw in the last example, once we define the isomorphism, it will be easy to prove it is linear, one-to-one, and onto. But how do we define the isomorphism? As we just discussed, we want to map basis vectors to basis vectors. So, we start by picking any basis for P2(R)—say, the standard basis—and any basis for V—say, {[1; 0; 0; -1], [0; 1; 0; -1], [0; 0; 1; -1]}. Thus, we define our isomorphism L from P2(R) to V by L(a + bx + cx-squared) = a[1; 0; 0; -1] + b[0; 1; 0; -1] + c[0; 0; 1; -1], which equals [a; b; c; -a – b – c]. To prove it is an isomorphism, we need to prove it is linear, one-to-one, and onto.
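In symbols, the proposed mapping L from P2(R) to V is:

\[
L(a + bx + cx^2) = a\begin{bmatrix} 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} + b\begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} + c\begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} a \\ b \\ c \\ -a - b - c \end{bmatrix}.
\]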

Linear: Since you should be an expert at proving mappings are linear, I will leave this as an exercise.

One-to-one: In this example, I will use Lemma 8.4.1. We pick any vector in the kernel, and we need to prove that it must be the 0 vector. Let a + bx + cx-squared be any vector in the kernel. By definition of the kernel, we have that the 0 vector equals L(a + bx + cx-squared), which, by definition of the mapping, is [a; b; c; -a – b – c]. Comparing entries, we get a = 0, b = 0, c = 0, so the only vector in the kernel is 0, and so L is one-to-one by Lemma 8.4.1.
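In symbols, the kernel argument is:

\[
\mathbf{0} = L(a + bx + cx^2) = \begin{bmatrix} a \\ b \\ c \\ -a - b - c \end{bmatrix}
\;\Rightarrow\;
a = b = c = 0.
\]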

For onto, we pick any vector v in V, and we need to find the polynomial which maps to it. Since v is in V, we can write it as a linear combination of the basis vectors for V—say, v = a[1; 0; 0; -1] + b[0; 1; 0; -1] + c[0; 0; 1; -1]. And hence, we see that L(a + bx + cx-squared) = v, and so L is also onto.
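In symbols, the onto argument is: given any v in V, write

\[
\mathbf{v} = a\begin{bmatrix} 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} + b\begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} + c\begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix},
\]

and then L(a + bx + cx^2) = v by definition of L.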

Therefore, L is an isomorphism, and so P2(R) and V are isomorphic.

To conclude this lecture, we note that if an isomorphism must map basis vectors to basis vectors, then isomorphic vector spaces must have the same dimension. We will prove this, and more, in the next lecture.
