Lesson: Linear Mappings


Transcript — Introduction

In the last lecture, we looked at the four fundamental subspaces of a matrix. We also saw in Linear Algebra I that linear mappings from Rn to Rm are connected to matrices through the matrix of a linear mapping. So, at the beginning of this lecture, we will review how the columnspace and nullspace of the standard matrix of a linear mapping relate to the range and kernel of the linear mapping. After this review, we will begin to generalize the definition of a linear mapping so that the domain and codomain can be any vector spaces.

Definition: A mapping L from Rn to Rm is said to be linear if L(sx + ty) = sL(x) + tL(y) for all x and y in Rn and real scalars s and t.

Definition: The range of a linear mapping L from Rn to Rm is the set of all L(x) such that x is in Rn.

Definition: The kernel of a linear mapping L from Rn to Rm is the set of all vectors x in Rn such that L(x) is the 0 vector.

And finally, definition: The standard matrix of a linear mapping L from Rn to Rm is defined to be the matrix whose columns are the images of the standard basis vectors e1 to en for Rn under L. It satisfies L(x) = the standard matrix of L times x.
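
As a quick illustrative sketch (not part of the lecture), here is how the definition of the standard matrix can be checked numerically in Python with NumPy. The mapping L below, from R2 to R3, is a made-up example: we build the matrix whose columns are the images of e1 and e2 under L, and verify that multiplying it by x reproduces L(x).

```python
import numpy as np

# A made-up linear mapping L : R^2 -> R^3, used only for illustration.
def L(x):
    x1, x2 = x
    return np.array([x1 + x2, 2 * x1, x2 - 3 * x1])

# The standard matrix of L has columns L(e1), ..., L(en).
e = np.eye(2)
A = np.column_stack([L(e[:, j]) for j in range(2)])

# Check that L(x) equals the standard matrix of L times x for a sample vector.
x = np.array([4.0, -1.0])
assert np.allclose(L(x), A @ x)
```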

Theorem 7.2.1: If L from Rn to Rm is a linear mapping, then the range of L equals the columnspace of the standard matrix of L. I know that some students have a bit of a phobia about proofs. However, it is important to realize that many proofs are actually very easy. Keep this in mind, especially on tests. Just because it is a proof question doesn’t mean it’s hard. We give the proof of Theorem 7.2.1 to demonstrate this. Proof: By definition, the range of L equals the set of all L(x) such that x is in Rn. But this equals the set of all vectors [the standard matrix of L times x] such that x is in Rn, by definition of the standard matrix of L. Since the standard matrix of L times x is precisely a linear combination of the columns of the standard matrix, this set is exactly the columnspace of the standard matrix of L, and so, poof, we’re done.

Of course, we get a similar result for the kernel of L. Theorem 7.2.2: If L from Rn to Rm is a linear mapping, then the kernel of L equals the nullspace of the standard matrix of L. I’ll leave this proof as an easy exercise.

Using these two theorems and the Dimension Theorem, we get the following result. Theorem 7.2.3: Let L from Rn to Rm be a linear mapping. Then the dimension of the range of L + the dimension of the kernel of L = the dimension of Rn, which is n. The generalization of this theorem, which we will see soon, is very useful.
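
For readers who like to experiment, here is a small SymPy sketch (not from the lecture) that illustrates Theorems 7.2.1 through 7.2.3 for a made-up standard matrix: the columnspace gives a basis for the range, the nullspace gives a basis for the kernel, and the two dimensions add up to n.

```python
import sympy as sp

# A hypothetical standard matrix of a linear mapping L : R^4 -> R^3.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 1],
               [1, 3, 1, 2]])

range_basis = A.columnspace()    # basis for the range of L (Theorem 7.2.1)
kernel_basis = A.nullspace()     # basis for the kernel of L (Theorem 7.2.2)

# Theorem 7.2.3: dim(range of L) + dim(kernel of L) = n.
n = A.shape[1]
assert len(range_basis) + len(kernel_basis) == n   # here 2 + 2 = 4
```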

This ends the review of Linear Algebra I material. As mentioned before, if there is anything we went over that you don’t remember, it is highly recommended that you take the time now to review it further. We are going to use and extend all of this material, and more, so if you do not have a good understanding of the Linear Algebra I material, you may find it difficult to understand many of the concepts in Linear Algebra II.

General Linear Mappings

General linear mappings: We now look at linear mappings whose domain is a vector space V and whose codomain is a vector space W. It is important not to assume that every result that held for linear mappings L from Rn to Rm also holds for linear mappings L from V to W. In particular, our goal will be to determine, with proof, which results carry over and which do not.

We begin with a definition. Definition: Let V and W be vector spaces. A mapping L from V to W is said to be a linear mapping if L(tx + sy) = tL(x) + sL(y) for all x and y in V and real scalars s and t.

Let’s demonstrate this with an example. Example: Let L from M(2-by-2)(R) to P2(R) be defined by L([a, b; c, d]) = the polynomial (a + b + c)x + (a – b – d)(x-squared).

Part (a): Evaluate L([1, 2; -1, 1]). By definition of the mapping, this is equal to (1 + 2 + (-1))x + (1 – 2 – 1)(x-squared), which is 2x – 2x-squared.

Part (b): Find a matrix A such that L(A) = 2x + x-squared. To do this, we need to find a matrix A = [a, b; c, d] such that 2x + x-squared = L([a, b; c, d]), which, by definition of the mapping, equals (a + b + c)x + (a – b – d)(x-squared). Hence, for this to be true, we need a + b + c = 2 and a – b – d = 1. Solving this system, we see that one choice is a = 3/2, b = 1/2, c = 0, and d = 0. That is, L([3/2, 1/2; 0, 0]) = 2x + x-squared.

Part (c): Prove that L is linear. Of course, the procedure for doing this will be exactly the same as what we did back in Linear Algebra I with linear mappings from Rn to Rm. So we let [a1, b1; c1, d1] and [a2, b2; c2, d2] be any two matrices in M(2-by-2)(R), and s and t be any two real numbers. Then L(s[a1, b1; c1, d1] + t[a2, b2; c2, d2]) = L([sa1 + ta2, sb1 + tb2; sc1 + tc2, sd1 + td2]). By definition of the mapping, this is equal to (sa1 + ta2 + sb1 + tb2 + sc1 + tc2)x + (sa1 + ta2 – sb1 – tb2 – sd1 – td2)(x-squared). We can rearrange this as s((a1 + b1 + c1)x + (a1 – b1 – d1)(x-squared)) + t((a2 + b2 + c2)x + (a2 – b2 – d2)(x-squared)). But this is just equal to sL([a1, b1; c1, d1]) + tL([a2, b2; c2, d2]). And hence, we have proven that L is linear.
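
As a side note, the computations in parts (a) and (b) can be double-checked symbolically. The short SymPy sketch below represents a 2-by-2 matrix by its four entries, which is purely an implementation choice for illustration rather than anything from the lecture.

```python
import sympy as sp

x = sp.symbols('x')

# The example mapping L : M_{2x2}(R) -> P_2(R), with a matrix given by its entries:
# L([a, b; c, d]) = (a + b + c)x + (a - b - d)x^2.
def L(a, b, c, d):
    return (a + b + c) * x + (a - b - d) * x**2

# Part (a): L([1, 2; -1, 1]) = 2x - 2x^2.
assert sp.expand(L(1, 2, -1, 1) - (2*x - 2*x**2)) == 0

# Part (b): the choice a = 3/2, b = 1/2, c = 0, d = 0 indeed maps to 2x + x^2.
assert sp.expand(L(sp.Rational(3, 2), sp.Rational(1, 2), 0, 0) - (2*x + x**2)) == 0
```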

We now continue to look at theorems and definitions which are the same for linear mappings from V to W as we saw before for linear mappings from Rn to Rm.

Theorem 8.1.1: Let V and W be vector spaces, and let L from V to W be a linear mapping. Then L(the 0 vector in V) = the 0 vector in W. I will leave the proof of this as an exercise.
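
As a quick sanity check using the example mapping from earlier in this lecture, L([0, 0; 0, 0]) = (0 + 0 + 0)x + (0 – 0 – 0)(x-squared), which is the zero polynomial in P2(R), exactly as Theorem 8.1.1 predicts.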

Operations on Linear Mappings

Definition: Let L from V to W and M from V to W be linear mappings. We define the mapping L + M by (L + M)(v) = L(v) + M(v) for all vectors v in V. For any real number t, we define the mapping tL by (tL)(v) = tL(v) for all vectors v in V. Notice that we have defined L + M and tL at every vector v in V, so the domain of these mappings is V. Also, since L(v) and M(v) are in W and t is a real scalar, L(v) + M(v) and tL(v) are in W, because W, being a vector space, is closed under addition and scalar multiplication. Thus, the codomain of L + M and tL is W.
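
To make the definition concrete, here is a minimal sketch, assuming mappings are represented as Python functions on NumPy vectors (an illustration device only, not from the lecture), of addition and scalar multiplication of mappings defined pointwise.

```python
import numpy as np

def add(L, M):
    # (L + M)(v) = L(v) + M(v)
    return lambda v: L(v) + M(v)

def scale(t, L):
    # (tL)(v) = t * L(v)
    return lambda v: t * L(v)

# Two made-up linear mappings from R^2 to R^2, for illustration only.
L = lambda v: np.array([v[0] + v[1], v[0]])
M = lambda v: np.array([2 * v[1], -v[0]])

v = np.array([1.0, 3.0])
print(add(L, M)(v))       # [10.  0.]
print(scale(2.0, L)(v))   # [8. 2.]
```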

Theorem 8.1.2: Let V and W be vector spaces. The set L of all linear mappings from V to W is a vector space. Proof: To prove that L is a vector space, we need to show that it satisfies all ten vector space axioms. For now, we will just prove properties V1 and V2, and leave the rest as exercises.

Let L and M be linear mappings in the set L.

V1: To prove that the set L is closed under addition, we need to show that the mapping L + M is a linear mapping. For any vectors v1 and v2 in V, and real scalars s and t, we have that (L + M)(sv1 + tv2) is, by definition of addition of mappings, equal to L(sv1 + tv2) + M(sv1 + tv2), which is equal to sL(v1) + tL(v2) + sM(v1) + tM(v2) since L and M are linear. We can rearrange this to get s(L(v1) + M(v1)) + t(L(v2) + M(v2)). Using the definition of addition of mappings again, this is equal to s(L + M)(v1) + t(L + M)(v2). And thus, L + M is in the set L.

V2: For any vector v in V, we have (L + M)(v) = L(v) + M(v) by definition of addition of mappings. But L(v) and M(v) are just vectors in W, and addition of vectors in W is commutative since W is a vector space, so this is equal to M(v) + L(v). Again using the definition of addition of mappings, this is equal to (M + L)(v), and so V2 is also satisfied.

As mentioned before, I will leave the rest of the properties as an exercise. I strongly recommend proving at least a couple of them to get used to this kind of vector space.

We now define our other familiar operation on mappings for general linear mappings. Definition: Let L from V to W and M from W to U be linear mappings. We define the composition M composed with L in the usual way. That is, (M composed with L)(v) = M(L(v)) for any vector v in V.

Theorem 8.1.3: If L from V to W and M from W to U are linear mappings, then M composed with L is a linear mapping from V to U.
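
For reference, here is a sketch of the standard one-line verification (written in LaTeX rather than in the lecture's spoken style), using only the definition of composition, the linearity of L, and then the linearity of M:

```latex
(M \circ L)(s\mathbf{x} + t\mathbf{y})
  = M\bigl(L(s\mathbf{x} + t\mathbf{y})\bigr)
  = M\bigl(sL(\mathbf{x}) + tL(\mathbf{y})\bigr)
  = sM\bigl(L(\mathbf{x})\bigr) + tM\bigl(L(\mathbf{y})\bigr)
  = s(M \circ L)(\mathbf{x}) + t(M \circ L)(\mathbf{y}).
```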

This ends this lecture. In the next lecture, we will look at the range and kernel of general linear mappings, and prove that Theorem 7.2.3 extends to general linear mappings.
