Lesson: The Matrix of a Linear Mapping

Transcript

When we first studied linear mappings, they were only between Rn and Rm. In this setting, we were always able to find a matrix A such that our linear mapping L(x) was the same as Ax. That’s because, while x in Rn isn’t technically a matrix, we could temporarily treat x as an n-by-1 matrix, and the matrix product Ax had the properties we wanted. We made great use of the matrix A, and we would like to be able to do the same for any linear mapping, not just ones between Rn and Rm. But a product like A(2 + 3x – x-squared), for example, doesn’t make any sense at all.

So before we can find a matrix for a linear mapping whose domain is not Rn, we first need some way to make an arbitrary vector look like a vector from Rn. But wait, we’ve already done that. Once we fix a basis B, say, vectors {v1 through vn}, for a vector space V, we can look at the B-coordinates of x. That’s only the first step in finding the matrix of a general linear mapping L from V to W, though, because the product A([x]B) will be a vector from Rm, not W. So we will also need to fix a basis C, say, of vectors {w1 through wm}, for W, and have our output be in C-coordinates. So while we can’t find a matrix for L directly, we can find a matrix for L relative to the B- and C-coordinates.

As such, instead of simply denoting the matrix for the linear mapping as [L], we will denote it as C[L]B so that we can remember which coordinates we are using. This matrix C[L]B will be such that the C-coordinates of L(x) equal (C[L]B)([x]B). We know that the matrix C[L]B must exist, since the equation above defines a linear mapping from Rn to Rm. Now our only problem is to find it.

The first thing to note is that, since B is a basis for V, x has B-coordinates [x]B = [x1 through xn], which means that x = x1v1 + through to xnvn. Then we have that L(x) = L(x1v1 + through to xnvn), which equals x1(L(v1)) + through to xn(L(vn)), since L is linear. Recalling Theorem 4.4.1, we then have that the C-coordinates of L(x) will be the C-coordinates of (x1(L(v1)) + through to xn(L(vn))). But taking coordinates is linear, so we can pull out the scalars, and we get that this equals (x1 times the C-coordinates of L(v1)) + through to (xn times the C-coordinates of L(vn)). We can write this as the matrix product (the matrix whose columns are the C-coordinates of L(v1) through the C-coordinates of L(vn)) times our vector [x1 through xn]. That is to say, we’ve shown that the C-coordinates of L(x) equal the product of (the matrix whose columns are the C-coordinates of L(v1) through the C-coordinates of L(vn)) times the B-coordinates of x. From this, we’ve shown that C[L]B is the matrix whose columns are the C-coordinates of L(v1) through the C-coordinates of L(vn), where v1 through vn are the basis vectors in B. We will use this as our definition.
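The chain of equalities above can be compressed into one display (writing [ · ]C for C-coordinates and [ · ]B for B-coordinates, as in the transcript):

```latex
[L(\mathbf{x})]_{\mathcal{C}}
  = \bigl[\, x_1 L(\mathbf{v}_1) + \dots + x_n L(\mathbf{v}_n) \,\bigr]_{\mathcal{C}}
  = x_1 [L(\mathbf{v}_1)]_{\mathcal{C}} + \dots + x_n [L(\mathbf{v}_n)]_{\mathcal{C}}
  = \Bigl[\, [L(\mathbf{v}_1)]_{\mathcal{C}} \;\cdots\; [L(\mathbf{v}_n)]_{\mathcal{C}} \,\Bigr]
    \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}
```

The last matrix, whose columns are the C-coordinate vectors of L(v1) through L(vn), is the matrix the definition names C[L]B.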

Let V be a vector space with basis B = {v1 through vn}, and let W be a vector space with basis C. Let L from V to W be a linear mapping. We define the matrix of L with respect to the bases B and C to be the matrix C[L]B, whose columns are the C-coordinates of L(v1) through the C-coordinates of L(vn).

Let’s look at an example. Let L from M(2, 2) to P2 be defined by L([a, b; c, d]) = (a + b) + (a + c)x + (a + d)(x-squared), and we’ll let B be the standard basis for M(2, 2), so B equals this set, and we’ll let C be the standard basis for P2, so C equals this set. To find C[L]B, we need to compute the C-coordinates of the image under L of the B basis vectors. First, we find that L([1, 0; 0, 0]) = 1 + x + x-squared, so that means that our C-coordinates of L([1, 0; 0, 0]) equals [1; 1; 1]. L([0, 1; 0, 0]) = 1, so that means that our C-coordinates of L([0, 1; 0, 0]) is [1; 0; 0]. L([0, 0; 1, 0]) = x, so the C-coordinates of L([0, 0; 1, 0]) equals [0; 1; 0]. And lastly, we find that L([0, 0; 0, 1]) = x-squared, so the C-coordinates of L([0, 0; 0, 1]) are [0; 0; 1]. And thus, we’ve shown that C[L]B equals the matrix [1, 1, 0, 0; 1, 0, 1, 0; 1, 0, 0, 1].
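Because C is the standard basis of P2, the C-coordinates of a polynomial are just its coefficients, so the whole computation is easy to check numerically. A minimal sketch (the tuple and array representations are just for illustration, not part of the lesson):

```python
import numpy as np

# A 2x2 matrix [a, b; c, d] is stored as the tuple (a, b, c, d), and a
# polynomial in P2 as its coefficient vector (constant, x, x-squared).
# With the standard basis C, the C-coordinates ARE the coefficients.
L = lambda a, b, c, d: np.array([a + b, a + c, a + d])

# The standard basis B of M(2, 2), one entry set to 1 at a time.
standard_B = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

# The columns of C[L]B are the C-coordinates of the images of the B vectors.
C_L_B = np.column_stack([L(*m) for m in standard_B])
```

The resulting matrix matches the one computed in the transcript, [1, 1, 0, 0; 1, 0, 1, 0; 1, 0, 0, 1].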

Most of the time, we will be using the standard bases, but let’s go ahead and look at the same linear mapping under different bases. Again, we will let L from M(2, 2) to P2 be defined by L([a, b; c, d]) = (a + b) + (a + c)x + (a + d)(x-squared). This time, let’s let B be this set, and we’ll let C be this set. To find C[L]B, we again need to compute the C-coordinates of the image under L of the B basis vectors. This time, we find that L([1, 0; 0, 0]) still equals 1 + x + x-squared, but in C-coordinates, this is now [0; 0; 1]. We compute L([1, 1; 0, 0]) and find that it is equal to 2 + x + x-squared. In terms of our C-coordinates, this equals [1; 0; 1]. L([1, 1; 1, 0]) = 2 + 2x + x-squared, which, in terms of C-coordinates, is [0; 1; 1]. And lastly, we find that L([1, 1; 1, 1]) = 2 + 2x + 2(x-squared), which, in terms of C-coordinates, is [0; 0; 2]. And thus, we’ve found C[L]B equals the matrix [0, 1, 0, 0; 0, 0, 1, 0; 1, 1, 1, 2].
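The bases here are shown on slides, but they can be read off from the coordinate computations in the transcript: B appears to be {[1, 0; 0, 0], [1, 1; 0, 0], [1, 1; 1, 0], [1, 1; 1, 1]} and C appears to be {1, 1 + x, 1 + x + x-squared}. Under that assumption, finding C-coordinates means solving a small linear system, and the whole example can be sketched as:

```python
import numpy as np

# Coefficients of L([a, b; c, d]) in the order (constant, x, x-squared).
L = lambda a, b, c, d: np.array([a + b, a + c, a + d])

# Basis B reconstructed from the transcript's computations.
basis_B = [(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)]

# Columns of P are the coefficient vectors of the assumed C basis
# polynomials 1, 1 + x, and 1 + x + x-squared.
P = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])

# C-coordinates of a polynomial p: solve P @ y = (coefficients of p).
cols = [np.linalg.solve(P, L(*m)) for m in basis_B]
C_L_B = np.column_stack(cols)
```

This reproduces the matrix [0, 1, 0, 0; 0, 0, 1, 0; 1, 1, 1, 2] from the transcript, which is a good sign the reconstructed bases are the intended ones.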

Now, if our domain and codomain are the same vector space, then we might use the same basis B for both. In these situations, we will simply write B[L]B as [L]B.

For example, let’s let L from R3 to R3 be defined by L([a; b; c]) = [2a; a + b; 4b + c], but instead of using the standard basis, we use the basis B, as seen here. To find [L]B, we need to compute the B-coordinates of the image under L of our B basis vectors. First, let’s just find the image under L of the B basis vectors. We can compute that L([1; 0; 1]) = [2; 1; 1], that L([2; 1; 1]) = [4; 3; 5], and L([-1; 1; 0]) = [-2; 0; 4].

Now let’s try to find the B-coordinates for our results. We can quickly see that the B-coordinates of [2; 1; 1] are [0; 1; 0] since [2; 1; 1] is our second basis vector. But the other two are harder to see, so let’s solve for them. That is, we need to find scalars a1, b1, and c1 such that (a1 times our first basis vector) + (b1 times our second basis vector) + (c1 times the third basis vector) = L([2; 1; 1])—that is, equals [4; 3; 5]. We also need to find scalars a2, b2, and c2 such that (a2 times our first basis vector) + (b2 times our second basis vector) + (c2 times our third basis vector) = L([-1; 1; 0])—that is, it equals [-2; 0; 4]. Our first equation is equivalent to this system, while our second equation is equivalent to this system. Now, since these systems have the same coefficient matrix, we can solve them simultaneously by row reducing the following doubly augmented matrix. When we reach reduced row echelon form, we can see that a1 = 4, b1 = 1, and c1 = 2, and that a2 = 7, b2 = -3, and c2 = 3. So this means that the B-coordinates of [4; 3; 5] are [4; 1; 2] and the B-coordinates of [-2; 0; 4] are [7; -3; 3]. And all of this means that [L]B is the matrix [0, 4, 7; 1, 1, -3; 0, 2, 3].
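The basis B here is also shown on a slide, but it can be read off from the computations as {[1; 0; 1], [2; 1; 1], [-1; 1; 0]}. Assuming that, the simultaneous solve with the doubly augmented matrix is exactly a matrix equation B X = (images), and a quick numerical sketch looks like this:

```python
import numpy as np

# Basis vectors of B (reconstructed from the transcript), stacked as the
# columns of the coefficient matrix of the doubly augmented system.
B = np.array([[1, 2, -1],
              [0, 1,  1],
              [1, 1,  0]])

L = lambda v: np.array([2 * v[0], v[0] + v[1], 4 * v[1] + v[2]])

# Images of all three basis vectors, as columns.
images = np.column_stack([L(B[:, j]) for j in range(3)])

# Solving B @ X = images for all right-hand sides at once gives [L]B:
# each column of X is the B-coordinate vector of one image.
L_B = np.linalg.solve(B, images)
```

This reproduces the matrix [0, 4, 7; 1, 1, -3; 0, 2, 3] from the transcript.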

So far, we’ve spent a lot of time looking at how to find C[L]B, but what do we do with it when we have it? We use it to compute the C-coordinates of L(x) from the B-coordinates of x, of course.

So for example, let’s let L from M(2, 2) to P2 be defined as before: L([a, b; c, d]) will equal (a + b) + (a + c)x + (a + d)(x-squared). And we’re going to let our bases B be this, and C be this, as in our earlier example, so that we have already found the matrix C[L]B. Now, if we have that the B-coordinates of x are [1; 2; -1; 4], then we get that the C-coordinates of L(x) must equal C[L]B times the B-coordinates of x. So that’s this matrix product, which equals [2; -1; 10]. Well, if [2; -1; 10] are the C-coordinates of L(x), then L(x) must equal (2 times our first C basis polynomial) – (1 times the second C basis polynomial) + (10 times the third C basis polynomial), which equals 11 + 9x + 10(x-squared).

Of course, we could also compute L(x) by first finding x. Since the B-coordinates of x equal [1; 2; -1; 4], we have that x = (our first basis matrix) + (2 times the second basis matrix) – (the third basis matrix) + (4 times the fourth basis matrix), which equals [6, 5; 3, 4]. And then we can compute that L(x), which equals L([6, 5; 3, 4]), equals (6 + 5) + (6 + 3)x + (6 + 4)(x-squared), which equals 11 + 9x + 10(x-squared), the same as before.
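Both routes can be checked numerically, again assuming the bases reconstructed from the earlier example: B = {[1, 0; 0, 0], [1, 1; 0, 0], [1, 1; 1, 0], [1, 1; 1, 1]} and C = {1, 1 + x, 1 + x + x-squared}. A sketch:

```python
import numpy as np

# Route 1: multiply C[L]B (from the non-standard-basis example) by [x]B,
# then expand the result in the assumed C basis.
C_L_B = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [1, 1, 1, 2]])
x_B = np.array([1, 2, -1, 4])
Lx_C = C_L_B @ x_B                  # C-coordinates of L(x)

P = np.array([[1, 1, 1],            # columns: coefficients of 1, 1+x, 1+x+x^2
              [0, 1, 1],
              [0, 0, 1]])
route1 = P @ Lx_C                   # coefficients (constant, x, x^2) of L(x)

# Route 2: recover x itself from its B-coordinates, then apply L directly.
basis_B = np.array([(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)])
a, b, c, d = x_B @ basis_B          # entries of x as a 2x2 matrix
route2 = np.array([a + b, a + c, a + d])
```

Both routes land on the coefficient vector of 11 + 9x + 10(x-squared), matching the transcript, with x recovered as [6, 5; 3, 4] along the way.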

© University of Waterloo and others, Powered by Maplesoft