# Lesson: Matrix of a Linear Mapping Continued

## Transcript — Introduction

In the last lecture, we saw that the matrix of a linear mapping with respect to bases B and C is the matrix whose columns are the C-coordinate vectors of the images under L of the vectors in B. It satisfies: the C-coordinate vector of L(x) equals the matrix of L with respect to B and C, times the B-coordinate vector of x, for all vectors x in V. In this lecture, we look at an important special case of this: a linear operator L on a vector space V, where we use the same basis B for both the domain and the codomain.

## B-Matrix of a Linear Mapping

We start with the definition. Let L from V to V be a linear operator, and let B = {v1, ..., vn} be a basis for V. The matrix of L with respect to the basis B, also called the B-matrix of L, is the matrix whose columns are the B-coordinate vectors of L(v1), ..., L(vn). It satisfies: the B-coordinate vector of L(x) equals the B-matrix of L times the B-coordinate vector of x, for any vector x in V. Notice that, as mentioned above, this is just what we were doing last lecture, where now the codomain is also V and we use the basis B for the codomain as well.
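To make the definition concrete, here is a minimal sketch in Python, using a made-up operator and basis on R^2 (both are my own illustration, not from the lecture); the `coords` helper computes B-coordinates by Cramer's rule.

```python
from fractions import Fraction

def coords(P, v):
    """B-coordinates of v in R^2, where the columns of P are the basis
    vectors: solve P c = v by Cramer's rule."""
    (a, b), (c, d) = P
    det = a * d - b * c
    return [Fraction(v[0] * d - b * v[1], det),
            Fraction(a * v[1] - c * v[0], det)]

# A made-up operator on R^2 (not from the lecture): L(x1, x2) = (x1 + x2, x2)
def L(v):
    return [v[0] + v[1], v[1]]

basis = [[1, 0], [1, 1]]   # B = {v1, v2}
P = [[1, 1], [0, 1]]       # basis vectors as columns

# The columns of the B-matrix are the B-coordinate vectors of L(v1) and L(v2)
cols = [coords(P, L(v)) for v in basis]
B_matrix = [[cols[j][i] for j in range(2)] for i in range(2)]

# Defining property: [L(x)]_B equals B_matrix times [x]_B for every x
x = [3, 5]
b = coords(P, x)
assert coords(P, L(x)) == [B_matrix[i][0] * b[0] + B_matrix[i][1] * b[1]
                           for i in range(2)]
```

The same recipe works in any finite-dimensional space once you fix an encoding of vectors as coordinate lists, which is exactly what the examples below do.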

Let’s do a couple of examples. This will look pretty much the same as what we did last lecture. Example: Let L from P2(R) to P2(R) be the linear mapping defined by L(a + bx + cx^2) = (a + b) + bx + (a + b + c)x^2. Find the matrix of L with respect to the basis B = {1, x, x^2}. Solution: We need to find the images of the vectors in B under L, and then write those images as linear combinations of the vectors in B. In this case, this will be really easy, since B is just the standard basis for P2(R).

We first need to find L of the first vector in B. By definition of the mapping, we have L(1) = 1 + x^2. We observe that this is already written as a linear combination of the vectors in B, so we repeat with the next vector. We find the image of the second vector in B under L. By definition of the mapping, we get L(x) = 1 + x + x^2, which is a linear combination of the vectors in B. Finally, L of the third vector in B is L(x^2) = x^2.

Then, by definition, the B-matrix of L is the matrix whose first column is the B-coordinate vector of L(1), whose second column is the B-coordinate vector of L(x), and whose third column is the B-coordinate vector of L(x^2). Using our work above, the B-coordinate vector of L(1) is [1; 0; 1], the B-coordinate vector of L(x) is [1; 1; 1], and the B-coordinate vector of L(x^2) is [0; 0; 1], so the B-matrix of L is [1, 1, 0; 0, 1, 0; 1, 1, 1]. As usual, we check our answer. Take any polynomial a + bx + cx^2 in P2(R). To use the B-matrix of L, we need the B-coordinate vector of this polynomial. Since B is the standard basis, the B-coordinate vector is [a; b; c]. Then, by definition of the B-matrix, the B-coordinate vector of L(a + bx + cx^2) is the B-matrix of L times [a; b; c]. Calculating this product, we get [a + b; b; a + b + c], which is indeed the B-coordinate vector of L(a + bx + cx^2), so we are correct.
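This example and its check can be reproduced in a few lines of Python (a sketch; encoding polynomials as coefficient lists [a, b, c] is a choice made here, not in the lecture):

```python
# Polynomials a + b x + c x^2 encoded as coefficient lists [a, b, c]
def L(p):
    a, b, c = p
    return [a + b, b, a + b + c]

# B = {1, x, x^2} is the standard basis, so B-coordinates are just coefficients.
std = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [[L(e)[i] for e in std] for i in range(3)]  # columns: [L(1)]_B, [L(x)]_B, [L(x^2)]_B
print(M)  # [[1, 1, 0], [0, 1, 0], [1, 1, 1]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# Check: [L(p)]_B = M [p]_B on a sample polynomial 2 + 3x + 5x^2
p = [2, 3, 5]
assert matvec(M, p) == L(p)  # both are [5, 3, 10]
```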

Let’s do a slightly harder example. Example: Let U be the subspace of M(2-by-2)(R) of upper triangular matrices, and let T from U to U be the linear mapping defined by T of the upper triangular matrix [a, b; 0, c] equals the upper triangular matrix [a, b + c; 0, a + b + c]. Let B be the basis {[1, 1; 0, 0], [1, 0; 0, 1], [1, 1; 0, 1]}. Find the matrix of T with respect to the basis B. The procedure is, of course, the same as in the last example, only it will be more difficult to write the images of the vectors in B as linear combinations of the vectors in B.

Solution: We first take the first vector in B and find its image under T. By definition of the mapping, this is [1, 1; 0, 2]. We then write it as a linear combination of the vectors in B. Either by setting up and solving a system of linear equations or by inspection (if possible), we get -1 times the first basis vector, plus 0 times the second basis vector, plus 2 times the third basis vector. We repeat the procedure for the second vector in B. T of the second vector in B is also [1, 1; 0, 2], so we get the same linear combination as above. Finally, for the third vector in B, the image under T is [1, 2; 0, 3], which is -2 times the first basis vector, minus 1 times the second basis vector, plus 4 times the third basis vector. So, by definition, the first column of the B-matrix of T is the B-coordinate vector of the image of the first vector in B, which is [-1; 0; 2]. The second column is the B-coordinate vector of the image of the second basis vector, which is also [-1; 0; 2]. The third column is the B-coordinate vector of the image of the third basis vector, which is [-2; -1; 4]. Hence, the B-matrix of T is [-1, -1, -2; 0, 0, -1; 2, 2, 4].

A couple of notes: First, notice that finding the matrix of a linear mapping is very algorithmic. With enough practice, these are very easy. Make sure that you practice these enough so that you can solve these kinds of questions quickly and correctly on tests. Second, the place where students typically have the most difficulty with these is when working with coordinates. If you do not have a strong understanding of coordinate vectors, it is highly recommended that you take the time now to go back to your Linear Algebra I notes and review and practice this concept. On that note, I will leave the check of this example as a strongly recommended exercise to ensure you have a good working knowledge of coordinate vectors and of the matrix of a linear mapping.
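Both the construction and the recommended check can be sketched numerically. This is a sketch assuming we encode the upper triangular matrix [a, b; 0, c] as the triple (a, b, c); the `solve` helper (exact Gauss-Jordan elimination over the rationals) is my own, not from the lecture.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A c = b exactly by Gauss-Jordan elimination (A square, invertible)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Encode [a, b; 0, c] as (a, b, c); then T is:
def T(m):
    a, b, c = m
    return [a, b + c, a + b + c]

basis = [[1, 1, 0], [1, 0, 1], [1, 1, 1]]                 # the three matrices in B
P = [[basis[j][i] for j in range(3)] for i in range(3)]   # basis vectors as columns

# Columns of the B-matrix are the B-coordinates of T(v) for each v in B
cols = [solve(P, T(v)) for v in basis]
B_matrix = [[cols[j][i] for j in range(3)] for i in range(3)]
assert B_matrix == [[-1, -1, -2], [0, 0, -1], [2, 2, 4]]

# The recommended check: [T(x)]_B equals B_matrix times [x]_B for a sample x
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

x = [2, -3, 5]   # i.e. the matrix [2, -3; 0, 5]
assert matvec(B_matrix, solve(P, x)) == solve(P, T(x))
```

Running the same check with several random triples (a, b, c) is a good way to convince yourself the matrix is right.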

## Geometrically Natural Bases

We now look at some of the usefulness of the matrix of a linear mapping. We will do this with an example. Example: Let L from R2 to R2 be the linear mapping defined by L(x1, x2) = ((9/25)x1 + (12/25)x2, (12/25)x1 + (16/25)x2). This is a nice mapping. Can you tell just by looking at this what the simple geometric interpretation of the mapping is? Probably not. In general, if we are just given some definition like this of a linear mapping, then you are not going to be able to tell just by looking at it what the geometric interpretation is. The idea of the matrix of a linear mapping with respect to a basis B (or with respect to bases B and C) is to help us understand the action of the mapping.

In the case of this example, we have that the standard matrix of L is [9/25, 12/25; 12/25, 16/25]. This matrix is ugly. How can we make it look nicer? We can diagonalize. Recall from Linear Algebra I that the point of diagonalization is to find a diagonal matrix that is similar to the standard matrix of L. Moreover, this diagonal matrix is the B-matrix of L, where the vectors in B are the columns of the matrix P that diagonalizes the standard matrix (that is, eigenvectors of the standard matrix).

If we diagonalize the standard matrix of L, we get the matrix [1, 0; 0, 0], and the matrix P has columns [3; 4] and [-4; 3]. In particular, our work in Linear Algebra I shows us that the B-matrix of L with respect to the basis B = {[3; 4], [-4; 3]} is [1, 0; 0, 0]. We can now use this to get a clear geometric understanding of this mapping. By definition of the B-matrix, we have that the B-coordinate vector of L(x) is the B-matrix of L times the B-coordinate vector of x. Denote the vectors in B by {v1, v2}. Let x = b1v1 + b2v2 be any vector in R2, so the B-coordinate vector of x is [b1; b2]. Then the B-coordinate vector of L(x) is [1, 0; 0, 0][b1; b2], which is [b1; 0]. Hence, by definition of coordinates, we have that L(x) is b1v1. Thus, the mapping takes a vector x and returns the amount of x in the direction of v1. Hence, we recognize this mapping as a projection of x onto v1 = [3; 4].
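The eigenvector computation and the projection interpretation can be verified with a short sketch in exact rational arithmetic (the projection formula (x·v1 / v1·v1) v1 is the standard one; it is not stated in the lecture):

```python
from fractions import Fraction as F

# Standard matrix of L, and the eigenvectors found by diagonalizing
A = [[F(9, 25), F(12, 25)], [F(12, 25), F(16, 25)]]
v1, v2 = [3, 4], [-4, 3]

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def dot(u, w):
    return u[0]*w[0] + u[1]*w[1]

# v1 and v2 are eigenvectors with eigenvalues 1 and 0
assert matvec(A, v1) == v1
assert matvec(A, v2) == [0, 0]

# So L is projection onto v1: A x equals (x.v1 / v1.v1) v1 for any x
x = [7, -2]   # arbitrary sample vector
t = F(dot(x, v1), dot(v1, v1))
assert matvec(A, x) == [t * v1[0], t * v1[1]]
```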

Definition: Let L from V to V be a linear operator. If B is a basis for V such that the B-matrix of L is diagonal, then B is called a geometrically natural basis for L. Notice, the whole point of diagonalizing the standard matrix of a linear operator L is to find a geometrically natural basis B for the linear operator. The vectors in the geometrically natural basis will be eigenvectors of the standard matrix. We can then use the B-matrix of L to help us understand the mapping L.

Example: Let A be the matrix [3, 2; 2, 0], and define the matrix mapping L(x) = Ax. Describe geometrically the action of the linear mapping. Solution: We find that the eigenvectors of A are v1 = [2; 1] with corresponding eigenvalue lambda_1 = 4, and v2 = [-1; 2] with corresponding eigenvalue lambda_2 = -1. Hence, taking P = [2, -1; 1, 2] gives (P-inverse)AP = [4, 0; 0, -1]. Thus, if we take B = {v1, v2}, we get the B-matrix of L equals [4, 0; 0, -1]. So, for any vector x = b1v1 + b2v2 in R2, we have the B-coordinate vector of L(x) equals the B-matrix of L times the B-coordinate vector of x, which is [4, 0; 0, -1][b1; b2], which is [4b1; -b2]. Therefore, L(x) = 4b1v1 – b2v2. Hence, the linear mapping L takes a vector x and stretches it by a factor of 4 in the v1 direction and reflects it in the v2 direction.
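We can check these eigenpairs and the resulting action with a quick sketch (exact arithmetic; the sample coefficients b1 and b2 are arbitrary choices of mine):

```python
from fractions import Fraction as F

A = [[3, 2], [2, 0]]
v1, v2 = [2, 1], [-1, 2]

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

# Eigenpairs: A v1 = 4 v1 and A v2 = -v2
assert matvec(A, v1) == [4 * v1[0], 4 * v1[1]]
assert matvec(A, v2) == [-v2[0], -v2[1]]

# For x = b1 v1 + b2 v2, the action is L(x) = 4 b1 v1 - b2 v2
b1, b2 = F(3), F(-2)   # arbitrary sample coefficients
x = [b1*v1[0] + b2*v2[0], b1*v1[1] + b2*v2[1]]
assert matvec(A, x) == [4*b1*v1[0] - b2*v2[0], 4*b1*v1[1] - b2*v2[1]]
```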

We have now seen that finding the eigenvectors of a linear mapping can help us figure out the geometry of the mapping. Of course, this works in reverse as well. That is, if we know the geometry of the mapping, then we should be able to use the geometry to help us figure out a geometrically natural basis for the mapping. We will demonstrate this with an example.

Example: Let P be the plane in R3 through the origin with normal vector n = [1; 2; 1]. Find a geometrically natural basis B for the reflection of a vector over the plane P, and find the B-matrix of the reflection. Solution: We first recall the action of a reflection. A reflection takes a vector x and returns its mirror image in the plane P, as in the figure. Our goal is to pick a basis B for R3 consisting of vectors that are geometrically suited to the mapping. Since we are reflecting over the plane, it makes sense to form our geometrically natural basis from the normal vector of the plane together with a basis for the plane. To pick a basis for the plane, we just need a linearly independent set of two vectors in the plane. Since the plane passes through the origin, a vector lies in the plane exactly when it is orthogonal to the normal vector, so we pick v2 = [0; 1; -2] and v3 = [1; 0; -1]. Thus, our geometrically natural basis is B = {n, v2, v3}.
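It is worth confirming that this choice really works: v2 and v3 are orthogonal to n, and the three vectors together form a basis of R3. A short sketch (the determinant test for linear independence is standard, not from the lecture):

```python
n = [1, 2, 1]
v2 = [0, 1, -2]
v3 = [1, 0, -1]

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

# v2 and v3 lie in the plane: both are orthogonal to the normal vector
assert dot(n, v2) == 0 and dot(n, v3) == 0

def det3(c1, c2, c3):
    """Determinant of the 3x3 matrix whose columns are c1, c2, c3."""
    return (c1[0] * (c2[1]*c3[2] - c3[1]*c2[2])
          - c2[0] * (c1[1]*c3[2] - c3[1]*c1[2])
          + c3[0] * (c1[1]*c2[2] - c2[1]*c1[2]))

# Nonzero determinant: {n, v2, v3} is linearly independent, hence a basis of R^3
assert det3(n, v2, v3) != 0
```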

To find the B-matrix of the reflection, we use our normal procedure. The reflection of the normal vector n over P is -n. We now need to write this as a linear combination of the vectors in B, which is really easy: it is (-1)n + 0v2 + 0v3. Next, since v2 is in the plane, the reflection of v2 over the plane is, of course, v2 itself. Hence, the reflection of v2 over the plane is 0n + 1v2 + 0v3. Similarly, the reflection of v3 over the plane is 0n + 0v2 + 1v3. Notice that the vectors we have picked, n, v2, and v3, are indeed eigenvectors of the mapping (with eigenvalues -1, 1, and 1). Thus, by definition, the matrix of the reflection with respect to the basis B is [-1, 0, 0; 0, 1, 0; 0, 0, 1].
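As a final sanity check (a sketch: the formula x - 2((x·n)/(n·n))n for reflection over the plane through the origin with normal n is standard, though the lecture does not state it explicitly), the basis vectors behave exactly as the diagonal B-matrix says:

```python
from fractions import Fraction as F

n = [1, 2, 1]
v2 = [0, 1, -2]
v3 = [1, 0, -1]

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def reflect(x):
    """Reflect x over the plane through the origin with normal n."""
    t = F(2 * dot(x, n), dot(n, n))
    return [xi - t * ni for xi, ni in zip(x, n)]

# Eigenvector behaviour matching the B-matrix diag(-1, 1, 1)
assert reflect(n) == [-ni for ni in n]   # n  -> -n
assert reflect(v2) == v2                 # v2 ->  v2
assert reflect(v3) == v3                 # v3 ->  v3
```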

To conclude this lecture, I will just remind you that a large part of this course involves diagonalization. So, if you do not have a very good understanding of diagonalization, it is important that you review it.