## Transcript — Introduction

In this lecture, we will continue extending what we did with linear mappings from Rn to Rm to general linear mappings L from V to W. In doing this, we will see that there are some differences in the general case.

We now show that every linear mapping L from V to W can be represented as a matrix mapping. However, we must be careful when the domain and codomain are general vector spaces. For example, we cannot represent a linear mapping L from P2(R) to M(2-by-2)(R) as a matrix mapping of the form L(x) = Ax: we cannot multiply a matrix A by a polynomial in P2(R), and even if we could, we would require the result of Ax to be a 2-by-2 matrix rather than a vector. Therefore, we cannot have L(x) = Ax.

But then how are we going to represent this mapping as a matrix mapping? To make this work, we need to figure out how to represent any vector in any vector space as a vector in Rn. In Linear Algebra I, we learned how to do this. We can use coordinates of a vector with respect to a basis.

Let’s quickly recall the definition of coordinates. Definition: If B = {v1 to vn} is a basis for a vector space V, and v = b1v1 + up to bnvn is any vector in V, then the coordinate vector of v with respect to the basis B is the vector [b1 to bn] in Rn.
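This is not part of the lecture, but finding coordinates amounts to solving a linear system, which we can sketch numerically with NumPy (an assumption of tooling, not something the course specifies). The basis here is a hypothetical example, not one from the lecture.

```python
import numpy as np

# Coordinates of v with respect to a basis B of R^n: solve
# b1*v1 + ... + bn*vn = v, i.e. the system P @ b = v, where the
# columns of P are the basis vectors. Hypothetical example basis of R^2.
P = np.column_stack([[2, -1], [1, 2]])   # basis vectors v1, v2 as columns
v = np.array([1, -8])

b = np.linalg.solve(P, v)                # the B-coordinate vector [b1, b2]
print(b)                                 # [ 2. -3.]
```

That is, v = 2v1 - 3v2, so the B-coordinate vector of v is [2; -3].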

So, in terms of our problem, we will use coordinates to turn polynomials in P2(R) into vectors in R3. Thus, if B is any basis for P2(R), then we can form the product of A with the B-coordinate vector of x. But this product is a vector in R4 (since A must be a 4-by-3 matrix, as the dimension of M(2-by-2)(R) is 4), not a 2-by-2 matrix. However, we can interpret it as the coordinate vector of the image with respect to some basis C of M(2-by-2)(R). That is, we can require that the C-coordinate vector of L(x) is equal to the matrix A times the B-coordinate vector of x, where B is a basis for P2(R) and C is a basis for M(2-by-2)(R).

Now let’s do this in general. Let L from V to W be a linear mapping. Let B = {v1 to vn} be a basis for the vector space V and C be a basis for the vector space W. Then, for any vector v in V, we want to define a matrix A such that the C-coordinate vector of L(v) is equal to the matrix A times the B-coordinate vector of v.

We will figure out how to define the matrix A in exactly the same way we found the standard matrix of a linear mapping from Rn to Rm in Linear Algebra I. That is, we will start with the left-hand side, the C-coordinate vector of L(v), and simplify until we get a matrix times the B-coordinate vector of v. Since v is in V, we can write it as a linear combination of the vectors in B, so this equals the C-coordinate vector of L(b1v1 + up to bnvn). L is a linear mapping, so it preserves linear combinations, and this becomes the C-coordinate vector of b1L(v1) + up to bnL(vn). In Linear Algebra I, we learned that taking coordinates is a linear operation, so we can write this as b1 times the C-coordinate vector of L(v1) plus up to bn times the C-coordinate vector of L(vn). By definition, a matrix times a vector is a linear combination of the columns of the matrix. Using this in reverse, we can turn any linear combination of vectors into a matrix times a vector, where the vectors become the columns of the matrix and the coefficients of the linear combination become the entries of the vector. That is, we can write our linear combination as the matrix whose columns are the C-coordinate vector of L(v1) up to the C-coordinate vector of L(vn), times the vector [b1 to bn]. We now notice that the vector [b1 to bn] is exactly the B-coordinate vector of v.

## Matrix of a Linear Mapping

We use this to make the following definition. Definition: Suppose B = {v1 to vn} is any basis for a vector space V, and C is a basis for a finite-dimensional vector space W. The matrix of a linear mapping L from V to W with respect to bases B and C is C[L]B, the matrix whose columns are the C-coordinate vector of L(v1) up to the C-coordinate vector of L(vn). It satisfies: the C-coordinate vector of L(v) is equal to C[L]B times the B-coordinate vector of v, for all vectors v in V.
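As a quick numerical sketch of this definition (again assuming NumPy, which is not part of the course), we can build C[L]B column by column. The mapping below is hypothetical, not from the lecture: L from P2(R) to M(2-by-2)(R) defined by L(a + bx + cx^2) = [a, b; b, c], with B = {1, x, x^2} and C the four standard unit matrices E11, E12, E21, E22, so that C-coordinates of a 2-by-2 matrix are just its entries read row by row.

```python
import numpy as np

def L_coords(p):
    # p is the B-coordinate vector [a, b, c] of the polynomial a + bx + cx^2;
    # return the C-coordinate vector of L(p) = [a, b; b, c].
    a, b, c = p
    return np.array([[a, b], [b, c]]).flatten()

# The j-th column of C[L]B is the C-coordinate vector of L(vj),
# and the B-coordinate vectors of 1, x, x^2 are the standard basis of R^3.
CLB = np.column_stack([L_coords(e) for e in np.eye(3)])

# Check the defining property on the polynomial 2 + 3x - x^2.
v_B = np.array([2, 3, -1])
print(CLB @ v_B)                         # the C-coordinate vector of L(v)
```

Here CLB @ v_B returns the entries of L(2 + 3x - x^2) = [2, 3; 3, -1] read row by row, as the definition promises.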

Notice that the right subscript B of the matrix of the linear mapping C[L]B is the basis of the domain V, and the left subscript C is the basis of the codomain W. The notation can help you remember which coordinate vector goes where: the B on the right sits next to the B-coordinate vector being multiplied, and the C on the left matches the C-coordinate vector that results. Also notice, if V = Rn and W = Rm, and B and C are the respective standard bases, then this matches the definition of the standard matrix. That is, the standard matrix is a special case of this definition.

We will demonstrate this with a few examples. We will start with a relatively easy example to help you understand what is happening, and then look at a more complicated example.

## Examples

Example: Let B = {v1, v2, v3} be a basis for a vector space V, and C = {w1, w2, w3, w4} be a basis for a vector space W. If L from V to W is a linear mapping such that L(v1) = 2w1 + 3w2 – w4, L(v2) = w1 + 3w2 + 2w3 – w4, and L(v3) = -w1 + 2w4, then find the matrix C[L]B of L, and use it to find L(x) where the B-coordinate vector of x is [5; -3; 1].

Solution: By definition, the first column of the matrix of L with respect to bases B and C is the coordinate vector of L(v1) with respect to C, the second column is the coordinate vector of L(v2) with respect to C, and the third column is the coordinate vector of L(v3) with respect to C. So we now just need to find the coordinate vectors. We have the coordinates of L(v1) with respect to C are the coefficients of the linear combination of the basis vectors in C which make L(v1). Hence, the coordinate vector is [2; 3; 0; -1]. Similarly, the coordinate vector of L(v2) with respect to C is [1; 3; 2; -1], and the coordinate vector of L(v3) with respect to C is [-1; 0; 0; 2]. Hence, the matrix of L with respect to bases B and C is [2, 1, -1; 3, 3, 0; 0, 2, 0; -1, -1, 2]. Now, we are supposed to use the matrix to find L(x) where the B-coordinate vector of x is [5; -3; 1]. By definition, we have the C-coordinate vector of L(x) = C[L]B times the B-coordinate vector of x. Multiplying this out, we get [6; 6; -6; 0]. Notice that this is the C-coordinate vector of L(x), so to find L(x), we need to apply the definition of coordinates, and so we find that L(x) is 6w1 + 6w2 – 6w3 + 0w4.
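The arithmetic in this example is easy to check numerically. A minimal sketch, assuming NumPy (which is not part of the lecture):

```python
import numpy as np

# The columns of C[L]B are the C-coordinate vectors of L(v1), L(v2), L(v3).
CLB = np.column_stack([[2, 3, 0, -1],
                       [1, 3, 2, -1],
                       [-1, 0, 0, 2]])
x_B = np.array([5, -3, 1])               # the B-coordinate vector of x
Lx_C = CLB @ x_B                         # the C-coordinate vector of L(x)
print(Lx_C)                              # [ 6  6 -6  0]
```

Reading off the coordinates against the basis C gives L(x) = 6w1 + 6w2 - 6w3 + 0w4, as in the solution above.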

Example: Let T from R2 to M(2-by-2)(R) be the linear mapping defined by T(a, b) = [a + b, 0; 0, a – b]. Let B be the basis for R2 {[2; -1], [1; 2]}, and let C be the basis for M(2-by-2)(R) {[1, 1; 0, 0], [1, 0; 0, 1], [1, 1; 0, 1], [0, 0; 1, 0]}. Determine C[T]B and use it to calculate T(v) where the B-coordinate vector of v is [2; -3].

Solution: By definition of C[T]B, we need to determine the coordinate vectors with respect to C of the images of the vectors in B under T. For the first vector in B, we have T(2, -1) = [1, 0; 0, 3]. To find the C-coordinate vector of T(2, -1), we need to write this matrix as a linear combination of the vectors in C. How do we do this? We do the usual thing in linear algebra—we make a system of linear equations and row reduce. So, we need to find c1, c2, c3, and c4 such that [1, 0; 0, 3] = c1[1, 1; 0, 0] + c2[1, 0; 0, 1] + c3[1, 1; 0, 1] + c4[0, 0; 1, 0]. Performing operations on the right-hand side and then comparing entries, we get our system of linear equations. We row reduce the corresponding augmented matrix to get [1, 0, 0, 0 | -2; 0, 1, 0, 0 | 1; 0, 0, 1, 0 | 2; 0, 0, 0, 1 | 0]. Hence, the C-coordinate vector of T(2, -1) is [-2; 1; 2; 0]. We now repeat for the second vector. Using the same procedure, we find that T(1, 2) = [3, 0; 0, -1], which is 4 times the first basis vector plus 3 times the second basis vector plus -4 times the third basis vector plus 0 times the fourth basis vector, and so the C-coordinate vector of T(1, 2) is [4; 3; -4; 0]. Hence, C[T]B has first column [-2; 1; 2; 0] and second column [4; 3; -4; 0].
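As an aside (not part of the lecture), the row reduction above can be mirrored numerically: flatten each 2-by-2 matrix in C into a vector in R4, with entries read row by row, and solve the resulting linear system. A sketch assuming NumPy:

```python
import numpy as np

# The four basis matrices of C, flattened into the columns of P.
C_basis = [np.array(M) for M in [[[1, 1], [0, 0]], [[1, 0], [0, 1]],
                                 [[1, 1], [0, 1]], [[0, 0], [1, 0]]]]
P = np.column_stack([M.flatten() for M in C_basis])

# Solve P @ c = vec(T(2, -1)) for the C-coordinate vector of T(2, -1).
c = np.linalg.solve(P, np.array([[1, 0], [0, 3]]).flatten())
print(c)                                 # [-2.  1.  2.  0.]
```

This is the same system the augmented matrix encodes, so we recover the C-coordinate vector [-2; 1; 2; 0].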

To finish the question, we still need to find T(v). We get the C-coordinate vector of T(v) equals the matrix of T with respect to bases B and C times the B-coordinate vector of v. Multiplying these, we get [-16; -7; 16; 0]. Finally, we need to convert this to a matrix in M(2-by-2)(R) by using the definition of coordinates. We get T(v) = -16 times the first basis vector in C minus 7 times the second basis vector plus 16 times the third basis vector plus 0 times the fourth basis vector. Calculating this, we get [-7, 0; 0, 9]. As usual, we can check our answer. In this case, we can check our answer by first figuring out v and then applying the mapping to it. We have v = 2 times the first basis vector in B minus 3 times the second basis vector, and so v = [1; -8]. Plugging this into the mapping, we get T(1, -8) = [-7, 0; 0, 9] as before.
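The final multiplication and conversion back to a matrix can also be checked numerically. A minimal sketch, again assuming NumPy:

```python
import numpy as np

# C[T]B from the example, applied to the B-coordinate vector of v.
CTB = np.column_stack([[-2, 1, 2, 0], [4, 3, -4, 0]])
Tv_C = CTB @ np.array([2, -3])           # the C-coordinate vector of T(v)

# Convert back: T(v) is the linear combination of the basis matrices in C
# with the entries of Tv_C as coefficients.
C_basis = [np.array(M) for M in [[[1, 1], [0, 0]], [[1, 0], [0, 1]],
                                 [[1, 1], [0, 1]], [[0, 0], [1, 0]]]]
Tv = sum(c * M for c, M in zip(Tv_C, C_basis))
print(Tv)                                # [[-7  0]
                                         #  [ 0  9]]

# The check from the lecture: v = 2[2; -1] - 3[1; 2] = [1; -8],
# and T(a, b) = [a + b, 0; 0, a - b].
a, b = 1, -8
assert np.array_equal(Tv, [[a + b, 0], [0, a - b]])
```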

Notice that the check seems like a much easier way of calculating T(v). But the point of the matrix of a linear mapping is not just to calculate one value of T(v). In fact, we will see that one of the main uses of the matrix of a linear mapping is to help us understand and analyze the mapping, just as we did with the matrix of a linear mapping from Rn to Rm with respect to a basis B in Linear Algebra I.

Note, it is clear from the previous examples that for a linear mapping L from V to W, the range of L is generally not equal to the column space of the matrix of the mapping with respect to bases B and C. In particular, the column space of the matrix is a subspace of some Rm, while the range of L is a subspace of W. The two are closely related, though: the column space consists of exactly the C-coordinate vectors of the vectors in the range of L.

This ends this lecture. In the next lecture, we will look at the special case where we have a linear operator L from V to V and one basis B for V.