## Transcript

Once we have a basis for a vector space V, say a set B = {v1, ..., vn}, the Unique Representation Theorem tells us that for any given vector x in our vector space, there is a unique way to write x as a linear combination of the vectors in B. That is, there is a unique collection of scalars x1, ..., xn from the real numbers such that x = x1v1 + x2v2 + ... + xnvn.

So this leads to a situation similar to the one that led to the creation of matrices. That is to say, if we take the basis B as a given, do we really need to write the vectors v1, ..., vn every time? The real information is contained in the list of scalars, so let's just write those. Since we are only looking at a single list of numbers, it is best represented as a vector in Rn.

Suppose that the set B = {v1, ..., vn} is a basis for the vector space V. If x is a vector in V with x = x1v1 + x2v2 + ... + xnvn, then the coordinate vector of x with respect to the basis B is written [x]_B, and is equal to the vector [x1; ...; xn]. We also refer to [x]_B as the coordinates of x with respect to B, or the B-coordinates of x.

The easiest examples of coordinates are those we get using the standard bases of our common vector spaces. Easiest of all is when we use the standard basis for Rn to find the coordinates of a vector in Rn, since the coordinates are just the entries of the vector itself. Consider this example. Let B be the standard basis for R3. Then if x = [2; -5; 7], the B-coordinates of x are [2; -5; 7], since [2; -5; 7] = 2[1; 0; 0] - 5[0; 1; 0] + 7[0; 0; 1]. In general, if x = [a; b; c], then the B-coordinates of x are [a; b; c], since [a; b; c] = a[1; 0; 0] + b[0; 1; 0] + c[0; 0; 1].
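As a quick sanity check (a sketch of mine, not part of the lecture), finding coordinates with respect to a basis amounts to solving a linear system, and with the standard basis of R3 the solution is the vector itself:

```python
import numpy as np

# Columns of E are the standard basis vectors e1, e2, e3 of R^3.
E = np.eye(3)
x = np.array([2.0, -5.0, 7.0])

# The B-coordinates of x solve E @ c = x; with the standard basis, c = x.
coords = np.linalg.solve(E, x)
print(coords)  # [ 2. -5.  7.]
```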

Things get at least a little more interesting when we look at matrix spaces or polynomial spaces. Now let B be the standard basis for M(2, 2), consisting of the matrices [1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], and [0, 0; 0, 1]. If x is the matrix [3, -9; -8, 2], then the B-coordinates of x are [3; -9; -8; 2], since [3, -9; -8, 2] = 3[1, 0; 0, 0] - 9[0, 1; 0, 0] - 8[0, 0; 1, 0] + 2[0, 0; 0, 1].
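In code, reading off coordinates with respect to the standard basis of M(2, 2) is just flattening the matrix row by row. A minimal NumPy sketch (mine, not the lecture's):

```python
import numpy as np

# x = [3, -9; -8, 2] as a 2x2 array.
x = np.array([[3, -9],
              [-8, 2]])

# With the standard basis E11, E12, E21, E22, the coordinate vector of x
# is just its entries read row by row.
coords = x.flatten()
print(coords)  # [ 3 -9 -8  2]
```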

For our next example, let's let B be the standard basis for P5. Then if x is the polynomial 5 - 3x + 7x^2 + x^4 - 8x^5, the B-coordinates of x are [5; -3; 7; 0; 1; -8], since 5 - 3x + 7x^2 + x^4 - 8x^5 = 5(1) - 3(x) + 7(x^2) + 0(x^3) + 1(x^4) - 8(x^5).
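The same idea works for polynomials: with the standard basis {1, x, ..., x^5} of P5, the coordinate vector is the coefficient list, including the zero for the missing x^3 term. A small sketch of mine (not from the lecture):

```python
# Coordinates of p(x) = 5 - 3x + 7x^2 + 0x^3 + 1x^4 - 8x^5 with respect
# to the standard basis {1, x, x^2, x^3, x^4, x^5} of P5.
coords = [5, -3, 7, 0, 1, -8]

# Evaluating p via its coordinates: p(t) = sum_k coords[k] * t**k.
def p(t):
    return sum(c * t**k for k, c in enumerate(coords))

print(p(2))  # 5 - 6 + 28 + 0 + 16 - 256 = -213
```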

Of course, things get really interesting when we use something that is not a standard basis. In Lecture 1h, we showed that the set A shown here is a basis for M(2, 2). If we note that 2 times our first matrix + 5 times the second - 2 times the third + the fourth equals the matrix [-10, 4; -14, 25], then we have seen that the A-coordinates of [-10, 4; -14, 25] are [2; 5; -2; 1].

But usually we start with a vector and want to find its coordinates with respect to our basis. To that end, let's try to find the A-coordinates of the matrix [-7, 18; 2, 38]. That is, we want to find scalars x1, x2, x3, and x4 that satisfy this matrix equation. Performing the calculation on the left gives this matrix equality, and setting corresponding entries equal gives the following system of linear equations. Of course, to solve a system of linear equations, we row reduce its augmented matrix. Once the matrix is in reduced row echelon form, we see that the solution is x1 = 4, x2 = 5, x3 = -1, and x4 = 3. This means that 4 times the first matrix + 5 times the second matrix - the third matrix + 3 times the fourth matrix equals our matrix [-7, 18; 2, 38], and so we have found that the A-coordinates of [-7, 18; 2, 38] are [4; 5; -1; 3].
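The basis A itself appears only on the lecture slides, so the sketch below uses a made-up basis of M(2, 2) purely to illustrate the method: flatten each basis matrix into a column, then solve the resulting 4x4 linear system for the coordinates.

```python
import numpy as np

# Hypothetical basis of M(2,2) (NOT the lecture's basis A), chosen so the
# four matrices are linearly independent.
A = [np.array([[1, 0], [0, 0]]),
     np.array([[1, 1], [0, 0]]),
     np.array([[1, 1], [1, 0]]),
     np.array([[1, 1], [1, 1]])]

X = np.array([[3, 5], [2, 6]])

# Flatten each basis matrix into a column of M; the coordinates c of X
# with respect to A then satisfy M @ c = vec(X).
M = np.column_stack([B.flatten() for B in A])
c = np.linalg.solve(M, X.flatten().astype(float))
print(c)  # [-2.  3. -4.  6.]
```

Multiplying back out, -2 times the first matrix + 3 times the second - 4 times the third + 6 times the fourth reproduces X, so these are its coordinates with respect to this made-up basis.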

One last thing to note regarding coordinates is that the order of the basis vectors matters. That is to say, the set {1, x, x^2} is not the same basis for P2 as the set {x, x^2, 1}, simply because the order in which we wrote the basis vectors has changed, and this change results in different coordinates for our vectors.

For example, we could let E be the standard basis for P2, but let B be the basis {x, x^2, 1}. If we look at the polynomial p(x) = 6 - 2x + 2x^2, then we can easily see that the E-coordinates of p are [6; -2; 2], since p(x) = 6(1) - 2(x) + 2(x^2). But the B-coordinates of p are [-2; 2; 6], since p(x) also equals -2(x) + 2(x^2) + 6(1).
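Reordering the basis simply permutes the coordinate entries; a tiny sketch (mine, not the lecture's) makes this explicit:

```python
# Coefficients of p(x) = 6 - 2x + 2x^2, keyed by basis vector.
coeffs = {"1": 6, "x": -2, "x^2": 2}

E = ["1", "x", "x^2"]   # standard ordering of the basis of P2
B = ["x", "x^2", "1"]   # the same vectors, in a different order

# Coordinates list the coefficients in the order the basis vectors appear.
coords_E = [coeffs[v] for v in E]
coords_B = [coeffs[v] for v in B]
print(coords_E)  # [6, -2, 2]
print(coords_B)  # [-2, 2, 6]
```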