Lesson: A Note on Rotation Transformations

Transcript

Back in Chapter 3, the textbook briefly discusses the matrix of a rotation around a coordinate axis in R3. At that time, the textbook also noted that we would not be able to find the matrix of a rotation around a general vector in R3 until Chapter 7. Well, here we are in Chapter 7. Before we jump to the general case, let’s take a closer look at how we can find the matrix of a rotation around a coordinate axis in R3.

Now, the easiest of all the cases is if we are rotating around the x3-axis. Looking at this diagram, you’ll see that a rotation around the x3-axis will leave our x3 component the same, as the height of our point remains the same, but the x1x2-plane is rotated by theta. So the matrix for a rotation needs to leave our x3 component unchanged, and it needs to treat our x1 and x2 components as if they are in R2, being rotated by theta. If you recall, the matrix for a rotation by theta in R2 is the matrix [cosine(theta), -sine(theta); sine(theta), cosine(theta)]. So we see that the matrix for the rotation about the x3-axis through theta is the 3-by-3 matrix [cosine(theta), -sine(theta), 0; sine(theta), cosine(theta), 0; 0, 0, 1].
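
If you would like to check this numerically, here is a minimal NumPy sketch; the function name Rz is just my own shorthand for the matrix above:

```python
import numpy as np

def Rz(theta):
    """Rotation by theta about the x3-axis: fixes x3, rotates the x1x2-plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

theta = np.pi / 3
print(Rz(theta) @ np.array([0.0, 0.0, 1.0]))  # [0. 0. 1.]: the x3-axis is unchanged
print(Rz(theta) @ np.array([1.0, 0.0, 0.0]))  # [cos(theta), sin(theta), 0]
```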

So now what about rotating about another axis? Again, the idea will be that one component will remain unchanged, while the plane containing the other two components will be rotated by theta. You can see that here. And so we see that the matrix for a rotation about the x1-axis through theta is the matrix [1, 0, 0; 0, cosine(theta), -sine(theta); 0, sine(theta), cosine(theta)], while the matrix for a rotation about the x2-axis through theta is [cosine(theta), 0, sine(theta); 0, 1, 0; -sine(theta), 0, cosine(theta)].
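
The same kind of sanity check works for the other two axes (again just a sketch, with Rx and Ry as my own names for these matrices):

```python
import numpy as np

def Rx(theta):
    """Rotation by theta about the x1-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

def Ry(theta):
    """Rotation by theta about the x2-axis (note the reversed sign pattern)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

theta = np.pi / 3
print(Rx(theta) @ np.array([1.0, 0.0, 0.0]))  # e1 is left unchanged
print(Ry(theta) @ np.array([0.0, 1.0, 0.0]))  # e2 is left unchanged
```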

Now, the matrix for the x2-axis looks a bit different because, in order to maintain the necessary right-hand system, the rotation in the x1x3-plane is reversed. This is easier to see if we reorient our axes so that x2 becomes “up”, as you can see in this picture here.

Now, this idea of reorienting what counts as “up” is how we will find the matrix of a rotation around any vector. That is to say that if we want to find the matrix of a rotation about the vector v through theta, the easiest thing to do is to make v our “up” direction: if we write our coordinates with respect to a basis that has v as its third member, then the value of our third coordinate stays the same, and the values of the first two coordinates are subject to a rotation in their plane.

We will need our new basis B = {b1, b2, b3} to be orthonormal to ensure that the rotation by theta is not distorted by any stretching, because remember that orthonormal bases preserve length. We also need to make sure that it forms a right-hand system so that the direction of our rotation is preserved. Lastly, we want v to be pointing up, so we want b3 to be in the same direction as v. If we meet these goals, then the matrix for a rotation around the x3-axis will be exactly our matrix [L]B, the matrix of the transformation with respect to B-coordinates. But since our goal is to find the matrix of the transformation with respect to the usual coordinates, that is, with respect to the standard basis, we will use the change of coordinates matrix P from B to S, and the fact that [L]B = (P-inverse)([L]S)P, to get that [L]S = P([L]B)(P-inverse).
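
Here is a rough NumPy sketch of that whole recipe; the function name rotation_about_axis and the particular choice of helper vector are mine, and any other right-handed orthonormal basis with b3 along v would give the same final matrix:

```python
import numpy as np

def rotation_about_axis(v, theta):
    """Standard matrix of a counterclockwise rotation by theta about the axis spanned by v.

    Build a right-handed orthonormal basis B = {b1, b2, b3} with b3 in the
    direction of v, then conjugate the x3-axis rotation by the change of
    coordinates matrix P.
    """
    v = np.asarray(v, dtype=float)
    b3 = v / np.linalg.norm(v)

    # Any vector not parallel to v will do for starting the basis; cross
    # products then complete a right-handed orthonormal basis (b1 x b2 = b3).
    helper = np.array([1.0, 0.0, 0.0])
    if abs(b3 @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(helper, b3)
    b1 = b1 / np.linalg.norm(b1)
    b2 = np.cross(b3, b1)

    P = np.column_stack([b1, b2, b3])       # change of coordinates matrix from B to S
    c, s = np.cos(theta), np.sin(theta)
    L_B = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])       # rotation about the "up" direction in B-coordinates
    return P @ L_B @ P.T                    # [L]_S = P [L]_B P-inverse, with P-inverse = P-transpose

v = np.array([1.0, 2.0, 1.0])
print(rotation_about_axis(v, np.pi / 6) @ v)  # roughly [1. 2. 1.]: the axis is fixed
```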

Okay, well, with this list in mind, we are left with the question of how to find B. Well, this is more easily seen through an example. Let’s find the standard matrix of the linear transformation L from R3 to R3 that rotates vectors about the axis defined by the vector v = [1; 2; 1] counterclockwise through an angle of pi/6.

Our first goal is to find an orthonormal basis B = {b1, b2, b3} such that B is a right-handed system, and b3 is in the same direction as v. Well, b3 is easy to find, since we’ll have b3 = v/(the norm of v), which ends up simply being [1/(root 6); 2/(root 6); 1/(root 6)].

For b1, we will simply hunt for some vector that is orthogonal to v, and then normalize it. It isn’t all that difficult to find one. For example, I chose to use y = [2; -1; 0], noting that, of course, [2; -1; 0] dot [1; 2; 1] does, in fact, equal 0. Now we simply have to divide by the norm, and we’ll get that b1 = y/(norm of y), which equals [2/(root 5); -1/(root 5); 0].

Lastly, we need to find a vector z that is orthogonal to both v and y. Well, thankfully, that’s what a cross product gives us. Even better, the cross product (v cross y) will give us a vector z such that {y, z, v} is a right-hand system. But, for example, (y cross v) would give us a vector in the opposite direction, and we would have similar problems if we made y our second basis vector instead of our first. If you are familiar with right-hand systems, have fun twisting your hand into the various directions to see that (v cross y) is, in fact, the vector we want. If you are not familiar with right-hand systems, just trust me on this one. And so we calculate that z = [1; 2; 1] cross [2; -1; 0], which equals the vector [1; 2; -5]. And so, we set b2 = z/(the norm of z), which is [1/(root 30); 2/(root 30); -5/(root 30)].

With this, we have satisfied our first goal. We have our set B, which is an orthonormal basis for R3 that forms a right-hand system, and whose third basis vector is in the same direction as the vector we are rotating around.
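
If you want to double-check that arithmetic, here is a quick NumPy verification of the cross product, the norms, and the two requirements on B:

```python
import numpy as np

v = np.array([1.0, 2.0, 1.0])
y = np.array([2.0, -1.0, 0.0])
z = np.cross(v, y)
print(z)                                  # [ 1.  2. -5.]

b1 = y / np.linalg.norm(y)                # [2/root5, -1/root5, 0]
b2 = z / np.linalg.norm(z)                # [1/root30, 2/root30, -5/root30]
b3 = v / np.linalg.norm(v)                # [1/root6, 2/root6, 1/root6]

B = np.column_stack([b1, b2, b3])
print(np.allclose(B.T @ B, np.eye(3)))    # True: B is an orthonormal basis
print(np.allclose(np.cross(b1, b2), b3))  # True: {b1, b2, b3} is right-handed
```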

So this means that the matrix for our rotation relative to B is the matrix for a rotation of theta = pi/6 about the x3-axis. That is to say that [L]B equals the matrix [cosine(pi/6), -sine(pi/6), 0; sine(pi/6), cosine(pi/6), 0; 0, 0, 1], which you can write as 1/2 times the matrix [(root 3), -1, 0; 1, (root 3), 0; 0, 0, 2].
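
Numerically, you can confirm that pulling out the factor of 1/2 is legitimate (just a quick check):

```python
import numpy as np

c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)   # (root 3)/2 and 1/2
L_B = np.array([[c, -s, 0.0],
                [s,  c, 0.0],
                [0.0, 0.0, 1.0]])
half_form = 0.5 * np.array([[np.sqrt(3), -1.0, 0.0],
                            [1.0, np.sqrt(3), 0.0],
                            [0.0, 0.0, 2.0]])
print(np.allclose(L_B, half_form))            # True
```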

Next, we need to find the change of basis matrix from B- to S-coordinates. Well, this will be the matrix whose columns are the vectors in B, so P is the matrix with columns b1, b2, and b3, which we could rewrite as (1/(root 30))[2(root 6), 1, (root 5); -(root 6), 2, 2(root 5); 0, -5, (root 5)]. Now, since B is an orthonormal basis, P-inverse = P-transpose, so we have that P-inverse = (1/(root 30))[2(root 6), -(root 6), 0; 1, 2, -5; (root 5), 2(root 5), (root 5)].
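
Here is a quick check that this P really is orthogonal, so that its inverse really is its transpose (a sketch, nothing more):

```python
import numpy as np

r5, r6, r30 = np.sqrt(5), np.sqrt(6), np.sqrt(30)
P = (1 / r30) * np.array([[2 * r6, 1.0, r5],
                          [-r6, 2.0, 2 * r5],
                          [0.0, -5.0, r5]])
print(np.allclose(P.T @ P, np.eye(3)))   # True, so P-inverse is P-transpose
```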

So all that’s left now is to actually calculate that [L]S = P([L]B)(P-inverse), or as it will be, P([L]B)(P-transpose). We simply write all this out and perform our calculations, and we end up with this matrix, which I won’t even bother saying, because I never said this matrix would be nice. I just said we could now find it.
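
If you would rather let a computer grind out that final product, here is a sketch that builds [L]S and confirms it behaves like the rotation we want; the variable names are mine:

```python
import numpy as np

r5, r6, r30 = np.sqrt(5), np.sqrt(6), np.sqrt(30)
P = (1 / r30) * np.array([[2 * r6, 1.0, r5],
                          [-r6, 2.0, 2 * r5],
                          [0.0, -5.0, r5]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
L_B = np.array([[c, -s, 0.0],
                [s,  c, 0.0],
                [0.0, 0.0, 1.0]])

L_S = P @ L_B @ P.T                           # [L]_S = P [L]_B P-transpose
print(L_S)                                    # the not-so-nice standard matrix
print(L_S @ np.array([1.0, 2.0, 1.0]))        # roughly [1. 2. 1.]: the axis v is fixed
print(np.isclose(np.linalg.det(L_S), 1.0))    # True: a rotation has determinant 1
```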

The textbook ends this chapter with a discussion about the similarity between the matrix for a rotation R by theta in R2, seen here, and the change of basis matrix P from the orthonormal basis B = {[cosine(theta); sine(theta)], [-sine(theta); cosine(theta)]} to the standard basis. It’s not surprising that the text points out this similarity, considering that they are, in fact, the same matrix. What is surprising, however, is that they are the same matrix at all. Why would these end up being the same matrix? That is, you might expect that the matrix for the linear transformation would actually take S-coordinates to B-coordinates. After all, we know that R(e1) = [cosine(theta); sine(theta)], which is our first basis vector in B, and R(e2) = [-sine(theta); cosine(theta)], which is our second basis vector, b2. But remember that if the matrix [R] were sending S-coordinates to B-coordinates, then [R](e1) would have to equal the B-coordinates of e1, and, as we are about to see, the B-coordinates of e1 are not our first basis vector.

What are the B-coordinates of e1? Well, we can use P-inverse, the S to B change of coordinates matrix, to find out. And since B is an orthonormal basis, we know that P-inverse = P-transpose. So we see that the B-coordinates of e1 will equal [cosine(theta), sine(theta); -sine(theta), cosine(theta)](e1), which equals [cosine(theta); -sine(theta)], while the B-coordinates of e2 will equal the matrix [cosine(theta), sine(theta); -sine(theta), cosine(theta)](e2), which equals [sine(theta); cosine(theta)].

So we see that the B-coordinates of e1 are not the same thing as R(e1). They do come from the inverse rotation, though. That is to say that if instead we looked at R(-theta), the linear transformation of a rotation by -theta, then we would have that [R(-theta)] equals the matrix [cosine(-theta), -sine(-theta); sine(-theta), cosine(-theta)], which equals [cosine(theta), sine(theta); -sine(theta), cosine(theta)], which is our P-inverse.
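
You can verify all of this for a sample angle; the value 0.7 below is arbitrary, and R is just my name for the R2 rotation matrix:

```python
import numpy as np

def R(theta):
    """Rotation by theta in R2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = 0.7                                      # any sample angle
b1 = np.array([np.cos(theta), np.sin(theta)])
b2 = np.array([-np.sin(theta), np.cos(theta)])
P = np.column_stack([b1, b2])                    # change of basis matrix from B to S

print(np.allclose(P, R(theta)))                  # True: P and [R(theta)] are the same matrix
print(np.allclose(np.linalg.inv(P), R(-theta)))  # True: P-inverse is [R(-theta)]
print(R(-theta) @ np.array([1.0, 0.0]))          # [cos(theta), -sin(theta)]: the B-coordinates of e1
```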

And so we have shown that the change of basis matrix from S to B coordinates is the same as the matrix for a rotation by –theta, not a rotation by theta. While this may not seem right on the surface, you simply need to keep track of what you are doing.

Say, for example, that you wanted to find the B-coordinates of e1. Sure, we get b1 from e1 by rotating by theta, but we aren’t trying to get from e1 to b1. Writing e1 in B-coordinates means asking which combination of b1 and b2 gives us e1, and since the rotation sends e1 and e2 to b1 and b2, that amounts to asking which vector gets sent to e1 by the rotation: if R(theta)(w) = e1 and w = c1(e1) + c2(e2), then e1 = c1(b1) + c2(b2). To find the vector that gets sent to e1 by a rotation of theta, we rotate backwards by theta, i.e., we rotate by -theta. So the B-coordinates of e1 are the S-coordinates of (R(-theta))(e1), exactly as we computed above.

The textbook provides some drawings that may help you understand the process here. But if nothing else, you should simply remember to be careful when you look at the change of basis inspired by a rotation, since your intuition may not lead you to the correct matrix.
