Lesson: Sketching Quadratic Forms


Transcript — Introduction

Let Q(x) = a(x1-squared) + bx1x2 + c(x2-squared) be a quadratic form on R2. In many applications of quadratic forms, for example in calculus, it is important to be able to sketch the graph of Q(x) = k for some real number k. In general, it is not easy to graph Q(x) = k directly. How can we make this easier? We know that we can apply an orthogonal change of variables to bring the quadratic form into diagonal form. That is, there exists an orthogonal matrix P such that the change of variables y = (P-transpose)x gives us k = Q(x) = (lambda1)(y1-squared) + (lambda2)(y2-squared). Observe that this is now quite easy to draw, as these are just conic sections. The possibilities are demonstrated in the table. Take a minute to make sure that you understand all of these.
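The classification table itself lives on the slide, but its logic can be captured in a short sketch. The following helper is my own illustration, not part of the lecture; it classifies the level set (lambda1)(y1-squared) + (lambda2)(y2-squared) = k, for k > 0, by the signs of the eigenvalues:

```python
def classify_level_set(l1, l2):
    """Shape of l1*y1^2 + l2*y2^2 = k in the y1y2-plane, assuming k > 0."""
    if l1 > 0 and l2 > 0:
        return "ellipse"                 # both positive: bounded level set
    if l1 * l2 < 0:
        return "hyperbola"               # opposite signs
    if l1 > 0 or l2 > 0:
        return "two parallel lines"      # one eigenvalue 0, the other positive
    return "empty"                       # both <= 0 cannot sum to a positive k

print(classify_level_set(1, 5))          # ellipse
print(classify_level_set(4, -1))         # hyperbola
```

Both eigenvalues positive matches the positive definite case, and opposite signs give the hyperbola we will see in the second example below.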

So, clearly, we want to diagonalize the quadratic form, but how does the corresponding change of variables affect the graph of the quadratic form? Theorem 10.4.1: If Q(x1, x2) = a(x1-squared) + bx1x2 + c(x2-squared) where a, b, and c are not all 0, then there exists an orthogonal matrix P, which corresponds to a rotation, such that the change of variables y = (P-transpose)x brings Q(x) into diagonal form.

Proof: Let A be the symmetric matrix corresponding to Q(x1, x2), and let v1 = [a1; a2] be a unit eigenvector of A. Since v1 is a unit vector, we have 1 = (the length of v1)-squared = a1-squared + a2-squared, so the point (a1, a2) lies on the unit circle. That is, there exists some angle theta such that a1 = cos theta and a2 = sin theta. By the Principal Axis Theorem, there exists an orthonormal basis for R2 of eigenvectors of A, so if we let v2 be the vector [-sin theta; cos theta], then v2 is a unit vector orthogonal to v1, and hence must be an eigenvector of A. Therefore, the matrix P = [v1 v2] = [cos theta, -sin theta; sin theta, cos theta] is an orthogonal matrix which diagonalizes A. Observe that P is a rotation matrix, as required.
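The construction in the proof can be checked numerically. The sketch below is my own illustration, using a sample symmetric matrix A of my choosing; it builds the rotation matrix P from a unit eigenvector and verifies that P diagonalizes A:

```python
import numpy as np

# Build the rotation P = [cos t, -sin t; sin t, cos t] from a unit eigenvector
# of a sample symmetric matrix A (this A is my own choice for illustration).
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(A)          # orthonormal eigenvectors as columns
cos_t, sin_t = eigvecs[0, 0], eigvecs[1, 0]   # v1 = (cos t, sin t)

P = np.array([[cos_t, -sin_t],
              [sin_t,  cos_t]])          # v2 = (-sin t, cos t) in the second column

D = P.T @ A @ P                          # the change of variables y = P^T x
print(np.round(D, 10))                   # diagonal, with the eigenvalues of A
print(round(float(np.linalg.det(P)), 10))  # 1.0, so P is a rotation
```

Whatever sign numpy picks for the unit eigenvector, the resulting P has determinant +1, matching the theorem's claim that we can always arrange for a rotation.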

Examples

As usual, this is best demonstrated with an example. Example: Graph 3(x1-squared) + 4x1x2 + 3(x2-squared) = 5. Solution: Observe that it is difficult to determine the shape of this graph just by looking at it. But if we use our diagonalizing trick, it will not only be very easy to identify the shape, but also easy to graph. We have corresponding symmetric matrix A = [3, 2; 2, 3]. We find that the eigenvalues of A are lambda1 = 1 and lambda2 = 5. Theorem 10.4.1 tells us that the graph of 3(x1-squared) + 4x1x2 + 3(x2-squared) = 5 is going to be a rotation of the graph of 5 = (lambda1)(y1-squared) + (lambda2)(y2-squared), which is y1-squared + 5(y2-squared) = 5. Since both eigenvalues of A are positive, A is positive definite, and so from the table we have that the graph is an ellipse.

Let’s first sketch y1-squared + 5(y2-squared) = 5 in the y1y2-plane. We can easily sketch this ellipse by determining the y1 and y2 intercepts. We get the following picture. However, we want to find the graph of 3(x1-squared) + 4x1x2 + 3(x2-squared) = 5 in the x1x2-plane. To get the diagonal form y1-squared + 5(y2-squared) = 5, we needed to use a change of variables y = (P-transpose)x, so to change the graph from the y1y2-plane to the x1x2-plane, we can use the change of variables x = Py. So we need to find a matrix P which orthogonally diagonalizes A. We find that a unit eigenvector for lambda1 = 1 is v1 = [-1/(root 2); 1/(root 2)], and a unit eigenvector for lambda2 = 5 is v2 = [1/(root 2); 1/(root 2)]. Hence, P = [-1/(root 2), 1/(root 2); 1/(root 2), 1/(root 2)].
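As a sanity check of my own, not from the lecture, we can confirm numerically that this P orthogonally diagonalizes A and that the substitution x = Py turns Q into y1-squared + 5(y2-squared):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
P = np.array([[-s, s],
              [ s, s]])                  # columns v1, v2 from the example
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])

print(np.round(P.T @ A @ P, 10))         # diag(1, 5)

def Q(x):
    # Q(x) = 3 x1^2 + 4 x1 x2 + 3 x2^2
    return 3*x[0]**2 + 4*x[0]*x[1] + 3*x[1]**2

y = np.array([2.0, -1.0])                # an arbitrary test point in the y1y2-plane
x = P @ y                                # change of variables x = Py
print(round(Q(x), 10), y[0]**2 + 5*y[1]**2)   # both equal 9
```

The test point is arbitrary; any y gives Q(Py) = y1-squared + 5(y2-squared), which is exactly what the diagonal form asserts.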

Wait, notice that P is not a rotation matrix. The statement of Theorem 10.4.1 only says that there exists an orthogonal matrix P corresponding to a rotation; it does not say that every orthogonal P which diagonalizes A must be a rotation. This choice of P corresponds to a rotation combined with a reflection. Notice that we could, in a variety of ways, choose P differently to get just a rotation. However, I will continue with this choice to demonstrate the effect of the reflection.

Now using the change of variables x = Py, we can transform any vector in the y1y2-plane to a vector in the x1x2-plane. Notice that the y1 axis is just the line spanned by the vector [1; 0]. Thus, using the change of variables x = Py, we get x = [v1 v2] times the vector [1; 0], which is equal to v1. Thus, the y1-axis in the x1x2-plane is the line spanned by the vector v1. Similarly, the y2-axis is spanned by [0; 1]. Therefore, in the x1x2-plane, we have that the y2-axis is the line spanned by x = [v1 v2] times the vector [0; 1], which is v2.

Wow! Notice that the y1- and y2-axes in the x1x2-plane are in the directions of the eigenvectors of A. That is, the main axes for the graph, the principal axes, are determined by the eigenvectors of A, and this is why the theorem is called the Principal Axis Theorem. Also, in this case, notice the reflection of the axes. That is, the y1-axis is now a 90-degree counterclockwise rotation of the y2-axis, rather than the y2-axis being a 90-degree counterclockwise rotation of the y1-axis as usual. We now just copy our ellipse from the graph in the y1y2-plane onto the y1y2-axes in the x1x2-plane to get the sketch of the graph in the x1x2-plane.

We can get a geometric interpretation of another aspect of diagonalization as well. We know that in diagonalization we can pick the eigenvalues in any order, as long as the columns of P correspond to the order of the eigenvalues in D. Notice in the example above that if we pick the eigenvalues in the opposite order, then we get the diagonal quadratic form 5(y1-squared) + y2-squared. However, we would also have the eigenvectors in the opposite order. In particular, the y1-axis in the x1x2-plane would be in the direction of the eigenvector [1/(root 2); 1/(root 2)] corresponding to lambda2 = 5, and the y2-axis in the x1x2-plane would be in the direction of the eigenvector [-1/(root 2); 1/(root 2)] corresponding to lambda1 = 1. Thus, we would get the picture shown, which, of course, matches our answer above. Also notice that, with these choices, the graph in the x1x2-plane is just a rotation of the graph in the y1y2-plane.
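The rotation-versus-reflection distinction can be verified with determinants. In this quick check of my own, the matrix built with the eigenvectors in the original order has determinant -1 (it includes a reflection), while the reordered matrix has determinant +1 (a pure rotation):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
P_reflect = np.array([[-s, s],
                      [ s, s]])          # columns ordered lambda1 = 1, lambda2 = 5
P_rotate  = np.array([[ s, -s],
                      [ s,  s]])         # columns ordered lambda2 = 5, lambda1 = 1

print(round(float(np.linalg.det(P_reflect)), 10))   # -1.0: includes a reflection
print(round(float(np.linalg.det(P_rotate)),  10))   #  1.0: a pure rotation
```

An orthogonal 2x2 matrix always has determinant plus or minus 1, and the sign tells us whether the change of variables preserves or reverses the orientation of the axes.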

Let’s try another example. Example: Sketch 3(x1-squared) + 4x1x2 = 16. Solution: We have corresponding symmetric matrix A = [3, 2; 2, 0], so we find the eigenvalues of A are lambda1 = 4 and lambda2 = -1. Therefore, the corresponding diagonal form of the quadratic form is 4(y1-squared) – y2-squared. Hence, we see immediately that the graph of 4(y1-squared) – y2-squared = 16 is a hyperbola. Hence, the graph of 3(x1-squared) + 4x1x2 = 16 is going to be a rotated hyperbola. We find the corresponding eigenvectors of A are v1 = [2; 1] for lambda1 = 4 and v2 = [-1; 2] for lambda2 = -1. Thus, an orthogonal matrix P which diagonalizes A is P = [2/(root 5), -1/(root 5); 1/(root 5), 2/(root 5)].
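Again as a quick numerical check of my own, not part of the lecture, we can confirm the eigenvalues and verify that this P diagonalizes A:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 0.0]])
print(np.linalg.eigvalsh(A))             # [-1.  4.] (ascending order)

r5 = np.sqrt(5.0)
P = np.array([[2/r5, -1/r5],
              [1/r5,  2/r5]])            # normalized v1, v2 as columns
print(np.round(P.T @ A @ P, 10))         # diag(4, -1)
```

The diagonal entries appear in the order of the columns of P, which is why the diagonal form here is 4(y1-squared) minus y2-squared.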

To sketch the graph of the hyperbola 4(y1-squared) – y2-squared = 16 relatively accurately, we need to find the asymptotes of the hyperbola. Remember, to find the equations of the asymptotes of a hyperbola, we set the right-hand side equal to 0 to get 4(y1-squared) – y2-squared = 0. Simplifying, we get y2 = plus-or-minus 2y1. Since this is linear algebra, let's write the equations of the asymptotes in a more linear algebra form, so we will write down direction vectors for the asymptotes.

For the asymptote 2y1 = y2, we see that if we take y1 = 1, we get y2 = 2, so a direction vector for this asymptote is w1 = [1; 2]. Similarly, for -2y1 = y2, if we take y1 = 1, we get y2 = -2, and so a direction vector for this asymptote is w2 = [1; -2]. We now sketch these on our graph.

Now recall that the hyperbola must open left-and-right or up-and-down. To determine which it is, it is easiest to check intercepts. Notice that taking y1 = 0 in 4(y1-squared) – y2-squared = 16 gives –(y2-squared) = 16, which is obviously impossible. On the other hand, if we take y2 = 0, we get that y1 = plus-or-minus 2, and thus the graph must open right and left. We can sketch this onto our graph.

We now want to use this sketch in the y1y2-plane to sketch 3(x1-squared) + 4x1x2 = 16 in the x1x2-plane. We saw in the last example that we can use the change of variables x = Py to get that the y1- and y2-axes in the x1x2-plane are in the directions of the eigenvectors v1 and v2. But to sketch the graph of the hyperbola relatively accurately, we also need to sketch the asymptotes. We can again use the change of variables to transfer the equations of the asymptotes from the y1y2-plane into the x1x2-plane. For the asymptote 2y1 = y2, we use the direction vector w1 = [1; 2] to get Pw1 = [0; (root 5)]. For the asymptote -2y1 = y2, we use w2 = [1; -2] to get Pw2 = [4/(root 5); -3/(root 5)]. We now sketch these into our graph.
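The images of the asymptote directions can be checked with the same change of variables (a quick check of my own):

```python
import numpy as np

r5 = np.sqrt(5.0)
P = np.array([[2/r5, -1/r5],
              [1/r5,  2/r5]])

w1 = np.array([1.0,  2.0])               # direction of the asymptote y2 = 2 y1
w2 = np.array([1.0, -2.0])               # direction of the asymptote y2 = -2 y1

print(np.round(P @ w1, 10))              # [0, sqrt(5)]
print(np.round(P @ w2, 10))              # [4/sqrt(5), -3/sqrt(5)]
```

Note that Pw1 points straight up the x2-axis, so in the x1x2-plane one asymptote of the rotated hyperbola is vertical.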

Then we can transcribe our hyperbola from the y1y2-plane onto this. In particular, we saw the graph opens in the y1 direction, and hence, we get the graph below.

As with any other computational procedure, the more you practice these, the better you will become at them.

This concludes this lecture.

© University of Waterloo and others, Powered by Maplesoft