Lesson: Eigenvalues and Eigenvectors

Transcript — Introduction

In the last lecture, we looked at how to find the matrix of a linear operator with respect to a basis B. At the end of the lecture, we stated our main goal: to determine whether, for a linear operator L, there is a basis B such that the matrix of L with respect to B is diagonal, and if there is, how to find B. In this lecture, we will begin to figure out how to do this.

Let L from Rn to Rn be a linear operator. How do we even start looking for the special basis B when we don't yet know that it exists? We will use a classic technique in mathematics: working in reverse. That is, we will assume that there is a basis B such that the B-matrix of L is diagonal, and then use this assumption to figure out what properties B must have.

Assume that there is a basis B = {v1 to vn} such that the matrix of L with respect to B is the diagonal matrix diag(lambda_1 to lambda_n). In the last lecture, we found that the B-matrix of L is similar to the standard matrix A of L. In particular, using the change of coordinates matrix P = [v1 to vn], we have (P-inverse)AP = diag(lambda_1 to lambda_n). Multiplying both sides on the left by P gives AP = P diag(lambda_1 to lambda_n).

By the definition of matrix multiplication and block multiplication, this is A[v1 to vn] = [v1 to vn] diag(lambda_1 to lambda_n), that is, [Av1 to Avn] = [lambda_1 v1 to lambda_n vn]. Comparing columns, we see that we must have Avi = lambda_i vi for i = 1 to n. Moreover, since P is invertible, the columns of P must be linearly independent; in particular, vi is not equal to 0 for any i. Thus, we have shown that if the standard matrix A is similar to a diagonal matrix, then the vectors v1 to vn in the basis B must satisfy Avi = lambda_i vi for all i.

Conversely, suppose B = {v1 to vn} is a basis such that Avi = lambda_i vi for all i. Then the B-coordinate vector of L(vi) equals the B-coordinate vector of lambda_i vi, which is just lambda_i times the ith standard basis vector. Hence, we find that the B-matrix of L is the diagonal matrix diag(lambda_1 to lambda_n), and so the B-matrix of L is diagonal.
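
To make the similarity relation concrete, here is a minimal numerical sketch in Python with NumPy. The 2 by 2 matrix A, its eigenpairs, and the basis vectors are hypothetical illustrations, not taken from the lecture; the point is only to verify that P-inverse times A times P comes out diagonal when the columns of P satisfy Avi = lambda_i vi.

```python
import numpy as np

# Hypothetical 2x2 example (not from the lecture):
# A has eigenpairs (5, [1, 1]) and (2, [1, -2]).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
v1 = np.array([1.0, 1.0])    # A @ v1 = [5, 5] = 5 * v1
v2 = np.array([1.0, -2.0])   # A @ v2 = [2, -4] = 2 * v2

# Change of coordinates matrix P = [v1 v2].
P = np.column_stack([v1, v2])

# P^{-1} A P should be diag(5, 2), matching the derivation above.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # [[5. 0.], [0. 2.]]
```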

I cannot stress enough how important this is. The rest of this module hinges on what we have just done here. Not only have we derived a necessary and sufficient condition on the vectors in B, but we also have a nice formula for the matrix of L with respect to B: it is just the diagonal matrix whose ith diagonal entry is the value lambda_i corresponding to the ith basis vector in B. Take a minute to review this slide and make sure you follow everything and remember it. We will use this a lot.

Definition

Based on what we have just derived, we make a definition. Let A be an n by n matrix. If there is a non-zero vector v such that Av = lambda v for some scalar lambda, then lambda is called an eigenvalue of A, and v is called an eigenvector of A corresponding to lambda. We call (lambda, v) an eigenpair.

In many applications we are interested in eigenvalues and eigenvectors of linear mappings. As you may expect, the definition is essentially the same. Let L be a linear operator on Rn. If there is a non-zero vector v such that L(v) = lambda v for some scalar lambda, then lambda is called an eigenvalue of L, and v is called an eigenvector of L corresponding to lambda.

Note that a very important part of the definition is that v is not 0. This is because, as we saw, we will want a basis B consisting of these eigenvectors. Observe that an eigenvector is a special vector whose image under the mapping is just a scalar multiple of itself, and the eigenvalue is the amount of this scaling. Thus, these special vectors have a nice geometric interpretation.
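
For readers who like to experiment, here is a minimal sketch in Python with NumPy of an eigenvector test that follows the definition directly. The helper name is_eigenvector is our own invention, not from the lecture; it returns lambda if (lambda, v) is an eigenpair of A and None otherwise, rejecting v = 0 exactly as the definition requires.

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-12):
    """Return lambda if Av = lambda * v with v != 0, else None."""
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0.0):
        return None  # by definition, 0 is never an eigenvector
    Av = A @ v
    # Read the candidate scalar off the largest entry of v.
    i = np.argmax(np.abs(v))
    lam = Av[i] / v[i]
    # Check that Av is that same scalar multiple of v in every entry.
    return lam if np.allclose(Av, lam * v, atol=tol) else None
```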

Example 1

Example: Let a be any non-zero vector in Rn. Thinking geometrically, what are the eigenvectors and corresponding eigenvalues of the projection onto a? We want to find all vectors x in Rn such that proj(a)x = lambda x. By definition, proj(a)x is a scalar multiple of a, so any non-zero scalar multiple of a should be an eigenvector. Indeed, we have proj(a)(ta) = 1(ta). Thus, ta for t not equal to 0 is an eigenvector with eigenvalue 1. We exclude t = 0 because eigenvectors must be non-zero, but this raises a question: 0 is a scalar multiple of a, namely 0a, so which vectors are projected onto 0? Vectors that are orthogonal to a. Indeed, if v is any non-zero vector orthogonal to a, then proj(a)v = 0 = 0v, and so v is an eigenvector of proj(a) with corresponding eigenvalue 0. Notice that any other vector x in Rn will not be an eigenvector, since proj(a)x will not be a scalar multiple of x.
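
Here is a quick numerical sanity check of this example in Python with NumPy; the specific vectors a = (1, 2, 2) and w = (2, -1, 0) are hypothetical choices, picked only so that w is orthogonal to a.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # any non-zero vector in R^3
w = np.array([2.0, -1.0, 0.0])  # chosen so that a . w = 0

def proj(a, x):
    """Projection of x onto a: (a.x / a.a) * a."""
    return (a @ x) / (a @ a) * a

print(proj(a, 3 * a))  # equals 3a: eigenvector with eigenvalue 1
print(proj(a, w))      # equals 0:  eigenvector with eigenvalue 0
```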

Example 2

Example: Thinking geometrically, what are the eigenvectors and corresponding eigenvalues of a rotation R_theta from R2 to R2 by an angle theta, where theta is strictly between 0 and 2pi radians? We first observe that if theta = pi radians, then every vector v is rotated around to -v. Hence, every non-zero vector v is an eigenvector of R_pi with corresponding eigenvalue -1. On the other hand, if theta is not equal to pi, then rotating a non-zero vector v by theta never produces a scalar multiple of v, and so R_theta has no real eigenvalues. However, in Linear Algebra II, we will show that it does have complex eigenvalues and eigenvectors.
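
As a small peek ahead at that claim, here is a sketch in Python with NumPy. It assumes the standard rotation matrix with entries built from cos theta and sin theta (a fact from earlier in the course, not derived in this lecture) and asks NumPy for its eigenvalues: for theta = pi the eigenvalue -1 appears, while for other angles the eigenvalues come out complex.

```python
import numpy as np

def rotation(theta):
    """Standard matrix of the rotation of R^2 by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# theta = pi: every non-zero vector is an eigenvector for lambda = -1.
print(np.linalg.eigvals(rotation(np.pi)))      # [-1., -1.] up to rounding

# theta = pi/3: no real eigenvalues; NumPy returns the complex pair.
print(np.linalg.eigvals(rotation(np.pi / 3)))  # [0.5+0.866j, 0.5-0.866j]
```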

Example 3

Example: Consider the matrix A = [3, 6, 7; 3, 3, 7; 5, 6, 5]. Determine which of the following vectors are eigenvectors of A. (a) v1 = [1; -2; 1]. We have Av1 = [-2; 4; -2], which is -2v1. Thus, v1 is an eigenvector of A with corresponding eigenvalue lambda_1 = -2. (b) v2 = [1; 1; 1]. We have that Av2 = [16; 13; 16], which is not a scalar multiple of v2. Thus, v2 is not an eigenvector. (c) v3 = [0; 0; 0]. We have that Av3 = 0, which equals lambda v3 for every scalar lambda, but v3 is not an eigenvector because, by definition, the zero vector is never an eigenvector.
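
These three checks are easy to reproduce in Python with NumPy; this sketch simply recomputes Av for each vector from the example so you can compare against the results above.

```python
import numpy as np

A = np.array([[3.0, 6.0, 7.0],
              [3.0, 3.0, 7.0],
              [5.0, 6.0, 5.0]])

v1 = np.array([1.0, -2.0, 1.0])
v2 = np.array([1.0, 1.0, 1.0])
v3 = np.zeros(3)

print(A @ v1)  # [-2.  4. -2.] = -2 * v1: eigenpair (-2, v1)
print(A @ v2)  # [16. 13. 16.]: not a scalar multiple of v2
print(A @ v3)  # [0. 0. 0.]: but v3 = 0 is excluded by definition
```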

From this example, we see that it is very easy to check whether a particular vector is an eigenvector, and if it is, to determine the corresponding eigenvalue. But how do we check whether a given scalar lambda is an eigenvalue? As the next example demonstrates, this comes down to solving a homogeneous system of linear equations.

Example 4

Example: Consider A = [3, 6, 7; 3, 3, 7; 5, 6, 5]. Is lambda = 1 an eigenvalue of A? We need to determine whether there is a non-zero vector v such that Av = lambda v = v. For this, we would need [3, 6, 7; 3, 3, 7; 5, 6, 5][v1; v2; v3] = [v1; v2; v3]. This looks like a system of linear equations. However, there is a slight problem: the variables v1, v2, v3 also appear on the right-hand side. This is easily fixed. Doing the matrix-vector multiplication on the left and comparing entries, we get 3v1 + 6v2 + 7v3 = v1, 3v1 + 3v2 + 7v3 = v2, and 5v1 + 6v2 + 5v3 = v3. Moving the variables on the right side to the left side gives the homogeneous system 2v1 + 6v2 + 7v3 = 0, 3v1 + 2v2 + 7v3 = 0, 5v1 + 6v2 + 4v3 = 0. Solving this system, we find that the only solution is the trivial solution. Hence, the only vector that satisfies Av = v is 0, and so lambda = 1 is not an eigenvalue of A.

It is instructive to notice that the coefficient matrix of the homogeneous system was [2, 6, 7; 3, 2, 7; 5, 6, 4], which is the matrix obtained from the original matrix A by subtracting the value of lambda from each of the diagonal entries. This occurred when we moved v1, v2, and v3 from the right side of the original system to the left side.
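
Here is a minimal sketch in Python with NumPy that repeats this test numerically. Following the observation above, it forms A minus lambda times the identity and checks whether the homogeneous system has only the trivial solution; a non-zero determinant (equivalently, full rank) means it does, so lambda = 1 is not an eigenvalue.

```python
import numpy as np

A = np.array([[3.0, 6.0, 7.0],
              [3.0, 3.0, 7.0],
              [5.0, 6.0, 5.0]])
lam = 1.0

# Coefficient matrix of the homogeneous system (A - lambda*I)v = 0.
M = A - lam * np.eye(3)
print(M)                 # [[2. 6. 7.], [3. 2. 7.], [5. 6. 4.]]

# A non-zero determinant means only the trivial solution exists,
# so lambda is NOT an eigenvalue of A.
print(np.linalg.det(M))  # 126.0 up to rounding, non-zero
print(np.linalg.matrix_rank(M) == 3)  # True: only the trivial solution
```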

We now have a way of checking whether a given vector is an eigenvector of a matrix A, and whether a given scalar lambda is an eigenvalue of A. However, our real goal was to determine for which matrices A (thinking of A as the standard matrix of a linear mapping L) there is a basis B of eigenvectors of A, so that the B-matrix of L is diagonal. Thus, we actually want an efficient way of finding all eigenvalues and corresponding eigenvectors of a matrix. This will be the topic of the next lecture.
