## Transcript — Introduction

In this lecture, we start extending our theory of diagonalization to the complex case. We will see that moving to complex numbers does change some of our familiar results.

The definition, of course, matches the real case. Definition: Let A be a matrix in M(n-by-n)(C). If there exists a complex number lambda and a vector z in C^n, with z not equal to the zero vector, such that Az = (lambda)z, then lambda is called an eigenvalue of A and z is called an eigenvector of A corresponding to lambda. We call the pair (lambda, z) an eigenpair.

All of the theorems about diagonalization from MATH 136 carry over unchanged, except that we now allow complex eigenvalues and eigenvectors.

Let’s use a few examples to explore some of the differences between diagonalizing over R and diagonalizing over C.

Example: Diagonalize A = [0, 1; -1, 0] over C. Solution: The characteristic polynomial is C(lambda) = the determinant of (A – (lambda)I), which is equal to the determinant of |-(lambda), 1; -1, -(lambda)|, which is equal to lambda-squared + 1. So, the eigenvalues are lambda1 = i and lambda2 = -i. In MATH 136, we said that this matrix is not diagonalizable over R. But now, since we are allowing the use of non-real eigenvalues and eigenvectors, we can diagonalize this matrix. We find that the corresponding eigenvectors are v1 = [i; -1] and v2 = [-1; i]. Thus, taking P = [i, -1; -1, i] gives (P-inverse)AP = D = [i, 0; 0, -i].
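This example is easy to verify numerically by checking Az = (lambda)z directly for each eigenpair. The check below is not part of the lecture; it is a sketch in plain Python, and the small `matvec` helper is just scaffolding for the illustration.

```python
# Verify the eigenpairs of A = [0, 1; -1, 0] by checking A z = lambda z.
# Plain Python complex arithmetic; a 2x2 matrix is a list of two row-lists.

def matvec(A, z):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*z[0] + A[0][1]*z[1],
            A[1][0]*z[0] + A[1][1]*z[1]]

A = [[0, 1], [-1, 0]]

# The eigenpairs found above: (i, [i; -1]) and (-i, [-1; i]).
pairs = [(1j, [1j, -1]), (-1j, [-1, 1j])]
for lam, z in pairs:
    Az = matvec(A, z)
    assert all(abs(Az[k] - lam*z[k]) < 1e-12 for k in range(2))
```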

I need to make two comments about this.

- Remember that we are just diagonalizing. We cannot even think about making the vectors unit vectors or orthogonal since we have not yet defined inner products in complex vector spaces. (We will do this soon.) Do not get confused about diagonalization and orthogonal diagonalization.
- Although we can diagonalize this matrix, that does not mean the result is better. Remember that one of the reasons to diagonalize is to find a geometrically natural basis. However, in this example, we have taken a simple linear mapping from R^2 to R^2 and “simplified” it to a linear mapping from C^2 to C^2. Depending on what you are trying to do with the matrix, there are other things we could have done to keep the matrix real. Unfortunately, the real canonical form (and the Jordan normal form) are not covered in this course.

Example: Diagonalize the matrix A = [2, i; i, 4] over C. Before we solve this, what do you notice about the matrix A? Observe that the matrix A is symmetric, so let’s see what happens. Solution: The characteristic polynomial is the determinant of (A – (lambda)I), which evaluates to lambda-squared – 6(lambda) + 9, which equals (lambda – 3)-squared. Therefore, the only eigenvalue is lambda1 = 3, with algebraic multiplicity 2. We find that a basis for the eigenspace of lambda1 is {[i; 1]}, and therefore, the geometric multiplicity is only 1, which is less than the algebraic multiplicity. And so we have that A is not diagonalizable. Not diagonalizable? But it is symmetric! Argh! All of our theory for symmetric matrices was for real symmetric matrices. It does not apply to matrices with non-real entries.
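There is a neat way to confirm this conclusion numerically, though it is not part of the lecture. Since 3 is the only eigenvalue, if A were diagonalizable we would have A = P(3I)(P-inverse) = 3I. So it suffices to check that N = A – 3I is nonzero; as a bonus, N-squared turns out to be the zero matrix, which shows just how defective A is. A sketch, with a hypothetical `matmul` helper:

```python
# A = [2, i; i, 4] has the single eigenvalue 3. If A were diagonalizable,
# A would equal 3I. Checking that N = A - 3I is nonzero while N^2 = 0
# confirms that A is defective (not diagonalizable).

def matmul(A, B):
    """Multiply two 2x2 complex matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1j], [1j, 4]]
N = [[A[i][j] - (3 if i == j else 0) for j in range(2)] for i in range(2)]

assert any(N[i][j] != 0 for i in range(2) for j in range(2))  # N is not 0
N2 = matmul(N, N)
assert all(N2[i][j] == 0 for i in range(2) for j in range(2))  # but N^2 is
```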

Example: Diagonalize the matrix A = [4, 1 + i; 1 – i, 3] over C. Solution: The characteristic polynomial is lambda-squared – 7(lambda) + 10, so the eigenvalues of A are lambda1 = 2 and lambda2 = 5. Therefore, from our theory of diagonalization, we already know that A is diagonalizable. We find that the corresponding eigenvectors are v1 = [-1 – i; 2] and v2 = [1 + i; 1]. Thus, taking P = [-1 – i, 1 + i; 2, 1] gives (P-inverse)AP = [2, 0; 0, 5]. Notice that this is wonderful: we have taken a matrix with non-real entries and simplified it to a real matrix.
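The full computation (P-inverse)AP can also be carried out numerically. This is a supplementary sketch, not part of the lecture; the `matmul` and `inv2` helpers are hypothetical scaffolding, with `inv2` using the usual 2-by-2 adjugate formula.

```python
# Verify (P^-1)AP = diag(2, 5) for A = [4, 1+i; 1-i, 3].

def matmul(A, B):
    """Multiply two 2x2 complex matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

A = [[4, 1+1j], [1-1j, 3]]
P = [[-1-1j, 1+1j], [2, 1]]   # columns are the eigenvectors v1, v2
D = matmul(inv2(P), matmul(A, P))

# D should be diag(2, 5), up to floating-point rounding
assert abs(D[0][0] - 2) < 1e-12 and abs(D[1][1] - 5) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```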

Let’s do one more example that is a little more complex. Example: Diagonalize the matrix A = [i, 1 + i; 1 – i, 3i] over C. Solution: We find that the characteristic polynomial is lambda-squared – (4i)(lambda) – 5. By the quadratic formula, we get the eigenvalues of A are lambda1 = 1 + 2i and lambda2 = -1 + 2i. Therefore, A is diagonalizable. We find that corresponding eigenvectors are v1 = [1; 1] and v2 = [-i; 1]. Thus, taking P = [1, -i; 1, 1] gives (P-inverse)AP = [1 + 2i, 0; 0, -1 + 2i].
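The quadratic-formula step is worth seeing explicitly, since the coefficients are no longer real. The discriminant is (–4i)-squared + 20 = –16 + 20 = 4, so the square root is 2 and the roots are (4i ± 2)/2. A quick check, not part of the lecture, using Python's `cmath` module:

```python
import cmath

# Roots of the characteristic polynomial lambda^2 - 4i*lambda - 5
# via the quadratic formula, matching the eigenvalues found above.
b, c = -4j, -5
disc = cmath.sqrt(b*b - 4*c)       # sqrt((-4i)^2 + 20) = sqrt(4) = 2
roots = {(-b + disc) / 2, (-b - disc) / 2}
assert roots == {1 + 2j, -1 + 2j}
```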

## Properties of Complex Eigenvalues

We now look at a couple of theorems related to allowing the use of complex eigenvalues and eigenvectors. Theorem 11.3.1: If A is an n-by-n matrix with all real entries that has a non-real eigenvalue lambda with corresponding eigenvector z, then the conjugate of lambda is also an eigenvalue of A with corresponding eigenvector the conjugate of z.

Proof: We have Az = (lambda)z, so taking complex conjugates of both sides gives the conjugate of (Az) equals the conjugate of ((lambda)z). By properties of complex conjugates, this is (the conjugate of A)(the conjugate of z) = (the conjugate of lambda)(the conjugate of z). But A has all real entries, and so the conjugate of A just equals A, and so we have A(the conjugate of z) = (the conjugate of lambda)(the conjugate of z) as required.
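The theorem is easy to check numerically on the first example of this lecture, where the real matrix A = [0, 1; -1, 0] had the eigenpair (i, [i; -1]). This sketch is a supplement, not part of the lecture:

```python
# Check Theorem 11.3.1 on the real matrix A = [0, 1; -1, 0]:
# since (i, [i; -1]) is an eigenpair, (-i, [-i; -1]) should be one too.
A = [[0, 1], [-1, 0]]
lam, z = 1j, [1j, -1]

lam_c = lam.conjugate()
z_c = [complex(w).conjugate() for w in z]
Az_c = [A[0][0]*z_c[0] + A[0][1]*z_c[1],
        A[1][0]*z_c[0] + A[1][1]*z_c[1]]
assert all(abs(Az_c[k] - lam_c*z_c[k]) < 1e-12 for k in range(2))
```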

Corollary 11.3.2: If A is an n-by-n matrix with real entries, and n is odd, then A has at least one real eigenvalue. Proof: Since A is an n-by-n matrix, its characteristic polynomial C(lambda) has degree n. Then, by the Fundamental Theorem of Algebra, C(lambda) has exactly n roots, counted with multiplicity. Since C(lambda) has real coefficients, its non-real roots come in complex conjugate pairs, so when n is odd the non-real roots cannot account for all n roots. Thus, C(lambda) has at least one real root, and so A has at least one real eigenvalue.
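The heart of the corollary is a fact about odd-degree real polynomials: a monic cubic goes to minus infinity on the left and plus infinity on the right, so it must cross zero somewhere. A sketch of finding that real root by bisection, using the sample polynomial (lambda – 3)(lambda-squared + 1) = lambda-cubed – 3(lambda-squared) + lambda – 3, which has roots 3, i, and –i (this illustration is not part of the lecture):

```python
# A monic odd-degree real polynomial changes sign, so bisection
# on a bracketing interval locates a real root.

def p(t):
    """Sample cubic with roots 3, i, -i."""
    return t**3 - 3*t**2 + t - 3

lo, hi = -10.0, 10.0
assert p(lo) < 0 < p(hi)        # sign change guarantees a real root
for _ in range(100):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
assert abs(p(lo)) < 1e-9        # converged to the real root at 3
```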

Notice, in some circumstances, that Theorem 11.3.1 can help us find eigenvalues of a real matrix. Example: Given that lambda1 = i is an eigenvalue of A = [1, 2, 4; 1, 1, 2; -1, 2, 1], find the other eigenvalues of A. Solution: By Theorem 11.3.1, since lambda1 = i is an eigenvalue of A, lambda2 = the conjugate of i = –i is also an eigenvalue. We know by a theorem that the sum of the eigenvalues equals the trace of the matrix, and so the remaining eigenvalue must satisfy trace of A = 3 = lambda1 + lambda2 + lambda3 = i + (–i) + lambda3, and thus lambda3 = 3.
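As a supplementary check (not part of the lecture), we can confirm both the trace computation and, using the companion fact that the product of the eigenvalues equals the determinant, verify that the three eigenvalues i, –i, and 3 are consistent with this A:

```python
# A = [1, 2, 4; 1, 1, 2; -1, 2, 1] with claimed eigenvalues i, -i, 3.
A = [[1, 2, 4], [1, 1, 2], [-1, 2, 1]]

# Sum of the eigenvalues equals the trace.
trace = sum(A[k][k] for k in range(3))
assert trace == 3
assert 1j + (-1j) + 3 == trace

# Product of the eigenvalues equals the determinant (cofactor expansion).
det = (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
     - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
     + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))
assert det == 3            # = i * (-i) * 3
```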

Now that we have the basics of complex diagonalization down, we want to start working towards our next main goal: to mimic orthogonal diagonalization in the complex case. To do this, we will first need to define complex inner products, which we will do in the next lecture.

This ends this lecture.