## Transcript — Introduction

In the last lecture, we said that we would like to find bases in general n-dimensional vector spaces that are as nice as the standard basis in Rn. That is, we want bases in which every vector is orthogonal to the others and every vector has unit length. To make sense of this, we need the concepts of length and orthogonality in general vector spaces, and that is why we defined the concept of an inner product on a vector space. So, in this lecture, we will finally be able to define length and orthogonality.

We start with a theorem. Theorem 9.1.1: If V is an inner product space, then for any vector v in V, we have the inner product of v and the 0 vector equals 0. We will find this theorem very useful throughout the next couple of lectures.
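The proof is a one-line consequence of the linearity of the inner product in its second argument; a sketch:

```latex
\langle \vec{v}, \vec{0} \rangle
  = \langle \vec{v}, \vec{0} + \vec{0} \rangle
  = \langle \vec{v}, \vec{0} \rangle + \langle \vec{v}, \vec{0} \rangle
```

Subtracting the inner product of v and the 0 vector from both sides gives that it equals 0.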

We now define the length of a vector in an inner product space to match the definition we used in Rn under the dot product back in Linear Algebra I. Definition: Let V be an inner product space. For any vector v in V, we define the length of v (also sometimes called the norm) by the length of v equals the square root of the inner product of v with itself.

Let’s look at some examples. Let A = [1, 2; 3, 4] and B = [2, -1; 0, 3] be matrices in M(2-by-2)(R). Find the lengths of A and B. Since we have not specified another inner product, recall that this means we are using the standard inner product on M(2-by-2)(R), defined by the inner product of A and B equals the trace of (B-transpose)A. In the last lecture, we saw that, in practice, we do not calculate this inner product by actually multiplying out (B-transpose)A and taking the trace. Instead, we sum the products of the corresponding entries, mimicking the dot product.

Solution: The length of A equals the square root of the inner product of A with itself. This gives the square root of (1-squared + 2-squared + 3-squared + 4-squared), which is the square root of 30. The length of B equals the square root of the inner product of B with itself, so this gives the square root of (2-squared + (-1)-squared + 0-squared + 3-squared), which is the square root of 14.
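These computations are easy to check in code; a minimal sketch using NumPy, where the helper name `inner` is ours, just for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, -1.0], [0.0, 3.0]])

# Standard inner product on M(2-by-2)(R): <A, B> = tr(B^T A).
def inner(X, Y):
    return np.trace(Y.T @ X)

# The trace formula agrees with the entrywise "dot product" shortcut.
assert np.isclose(inner(A, B), np.sum(A * B))

norm_A = np.sqrt(inner(A, A))  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
norm_B = np.sqrt(inner(B, B))  # sqrt(4 + 1 + 0 + 9) = sqrt(14)
```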

Hmm, what does it mean, the length of the matrix A is the square root of 30, and the length of the matrix B is the square root of 14? Well, it doesn’t really mean anything. It is a concept, and we will use these values for calculation purposes, but as the next example demonstrates, we can actually change the length of a vector by changing the inner product being used.

Example: In R3, under the standard inner product, we have that the length of e1 = [1; 0; 0] is 1. However, under the inner product defined by the inner product of x and y equals ax1y1 + x2y2 + x3y3, where a is greater than 0, we get that the length of e1 is the square root of the inner product of e1 with itself, which is the square root of (a(1-squared) + 0-squared + 0-squared), which equals the square root of a. Therefore, we can make the length of e1 any positive real number we want by picking the appropriate value of a.
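A quick numerical check of this dependence on a; a sketch in plain Python, where the helper name `inner_a` is ours:

```python
from math import isclose, sqrt

# Weighted inner product on R^3: <x, y> = a*x1*y1 + x2*y2 + x3*y3, with a > 0.
def inner_a(x, y, a):
    return a * x[0] * y[0] + x[1] * y[1] + x[2] * y[2]

e1 = [1.0, 0.0, 0.0]
for a in (0.25, 1.0, 9.0):
    # The length of e1 under this inner product is sqrt(a).
    assert isclose(sqrt(inner_a(e1, e1, a)), sqrt(a))
```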

Let’s do one more example. Find the length of 1 in P2(R) under the inner product defined by the inner product of p(x) and q(x) is equal to p(-1)q(-1) + p(0)q(0) + p(1)q(1). Solution: We have the length of p(x) = 1 is the square root of the inner product of 1 with itself, which is the square root of (1-squared + 1-squared + 1-squared), and hence, the length of 1 is, of course, the square root of 3.
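The same computation in code; a sketch where polynomials are represented as Python functions and `inner` is our own helper:

```python
from math import isclose, sqrt

# Evaluation inner product on P2(R): <p, q> = p(-1)q(-1) + p(0)q(0) + p(1)q(1).
def inner(p, q):
    return sum(p(t) * q(t) for t in (-1, 0, 1))

one = lambda x: 1  # the constant polynomial p(x) = 1
length = sqrt(inner(one, one))  # sqrt(1^2 + 1^2 + 1^2) = sqrt(3)
assert isclose(length, sqrt(3))
```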

Definition: Let V be an inner product space. If v is a vector in V with length 1, then v is called a unit vector.

Not surprisingly, lengths in inner product spaces satisfy the same familiar properties we saw in Rn. Theorem 9.2.1: Let v and w be any two vectors in an inner product space V, and let t be any real number. Then:

1. The length of v is greater than or equal to 0, and the length of v equals 0 if and only if v is the 0 vector.
2. The length of tv is equal to the absolute value of t times the length of v.
3. The absolute value of the inner product of v and w is less than or equal to the length of v times the length of w (the Cauchy-Schwarz Inequality).
4. The length of (v + w) is less than or equal to the length of v plus the length of w (the Triangle Inequality).

Note that the proofs are essentially the same as the proofs of the corresponding properties in Rn under the dot product, and so we will omit the proofs here. In many situations, we will be given a vector v in an inner product space V, and need to find a unit vector v-hat in the direction of v. This is called normalizing the vector. Theorem 9.2.1 part 2 shows us that to calculate the unit vector v-hat in the direction of v, we just multiply v by (1 over the length of v).
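Normalizing can be sketched in a couple of lines; here is an illustration using the standard inner product on M(2-by-2)(R), with helper names of our own choosing:

```python
import numpy as np

def normalize(v, inner):
    """Return the unit vector v_hat = (1 / ||v||) * v in the direction of v."""
    return v / np.sqrt(inner(v, v))

inner = lambda X, Y: np.trace(Y.T @ X)  # standard inner product on M(2-by-2)(R)
A = np.array([[1.0, 2.0], [3.0, 4.0]])

A_hat = normalize(A, inner)             # A divided by its length, sqrt(30)
assert np.isclose(np.sqrt(inner(A_hat, A_hat)), 1.0)  # A_hat is a unit vector
```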

Next, we extend the concept of orthogonality to general vector spaces. Definition: Let V be an inner product space. If v and w are in V such that the inner product of v and w equals 0, then we say that v and w are orthogonal. If {v1 to vk} is a set of vectors in V such that the inner product of vi and vj equals 0 for all pairs i not equal to j, then the set {v1 to vk} is called an orthogonal set.

Example: In P2(R), define the inner product of p(x) and q(x) to be p(-1)q(-1) + p(0)q(0) + p(1)q(1). Determine whether p(x) = x and q(x) = 3x-squared – 2 are orthogonal. Solution: We have the inner product of x and 3x-squared – 2 equals (-1)(1) + 0(-2) + 1(1), which equals 0. Thus, x and 3x-squared – 2 are orthogonal.
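This orthogonality check is easy to replicate; a minimal sketch in Python (functions stand in for polynomials, and `inner` is our own helper):

```python
# Evaluation inner product on P2(R): <p, q> = p(-1)q(-1) + p(0)q(0) + p(1)q(1).
def inner(p, q):
    return sum(p(t) * q(t) for t in (-1, 0, 1))

p = lambda x: x
q = lambda x: 3 * x ** 2 - 2

# (-1)(1) + (0)(-2) + (1)(1) = 0, so p and q are orthogonal here.
assert inner(p, q) == 0
```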

Example: Let A = [1, 2; 3, -1], B = [-1, 2; -1, 0], and C = [2, 1; 1, 7]. Is the set {A, B, C} orthogonal in M(2-by-2)(R)? Solution: We have the inner product of A and B under the standard inner product for M(2-by-2)(R) is 1(-1) + 2(2) + 3(-1) + (-1)(0) = 0, so A and B are orthogonal. The inner product of A and C is 1(2) + 2(1) + 3(1) + (-1)(7), which is 0, and so A and C are orthogonal. But the inner product of B and C is (-1)(2) + 2(1) + (-1)(1) + 0(7), which equals -1, so B and C are not orthogonal. And hence, the set {A, B, C} is not an orthogonal set in M(2-by-2)(R) under the standard inner product.
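Checking every pair is exactly what code is good at; a sketch with NumPy, where `is_orthogonal_set` is a helper we wrote for illustration:

```python
import numpy as np

inner = lambda X, Y: np.trace(Y.T @ X)  # standard inner product on M(2-by-2)(R)

A = np.array([[1.0, 2.0], [3.0, -1.0]])
B = np.array([[-1.0, 2.0], [-1.0, 0.0]])
C = np.array([[2.0, 1.0], [1.0, 7.0]])

def is_orthogonal_set(vectors, inner):
    """A set is orthogonal iff every pair of distinct vectors has inner product 0."""
    return all(
        inner(vectors[i], vectors[j]) == 0
        for i in range(len(vectors))
        for j in range(i + 1, len(vectors))
    )

assert inner(A, B) == 0 and inner(A, C) == 0    # these two pairs are orthogonal
assert inner(B, C) == -1                        # but this pair is not...
assert not is_orthogonal_set([A, B, C], inner)  # ...so the set is not orthogonal
```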

Example: Show that, in R2, under the inner product defined by the inner product of x and y equals 2x1y1 – 2x1y2 – 2x2y1 + 4x2y2, the standard basis vectors e1 = [1; 0] and e2 = [0; 1] are not orthogonal. Solution: We have, by definition, the inner product of e1 and e2 equals 2(1)(0) – 2(1)(1) – 2(0)(0) + 4(0)(1), which equals -2. And so, they are not orthogonal under this inner product.
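The same check in code; a short sketch in plain Python with the inner product written out directly:

```python
# <x, y> = 2*x1*y1 - 2*x1*y2 - 2*x2*y1 + 4*x2*y2 on R^2
def inner(x, y):
    return 2 * x[0] * y[0] - 2 * x[0] * y[1] - 2 * x[1] * y[0] + 4 * x[1] * y[1]

e1, e2 = (1, 0), (0, 1)
assert inner(e1, e2) == -2  # nonzero, so e1 and e2 are not orthogonal here
```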

Thus, we see that, like length, orthogonality is dependent on the definition of the inner product. Even though which vectors are orthogonal in an inner product space depends on which inner product is being used, an orthogonal set in an inner product space still satisfies the properties we would expect.

Theorem 9.2.2: Let V be an inner product space. If {v1 to vk} is an orthogonal set in V, then (the length of (v1 + up to vk))-squared equals (the length of v1)-squared + up to (the length of vk)-squared. Notice that this is just a generalization of the Pythagorean Theorem. And I will leave the proof of this as an exercise.
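We can spot-check the generalized Pythagorean Theorem with the orthogonal pair A and B from the earlier matrix example; a sketch with NumPy:

```python
import numpy as np

inner = lambda X, Y: np.trace(Y.T @ X)  # standard inner product on M(2-by-2)(R)

# A and B are orthogonal, so ||A + B||^2 should equal ||A||^2 + ||B||^2.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
B = np.array([[-1.0, 2.0], [-1.0, 0.0]])

assert inner(A, B) == 0
assert np.isclose(inner(A + B, A + B), inner(A, A) + inner(B, B))  # 21 = 15 + 6
```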

## Nice Bases

Now that we have the concepts of orthogonality and length, we can proceed to define our desired nice bases. From the check-in, we see that, as usual, we have to be careful about the 0 vector. If we exclude the special case of the 0 vector, then we do get what we desire.

Theorem 9.2.3: Let V be an inner product space. If {v1 to vk} is an orthogonal set of non-zero vectors in V, then the set {v1 to vk} is linearly independent. Proof: As usual, to prove a set of vectors is linearly independent, we use the definition of linear independence. So we consider c1v1 + up to ckvk = the 0 vector, and we need to prove that all of the coefficients must be 0. But how are we going to do this when we don’t have any information about the entries of the vectors? In fact, the only things that we know about these vectors are that none of them is the 0 vector and that they are all orthogonal to each other. So the only thing we can do with this equation is take the inner product of both sides with one of the vectors.

So, for any vector vi, we have the inner product of 0 and vi is equal to 0, but the 0 vector is equal to c1v1 + up to ckvk, so 0 equals the inner product of (c1v1 + up to ckvk) and vi. Since the inner product is bilinear, this is equal to (c1 times the inner product of v1 and vi) + up to (c(i-1) times the inner product of v(i-1) and vi) + (ci times the inner product of vi with itself) + (c(i+1) times the inner product of v(i+1) and vi) + up to (ck times the inner product of vk and vi). But the set is orthogonal, and so the inner product of vi with any other vector in the set is 0. Also, we know the inner product of vi with itself is (the length of vi)-squared. Therefore, we have that ci((the length of vi)-squared) = 0. But vi is not the 0 vector, and so the length of vi does not equal 0, and hence, ci must be 0. Since this is valid for all i from 1 to k, we get that c1 = up to ck = 0 is the only solution, and so the set is linearly independent.
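The heart of the proof is that pairing with vi isolates ci: the inner product of v and vi equals ci times (the length of vi)-squared. We can see this numerically; a sketch under the dot product on R3, using an orthogonal set we made up for illustration:

```python
import numpy as np

# An orthogonal set of non-zero vectors in R^3 under the dot product.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])

# Form a linear combination, then recover each coefficient exactly as in
# the proof: <v, v_i> = c_i * ||v_i||^2, since all cross terms vanish.
c = [3.0, -2.0, 0.5]
v = c[0] * v1 + c[1] * v2 + c[2] * v3
for ci, vi in zip(c, (v1, v2, v3)):
    assert np.isclose(np.dot(v, vi) / np.dot(vi, vi), ci)
```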

This theorem shows us that if we have an orthogonal set of n non-zero vectors in an n-dimensional inner product space V, then the set is a basis for V. And hence, we make the following definition.

Definition: Let V be an inner product space. If {v1 to vn} is an orthogonal set in V that is a basis for V, then we call {v1 to vn} an orthogonal basis for V.

Recall that we wanted our special bases to have one additional property: that all of the vectors are unit vectors. Thus, we make one more definition.

Definition: Let V be an inner product space. If {v1 to vn} is an orthogonal set of unit vectors that is a basis for V, then {v1 to vn} is called an orthonormal basis for V.

In the next lecture, we will show that orthogonal and orthonormal bases are as nice and easy to use as we desire.