Lesson: Overdetermined Systems

Transcript

Another use of the Approximation Theorem is to find the best-fit solution to an inconsistent system. Specifically, we're going to be looking for a solution to an overdetermined system.

An overdetermined system is a system that has more equations than variables.

Now, overdetermined systems can have solutions, but that requires at least one of the equations to be a linear combination of the others. If the equations are independent, then an overdetermined system is always inconsistent. However, in real life we often want a solution anyway. So while we may not be able to find a perfect solution, we can use the Approximation Theorem to find a vector x that is as close as possible to being a solution.

As in the previous lecture, we will develop our technique while we work through an example. To that end, consider the system x + 3y = -2, 3x - y = 4, 2x + 2y = 1. First, let's see if there are any solutions to the system by row reducing its augmented matrix. We go through the row reduction steps, and the last row, [0, 0, 1], corresponds to the equation 0 = 1, which indicates that the system is inconsistent. Since we can't find an actual solution to the system, we will now try to find an approximate solution to the system.
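As a quick aside, we can confirm the inconsistency numerically. This sketch uses a rank comparison in NumPy rather than the row reduction shown in the lecture: the system Ax = b is consistent exactly when the augmented column adds no new direction, that is, when rank([A | b]) = rank(A).

```python
import numpy as np

# Coefficient matrix and constant vector from the example system
A = np.array([[1, 3],
              [3, -1],
              [2, 2]], dtype=float)
b = np.array([-2, 4, 1], dtype=float)

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))

# rank(A) = 2 but rank([A | b]) = 3, so b is not in the
# columnspace of A and the system is inconsistent.
print(rank_A, rank_aug)  # 2 3
```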

To that end, consider that a solution to our system is a vector x such that Ax = b, where A, the matrix [1, 3; 3, -1; 2, 2], is the coefficient matrix for our system, and b = [-2; 4; 1] is the vector of constants. We've already shown that no such x exists, and another way of phrasing that is to say that b is not in the columnspace of A. This is key, because now we have the subspace that we are trying to get closest to: the columnspace of A.

So since we can't find x such that Ax = b, we will instead look for the unique s in the columnspace of A that minimizes the distance between b and s. Now, since s is in the columnspace of A, we know that s = Ax for some x. And if we square the distance, we find ourselves in the same situation we had in the previous lecture: looking for an x that minimizes ||b - Ax||-quantity-squared.

So, using the exact same argument that we used in the last lecture, we know that x will equal (((A-transpose)A)-inverse)(A-transpose)b. In other words, x solves the normal equations (A-transpose)Ax = (A-transpose)b, which say exactly that the residual b - Ax is orthogonal to the columnspace of A. Again, that holds in general, but let's go back to our example. We'll start by calculating (A-transpose)A, which is [14, 4; 4, 14]. Next, we'll find ((A-transpose)A)-inverse using the matrix inverse algorithm. Here are the steps, and so we see that ((A-transpose)A)-inverse can be written as (1/90)[7, -2; -2, 7]. Now, this means that (((A-transpose)A)-inverse)(A-transpose) = (1/90)[1, 23, 10; 19, -13, 10]. And lastly, we have that x, which equals (((A-transpose)A)-inverse)(A-transpose)b, is the vector (1/90)[100; -80], which we can write as [10/9; -8/9].
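The hand computation above can be reproduced numerically. Here is a minimal NumPy sketch that forms (A-transpose)A, inverts it, and applies the result to (A-transpose)b, mirroring the lecture's steps:

```python
import numpy as np

A = np.array([[1, 3],
              [3, -1],
              [2, 2]], dtype=float)
b = np.array([-2, 4, 1], dtype=float)

AtA = A.T @ A                 # (A-transpose)A, a 2x2 matrix
AtA_inv = np.linalg.inv(AtA)  # ((A-transpose)A)-inverse

# x = (((A-transpose)A)-inverse)(A-transpose)b
x = AtA_inv @ A.T @ b
print(x)  # approximately [1.1111, -0.8889], i.e. [10/9, -8/9]
```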

So this vector x is the vector that minimizes the distance between b and Ax. We see from our example that finding an approximate solution to the system Ax = b is the same as finding a vector x that minimizes the distance-squared between b and Ax. And using the same argument that we made when finding the least squares approximation, we know that this x is (((A-transpose)A)-inverse)(A-transpose)b.
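As a final check, a library least-squares routine gives the same answer, and the residual b - Ax turns out to be orthogonal to every column of A, which is precisely what makes Ax the closest point in the columnspace to b. A minimal NumPy sketch:

```python
import numpy as np

A = np.array([[1, 3],
              [3, -1],
              [2, 2]], dtype=float)
b = np.array([-2, 4, 1], dtype=float)

# Least-squares solution via NumPy's built-in routine
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# The residual should be orthogonal to the columnspace of A,
# so (A-transpose)(b - Ax) should be the zero vector.
residual = b - A @ x
print(x)               # the least-squares solution
print(A.T @ residual)  # approximately [0, 0]
```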