Lesson: Span and Linear Independence in Vector Spaces

One of the common ways to define a subspace is to think of it as the set of all linear combinations of a set of vectors. First, let’s verify that such a set really is a subspace.

Theorem 4.2.2: If {v1, ..., vk} is a set of vectors in a vector space V, and S is the set of all possible linear combinations of these vectors, then S is a subspace of V.

To prove this theorem, let’s check our three subspace properties.

S0: Since V is closed under addition and scalar multiplication, every linear combination t1v1 + ... + tkvk is an element of V, and thus S is a subset of V. And S is not empty since, at the very least, v1 is in S.

S1: Let x = s1v1 + ... + skvk and y = t1v1 + ... + tkvk be elements of S. Then x + y = (s1v1 + ... + skvk) + (t1v1 + ... + tkvk). But addition in our vector space is associative and commutative, so this equals s1v1 + t1v1 + ... + skvk + tkvk, which distributivity lets us regroup as (s1 + t1)v1 + ... + (sk + tk)vk. And so we see that x + y is an element of S.

S2: Let x = s1v1 + ... + skvk again be an element of S, and let t be a real number. Then tx = t(s1v1 + ... + skvk). But scalar multiplication is distributive, so this equals t(s1v1) + ... + t(skvk), and by the associativity of scalar multiplication, this equals (ts1)v1 + ... + (tsk)vk. And so we have written tx as a linear combination of the vectors v1, ..., vk, so tx is an element of S.

Since we’ve shown that properties S0, S1, and S2 hold, S is a subspace of V.
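For readers who prefer sigma notation, here is a compact restatement of the two closure computations above (a summary added here, equivalent to the written-out sums):

\[
x + y = \sum_{i=1}^{k} s_i v_i + \sum_{i=1}^{k} t_i v_i = \sum_{i=1}^{k} (s_i + t_i) v_i \in S,
\qquad
tx = t \sum_{i=1}^{k} s_i v_i = \sum_{i=1}^{k} (t s_i) v_i \in S.
\]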

For example, the set of all 2-by-2 diagonal matrices is a subspace of M(2, 2), and hence a vector space, since it is the set of all possible linear combinations of the matrices [1, 0; 0, 0] and [0, 0; 0, 1] in M(2, 2).
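Spelling that example out (the decomposition itself is not written in the lecture, but it follows directly from the claim): every 2-by-2 diagonal matrix decomposes as

\[
\begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix}
= a \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
+ d \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},
\]

and conversely any linear combination of those two matrices is diagonal, so the two descriptions of the set agree.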

As we did with Rn and M(m, n), we will call the set of all linear combinations the span. If S is the subspace of the vector space V consisting of all linear combinations of the vectors v1, ..., vk in V, then S is called the subspace spanned by the set B = {v1, ..., vk}, and we say that the set B spans S. The set B is called a spanning set for the subspace S. We write S = Span{v1, ..., vk}, or S = Span B.
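As a quick illustration of the notation (an example added here, not from the lecture): in R3,

\[
\mathrm{Span}\{(1, 0, 0),\, (0, 1, 0)\}
= \{\, t_1(1, 0, 0) + t_2(0, 1, 0) : t_1, t_2 \in \mathbb{R} \,\}
= \{\, (t_1, t_2, 0) : t_1, t_2 \in \mathbb{R} \,\},
\]

which is the xy-plane, with B = {(1, 0, 0), (0, 1, 0)} as a spanning set for it.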

We also extend our definitions of linear dependence and linear independence to general vector spaces. If B = {v1, ..., vk} is a set of vectors in a vector space V, then B is said to be linearly independent if the only solution to the equation t1v1 + ... + tkvk = 0 (the 0 vector) is t1 = ... = tk = 0. Otherwise, B is said to be linearly dependent.
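For a short worked check of this definition (an example added here, not from the lecture): in P2, take B = {1 + x, 1 − x}. The equation

\[
t_1(1 + x) + t_2(1 - x) = 0
\]

collects to (t1 + t2) + (t1 − t2)x = 0, so t1 + t2 = 0 and t1 − t2 = 0, which forces t1 = t2 = 0. The only solution is the trivial one, so B is linearly independent.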

I won’t go through further examples of spanning sets and linear independence, since we’ve already looked at these extensively in our studies of Rn, M(m, n), and Pn. But we do now have an opportunity to prove a statement about linear independence that holds in all vector spaces.

So here is a course author’s theorem: any set of vectors that contains the 0 vector is linearly dependent.

Proof: Let V be a vector space, and let A = {0, x1, x2, ..., xk} be a set of vectors from V that contains the 0 vector. To see that A is linearly dependent, we need to find a non-trivial solution to the equation t0(0) + t1x1 + ... + tkxk = 0. I claim that setting t0 = 1 and t1 = ... = tk = 0 is such a solution. First, we note that the scalar multiplicative identity property tells us that 1 times the 0 vector equals the 0 vector, so setting t0 = 1 means that we can replace t0(0) with the 0 vector. Next, we note that, by Theorem 4.2.1, 0xi equals the 0 vector for each i from 1 to k, so setting t1 = ... = tk = 0 means that we can replace each tixi with the 0 vector. And so our equation becomes 0 + 0 + 0 + ... + 0 = 0, a sum of 0 vectors, which is true thanks to repeated uses of the additive identity property. Since the coefficient t0 = 1 is non-zero, this solution is non-trivial, and so A is linearly dependent.
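As a concrete instance of this theorem (an illustration added here, not from the lecture): in R2, the set {(0, 0), (1, 2)} is linearly dependent, because

\[
1 \cdot (0, 0) + 0 \cdot (1, 2) = (0, 0)
\]

is a solution with a non-zero coefficient on the 0 vector.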
