# Lesson: Span, Linear Independence, and Basis


## Transcript

The most common way to define a subspace is as the set of all possible linear combinations of a set of vectors. First, let us verify that this is still a subspace. So I’ll call this Theorem 9.3.a. If {v1 through vk} is a set of vectors in a vector space V over C, and if S is the set of all possible linear combinations of these vectors, that is, S equals the set {(alpha1)v1 + through to (alpha_k)vk for (alpha1) through (alpha_k) in C}, then S is a subspace of V.

To prove this, we need to look at our three properties. Property S0: Well, since V is closed under addition and scalar multiplication, we know that every (alpha1)v1 + through to (alpha_k)vk is an element of V, and thus, S is a subset of V. And S is not empty since, at the very least, the vector v1 = (1)v1 + (0)v2 + through to (0)vk is an element of S.

Now we can look at property S1. Let’s let w = (alpha1)v1 + through to (alpha_k)vk, and we’ll let z = (gamma1)v1 + through to (gamma_k)vk, and these are both elements of S. Then their sum w + z is going to be ((alpha1)v1 + through to (alpha_k)vk) + ((gamma1)v1 + through to (gamma_k)vk). But using properties V2 and V5 from our vector space (that is, the associative and commutative properties of vector addition), we know that we can rewrite this as ((alpha1)v1 + (gamma1)v1) + through to ((alpha_k)vk + (gamma_k)vk), and then using property V8 (that’s the scalar distributive property), we can regroup this as (alpha1 + gamma1)v1 + through to (alpha_k + gamma_k)vk. And so we see that w + z is an element of S.

Finally, we can look at property S2. So we’ll just let z = (alpha1)v1 + through to (alpha_k)vk be an element of S, and we’ll let gamma be any complex number. Then (gamma)z will equal (gamma)((alpha1)v1 + through to (alpha_k)vk), but we can make repeated use of property V9 of our vector space to get that this equals (gamma)((alpha1)v1) + through to (gamma)((alpha_k)vk), and now using property V7, we see that this equals ((gamma)(alpha1))v1 + through to ((gamma)(alpha_k))vk, and since all of our ((gamma)(alpha_i))’s are complex numbers themselves, we see that (gamma)z satisfies the definition of being an element of S. And since properties S0, S1, and S2 hold, S is a subspace of V.
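The closure computations in properties S1 and S2 can be checked numerically. Here is a minimal sketch in Python, whose built-in complex type gives us exact arithmetic with scalars from C; the vectors v1, v2 and the scalar values below are made-up illustrative choices, not from the lecture.

```python
# Numerical check of the closure arguments in Theorem 9.3.a, using
# Python's built-in complex type for scalars from C.
# The vectors and scalars here are illustrative, not from the lecture.

def lin_comb(coeffs, vecs):
    """Return the linear combination sum of coeffs[i] * vecs[i], componentwise."""
    n = len(vecs[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vecs)) for j in range(n))

v1 = (1 + 2j, -1j)
v2 = (3 - 1j, 2 + 0j)
alphas = (2 - 1j, 1j)
gammas = (-1 + 1j, 4 + 0j)

w = lin_comb(alphas, (v1, v2))
z = lin_comb(gammas, (v1, v2))

# Property S1: w + z is the linear combination with coefficients alpha_i + gamma_i.
w_plus_z = tuple(a + b for a, b in zip(w, z))
assert w_plus_z == lin_comb([a + g for a, g in zip(alphas, gammas)], (v1, v2))

# Property S2: gamma * z is the linear combination with coefficients gamma * gamma_i.
gamma = 5 - 3j
gamma_z = tuple(gamma * zi for zi in z)
assert gamma_z == lin_comb([gamma * g for g in gammas], (v1, v2))
```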

So you’ll note that this theorem I’ve created is exactly the same as Theorem 4.2.2, except that our scalars are now from C, not R. We can do this because our proof only relies on vector space properties. As we will find many similarities between vector spaces over R and vector spaces over C, we sometimes use Greek letters to indicate our scalars to remind us that they are from C and not from R. We’ve already been using the Greek letter alpha for this purpose, and in this proof, I also used the Greek letter gamma. If you’re ever curious about a symbol used in lecture, just ask your instructor.

At this point, though, we’ll continue with our theme of “vector spaces over C are just like vector spaces over R”, and now define spanning sets, linear independence, and bases.

If S is the subspace of the vector space V over C consisting of all linear combinations of the vectors v1 through vk in V, then S is called the subspace spanned by the set B = {v1 through vk}, and we say that the set B spans S. The set B is called a spanning set for the subspace S, and we denote S by writing S = Span{v1 through vk}, or S = Span B.

If B = {v1 through vk} is a set of vectors in a vector space V over C, then B is said to be linearly independent if the only solution to the equation (alpha1)v1 + through to (alpha_k)vk equals the 0 vector is the trivial one, where alpha1 through alpha_k are all equal to 0. Otherwise, B is said to be linearly dependent.
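To make the definition concrete, here is a small illustrative check (the set {[1; i], [i; -1]} is my own example, not from the lecture): since i[1; i] = [i; -1], the combination (i)v1 + (-1)v2 equals the zero vector with non-trivial coefficients, so this set is linearly dependent.

```python
# Illustrative example (not from the lecture): the set {[1; i], [i; -1]}
# in C^2 is linearly dependent, since i*[1; i] = [i; -1], so the
# combination i*v1 + (-1)*v2 is 0 with non-trivial coefficients.

v1 = (1 + 0j, 1j)
v2 = (1j, -1 + 0j)

combo = tuple(1j * a + (-1) * b for a, b in zip(v1, v2))
assert combo == (0j, 0j)  # a non-trivial solution exists, so the set is dependent
```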

And a set B of vectors in a vector space V over C is a basis for V if it is a linearly independent spanning set for V. And as before, we like bases because they allow us to write every vector in V as a unique linear combination of the basis vectors.

The Unique Representation Theorem: Let B = {v1 through vn} be a spanning set for a vector space V over C. Then every vector in V can be expressed in a unique way as a linear combination of the vectors of B if and only if the set B is linearly independent. As with Theorem 9.3.a, the proof of this theorem can be copied straight from its counterpart in R, which is Theorem 4.3.1, and so I won’t bother to copy it here.
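As a quick illustration of the theorem (with a basis of my own choosing, not from the lecture), consider B = {[1; 0], [i; 1]} in C2. Back-substitution gives a closed-form, hence unique, pair of coefficients for any [z1; z2]: the second component forces beta = z2, and then the first forces alpha = z1 - (i)z2.

```python
# Illustrative sketch (my own example, not from the lecture): for the
# linearly independent spanning set B = {[1; 0], [i; 1]} of C^2,
# back-substitution gives a unique coefficient pair for any [z1; z2].

def coords(z1, z2):
    """Return the unique (alpha, beta) with alpha*[1;0] + beta*[i;1] = [z1;z2]."""
    beta = z2                # second component: beta * 1 = z2
    alpha = z1 - 1j * beta   # first component: alpha + beta * i = z1
    return alpha, beta

z1, z2 = 3 - 2j, 1 + 4j
alpha, beta = coords(z1, z2)
# Check that the combination really reproduces [z1; z2].
assert (alpha * 1 + beta * 1j, alpha * 0 + beta * 1) == (z1, z2)
```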

So not only are the definitions for these basic linear algebra concepts the same, but the way we go about determining them is the same, so let’s look at some examples.

Let A be the set of vectors {[1; -2 + i], [1 + i; -3 - i], [1 + i; -3], [5 - 2i; -5 + 9i]}. Let’s determine whether or not A is a spanning set for C2 and whether it is linearly independent, and use these results to determine if A is a basis for C2.

First, let’s see if A is a spanning set for C2. To do this, we take an arbitrary element [z1; z2] of C2 and see if we can find scalars alpha1, alpha2, alpha3, and alpha4 in C such that (alpha1)[1; -2 + i] + (alpha2)[1 + i; -3 - i] + (alpha3)[1 + i; -3] + (alpha4)[5 - 2i; -5 + 9i] = [z1; z2]. If we compute the linear combination on the left, we see this is the same as checking the following vector equality. Does the vector [alpha1 + (alpha2)(1 + i) + (alpha3)(1 + i) + (alpha4)(5 - 2i); (alpha1)(-2 + i) + (alpha2)(-3 - i) + (alpha3)(-3) + (alpha4)(-5 + 9i)] equal the vector [z1; z2]? By the definition of vector equality, this equation is equivalent to the following system of equations, so we need our first components to be equal and our second components to be equal.

To solve this system of linear equations, we need to row reduce its augmented matrix. So here, we’ll construct the augmented matrix, and if we perform the row operation (row 2 + (2 - i)(row 1)), we get this matrix. Our new matrix is in row echelon form, and since there are no bad rows no matter what our choice of z1 and z2, we see that our system does have a solution. So this means that [z1; z2] is in the span of A for any [z1; z2] in C2, so A is a spanning set for C2.
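The single row operation above is easy to verify with Python's built-in complex numbers. This sketch checks that Row 2 + (2 - i)(Row 1) zeroes out the first two entries of the second row of the coefficient matrix, leaving [0, 0, i, 3]; the augmented entry becomes z2 + (2 - i)z1, which stays symbolic and so is only noted in a comment.

```python
# Check the row operation from the example: Row 2 + (2 - i)(Row 1).
# Rows of the coefficient matrix, as lists of Python complex numbers.
row1 = [1 + 0j, 1 + 1j, 1 + 1j, 5 - 2j]
row2 = [-2 + 1j, -3 - 1j, -3 + 0j, -5 + 9j]

new_row2 = [b + (2 - 1j) * a for a, b in zip(row1, row2)]

# The augmented column entry would become z2 + (2 - i)z1 under the
# same operation; it is symbolic, so we only check the coefficients.
assert new_row2 == [0j, 0j, 1j, 3 + 0j]
```

With two leading entries in four columns, this confirms the rank-2 row echelon form the lecture describes.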

Now let’s determine if A is linearly independent. That means we need to determine if there are any non-trivial solutions to the equation (alpha1)[1; -2 + i] + (alpha2)[1 + i; -3 - i] + (alpha3)[1 + i; -3] + (alpha4)[5 - 2i; -5 + 9i] = the 0 vector. Well, this is the same equation we looked at to determine whether or not A is a spanning set, except that z1 and z2 are now set to 0. So, using the results from our previous work, we see that the solution to this equation can be found by looking at the matrix seen here.

At this time, our concern is not whether this equation has a solution, since we already know that the trivial solution will be a solution, but rather how many solutions there are. We see from this row echelon form matrix that the rank of the coefficient matrix is 2, while there are 4 columns, so the general solution will contain 4 - 2 = 2 parameters. As such, this means that the trivial solution is not the only solution, so the set A is not linearly independent. And since A is not linearly independent, it is not a basis for C2.
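We can also exhibit one non-trivial solution explicitly. Back-substituting in the row echelon form with the free variables chosen as alpha2 = 0 and alpha4 = 1 (this particular choice is mine, not from the lecture) gives alpha1 = -2 - i and alpha3 = 3i, and this sketch confirms that the resulting combination really is the zero vector.

```python
# One explicit non-trivial solution for the dependence equation on the
# set A, obtained by back-substitution with alpha2 = 0, alpha4 = 1
# (this particular choice of free variables is illustrative).

vecs = [(1 + 0j, -2 + 1j),   # [1; -2 + i]
        (1 + 1j, -3 - 1j),   # [1 + i; -3 - i]
        (1 + 1j, -3 + 0j),   # [1 + i; -3]
        (5 - 2j, -5 + 9j)]   # [5 - 2i; -5 + 9i]

alphas = (-2 - 1j, 0j, 3j, 1 + 0j)

combo = tuple(sum(a * v[j] for a, v in zip(alphas, vecs)) for j in range(2))
assert combo == (0j, 0j)  # a non-trivial combination gives 0, so A is dependent
```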