In the previous lecture, we saw that for a subset $S \subseteq V$ (where $V$ is a vector space), the span of $S$, written $\mathrm{Span}(S)$, is a subspace of $V$, and so $\mathrm{Span}(S)$ is itself a vector space. We also answered the question of whether a particular vector $w$ belongs to $\mathrm{Span}(S)$ by asking whether we can write $w$ as a linear combination of the elements of $S$. When $S$ is a finite collection of elements $\{u_1, u_2, \dots, u_k\}$, we did this by writing $w$ as a linear combination of the elements of $S$ as follows,

$$w = x_1 u_1 + x_2 u_2 + \dots + x_k u_k.$$

Writing $w$ as a linear combination above requires solving a system of linear equations with $k$ variables $(x_1, x_2, \dots, x_k)$. If the system has a solution, then $w$ can be written as a linear combination of the elements of $S$, so $w$ belongs to $\mathrm{Span}(S)$. If the system has no solution, then $w$ can't be written as a linear combination of the elements of $S$, so $w$ doesn't belong to $\mathrm{Span}(S)$.

This question can also be turned around: we can phrase it in terms of the span of the columns of a matrix. Suppose we're given a linear system of equations with coefficient matrix $A$,

$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}.$$

We can instead view this matrix as a collection of $n$ column vectors in $\mathbb{R}^m$. The span of these vectors is a subspace of $\mathbb{R}^m$, which we call the Column Space of $A$, written $\mathrm{Col}(A)$.

Definition
Given a vector $\bar{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n$, we define the following operation:
$$A\bar{x} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 \begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + \dots + x_n \begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix}.$$
This product is in the column space of $A$ because it is a linear combination of the columns of $A$.


Note that this operation works only if the vector has as many entries as $A$ has columns.

Example 1

The following matrix has three column vectors in $\mathbb{R}^2$. The product below is a linear combination of the columns of the matrix.

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = 1\begin{pmatrix} 1 \\ 4 \end{pmatrix} + 1\begin{pmatrix} 2 \\ 5 \end{pmatrix} + 1\begin{pmatrix} 3 \\ 6 \end{pmatrix} = \begin{pmatrix} 6 \\ 15 \end{pmatrix}.$$
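The same computation can be checked numerically. Here is a minimal sketch with NumPy, using the matrix and vector from the example above, confirming that `A @ x` is exactly the linear combination of the columns of $A$ with weights $x_j$:

```python
import numpy as np

# Matrix and vector from Example 1
A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)
x = np.array([1, 1, 1], dtype=float)

# A @ x is the matrix-vector product defined above
product = A @ x

# Build the same combination column by column: x1*col1 + x2*col2 + x3*col3
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(product)                      # [ 6. 15.]
print(np.allclose(product, combo))  # True
```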


An Observation

Given $\bar{b} \in \mathbb{R}^m$ and viewing $\bar{x}$ as a variable, the equation

$$A\bar{x} = \bar{b}$$

is equivalent to the linear system with augmented matrix $(A \mid \bar{b})$.

Test

Let's verify the above observation. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$ and $\bar{b} = \begin{pmatrix} 2 \\ \pi \end{pmatrix}$. Then,

$$A\bar{x} = \bar{b} \iff \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ \pi \end{pmatrix} \iff x_1\begin{pmatrix} 1 \\ 4 \end{pmatrix} + x_2\begin{pmatrix} 2 \\ 5 \end{pmatrix} + x_3\begin{pmatrix} 3 \\ 6 \end{pmatrix} = \begin{pmatrix} 2 \\ \pi \end{pmatrix} \iff \begin{pmatrix} x_1 + 2x_2 + 3x_3 \\ 4x_1 + 5x_2 + 6x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ \pi \end{pmatrix}.$$

This is a linear system of equations with augmented matrix

$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 4 & 5 & 6 & \pi \end{array}\right).$$

So $A\bar{x} = \bar{b}$ is consistent if and only if $\bar{b} \in \mathrm{Col}(A)$. Again, answering whether $\bar{b}$ is in the span of the columns of $A$ is the same as answering whether the system has a solution.
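One way to test this consistency condition numerically is the rank criterion: $A\bar{x} = \bar{b}$ is consistent exactly when appending $\bar{b}$ as an extra column does not increase the rank. A sketch using the $A$ and $\bar{b}$ from the Test above ($\pi$ as a float):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)
b = np.array([2, np.pi])

# b is in Col(A) exactly when rank(A) == rank(A | b)
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

consistent = (rank_A == rank_Ab)
print(consistent)  # True: A has rank 2, so Col(A) is all of R^2
```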

Linear Dependence and Linear Independence

Suppose $W$ is a subspace of $V$. We know that spans are subspaces, but is $W$ the span of some collection of elements? And if so, what is the smallest number $k$ such that $W$ can be written as

$$W = \mathrm{Span}(\{w_1, w_2, \dots, w_k\})?$$

The answer to this question could be that there is no such $k$ (for example, no finite set spans the vector space of all continuous functions).

Definition
A subset $\{u_1, u_2, \dots, u_k\} \subseteq V$ is linearly dependent if there are scalars $a_1, \dots, a_k$, not all zero, such that $a_1u_1 + a_2u_2 + \dots + a_ku_k = \bar{0}$.


Note here that "not linearly dependent" is equivalent to "linearly independent". In particular, the set $\{u_1, u_2, \dots, u_k\}$ is linearly independent if and only if

$$a_1u_1 + a_2u_2 + \dots + a_ku_k = \bar{0}$$

is true only when $a_1 = 0, a_2 = 0, \dots, a_k = 0$.

Example 0

Consider $\{\bar{0}\}$. This set is linearly dependent because we can choose a scalar $a_1 \neq 0$ such that $a_1\bar{0} = \bar{0}$.



Example 1

Given a vector $u$ with $u \neq \bar{0}$, the set $\{u\}$ is linearly independent.

Proof: We need to prove that $au = \bar{0}$ only if $a = 0$. So suppose that $au = \bar{0}$. We have two cases. Case one is $a = 0$, and we're done. Case two is $a \neq 0$. If $a \neq 0$, then

$$\frac{1}{a}(au) = \frac{1}{a}\bar{0} \implies u = \bar{0}.$$

But we know that $u \neq \bar{0}$, so case two is impossible. Therefore $a$ must be zero, and we're done.



Example 2

Given vectors $u_1, u_2$, prove that $\{u_1, u_2\}$ is linearly dependent if and only if one vector is a scalar multiple of the other.

Proof: For the forward direction, suppose that $\{u_1, u_2\}$ is linearly dependent. This means that there are scalars $a_1, a_2$, not both zero, such that

$$a_1u_1 + a_2u_2 = \bar{0}.$$

Without loss of generality, suppose that $a_1 \neq 0$. Then,

$$a_1u_1 + a_2u_2 = \bar{0} \implies a_1u_1 = -a_2u_2 \implies u_1 = -\frac{a_2}{a_1}u_2.$$

For the backward direction, suppose that one vector is a scalar multiple of the other. Without loss of generality, suppose it is $u_1$, so $u_1 = cu_2$ for some scalar $c$. We can rewrite this as follows,

$$u_1 = cu_2 \implies u_1 + (-c)u_2 = \bar{0}.$$

From this we see that there are scalars $a_1 = 1$ and $a_2 = -c$, not both zero, such that $a_1u_1 + a_2u_2 = \bar{0}$, which means that $\{u_1, u_2\}$ is linearly dependent, as we wanted to show.
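Example 2's criterion can be checked numerically: two vectors are linearly dependent exactly when the matrix having them as rows has rank at most 1. A small sketch (the vector values below are hypothetical, chosen just for illustration):

```python
import numpy as np

def dependent_pair(u1, u2):
    """Two vectors are linearly dependent exactly when stacking them
    as the rows of a matrix gives a matrix of rank at most 1."""
    return np.linalg.matrix_rank(np.vstack([u1, u2])) <= 1

# Hypothetical vectors for illustration
print(dependent_pair([1, 2, 3], [2, 4, 6]))  # True: u2 = 2*u1
print(dependent_pair([1, 2, 3], [1, 2, 4]))  # False
```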



Example 3

Determine whether the following set of vectors

$$\{u_1, u_2, u_3\} = \{(1,-1,2),\ (1,2,-1),\ (5,1,4)\}$$

is linearly dependent.

The set of vectors is linearly independent if and only if $a_1u_1 + a_2u_2 + a_3u_3 = \bar{0}$ implies that $a_1 = 0$, $a_2 = 0$ and $a_3 = 0$. So,

$$a_1u_1 + a_2u_2 + a_3u_3 = \bar{0} \implies a_1(1,-1,2) + a_2(1,2,-1) + a_3(5,1,4) = (0,0,0) \implies (a_1 + a_2 + 5a_3,\; -a_1 + 2a_2 + a_3,\; 2a_1 - a_2 + 4a_3) = (0,0,0).$$

This is equivalent to solving the following system:

$$\begin{aligned} a_1 + a_2 + 5a_3 &= 0 \\ -a_1 + 2a_2 + a_3 &= 0 \\ 2a_1 - a_2 + 4a_3 &= 0. \end{aligned}$$

$\{u_1, u_2, u_3\}$ is linearly independent if and only if this system has only the trivial solution $(0,0,0)$. Note that $(0,0,0)$ is always a solution, which is why it is called the trivial solution. So the question is: how do we tell from the row echelon form (REF) of the matrix whether $(0,0,0)$ is the only solution or there are non-zero solutions as well?

If every column of the REF (besides the last) has a leading entry, then the system has a unique solution and the set is linearly independent. If some column (besides the last) has no leading entry, then the system has infinitely many solutions (so not just the zero vector) and the set is linearly dependent. So we want every column to have a leading entry.

$$\left(\begin{array}{ccc|c} 1 & 1 & 5 & 0 \\ -1 & 2 & 1 & 0 \\ 2 & -1 & 4 & 0 \end{array}\right)$$

This will eventually be

$$\left(\begin{array}{ccc|c} 1 & 1 & 5 & 0 \\ 0 & 3 & 6 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$$

The third column has no leading entry, so there are infinitely many solutions besides the zero solution, and the set is linearly dependent.
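The leading-entry test can also be carried out with a computer algebra system: the `rref` routine returns both the reduced row echelon form and the list of pivot (leading-entry) columns. A sketch with SymPy, using hypothetical vectors where the third column is the sum of the first two:

```python
from sympy import Matrix

# Hypothetical vectors stacked as the columns of A (col3 = col1 + col2)
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 0]])

# rref() returns the reduced row echelon form and the pivot columns
R, pivot_cols = A.rref()

# Independent exactly when every column has a leading entry (pivot)
independent = (len(pivot_cols) == A.cols)
print(independent)  # False: the third column has no pivot
```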



Example 4

Consider $\{\sin(x), \cos(x)\} \subseteq F(\mathbb{R})$. Is this set linearly dependent or independent?

By Example 2, the set is linearly dependent if and only if one function is a scalar multiple of the other. But two functions are equal only when they take the same value at every point.

If $\cos(x) = a\sin(x)$ for some scalar $a$, then evaluating at $x = 0$ gives $1 = a \cdot 0 = 0$, a contradiction. Similarly, if $\sin(x) = a\cos(x)$, then evaluating at $x = \pi/2$ gives $1 = a \cdot 0 = 0$. Therefore neither function is a scalar multiple of the other, and the set is linearly independent.
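The evaluation argument can be spot-checked numerically: if $\sin$ were a scalar multiple of $\cos$, the scalar would be forced to equal $\sin(x)/\cos(x) = \tan(x)$ at every point where $\cos(x) \neq 0$, but $\tan$ is not constant. A minimal sketch:

```python
import math

# The ratio sin(x)/cos(x) would have to be one fixed scalar a for all x,
# but it is tan(x), which takes different values at different points.
a_at_0 = math.sin(0.0) / math.cos(0.0)  # 0.0
a_at_1 = math.sin(1.0) / math.cos(1.0)  # tan(1), roughly 1.557

print(a_at_0 == a_at_1)  # False: no single scalar a works everywhere
```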



Theorem
If $\{u_1, \dots, u_k\} \subseteq V$ is linearly dependent, then there exists some $u_j$ that can be expressed as a linear combination of the others.


Proof: Suppose the set $\{u_1, \dots, u_k\} \subseteq V$ is linearly dependent. This means that for some scalars $a_1, \dots, a_k$, not all zero, we must have

$$a_1u_1 + \dots + a_ju_j + \dots + a_ku_k = \bar{0}.$$

Say $a_j \neq 0$ for some $j$. Multiplying the equation by $1/a_j$,

$$\begin{aligned} \frac{1}{a_j}(a_1u_1 + \dots + a_ju_j + \dots + a_ku_k) &= \frac{1}{a_j}\bar{0} \\ \frac{a_1}{a_j}u_1 + \dots + u_j + \dots + \frac{a_k}{a_j}u_k &= \bar{0} \\ u_j &= -\frac{a_1}{a_j}u_1 - \dots - \frac{a_k}{a_j}u_k \end{aligned}$$
(with the $j$-th term omitted from the last right-hand side).

Therefore, uj is a linear combination of the other vectors as we wanted to show.

Theorem (Refinement Theorem)
Suppose $\{u_1, \dots, u_k\} \subseteq V$ is linearly dependent. Then there is a subset $\{u_{i_1}, \dots, u_{i_l}\}$ which is linearly independent and satisfies $\mathrm{Span}(\{u_{i_1}, \dots, u_{i_l}\}) = \mathrm{Span}(\{u_1, \dots, u_k\})$.


Proof: Suppose the set $\{u_1, \dots, u_k\} \subseteq V$ is linearly dependent. By the previous theorem, there exists some $j$ where $u_j$ is a linear combination of the other vectors in the set. So,

$$u_j = b_1u_1 + \dots + b_{j-1}u_{j-1} + b_{j+1}u_{j+1} + \dots + b_ku_k.$$

Now, delete $u_j$ from this set. We claim that

$$\mathrm{Span}(\{u_1, \dots, u_j, \dots, u_k\}) = \mathrm{Span}(\{u_1, \dots, \hat{u_j}, \dots, u_k\})$$

(the hat symbol means that the vector has been deleted). To show this, we will show that each span is a subset of the other, which implies that the two spans are equal.

Now, given a vector $\bar{u} = a_1u_1 + \dots + a_ju_j + \dots + a_ku_k$ in $\mathrm{Span}(\{u_1, \dots, u_j, \dots, u_k\})$, we want to show that $\bar{u} \in \mathrm{Span}(\{u_1, \dots, \hat{u_j}, \dots, u_k\})$. Since we established that $u_j$ is a linear combination of the other vectors, we can substitute for $u_j$ in $\bar{u}$ as follows,

$$\begin{aligned} \bar{u} &= a_1u_1 + \dots + a_j(b_1u_1 + \dots + b_{j-1}u_{j-1} + b_{j+1}u_{j+1} + \dots + b_ku_k) + \dots + a_ku_k \\ &= (a_1 + a_jb_1)u_1 + \dots + (a_k + a_jb_k)u_k, \end{aligned}$$
with no $u_j$ term appearing.

From this we see that any vector in $\mathrm{Span}(\{u_1, \dots, u_j, \dots, u_k\})$ is also in $\mathrm{Span}(\{u_1, \dots, \hat{u_j}, \dots, u_k\})$. The other direction is immediate: a vector $\bar{u} = a_1u_1 + \dots + 0u_j + \dots + a_ku_k$ of the smaller span already lies in $\mathrm{Span}(\{u_1, \dots, u_j, \dots, u_k\})$. Therefore, the two spans are equal.

Now, if $\{u_1, \dots, u_{j-1}, u_{j+1}, \dots, u_k\}$ is linearly independent, we stop. Otherwise, we find another $u_j$ to throw out. This process must stop since we started with a finite number of vectors.

Remark: To find $u_j$, we start with $u_1$ and ask/settle the question (Q1): can $u_1$ be written as a linear combination of $u_2, \dots, u_k$? If the answer is yes, set $u_j = u_1$. If the answer is no, then ask (Q2): can $u_2$ be written as a linear combination of $u_1, u_3, \dots, u_k$? And so on.

Conclusion: we can refine a finite subset $\{u_1, \dots, u_k\}$ to obtain a linearly independent subset $\{u_{i_1}, \dots, u_{i_l}\}$ with the same span.
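The refinement procedure can be sketched in code. Rather than repeatedly deleting dependent vectors, the equivalent greedy version below keeps a vector only when it increases the rank of the vectors kept so far; the result is a linearly independent subset with the same span. This is a minimal NumPy sketch with hypothetical vectors (the third is the sum of the first two):

```python
import numpy as np

def refine(vectors):
    """Keep each vector only if it increases the rank of the vectors
    kept so far. The kept vectors are linearly independent and span
    the same subspace as the input (the Refinement Theorem)."""
    kept = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        candidate = np.vstack(kept + [v])
        # Rank grows by one exactly when v is independent of `kept`
        if np.linalg.matrix_rank(candidate) > len(kept):
            kept.append(v)
    return kept

# Hypothetical vectors: the third is the sum of the first two
vs = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 2.0]]
basis = refine(vs)
print(len(basis))  # 2: the dependent third vector was thrown out
```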

The rest of this lecture covered the definition of what a basis is and some other small result. I decided to move these to lecture 9 since lecture 9 covered basis in depth.



References:

  • Math416 by Ely Kerman