Definition
A vector space is a set \(V\) with two operations, addition \(V \times V \rightarrow V\) and scalar multiplication \(\mathbf{F} \times V \rightarrow V\) (where \(\mathbf{F}\) is the field of scalars, for example \(\mathbf{R}\)), such that the following properties hold:
  1. \(u + v = v + u\) for all \(u,v \in V\).
  2. \((u + v) + w = u + (v + w)\) for all \(u, v, w \in V\).
  3. There exists an element \(\bar{0} \in V\) such that \(v + \bar{0} = v\) for all \(v \in V\).
  4. For all \(v \in V\), there exists \(w \in V\) such that \(v + w = \bar{0}\).
  5. \(1v = v\) for all \(v \in V\).
  6. \(a(bv) = (ab)v\) for all \(v \in V\) and for all \(a, b \in \mathbf{F}\).
  7. \(a(u + v) = au + av\) for all \(u, v \in V\) and for all \(a \in \mathbf{F}\).
  8. \((a + b)v = av + bv\) for all \(v \in V\) and for all \(a, b \in \mathbf{F}\).


For property (4), we don’t call \(w\) by the name \(-v\) yet, because we haven’t yet proved that the additive inverse is unique.

Example 1: \(\mathbf{R}\)

\(\mathbf{R}\) is a vector space when equipped with the usual addition and multiplication (the latter playing the role of scalar multiplication). The number 0 is the zero vector, and all eight properties are easy to verify.
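The verification can be spot-checked numerically; the following Python sketch (not part of the original notes; the sample values are arbitrary) tests each axiom at a few points:

```python
# Spot-check the eight vector space axioms for V = R with the usual
# operations, at a few sample values (a numeric check, not a proof).
u, v, w = 2.5, -1.0, 4.0
a, b = 3.0, -0.5
zero = 0.0

assert u + v == v + u                    # 1: commutativity
assert (u + v) + w == u + (v + w)        # 2: associativity
assert v + zero == v                     # 3: additive identity
assert v + (-v) == zero                  # 4: additive inverse
assert 1 * v == v                        # 5: multiplicative identity
assert a * (b * v) == (a * b) * v        # 6: compatibility of scalars
assert a * (u + v) == a * u + a * v      # 7: distributivity over vectors
assert (a + b) * v == a * v + b * v      # 8: distributivity over scalars
print("all eight axioms hold at these sample values")
```

The chosen samples are exactly representable in floating point, so the equalities hold exactly here; in general floating-point arithmetic only approximates the real field.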



Example 2: A Set of Matrices

The set of \(m \times n\) matrices, \(M_{m \times n}\), equipped with componentwise addition, so that (for example, with \(m = 2\), \(n = 3\))

$$ \begin{align*} \begin{pmatrix} a & b & c \\ d & e & f \\ \end{pmatrix} + \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \end{pmatrix} = \begin{pmatrix} 1+a & 2+b & 3+c \\ 4+d & 5+e & 6+f \\ \end{pmatrix} \end{align*} $$

and componentwise scalar multiplication, so that

$$ \begin{align*} c \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \end{pmatrix} = \begin{pmatrix} 1c & 2c & 3c \\ 4c & 5c & 6c \\ \end{pmatrix} \end{align*} $$

is a vector space.
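The componentwise operations can be sketched in code; here is a small Python illustration (the helper names `mat_add` and `mat_scale` are my own, not from the notes):

```python
# Componentwise addition and scalar multiplication for m-by-n matrices
# stored as nested lists (a minimal sketch; no external libraries).
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]
B = [[10, 20, 30], [40, 50, 60]]

print(mat_add(A, B))     # [[11, 22, 33], [44, 55, 66]]
print(mat_scale(2, A))   # [[2, 4, 6], [8, 10, 12]]

# The zero vector of M_{2x3} is the zero matrix:
Z = [[0, 0, 0], [0, 0, 0]]
assert mat_add(A, Z) == A
```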



Example 3: Sets of Functions

Let \(S\) be a nonempty set, for example \(S = \mathbf{R}\), \(S = \{\pi, \pi^2\}\), or \(S = \{\)atoms in the universe\(\}\); any nonempty set will do. Now consider \(F(S) = \{f: S \rightarrow \mathbf{R}\}\), the set of all functions (mappings) from \(S\) to \(\mathbf{R}\). One way to think of \(F(S)\) is as all the ways we can label the elements of \(S\) with real numbers.

Define addition pointwise by \((f+g)(s) = f(s) + g(s)\) for all \(s \in S\); adding two functions means adding their values, which are real numbers, as desired. Define scalar multiplication by \((cf)(s) = c(f(s))\) for all \(s \in S\).

\(F(S)\) is a vector space; it satisfies all eight conditions. For example, the zero vector in this space is the constant function \(\bar{0}(s) = 0\) for all \(s \in S\). Note also that, when \(S = \mathbf{R}\), \(C^1(\mathbf{R})\) (the set of functions with continuous derivatives) is a subset of \(C^0(\mathbf{R})\) (the set of continuous functions), which in turn is a subset of \(F(\mathbf{R})\).
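The pointwise operations can be modeled directly in Python, where a vector is literally a function (the names `f_add` and `f_scale` are illustrative, not from the notes):

```python
# Vector space operations on F(S): functions from a set S into R.
# Addition and scalar multiplication are defined pointwise.
def f_add(f, g):
    return lambda s: f(s) + g(s)

def f_scale(c, f):
    return lambda s: c * f(s)

zero = lambda s: 0.0       # the zero vector: the constant-zero function

f = lambda s: s ** 2       # two sample elements of F(R)
g = lambda s: 3 * s

h = f_add(f, g)            # h(s) = s^2 + 3s
print(h(2.0))              # 10.0
print(f_scale(5, f)(2.0))  # 20.0
assert f_add(f, zero)(7.0) == f(7.0)   # f + 0 = f, checked at one point
```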



Example 4: The Set of all Sequences

Consider the set of natural numbers \(\mathbf{N}\) and the set of functions \(F(\mathbf{N}) = \{\sigma: \mathbf{N} \rightarrow \mathbf{R}\}\). Each \(\sigma\) is a function that takes a natural number and assigns it a real number. But

$$ \begin{align*} \sigma(1), \sigma(2), \sigma(3), \ldots \end{align*} $$

is a sequence. So by giving \(F(\mathbf{N})\) the structure of a vector space, we give the set of sequences the structure of a vector space. Let \(\sigma(1) = a_1, \sigma(2) = a_2\), and so on, so that we can write the sequence as

$$ \begin{align*} a_1, a_2, a_3, \ldots = \{a_n\} \end{align*} $$

Then \(V = \{\) sequences \(\{a_n\}\}\) is a vector space. Define the addition of two sequences termwise, \(\{a_n\} + \{b_n\} = \{a_n + b_n\}\), adding the terms one by one, and define scalar multiplication as \(c\{a_n\} = \{ca_n\}\).
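The termwise operations can be sketched the same way as for \(F(S)\); in this Python illustration (helper names are my own), a sequence is modeled as a function of \(n\):

```python
# Sequences as functions N -> R; the operations act term by term.
def seq_add(a, b):
    return lambda n: a(n) + b(n)

def seq_scale(c, a):
    return lambda n: c * a(n)

a = lambda n: 2 * n        # the sequence 2, 4, 6, 8, ...
b = lambda n: n * n        # the sequence 1, 4, 9, 16, ...

s = seq_add(a, b)          # termwise sum: 3, 8, 15, 24, ...
print([s(n) for n in range(1, 5)])                 # [3, 8, 15, 24]
print([seq_scale(3, a)(n) for n in range(1, 5)])   # [6, 12, 18, 24]
```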



Example 5: The Set of Polynomials

Definition: Degree of a Polynomial
Let \(f\) be defined by $$ \begin{align*} f(x) = a_nx^n + a_{n-1}x^{n-1}+...+a_1x+a_0. \end{align*} $$ The degree of \(f\) is the largest \(k\) such that \(x^k\) appears in \(f\) with \(a_k \neq 0\).


Let \(P_n = \{\) polynomials \(f(x)\) of degree at most \(n\}\).
Define the addition operation as follows,

$$ \begin{align*} f(x)+g(x) = (a_nx^n + ...+a_0) + (b_nx^n + ...+b_0) = (a_n+b_n)x^n + ...+(a_0+b_0). \end{align*} $$

and define scalar multiplication as

$$ \begin{align*} cf(x) = ca_nx^n + ca_{n-1}x^{n-1} +...+ca_0. \end{align*} $$

\(P_n\) is a vector space. The zero vector is the zero polynomial \(\bar{0}(x) = 0 = 0x^n + .... + 0\). Question: why did we define \(P_n\) using degree at most \(n\) rather than degree exactly \(n\)? Because the polynomials of degree exactly \(n\) are not closed under addition: take \((x^5 + 1)\) and \((-x^5 + 9)\). Their sum is \(10\), which has degree \(0\), not \(5\), so we have to say "at most".
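The coefficientwise operations on \(P_n\), and the degree-drop phenomenon just mentioned, can be illustrated in Python (the coefficient-list representation and helper names are my own):

```python
# Polynomials in P_n stored as coefficient lists [a_0, a_1, ..., a_n].
def poly_add(f, g):
    return [a + b for a, b in zip(f, g)]

def poly_scale(c, f):
    return [c * a for a in f]

def degree(f):
    # the largest k with a_k != 0; the zero polynomial has no such k
    nz = [k for k, a in enumerate(f) if a != 0]
    return max(nz) if nz else None

f = [1, 0, 0, 0, 0, 1]     # 1 + x^5
g = [9, 0, 0, 0, 0, -1]    # 9 - x^5
h = poly_add(f, g)         # their sum is the constant polynomial 10

print(h)                       # [10, 0, 0, 0, 0, 0]
print(degree(f), degree(h))    # 5 0  -- the degree drops under addition
```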



Additional Vector Space Results

Theorem
Let \(u, v, w\) be elements of a vector space \(V\). If \(u + w = v + w\), then \(u = v\).


Proof: Let \(V\) be a vector space and \(u, v, w\) be elements in \(V\). By property (4) there is a \(z \in V\) such that

$$ \begin{align*} w + z = \bar{0} \end{align*} $$

We also know that by property (3) that

$$ \begin{align*} u = u + \bar{0}. \end{align*} $$

But \(\bar{0} = w+z\) and so

$$ \begin{align*} u &= u + \bar{0} \\ &= u + (w + z) \\ &= (u + w) + z \quad \text{ (by property (2)) } \\ &= (v + w) + z \quad \text { (by the hypothesis)} \\ &= v + (w + z) \quad \text{ (by property (2))} \\ &= v + \bar{0} \\ &= v \quad \end{align*} $$

Therefore \(u = v\) as we wanted to show. \(\blacksquare\)

Theorem
The zero vector in a vector space is unique.


Proof: Suppose \(V\) is a vector space. Now suppose for the sake of contradiction that \(\bar{0}\) and \(\bar{0}'\) are both additive identities (zero vectors) of \(V\) with \(\bar{0} \neq \bar{0}'\). This means that

$$ \begin{align*} \bar{0}' &= \bar{0}' + \bar{0} \quad \text{(by property (3))}\\ &= \bar{0} + \bar{0}' \quad \text{(by property (1))}\\ &= \bar{0}. \quad \text{(by property (3))} \end{align*} $$

Therefore, \(\bar{0} = \bar{0}'\) which is a contradiction and the zero vector must be unique. \(\blacksquare\)

Theorem
For all \(v \in V\), there exists a \(w \in V\) such that \(v + w = \bar{0}\). This \(w\) is unique. We call \(w = -v\).


Proof: Suppose \(V\) is a vector space and let \(v \in V\). Suppose for the sake of contradiction that \(w\) is not unique, i.e. that \(v\) has two additive inverses \(w\) and \(w'\) with \(w \neq w'\). Then

$$ \begin{align*} w &= w + \bar{0} \quad \text{(by property (3))}\\ &= w + (v + w') \quad \text{(since } v + w' = \bar{0}\text{)}\\ &= (w + v) + w' \quad \text{(by property (2))}\\ &= (v + w) + w' \quad \text{(by property (1))}\\ &= \bar{0} + w' \quad \text{(since } v + w = \bar{0}\text{)}\\ &= w' + \bar{0} \quad \text{(by property (1))}\\ &= w'. \quad \text{(by property (3))}\\ \end{align*} $$

Since \(w = w'\), we have a contradiction, and we can conclude that the additive inverse of \(v\) is unique. \(\blacksquare\)

Two additional implications mentioned in class are that \((-1)v = -v\) and \(0v = \bar{0}\).
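A short sketch of why these follow from the axioms and the cancellation theorem proved above:

$$ \begin{align*} 0v + 0v = (0 + 0)v = 0v = 0v + \bar{0}, \end{align*} $$

so by the cancellation theorem, \(0v = \bar{0}\). Then

$$ \begin{align*} v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = \bar{0}, \end{align*} $$

so \((-1)v\) is an additive inverse of \(v\), and by the uniqueness just proved, \((-1)v = -v\).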



Example 6: A Non Example

Consider the set \(\mathbf{R}^2\) equipped with a different set of operations. Let’s define addition as

$$ \begin{align*} (a_1, a_2) + (b_1, b_2) = (a_1+b_1, a_2 \cdot b_2). \end{align*} $$

and scalar multiplication as

$$ \begin{align*} c(a_1, a_2) = (ca_1, a_2). \end{align*} $$

Is this a vector space? No. Why not?

What is the zero vector in this space?

$$ \begin{align*} \bar{0} = (0, 1) \end{align*} $$

because for every vector \(v\), we want \(v + \bar{0} = v\), and \((0, 1)\) works here because

$$ \begin{align*} (a_1,a_2)+(0,1) = (a_1+0, a_2 \cdot 1) = (a_1,a_2) \end{align*} $$


The claim is that property (4) fails. Let \(v = (0,0)\). There is no \((a_1, a_2) \in \mathbf{R}^2\) such that \(v + (a_1, a_2) = \bar{0}\). To see this,

$$ \begin{align*} (0,0) + (a_1, a_2) = (0+a_1, 0 \cdot a_2) = (a_1, 0). \end{align*} $$

so the sum can never equal \((0, 1)\), and \((0,0)\) has no additive inverse.
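This failure is easy to see computationally; here is a small Python sketch of the modified operations (helper names are mine):

```python
# The non-example: R^2 with addition (a1+b1, a2*b2) and scalar
# multiplication c(a1, a2) = (c*a1, a2).
def add(u, v):
    return (u[0] + v[0], u[1] * v[1])

def scale(c, u):
    return (c * u[0], u[1])

zero = (0, 1)                   # the additive identity for these operations
assert add((3, 5), zero) == (3, 5)

# Property (4) fails: v = (0, 0) has no additive inverse, since
# add((0, 0), (a1, a2)) = (a1, 0) can never equal (0, 1).
v = (0, 0)
print(add(v, (7, 9)))           # (7, 0) -- the second slot is stuck at 0
```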



References:

  • Video lectures from Math416 by Ely Kerman.
  • Linear Algebra Done Right (for the last two proofs).