Identity matrix and Kronecker delta

$$ \begin{align*} I_n &= \begin{pmatrix} 1 & 0 & \dotsb & 0 \\ 0 & 1 & \dotsb & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dotsb & 1 \end{pmatrix} \in M_{n \times n}. \end{align*} $$

\(I_n\) is the \(n \times n\) identity matrix. Its entries are given by the following rule:

$$ \begin{equation*} (I_n)_{ij} = \delta_{ij} = \begin{cases} 1 \quad \text{if } i = j \\ 0 \quad \text{if } i \neq j \end{cases} \end{equation*} $$

\(\delta_{ij}\) is the Kronecker delta. For any matrix \(A \in M_{m \times n}\), \(AI_n = A\) and \(I_mA = A\).
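As a quick sanity check (a minimal NumPy sketch of my own, not part of the lecture), we can build \(I_n\) directly from the Kronecker delta rule and verify \(AI_n = A\) and \(I_mA = A\) for an arbitrary \(A\):

```python
import numpy as np

def identity(n):
    # Build I_n entry by entry from the rule (I_n)_ij = delta_ij.
    return np.array([[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)])

# An arbitrary m x n matrix A; the sizes are just for illustration.
m, n = 3, 4
A = np.random.rand(m, n)

assert np.allclose(A @ identity(n), A)   # A I_n = A
assert np.allclose(identity(m) @ A, A)   # I_m A = A
```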
Exercise: Suppose we have a finite basis \(\beta\) with \(n\) elements for \(V\). Then the identity map is

$$ \begin{align*} I: \ &V \rightarrow V \\ &x \rightarrow x \end{align*} $$

If we compute its matrix representative, we will see that \([I_V]_{\beta}^{\beta} = I_n\).
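As a concrete instance (my own worked example): take \(V = \mathbf{R}^2\) with \(\beta = \{(1,1), (1,-1)\}\). Since \(I(v_i) = v_i\), each basis vector is its own expansion in \(\beta\),

$$ \begin{align*} I(1,1) &= 1\cdot(1,1) + 0\cdot(1,-1) \\ I(1,-1) &= 0\cdot(1,1) + 1\cdot(1,-1), \end{align*} $$

so the coordinate columns are \((1,0)\) and \((0,1)\), giving \([I_V]_{\beta}^{\beta} = I_2\).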



Inverse Linear Transformation

Next we define the inverse of a linear transformation.

Definition
An inverse of \(T: V \rightarrow W\) is a map \(S: W \rightarrow V\) such that $$ \begin{align*} S \circ T = I_V \text{ and } T \circ S = I_W \end{align*} $$


Example 1

$$ \begin{align*} T : \ &\mathbf{R}^2 \rightarrow \mathbf{R}^2 \\ &(x, y) \rightarrow (y, -x) \end{align*} $$

has an inverse

$$ \begin{align*} S : \ &\mathbf{R}^2 \rightarrow \mathbf{R}^2 \\ &(x, y) \rightarrow (-y, x) \end{align*} $$

We can check that this is true by composing the two transformations in both orders and checking that we get the identity map.

$$ \begin{align*} S \circ T (x,y) &= S(T(x, y)) = S(y, -x) = (x, y) \\ T \circ S (x,y) &= T(S(x, y)) = T(-y, x) = (x, y) \end{align*} $$
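Equivalently (a side computation of mine, not in the lecture), in terms of matrices with respect to the standard basis of \(\mathbf{R}^2\):

$$ \begin{align*} [S][T] = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2, \end{align*} $$

and similarly \([T][S] = I_2\).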




Example 2

Suppose we have the following vector spaces:

$$ \begin{align*} V &= P \\ W &= \hat{P} = \{a_1x + a_2x^2 + ... + a_kx^k\} \end{align*} $$

\(\hat{P}\) is the set of all polynomials without the constant term \(a_0\). We claim that \(\hat{P}\) is a vector space. The easiest way to verify this is to show that \(\hat{P}\) is a subspace of \(P\). Now consider the following linear map from \(P\) to \(\hat{P}\), where we multiply the polynomial by \(x\).

$$ \begin{align*} T : \ &P \rightarrow \hat{P} \\ &a_0 + a_1x + ... + a_kx^k \rightarrow a_0x + a_1x^2 + ... + a_kx^{k+1} \\ &f \rightarrow xf \end{align*} $$

This map has the following inverse linear transformation:

$$ \begin{align*} S &: \hat{P} \rightarrow P \\ &a_1x + a_2x^2 + ... + a_kx^{k} \rightarrow a_1 + a_2x + ... + a_kx^{k-1} \\ &f \rightarrow \frac{1}{x}f \end{align*} $$
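As in Example 1, we can confirm this (my own check) by composing in both orders:

$$ \begin{align*} S \circ T(f) &= S(xf) = \tfrac{1}{x}(xf) = f \quad \text{for all } f \in P, \\ T \circ S(g) &= T\!\left(\tfrac{1}{x}g\right) = x\cdot\tfrac{1}{x}g = g \quad \text{for all } g \in \hat{P}, \end{align*} $$

so \(S \circ T = I_P\) and \(T \circ S = I_{\hat{P}}\).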


Example 3

The following linear transformation

$$ \begin{align*} T &: P \rightarrow P \\ &f \rightarrow f' \end{align*} $$

has no inverse! (It is onto but not 1-1.) It is not one-to-one because the derivative of every constant polynomial is 0, so, for example, \(T(1) = T(2) = 0\).
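To check the parenthetical claim that \(T\) is onto (my own verification): every polynomial is the derivative of one of its antiderivatives,

$$ \begin{align*} T\!\left(b_0x + \tfrac{b_1}{2}x^2 + ... + \tfrac{b_k}{k+1}x^{k+1}\right) = b_0 + b_1x + ... + b_kx^k. \end{align*} $$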

This leads us to the following theorem:

Theorem
Let \(T: V \rightarrow W\) be linear.
  • \(T\) has an inverse if and only if \(T\) is 1-1 (\(N(T)=\{\bar{0}_V\}\)) and onto (\(R(T) = W\)).
  • If \(T\) is invertible, then its inverse is unique \((T^{-1})\).
  • \(T^{-1}\) is linear. (Theorem 2.17 in the book)


The third property is not obvious and requires a proof.

Proof: Let \(T: V \rightarrow W\) be a linear map with inverse \(T^{-1}: W \rightarrow V\). We want to show that \(T^{-1}\) is linear. To do this, we need to show that, for all \(w_1, w_2 \in W\) and all scalars \(c\),

$$ \begin{align*} T^{-1}(w_1 + cw_2) = T^{-1}(w_1) + cT^{-1}(w_2). \end{align*} $$

Because \(T\) has an inverse, \(T\) is onto. This means that \(w_1 = T(v_1)\) and \(w_2 = T(v_2)\) for some \(v_1, v_2 \in V\). Moreover, since \(T\) is 1-1, these vectors are unique; in other words, \(v_1 = T^{-1}(w_1)\) and \(v_2 = T^{-1}(w_2)\). Now,

$$ \begin{align*} T^{-1}(w_1 + cw_2) &= T^{-1}(T(v_1) + cT(v_2)) \\ &= T^{-1}(T(v_1 + cv_2)) \text{ (because $T$ is linear!)} \\ &= I(v_1 + cv_2) \text{ (because $T$ has an inverse!)} \\ &= v_1 + cv_2 \\ & = T^{-1}(w_1) + c T^{-1}(w_2) \end{align*} $$

Therefore, \(T^{-1}\) is linear. \(\blacksquare\)

Theorem
Suppose \(T: V \rightarrow W\) is linear and invertible. If \(\beta\) is a basis for \(V\), then \(T(\beta)\) is a basis for \(W\).


Proof (in the case where \(V\) and \(W\) are finite dimensional spaces):
Let \(T: V \rightarrow W\) be a linear and invertible map and suppose that \(\dim(V)=n\). Choose a basis \(\beta = \{v_1, ..., v_n\}\) for \(V\). Consider the set of images,

$$ \begin{align*} T(\beta) = \{T(v_1),...,T(v_n)\}. \end{align*} $$

We need to show that this set is a basis, which means showing that it spans \(W\) (that is, \(Span(T(\beta)) = W\)) and that it is linearly independent. To show that it spans \(W\), we need, for any \(w \in W\), scalars \(a_1,...,a_n\) such that

$$ \begin{align*} w = a_1T(v_1) + ... + a_nT(v_n). \end{align*} $$

But we know that \(w = T(v)\) for some \(v \in V\) since \(T\) is onto. We also know that \(\beta\) is a basis for \(V\) and so we can write \(v\) as a linear combination of the vectors in \(\beta\) for some scalars \(a_1,...,a_n\).

$$ \begin{align*} v = a_1v_1 + ... + a_nv_n. \end{align*} $$

And now because \(T\) is linear, we can do the following

$$ \begin{align*} w &= T(v) \\ &= T(a_1v_1 + ... + a_nv_n) \\ & = a_1T(v_1) + ... + a_nT(v_n). \end{align*} $$

This is what we wanted to show. To show that the vectors in \(T(\beta)\) are linearly independent, we need to show that the only solution to

$$ \begin{align*} a_1T(v_1) + ... + a_nT(v_n) = \bar{0}_W. \end{align*} $$

is the trivial solution, which means that \(a_1=0,...,a_n=0\). We can use the linearity of \(T\) to show that

$$ \begin{align*} \bar{0}_W &= a_1T(v_1) + ... + a_nT(v_n) \\ &= T(a_1v_1 + ... + a_nv_n). \end{align*} $$

But \(T\) is 1-1 and \(T(\bar{0}_V) = \bar{0}_W\), so \(a_1v_1 + ... + a_nv_n\) must be \(\bar{0}_V\). Since \(\beta\) is linearly independent, we must have \(a_1=0,...,a_n=0\) as required. \(\blacksquare\)

Corollary
Let \(T: V \rightarrow W\) be linear. If \(\dim(V) = n\) and \(T\) is invertible, then $$ \begin{align*} \dim W = \dim V = n \end{align*} $$


(Study Notes:) This is really important. If \(T\) is invertible and \(V\) has dimension \(n\), then \(W\) also has dimension \(n\), and \(T\) is one-to-one and onto. Later we'll learn that if \(V\) and \(W\) have the same finite dimension and \(T\) is one-to-one, then that is sufficient to conclude that \(T\) is invertible (next lecture).

I also went through the proof in the book for this corollary.
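As a quick illustration of the corollary in the contrapositive direction (my own example): \(\mathbf{R}^2\) and \(\mathbf{R}^3\) cannot be connected by an invertible linear map, since such a \(T: \mathbf{R}^2 \rightarrow \mathbf{R}^3\) would force

$$ \begin{align*} 3 = \dim \mathbf{R}^3 = \dim \mathbf{R}^2 = 2, \end{align*} $$

which is impossible.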

Isomorphism

Definition
\(V\) and \(W\) are isomorphic if there is an invertible linear map \(T: V \rightarrow W\). Such a map \(T\) is called an isomorphism.


Example 4

\(\mathbf{R}^3\) and \(P_2\) are isomorphic. To see this, we need an invertible linear map from one to the other. The maps

$$ \begin{align*} T : \ &\mathbf{R}^3 \rightarrow P_2 \\ &(a_1,a_2,a_3) \rightarrow a_1 + a_2x + a_3x^2 \end{align*} $$

and

$$ \begin{align*} U : \ &\mathbf{R}^3 \rightarrow P_2 \\ &(a_1,a_2,a_3) \rightarrow a_3 + (a_1 + a_2)x + (a_1 - a_2)x^2 \end{align*} $$

are both isomorphisms.
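For \(T\) we can even write down the inverse explicitly (my own check):

$$ \begin{align*} T^{-1} : \ &P_2 \rightarrow \mathbf{R}^3 \\ &a_1 + a_2x + a_3x^2 \rightarrow (a_1, a_2, a_3) \end{align*} $$

One can check \(U\) in the same spirit by writing its matrix with respect to the standard bases and verifying that this \(3 \times 3\) matrix is invertible.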



Example 5

$$ \begin{align*} U : \ &P \rightarrow \hat{P} \\ &f \rightarrow xf \end{align*} $$

is an isomorphism and so \(P\) and \(\hat{P}\) are isomorphic.



Criterion for Isomorphic Finite Dimensional Vector Spaces

Theorem
If \(V\) is finite dimensional, then \(W\) is isomorphic to \(V\) if and only if \(\dim W = \dim V\).


Proof:
\(\Rightarrow\): If \(W\) is isomorphic to \(V\), then there exists an isomorphism \(T: V \rightarrow W\). \(T\) is onto and one-to-one because it is invertible. Therefore, \(\dim(W) = \dim(V)\) by the corollary we stated earlier.

\(\Leftarrow\): Suppose that \(\dim V = \dim W = n\). We want to show that they are isomorphic, which means exhibiting an invertible linear map from one to the other. So let \(\beta = \{v_1,...,v_n\}\) be a basis for \(V\) and \(\alpha = \{w_1, ...,w_n\}\) be a basis for \(W\).

Define the map \(T: W \rightarrow V\) by \([T]_{\alpha}^{\beta} = I_n\). This \(T\) works. Why? This \(T\) satisfies:

$$ \begin{align*} T(w_i) = v_i. \end{align*} $$

More generally,

$$ \begin{align*} T(w) &= T(a_1w_1 + ... + a_nw_n) \\ &= a_1v_1 + ... + a_nv_n. \end{align*} $$

This \(T\) is invertible: its inverse is the linear map \(S: V \rightarrow W\) that sends each \(v_i\) back to \(w_i\). Therefore \(V\) and \(W\) are isomorphic. \(\blacksquare\)
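As a concrete instance of this construction (my own illustration): take \(W = \mathbf{R}^3\) with the standard basis \(\alpha = \{e_1, e_2, e_3\}\) and \(V = P_2\) with \(\beta = \{1, x, x^2\}\). Then the map \(T\) with \([T]_{\alpha}^{\beta} = I_3\) is

$$ \begin{align*} T : \ &\mathbf{R}^3 \rightarrow P_2 \\ &(a_1,a_2,a_3) \rightarrow a_1 + a_2x + a_3x^2, \end{align*} $$

which is exactly the isomorphism \(T\) from Example 4.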




References

  • Video Lectures from Math416 by Ely Kerman.