We’ve studied special classes of linear maps from an inner product space to itself, such as normal maps and self-adjoint maps. Today we will study a new special class: isometries.

Definition
\(T:V \rightarrow V\) is an isometry if $$ \begin{align*} \langle T(x), T(y) \rangle = \langle x, y \rangle \end{align*} $$ for all \(x, y \in V\).


For example, in \(\mathbf{R}^3\), this means that isometries preserve lengths and angles.
When \(V\) is over \(\mathbf{C}\), an isometry is sometimes called unitary, while if \(V\) is over \(\mathbf{R}\), it is called orthogonal.

Example 1

Let \(T = L_A\) where

$$ \begin{align*} A = \begin{pmatrix} \cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \end{align*} $$

Recall from the previous lecture that \(A\) is normal but not self-adjoint (when \(\sin\theta \neq 0\)). We claim that \(L_A\) is an isometry. We can verify this by evaluating

$$ \begin{align*} L_A \begin{pmatrix} x \\ y \end{pmatrix} &= \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix} \end{align*} $$

so that

$$ \begin{align*} \left\langle L_A \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}, L_A \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \right\rangle &= (x_1\cos\theta - y_1\sin\theta)(x_2\cos\theta - y_2\sin\theta) \\ &\quad + (x_1\sin\theta + y_1\cos\theta)(x_2\sin\theta + y_2\cos\theta) \\ &= x_1x_2 + y_1y_2 \\ &= \left\langle \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}, \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \right\rangle \end{align*} $$
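As a quick sanity check, here is a small numerical sketch (my own illustration, not part of the lecture) confirming in numpy that \(L_A\) preserves the dot product on \(\mathbf{R}^2\):

```python
import numpy as np

theta = 0.7  # an arbitrary angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
y = rng.standard_normal(2)

# <Ax, Ay> agrees with <x, y>
print(np.dot(A @ x, A @ y), np.dot(x, y))

# Equivalently, A^T A = I (this is condition (b) of the theorem below)
print(np.allclose(A.T @ A, np.eye(2)))
```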




Example 2

Let

$$ \begin{align*} V &= C^0([0,1], \mathbf{C}) \\ &= \{ F(t) = f(t) + ig(t) \ | \ f, g \in C^0([0,1])\} \\ \langle F, G \rangle &= \int_0^1 F(t)\overline{G(t)} \, dt \end{align*} $$

Suppose \(H \in V\) satisfies \(|H(t)| = 1\) for all \(t\). For real-valued continuous functions there are only two such functions, the constants \(1\) and \(-1\), but over the complex numbers there are many: for example, \(H(t) = e^{ih(t)}\) for any real-valued \(h \in C^0([0,1])\).

Now we’re going to use this function to define an isometry. Specifically,

$$ \begin{align*} T \ : \ V &\rightarrow V \\ F &\mapsto FH \end{align*} $$

This is an isometry: multiplying by \(H\) rotates \(F(t)\) by an angle that depends on \(t\), without changing its magnitude. To verify this,

$$ \begin{align*} \langle T(F), T(G) \rangle &= \int_0^1 (F(t)H(t))\overline{(G(t)H(t))} \ dt \\ &= \int_0^1 F(t)\overline{G(t)}\, |H(t)|^2 \ dt \\ &= \int_0^1 F(t)\overline{G(t)} \ dt \quad \text{(since $|H(t)| = 1$)} \\ &= \langle F, G \rangle \end{align*} $$
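Here is a small numerical sketch of this example (my own illustration; the grid size and the choices of \(F\), \(G\), and \(h\) are arbitrary), approximating the integral inner product on a grid:

```python
import numpy as np

t = np.linspace(0, 1, 10001)
dt = t[1] - t[0]

def inner(F, G):
    # Riemann-sum approximation of <F, G> = \int_0^1 F(t) conj(G(t)) dt
    return np.sum(F * np.conj(G)) * dt

F = t + 1j * np.sin(t)          # an arbitrary continuous F
G = np.exp(t) - 1j * t**2       # an arbitrary continuous G
H = np.exp(1j * np.cos(3 * t))  # H(t) = e^{i h(t)}, so |H(t)| = 1

# <FH, GH> matches <F, G> up to discretization error
print(inner(F * H, G * H))
print(inner(F, G))
```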




Equivalent Conditions for Isometry

What can we say about isometries? What does being an isometry imply?

Theorem
The following are equivalent
  (a) \(T: V \rightarrow V\) is an isometry.
  (b) \(TT^* = T^*T = I_V\), i.e., \(T^* = T^{-1}\). (In particular, an isometry is a normal map.)
  (c) If \(\beta\) is an orthonormal basis, then so is \(T(\beta)\). [In other words, \(T\) takes every orthonormal basis to another orthonormal basis.]
  (d) \(T(\beta)\) is orthonormal for some orthonormal basis \(\beta\). [This is weaker: \(T\) only needs to take one particular orthonormal basis to an orthonormal basis, not every one.]
  (e) \(\Vert T(x) \Vert = \Vert x \Vert\) for all \(x \in V\), i.e., \(T\) preserves lengths.
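Before the proof, here is a quick numerical sanity check of (b) and (e) (my own sketch, not from the lecture), using the fact that the \(Q\) factor of a QR factorization is unitary:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)  # Q is a random-ish unitary matrix on C^4

# (b): Q*Q = QQ* = I
print(np.allclose(Q.conj().T @ Q, np.eye(4)))
print(np.allclose(Q @ Q.conj().T, np.eye(4)))

# (e): ||Qx|| = ||x|| for a random x
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.linalg.norm(Q @ x), np.linalg.norm(x))
```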


To prove the theorem, and in particular the step \((a) \implies (b)\), we need the following lemma.

Lemma
Suppose \(S:V \rightarrow V\) is self-adjoint. If \(\langle S(x), x \rangle = 0\) for all \(x \in V\), then \(S = T_{0_V}\) (the zero map)


Proof

Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis consisting of eigenvectors of \(S\) (this is possible because \(S\) is self-adjoint, by the spectral theorem). So \(S(v_j) = \lambda_j v_j\) for \(j = 1,...,n\). Then

$$ \begin{align*} 0 &= \langle S(v_j), v_j \rangle \quad \text{(by the given assumption)} \\ &= \langle \lambda_j v_j, v_j \rangle \\ &= \lambda_j \langle v_j, v_j \rangle \\ &= \lambda_j \Vert v_j \Vert^2 \end{align*} $$

Since \(v_j\) is a basis vector, it is nonzero, so we must have \(\lambda_j = 0\) for \(j = 1,...,n\). This implies that \(S\) is the zero map: every \(x \in V\) is a linear combination of the basis vectors, and applying \(S\) gives a linear combination of the \(\lambda_j v_j\), all of which are zero. So \(S(x) = 0\) for every \(x\), i.e., \(S = T_{0_V}\).
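As a sanity check on the lemma, consider its contrapositive: a nonzero self-adjoint \(S\) has some eigenvector \(v\) with eigenvalue \(\lambda \neq 0\), and then \(\langle S(v), v \rangle = \lambda \Vert v \Vert^2 \neq 0\). A small numerical sketch (mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = B + B.conj().T  # Hermitian (self-adjoint) by construction

# eigh returns real eigenvalues and orthonormal eigenvectors
eigvals, eigvecs = np.linalg.eigh(S)
v = eigvecs[:, 0]  # unit eigenvector for eigenvalue eigvals[0]

# <S v, v> = lambda ||v||^2 = lambda, which is nonzero for generic S
print(eigvals[0], np.vdot(v, S @ v).real)
```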



Proof of Theorem

We’re ready to prove the theorem.

\((a) \implies (b):\)
We’re given that \(T\) is an isometry, so by definition \(T\) preserves the inner product: \(\langle T(x), T(y) \rangle = \langle x, y \rangle\). We want to show that \(TT^* = T^*T = I_V\). We first observe that \(T^*T\) is self-adjoint (which means it’s diagonalizable): \((T^*T)^* = T^*(T^*)^* = T^*T\). Since \(T^*T\) is self-adjoint, subtracting the identity map keeps it self-adjoint, because \((T^*T - I_V)^* = (T^*T)^* - I_V^* = T^*T - I_V\). So \(T^*T - I_V\) is self-adjoint. Now observe

$$ \begin{align*} \langle (T^*T - I_V)(x), x \rangle &= \langle T^*T(x) - x, x \rangle \\ &= \langle T^*T(x), x \rangle - \langle x, x \rangle \\ &= \langle T(x), T(x) \rangle - \langle x, x \rangle \quad \text{(by definition of adjoint)} \\ &= 0 \quad \text{(since $T$ is an isometry)} \end{align*} $$

So we showed that \(T^*T - I_V\) is a self-adjoint map with \(\langle (T^*T - I_V)(x), x \rangle = 0\) for every vector \(x\). This is exactly the situation of the lemma we proved above, so \(T^*T - I_V\) is the zero map. So

$$ \begin{align*} T^*T - I_V &= T_0 \\ T^*T &= I_V \end{align*} $$

Since \(V\) is finite-dimensional, \(T^*T = I_V\) forces \(T\) to be invertible with \(T^{-1} = T^*\), and therefore \(TT^* = I_V\) as well. And we are done.

\((b) \implies (c):\)
Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis, so \(\langle v_i, v_j \rangle = \delta_{ij}\). We need to show that \(T(\beta) = \{T(v_1),...,T(v_n)\}\) is an orthonormal basis, i.e., \(\langle T(v_i), T(v_j) \rangle = \delta_{ij}\). (Orthonormal vectors are automatically linearly independent, and there are \(n\) of them, so orthonormality makes \(T(\beta)\) a basis.) Now observe

$$ \begin{align*} \langle T(v_i), T(v_j) \rangle &= \langle v_i, T^*(T(v_j)) \rangle \\ &= \langle v_i, v_j \rangle \quad \text{(by (b), $T^*T = I_V$)} \\ &= \delta_{ij} \end{align*} $$

\((c) \implies (d):\) This is immediate: orthonormal bases exist (apply Gram–Schmidt to any basis), and (c) says \(T\) takes each of them to an orthonormal basis.

\((d) \implies (e):\)
We need to show that if \(T\) takes some orthonormal basis to an orthonormal basis, then \(T\) preserves all lengths.

Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis with \(T(\beta) = \{T(v_1),...,T(v_n)\} = \{w_1, ..., w_n\}\) also orthonormal, so \(\langle v_i, v_j \rangle = \delta_{ij} = \langle w_i, w_j \rangle\); this is exactly what (d) gives us. We need to prove that \(\Vert T(x) \Vert = \Vert x \Vert\) for any \(x \in V\). Observe that

$$ \begin{align*} x &= \sum_j a_jv_j \\ T(x) &= \sum_j a_j T(v_j) = \sum_j a_j w_j \end{align*} $$

So now we use this to evaluate

$$ \begin{align*} \Vert T(x) \Vert^2 &= \langle T(x), T(x) \rangle \\ &= \left\langle \sum_i a_i w_i, \sum_j a_j w_j \right\rangle \\ &= \sum_{i,j} a_i \overline{a_j} \langle w_i, w_j \rangle \quad \text{(linear in the first slot, conjugate-linear in the second)} \\ &= \sum_{i,j} a_i \overline{a_j} \, \delta_{ij} \\ &= \sum_{i} |a_i|^2 \quad \text{(because $\delta_{ij} = 0$ when $i \neq j$)} \end{align*} $$

The identical computation applied to \(\Vert x \Vert^2 = \langle \sum_i a_i v_i, \sum_j a_j v_j \rangle\) also yields \(\sum_i |a_i|^2\). So \(\Vert T(x) \Vert^2 = \Vert x \Vert^2\), hence \(\Vert T(x) \Vert = \Vert x \Vert\), and we’re done.

\((e) \implies (a):\)
We want to show that if \(T\) preserves lengths, then it preserves all inner products: that is, \(\Vert T(x) \Vert = \Vert x \Vert\) for all \(x\) implies \(\langle T(x), T(y) \rangle = \langle x, y \rangle\) for all \(x, y\). To show this, we can use the following trick

$$ \begin{align*} \Vert x + y \Vert^2 &= \langle x+y, x+y \rangle \\ &= \Vert x\Vert^2 + \langle x, y \rangle + \langle y, x \rangle + \Vert y \Vert^2 \\ &= \Vert x\Vert^2 + \langle x, y \rangle + \overline{\langle x, y \rangle} + \Vert y \Vert^2 \end{align*} $$

What is \(\langle x, y \rangle + \overline{\langle x, y \rangle}\)? For any complex number \(z\), we have \(z + \bar{z} = 2\operatorname{Re} z\), so this sum equals \(2\operatorname{Re}\langle x, y \rangle\). Solving for the real part, and repeating the computation with \(ix\) in place of \(x\) for the imaginary part, gives

$$ \begin{align*} \operatorname{Re}\langle x, y \rangle &= \frac{1}{2}(\Vert x + y\Vert^2 - \Vert x\Vert^2 - \Vert y\Vert^2) \\ \operatorname{Im}\langle x, y \rangle &= -\frac{1}{2}(\Vert ix + y\Vert^2 - \Vert x\Vert^2 - \Vert y\Vert^2) \end{align*} $$

Both right-hand sides involve only norms. Since \(T\) is linear and preserves norms, \(\Vert T(x) + T(y) \Vert = \Vert T(x+y) \Vert = \Vert x + y \Vert\) and \(\Vert iT(x) + T(y) \Vert = \Vert T(ix+y) \Vert = \Vert ix + y \Vert\), so \(T\) preserves both the real and imaginary parts of \(\langle x, y \rangle\). Hence \(\langle T(x), T(y) \rangle = \langle x, y \rangle\), and \(T\) is an isometry.
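These identities recover the inner product from norms alone. Here is a small numerical check of them (my own illustration, using the standard inner product on \(\mathbf{C}^4\)):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

n = np.linalg.norm
re = 0.5 * (n(x + y)**2 - n(x)**2 - n(y)**2)
im = -0.5 * (n(1j * x + y)**2 - n(x)**2 - n(y)**2)

print(re + 1j * im)   # <x, y> reconstructed from norms
print(np.vdot(y, x))  # <x, y> = sum_i x_i conj(y_i) directly
```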




Isometries Are Rare

Isometries are in fact quite rare. For example, \(T\) is an isometry of \(\mathbf{R}^2\) (with the standard inner product) if and only if

$$ \begin{align*} T = L_A \text{ for } A = \begin{pmatrix} \cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \text{ or } A = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \end{align*} $$

So rotations (the first matrix) and reflections (the second) are the only isometries of \(\mathbf{R}^2\).
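A small numerical sketch (my own illustration): both families satisfy \(A^TA = I\), and the determinant distinguishes the rotation (\(\det = 1\)) from the reflection (\(\det = -1\)):

```python
import numpy as np

theta = 1.1
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
ref = np.array([[np.cos(theta),  np.sin(theta)],
                [np.sin(theta), -np.cos(theta)]])

for A in (rot, ref):
    # Both are isometries; det separates the two families
    print(np.allclose(A.T @ A, np.eye(2)), round(np.linalg.det(A), 6))
```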





References

  • Math416 by Ely Kerman