Lecture 36: Isometries
We’ve studied special classes of linear maps from an inner product space to itself, like normal maps and self-adjoint maps. Today we study a new special class: a linear map \(T: V \rightarrow V\) is an isometry if \(\langle T(x), T(y) \rangle = \langle x, y \rangle\) for all \(x, y \in V\), i.e. \(T\) preserves the inner product.
For example, in \(\mathbf{R}^3\), this means that isometries preserve lengths and angles.
When \(V\) is over \(\mathbf{C}\), an isometry is sometimes called unitary, while if \(V\) is over \(\mathbf{R}\), it is called orthogonal.
Example 1
Let \(T = L_A\) where
Recall from the previous lecture that \(A\) was normal but not self-adjoint. We claim that \(L_A\) is an isometry. We can verify this by evaluating \(AA^*\) and \(A^*A\) and checking that both equal the identity matrix.
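The matrix \(A\) from the previous lecture isn’t reproduced in these notes. As a hypothetical stand-in, a rotation matrix is a standard example of a normal but not self-adjoint real matrix, and the isometry check \(AA^* = A^*A = I\) can be carried out numerically (for a real matrix, \(A^* = A^t\)):

```python
import math

# Hypothetical stand-in for the lecture's matrix A (not reproduced in the
# notes): a rotation by 45 degrees, a standard example of a normal but
# not self-adjoint real matrix.
t = math.pi / 4
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def is_identity(M, tol=1e-12):
    return all(abs(M[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(len(M)) for j in range(len(M)))

# For a real matrix the adjoint is the transpose, so A A^* = A^* A = I
# verifies that L_A is an isometry (here: an orthogonal map).
print(is_identity(matmul(A, transpose(A))))  # True
print(is_identity(matmul(transpose(A), A)))  # True
```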
Example 2
Let \(V\) be the space of continuous complex-valued functions on \([0, 2\pi]\), with inner product
\[
\langle f, g \rangle = \frac{1}{2\pi} \int_0^{2\pi} f(t) \overline{g(t)} \, dt .
\]
Suppose \(H \in V\) satisfies \(|H(t)| = 1\) for all \(t\). Over the real numbers the only continuous functions with this property are the constants \(1\) and \(-1\), but over the complex numbers there are many of them, for example \(H(t) = e^{ih(t)}\) for any continuous real-valued function \(h\).
Now we’re going to use this function to define an isometry. Specifically, let \(T\) be multiplication by \(H\):
\[
T(f) = Hf, \qquad \text{i.e.} \quad (T(f))(t) = H(t) f(t) .
\]
This is an isometry; \(T\) rotates the value \(f(t)\) by an angle depending on \(t\). To verify this, compute
\[
\langle T(f), T(g) \rangle = \frac{1}{2\pi} \int_0^{2\pi} H(t) f(t) \overline{H(t) g(t)} \, dt = \frac{1}{2\pi} \int_0^{2\pi} |H(t)|^2 f(t) \overline{g(t)} \, dt = \langle f, g \rangle ,
\]
using \(|H(t)|^2 = 1\).
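A numerical sketch of this verification, under stated assumptions: the interval \([0, 2\pi]\), the averaged \(L^2\) inner product approximated by a Riemann sum, and the hypothetical choice \(h(t) = t\):

```python
import cmath

# Assumptions: V is the space of continuous complex-valued functions on
# [0, 2*pi] with inner product <f, g> = (1/(2*pi)) * integral of f(t)
# times conj(g(t)), approximated here by a Riemann sum on N points.
N = 10_000
ts = [2 * cmath.pi * k / N for k in range(N)]

def inner(f, g):
    return sum(f(t) * g(t).conjugate() for t in ts) / N

H = lambda t: cmath.exp(1j * t)           # unimodular: |H(t)| = 1
f = lambda t: t + 1j * t * t              # arbitrary test functions
g = lambda t: cmath.exp(2j * t) * (1 - t)

Tf = lambda t: H(t) * f(t)                # T(f) = H f (pointwise product)
Tg = lambda t: H(t) * g(t)

# |H|^2 = 1, so the factor H(t) * conj(H(t)) drops out of the integrand
# and <T(f), T(g)> = <f, g>.
print(abs(inner(Tf, Tg) - inner(f, g)) < 1e-9)
```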
Equivalent Conditions for Isometry
What can we say about isometries? What does being an isometry imply? The following theorem gives several equivalent conditions. Let \(T: V \rightarrow V\) be linear; then the following are equivalent:
- (a) \(T\) is an isometry.
- (b) \(TT^* = T^*T = I_V\). (In particular, an isometry is a normal map and \(T^* = T^{-1}\).)
- (c) If \(\beta\) is an orthonormal basis, then so is \(T(\beta)\). [In other words, \(T\) takes every orthonormal basis to an orthonormal basis.]
- (d) \(T(\beta)\) is orthonormal for some orthonormal basis \(\beta\). [This is formally weaker than (c): \(T\) needs to take at least one orthonormal basis to an orthonormal basis, not every one.]
- (e) \(\Vert T(x) \Vert = \Vert x \Vert\) for all \(x \in V\); that is, \(T\) preserves lengths.
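As a hedged sanity check of these conditions, here is a small numerical experiment with an assumed sample isometry of \(\mathbf{R}^3\) (a rotation about the \(z\)-axis; the angle 0.73 is arbitrary):

```python
import math, random

# Assumed sample isometry of R^3: rotation about the z-axis by an
# arbitrary angle t.
t = 0.73
Q = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# (b): Q^T Q = I, checked as orthonormality of the columns of Q.
cols = list(zip(*Q))
assert all(abs(dot(cols[i], cols[j]) - (i == j)) < 1e-12
           for i in range(3) for j in range(3))

# (c)/(d): Q maps the standard orthonormal basis to an orthonormal set.
images = [apply(Q, list(e)) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
assert all(abs(dot(images[i], images[j]) - (i == j)) < 1e-12
           for i in range(3) for j in range(3))

# (e): Q preserves the lengths of arbitrary vectors.
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert abs(dot(x, x) - dot(apply(Q, x), apply(Q, x))) < 1e-12

print("conditions (b), (c)/(d), (e) all hold for Q")
```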
To prove these equivalences (in particular the step involving \((e)\)), we need the following lemma.

Lemma. Let \(S: V \rightarrow V\) be a self-adjoint map such that \(\langle S(x), x \rangle = 0\) for all \(x \in V\). Then \(S = 0\).
Proof
Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis consisting of eigenvectors of \(S\) (this is possible because \(S\) is self-adjoint). So \(S(v_j) = \lambda_j v_j\) for \(j = 1,...,n\). Then
\[
0 = \langle S(v_j), v_j \rangle = \langle \lambda_j v_j, v_j \rangle = \lambda_j \Vert v_j \Vert^2 .
\]
Since \(v_j\) is a basis vector, it is not zero, so we must have \(\lambda_j = 0\) for \(j = 1,...,n\). This implies that \(S\) is the zero map: every vector \(x\) can be written as a linear combination \(x = \sum_j a_j v_j\) of the basis vectors, and applying \(S\) gives \(S(x) = \sum_j a_j \lambda_j v_j = 0\) because all the \(\lambda_j\)’s are zero.
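A small numerical illustration of why the lemma holds for a real symmetric matrix (an assumed 2×2 example, not from the notes): the quadratic form \(q(x) = \langle S(x), x \rangle\) determines every entry of \(S\), so if \(q\) vanishes identically then \(S = 0\):

```python
# Assumed symmetric example matrix; every entry of S is recoverable from
# the quadratic form q(x) = <S x, x>, so q == 0 everywhere forces S = 0.
S = [[2.0, -1.0],
     [-1.0, 3.0]]

def q(x):
    Sx = [S[0][0] * x[0] + S[0][1] * x[1],
          S[1][0] * x[0] + S[1][1] * x[1]]
    return Sx[0] * x[0] + Sx[1] * x[1]

# Diagonal entries: S_ii = q(e_i).
assert q([1, 0]) == S[0][0]
assert q([0, 1]) == S[1][1]

# Off-diagonal entry: q(e_1 + e_2) = S_11 + S_22 + 2 S_12 by symmetry.
assert (q([1, 1]) - q([1, 0]) - q([0, 1])) / 2 == S[0][1]
print("entries of S recovered from the quadratic form")
```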
Proof of Theorem
We’re ready to prove the theorem.
\((a) \implies (b):\)
We’re given that \(T\) is an isometry, so by definition \(\langle T(x), T(y) \rangle = \langle x, y \rangle\) for all \(x, y\). We want to show that \(TT^* = T^*T = I_V\). First observe that \(T^*T\) is self-adjoint (which means it’s diagonalizable): \((T^*T)^* = T^*(T^*)^* = T^*T\). Since \(T^*T\) and \(I_V\) are both self-adjoint, so is their difference \(T^*T - I_V\). Now observe that for any \(x \in V\),
\[
\langle (T^*T - I_V)(x), x \rangle = \langle T^*T(x), x \rangle - \langle x, x \rangle = \langle T(x), T(x) \rangle - \langle x, x \rangle = 0 ,
\]
where the middle step uses the definition of the adjoint and the last step uses that \(T\) is an isometry.
So \(T^*T - I_V\) is a self-adjoint map with \(\langle (T^*T - I_V)(x), x \rangle = 0\) for every vector \(x\). By the lemma we proved previously, \(T^*T - I_V\) is the zero map, so
\[
T^*T = I_V .
\]
Since \(V\) is finite-dimensional, \(T^*\) is then a two-sided inverse of \(T\), so \(TT^* = I_V\) as well.
And we are done.
\((b) \implies (c):\)
Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis, so \(\langle v_i, v_j \rangle = \delta_{ij}\), and let \(T(\beta) = \{T(v_1),...,T(v_n)\}\). We need to show that \(T(\beta)\) is an orthonormal basis, which means that \(\langle T(v_i), T(v_j) \rangle = \delta_{ij}\). Now observe
\[
\langle T(v_i), T(v_j) \rangle = \langle T^*T(v_i), v_j \rangle = \langle v_i, v_j \rangle = \delta_{ij} ,
\]
using \(T^*T = I_V\) from (b).
\((c) \implies (d):\) This is immediate, since (c) is the stronger statement.
\((d) \implies (e):\)
We need to show that if \(T\) takes some orthonormal basis to an orthonormal basis, then \(T\) preserves all lengths.
Let \(\beta = \{v_1,...,v_n\}\) be an orthonormal basis such that \(T(\beta) = \{T(v_1),...,T(v_n)\} = \{w_1, ..., w_n\}\) is also orthonormal, i.e. \(\langle v_i, v_j \rangle = \delta_{ij} = \langle w_i, w_j \rangle\); this is exactly what (d) gives us. We want to prove that \(\Vert T(x) \Vert = \Vert x \Vert\) for any \(x \in V\). Observe that if we expand \(x = \sum_i a_i v_i\) in the basis \(\beta\), then by linearity
\[
T(x) = \sum_i a_i T(v_i) = \sum_i a_i w_i .
\]
So now we use this to evaluate
\[
\Vert x \Vert^2 = \Big\langle \sum_i a_i v_i, \sum_j a_j v_j \Big\rangle = \sum_{i,j} a_i \overline{a_j} \langle v_i, v_j \rangle = \sum_i |a_i|^2 .
\]
If we instead start with \(\Vert T(x) \Vert^2 = \big\langle \sum_i a_i w_i, \sum_j a_j w_j \big\rangle\), the identical computation (now using \(\langle w_i, w_j \rangle = \delta_{ij}\)) yields the same result \(\sum_i |a_i|^2\). So \(\Vert T(x) \Vert = \Vert x \Vert\) and we’re done.
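The coefficient computation above can be checked numerically; this sketch uses an assumed orthonormal basis of \(\mathbf{R}^2\) and verifies \(\Vert x \Vert^2 = \sum_i |a_i|^2\):

```python
import math

# Assumption: beta = {v1, v2} is an orthonormal basis of R^2 (here a
# hypothetical 45-degree rotation of the standard basis).
v1 = [1 / math.sqrt(2),  1 / math.sqrt(2)]
v2 = [1 / math.sqrt(2), -1 / math.sqrt(2)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x = [3.0, -4.0]
# Coefficients of x in the basis: a_i = <x, v_i>, valid because beta is
# orthonormal.
a1, a2 = dot(x, v1), dot(x, v2)

# Parseval: ||x||^2 equals the sum of the squared coefficients.
print(abs((a1 * a1 + a2 * a2) - dot(x, x)) < 1e-12)
```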
\((e) \implies (a):\)
We want to show that if \(T\) preserves lengths, then it preserves all inner products: \(\Vert T(x) \Vert = \Vert x \Vert\) for all \(x\) should imply \(\langle T(x), T(y) \rangle = \langle x, y \rangle\) for all \(x, y \in V\). To show this, we can use the following trick:
\[
\Vert x + y \Vert^2 = \langle x + y, x + y \rangle = \Vert x \Vert^2 + \langle x, y \rangle + \overline{\langle x, y \rangle} + \Vert y \Vert^2 ,
\]
since \(\langle y, x \rangle = \overline{\langle x, y \rangle}\).
What is \(\langle x, y \rangle + \overline{\langle x, y \rangle}\)? It is \(2 \operatorname{Re} \langle x, y \rangle\): for any complex number \(z = a + bi\) we have \(z + \bar{z} = (a + bi) + (a - bi) = 2a = 2 \operatorname{Re}(z)\). So \(\Vert x + y \Vert^2 = \Vert x \Vert^2 + 2 \operatorname{Re} \langle x, y \rangle + \Vert y \Vert^2\), and since \(T\) preserves the norms of \(x\), \(y\), and \(x + y\), it preserves \(\operatorname{Re} \langle x, y \rangle\). Replacing \(y\) by \(iy\) in the same identity recovers the imaginary part, so \(\langle T(x), T(y) \rangle = \langle x, y \rangle\) and \(T\) is an isometry.
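The same trick, pushed further, is the polarization identity: the inner product is recoverable from norms alone. A hedged numerical check in \(\mathbf{C}^2\) (the vectors are arbitrary test data):

```python
# Polarization in C^2: recover <x, y> from norms alone via
#   <x, y> = (1/4) * sum_k i^k * ||x + i^k y||^2,  k = 0, 1, 2, 3.
def inner(x, y):
    # Inner product, conjugate-linear in the second argument.
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x).real

x = [1 + 2j, -3j]       # arbitrary test vectors (assumed data)
y = [2 - 1j, 4 + 1j]

z = inner(x, y)
# z + conj(z) = 2 Re(z), the fact used in the proof above.
assert abs((z + z.conjugate()) - 2 * z.real) < 1e-12

units = [1, 1j, -1, -1j]   # the powers i^0 .. i^3, written out exactly
recovered = sum(c * norm_sq([a + c * b for a, b in zip(x, y)])
                for c in units) / 4
print(abs(recovered - z) < 1e-12)
```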
Isometries Are Rare
It turns out that isometries are actually quite rare. One can show that \(T\) is an isometry of \(\mathbf{R}^2\) if and only if its matrix relative to the standard basis has the form
\[
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad \text{(a rotation)} \qquad \text{or} \qquad \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \quad \text{(a reflection)}
\]
for some angle \(\theta\). So rotations and reflections are the only possibilities for isometries of \(\mathbf{R}^2\).
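A quick numerical check of this classification, assuming the standard rotation and reflection matrices: both families preserve lengths, and the determinant (\(+1\) vs. \(-1\)) distinguishes them:

```python
import math, random

# Assumed standard forms: rotation by angle t, and reflection (across the
# line at angle t/2).
def rotation(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def reflection(t):
    return [[math.cos(t),  math.sin(t)],
            [math.sin(t), -math.cos(t)]]

def norm(x):
    return math.hypot(x[0], x[1])

def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

random.seed(1)
for t in [0.0, 0.3, 2.0, math.pi]:
    R, F = rotation(t), reflection(t)
    # det = +1 for rotations, -1 for reflections.
    assert abs(det(R) - 1.0) < 1e-12 and abs(det(F) + 1.0) < 1e-12
    # Both preserve the lengths of arbitrary vectors.
    for _ in range(20):
        x = [random.uniform(-5, 5), random.uniform(-5, 5)]
        assert abs(norm(apply(R, x)) - norm(x)) < 1e-9
        assert abs(norm(apply(F, x)) - norm(x)) < 1e-9

print("rotations and reflections are length-preserving")
```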
References
- Math416 by Ely Kerman