Lecture 34/35: Normal and Self-Adjoint Maps
In the last lecture, we studied adjoint linear maps: the adjoint of \(T\) is the map \(T^*\) defined by
\[
\langle T(x), y \rangle = \langle x, T^*(y) \rangle \quad \text{for all } x, y.
\]
Here are some additional facts:
Fact 1: \(T^*\) is unique if it exists
Fact 2: In infinite dimensions \(T^*\) need not exist.
Example 1
Let \(A \in M_{m \times n}(\mathbf{F})\) with \(\mathbf{F} = \mathbf{R}\) or \(\mathbf{C}\). Then the adjoint of \(L_A : \mathbf{F}^n \rightarrow \mathbf{F}^m\) is \(L_{A^*}\), where \(A^* = \bar{A}^t\) is the conjugate transpose of \(A\).
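As a quick sanity check (a sketch, assuming the standard inner products \(\langle u, v \rangle = \bar{v}^t u\) on \(\mathbf{F}^n\) and \(\mathbf{F}^m\)), both sides of the defining property equal \(\bar{y}^t A x\):
\[
\langle Ax, y \rangle = \bar{y}^t (Ax) = \bar{y}^t A x, \qquad
\langle x, A^* y \rangle = \overline{(A^* y)}^{\,t} x = \bar{y}^t\, \overline{(A^*)^t}\, x = \bar{y}^t A x,
\]
so \(L_{A^*}\) satisfies the defining property of the adjoint of \(L_A\).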
Normal Linear Maps
The goal today is to use this notion of adjoint maps to define further classes of linear maps that will be useful. From now on we restrict to maps from a finite-dimensional inner product space \(V\) to \(V\), with \(\beta\) an orthonormal basis; in this setting, the adjoint map always exists. We say that \(T: V \rightarrow V\) is normal if \(TT^* = T^*T\).
Example 2
Similarly, a matrix \(A \in M_{n \times n}(\mathbf{F})\) is normal if \(AA^* = A^*A\). For this example, take \(A\) to be a rotation matrix of \(\mathbf{R}^2\).
Note that since \(A\) is a rotation matrix, it has no eigenvalues or eigenvectors: the transformation doesn't take any nonzero vector to a multiple of itself; all it does is rotate vectors. To see that it is a normal matrix (i.e., that it defines a normal operator), we compute \(A^*\) and then compare the products \(AA^*\) and \(A^*A\).
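Here is the computation, written for the standard rotation by an angle \(\theta\) with \(\sin\theta \neq 0\) (the specific matrix used in lecture is an assumption here):
\[
A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad
A^* = A^t = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\]
\[
AA^* = A^*A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\]
so \(A\) is normal even though it has no eigenvectors over \(\mathbf{R}\).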
Example 3
Suppose that \(A^* = -A\). Then \(AA^* = A(-A) = -A^2 = (-A)A = A^*A\), so \(A\) is normal.
In the real case this means \(A\) and \(A^t\) are negatives of each other, i.e., \(A\) is skew-symmetric. For example:
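One concrete choice (a \(2 \times 2\) illustration; the matrix shown in lecture may differ):
\[
A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
A^t = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = -A, \qquad
AA^t = A^tA = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\]
so \(A\) is normal.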
A Sufficient Condition for Normal Linear Maps
Next, we describe a sufficient condition for an operator to be normal, which will be useful later: if \(V\) has an orthonormal basis \(\beta\) consisting of eigenvectors of \(T\), then \(T\) is normal.
Remark: The converse isn't true over \(\mathbf{R}\); Example 2 shows this, since the rotation matrix is normal but has no eigenvectors. (Theorem 6.16 in the book gives the converse over \(\mathbf{C}\): on a complex inner product space, a normal operator does admit an orthonormal basis of eigenvectors, so there is no contradiction.)
Proof
Let \(\beta = \{v_1, ..., v_n\}\) be an orthonormal basis consisting of eigenvectors, so \(T(v_i) = \lambda_i v_i\) for each \(i\). Computing \([T]_{\beta}^{\beta}\), we get a diagonal matrix, and since \(\beta\) is orthonormal, \([T^*]_{\beta}^{\beta} = ([T]_{\beta}^{\beta})^*\) is also diagonal. Diagonal matrices commute, so
\[
[TT^*]_{\beta}^{\beta} = [T]_{\beta}^{\beta}[T^*]_{\beta}^{\beta} = [T^*]_{\beta}^{\beta}[T]_{\beta}^{\beta} = [T^*T]_{\beta}^{\beta},
\]
and therefore \(TT^* = T^*T\), i.e. \(T\) is normal. \(\blacksquare\)
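Concretely, the two matrices in the computation above are
\[
[T]_{\beta}^{\beta} = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}, \qquad
[T^*]_{\beta}^{\beta} = \begin{pmatrix} \bar{\lambda}_1 & & \\ & \ddots & \\ & & \bar{\lambda}_n \end{pmatrix},
\]
and their products in either order are \(\mathrm{diag}(|\lambda_1|^2, ..., |\lambda_n|^2)\).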
Properties of Normal Linear Maps
Now that we have a sufficient condition for identifying normal maps, let's study their properties. Suppose \(T: V \rightarrow V\) is normal. Then:
- (a) \(\Vert T(v) \Vert = \Vert T^*(v) \Vert \quad \forall v \in V\)
- (b) \(T + cI_V\) is normal for every scalar \(c\)
- (c) \(T(v) = \lambda v \ \implies \ T^*(v) = \bar{\lambda}v\)
- (d) If \(T(v_1) = \lambda_1 v_1\) and \(T(v_2) = \lambda_2 v_2\) where \(\lambda_1 \neq \lambda_2\), then \(\langle v_1, v_2 \rangle = 0\)
Proof
For (a), notice that
\[
\Vert T(v) \Vert^2 = \langle T(v), T(v) \rangle = \langle v, T^*T(v) \rangle = \langle v, TT^*(v) \rangle = \langle T^*(v), T^*(v) \rangle = \Vert T^*(v) \Vert^2,
\]
where the middle equality uses normality of \(T\).
For (b), we want to show that \((T + cI_V)(T + cI_V)^* = (T + cI_V)^*(T + cI_V)\). Since \((T + cI_V)^* = T^* + \bar{c}I_V\),
\[
(T + cI_V)(T^* + \bar{c}I_V) = TT^* + \bar{c}T + cT^* + |c|^2 I_V = T^*T + \bar{c}T + cT^* + |c|^2 I_V = (T^* + \bar{c}I_V)(T + cI_V),
\]
where the middle equality uses \(TT^* = T^*T\). So \(T + cI_V\) is normal.
For (c), suppose \(T(v) = \lambda v\), i.e. \((T - \lambda I_V)(v) = 0\). By property (b), we know that \(T - \lambda I_V\) is normal, and by property (a), we know that its adjoint gives the same norm. Therefore,
\[
0 = \Vert (T - \lambda I_V)(v) \Vert = \Vert (T - \lambda I_V)^*(v) \Vert = \Vert (T^* - \bar{\lambda} I_V)(v) \Vert,
\]
so \((T^* - \bar{\lambda} I_V)(v) = 0\), i.e. \(T^*(v) = \bar{\lambda} v\).
For (d), using property (c),
\[
\lambda_1 \langle v_1, v_2 \rangle = \langle T(v_1), v_2 \rangle = \langle v_1, T^*(v_2) \rangle = \langle v_1, \bar{\lambda}_2 v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle,
\]
so \((\lambda_1 - \lambda_2)\langle v_1, v_2 \rangle = 0\).
But we know that \(\lambda_1 \neq \lambda_2\). Therefore, we must have \(\langle v_1, v_2 \rangle = 0\). \(\blacksquare\)
Self-Adjoint Maps
Next we will study another special class of linear maps, defined via the adjoint, called self-adjoint maps: \(T\) is self-adjoint if \(T^* = T\).
Note here that self-adjoint implies that \(T\) is normal. The converse is not true (the rotation matrix is an example).
Example 1
Let \(A \in M_{n \times n}(\mathbf{R})\) with \(A_{ij} = A_{ji}\) for all \(i, j\), i.e. \(A\) is symmetric. Then \(A\) (more precisely, \(L_A\)) is self-adjoint.
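This follows from Example 1 above (the adjoint of \(L_A\) is \(L_{A^*}\)): since \(A\) is real we have \(\bar{A}^t = A^t\), and since \(A\) is symmetric we have \(A^t = A\), so
\[
(L_A)^* = L_{A^*} = L_{\bar{A}^t} = L_{A^t} = L_A.
\]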
Example 2
Suppose that \(V\) is a finite-dimensional inner product space and \(W \subseteq V\) is a subspace, so \(V = W \oplus W^{\perp}\). Define \(\text{proj}_W : V \rightarrow V\) as follows: if you take a vector \(x \in V\), we know we can decompose it uniquely as \(x = w + z\) with \(w \in W\) and \(z \in W^{\perp}\), and the map just produces the part that is in \(W\), i.e. \(\text{proj}_W(x) = w\). We claim that \(\text{proj}_W\) is self-adjoint.
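For a concrete picture (a small illustration, assumed here rather than taken from the lecture): let \(V = \mathbf{R}^2\) with the standard inner product and \(W = \mathrm{span}\{(1, 0)\}\), so \(W^{\perp} = \mathrm{span}\{(0, 1)\}\). Then
\[
\text{proj}_W(x_1, x_2) = (x_1, 0), \qquad
[\text{proj}_W] = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\]
in the standard basis, and this matrix is symmetric, consistent with the claim.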
Proof
Take \(x_1, x_2 \in V\). Writing \(T = \text{proj}_W\), we need to show that
\[
\langle T(x_1), x_2 \rangle = \langle x_1, T(x_2) \rangle.
\]
We know that \(x_1 = w_1 + z_1\) and \(x_2 = w_2 + z_2\) for some unique vectors \(w_1, w_2 \in W\) and \(z_1, z_2 \in W^{\perp}\). Then,
\[
\langle T(x_1), x_2 \rangle = \langle w_1, w_2 + z_2 \rangle = \langle w_1, w_2 \rangle + \langle w_1, z_2 \rangle = \langle w_1, w_2 \rangle,
\]
since \(\langle w_1, z_2 \rangle = 0\). At this point, we recognize that \(w_2 = T(x_2)\), but we still have \(w_1\) and want to reach \(x_1\). Notice that \(x_1 = w_1 + z_1\) and \(\langle z_1, w_2 \rangle = 0\), so
\[
\langle w_1, w_2 \rangle = \langle w_1 + z_1, w_2 \rangle = \langle x_1, w_2 \rangle = \langle x_1, T(x_2) \rangle,
\]
as we wanted to show. \(\blacksquare\)
Self-adjoint Maps are Diagonalizable
Today's goal is to prove that self-adjoint maps are diagonalizable. Note here that when a real matrix \(A\) is symmetric, i.e. \(A_{ij} = A_{ji}\), then \(A\) is self-adjoint. By today's result, this implies \(A\) is diagonalizable, and so \(\det(A - tI_n)\) splits, which is really useful to know!
Question: What is the diagonal form of the projection map \(\text{proj}_W\)? Because eigenvectors get mapped to multiples of themselves, an eigenvector must lie either entirely in \(W\) (where the projection acts as the identity) or entirely in \(W^{\perp}\) (where the projection gives zero). Therefore, the eigenvalues are 0s and 1s.
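To spell this out: choose an orthonormal basis \(\{v_1, ..., v_k\}\) of \(W\) and an orthonormal basis \(\{v_{k+1}, ..., v_n\}\) of \(W^{\perp}\), where \(k = \dim W\). With respect to \(\beta = \{v_1, ..., v_n\}\),
\[
[\text{proj}_W]_{\beta}^{\beta} = \begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix},
\]
a diagonal matrix with \(k\) ones and \(n - k\) zeros on the diagonal.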
Proving that self-adjoint maps are diagonalizable requires a few intermediate results, so we prove those first.
Eigenvalues of Self-adjoint Maps
Theorem 1: If \(T: V \rightarrow V\) is self-adjoint, then every eigenvalue of \(T\) is real.

Proof
Since \(T\) is self-adjoint, \(T\) is normal. Then for any eigenvalue \(\lambda\) with \(T(v) = \lambda v\) and \(v \neq 0\), property (c) of normal maps gives \(T^*(v) = \bar{\lambda} v\). But \(T = T^*\) since \(T\) is self-adjoint. Therefore,
\[
\lambda v = T(v) = T^*(v) = \bar{\lambda} v,
\]
so \((\lambda - \bar{\lambda})v = 0\), and since \(v \neq 0\), we get \(\lambda = \bar{\lambda}\), i.e. \(\lambda\) is real,
as we wanted to show. \(\blacksquare\)
Do Self-adjoint Maps have Eigenvalues?
So the eigenvalues of a self-adjoint map are real, but does it have eigenvalues at all?
If \(V\) is a vector space over \(\mathbf{C}\), then \(T: V \rightarrow V\) always has eigenvalues; it doesn't matter whether \(T\) is self-adjoint or not. The characteristic polynomial \(\det([T]_{\beta}^{\beta} - tI_n)\), having complex coefficients, always splits (a fact from algebra).
What if \(V\) is over \(\mathbf{R}\)?
Over \(\mathbf{R}\), eigenvalues need not exist in general: the rotation matrix is normal but not self-adjoint, and it has no real eigenvalues. Self-adjointness fixes this.

Theorem 2: If \(V\) is a finite-dimensional real inner product space and \(T: V \rightarrow V\) is self-adjoint, then \(T\) has a (real) eigenvalue.
Proof
Let \(\beta\) be an orthonormal basis of \(V\). We need to show that \(\det([T]_{\beta}^{\beta} - tI_n) = 0\) has a real root. Let \(A = [T]_{\beta}^{\beta}\). Then
\[
A^t = A^* = [T^*]_{\beta}^{\beta} = [T]_{\beta}^{\beta} = A,
\]
where \(A^* = A^t\) because \(A\) has real entries.
So \(A\) is symmetric. The idea is to apply Theorem 1 to \(A\) viewed as a complex matrix: over \(\mathbf{C}\) eigenvalues always exist, and Theorem 1 will force them to be real. So consider the map \(L_A : \mathbf{C}^n \rightarrow \mathbf{C}^n\).
We claim that \(L_A\) is self-adjoint: \((L_A)^* = L_{A^*} = L_{A^t} = L_A\), since \(A\) is real and symmetric. So this map is self-adjoint over the complex field. By the fundamental theorem of algebra (the fact above), the characteristic polynomial splits, and so \(L_A\) has an eigenvalue \(\lambda\). By Theorem 1, \(\lambda\) must be real. But the characteristic polynomial of \(L_A\) is exactly \(\det(A - tI_n) = \det([T]_{\beta}^{\beta} - tI_n)\), so \(\det([T]_{\beta}^{\beta} - tI_n) = 0\) has a real root. \(\blacksquare\)
Eigenvectors of a Self-adjoint Map
What can we say about the eigenvectors of a self-adjoint linear map? The answer is the main theorem: if \(T: V \rightarrow V\) is self-adjoint, then \(V\) has an orthonormal basis consisting of eigenvectors of \(T\); in particular, \(T\) is diagonalizable.
Proof
We're given that we have at least one real eigenvalue, but we want to prove that we have an orthonormal basis of eigenvectors. Having one eigenvalue gives us a base case, so let's argue by induction on \(\dim V = n\).
Base case \(n = 1\): we have a map from \(\mathbf{F}\) to \(\mathbf{F}\), and a linear map on a one-dimensional space is just multiplication by a scalar: \(T(x) = ax\). This map is self-adjoint. Every nonzero vector \(x_0\) is an eigenvector (\(T(x_0) = ax_0\)), so choose any \(x_0 \neq 0\); the orthonormal basis will then be \(\{\frac{x_0}{\Vert x_0 \Vert}\}\).
Inductive step: assume the result holds for self-adjoint maps on inner product spaces of dimension \(n - 1\); we prove it for dimension \(n\).
Let \(T: V \rightarrow V\), with \(\dim(V) = n\), be self-adjoint.
By Theorem 2, \(T\) has a real eigenvalue \(\lambda_1\), say \(T(v_1) = \lambda_1 v_1\) with \(v_1 \neq 0\). Assume \(v_1\) has length 1 (we can normalize otherwise). So at this point we have our first vector in the orthonormal basis \(\beta\). How do we get our second vector?
We know that the second vector must be orthogonal to the first, which means it lies in the orthogonal complement of the span of \(v_1\). So set \(W = \{v_1\}^{\perp}\). \(W\) has dimension \(n - 1\), because we took away one dimension. But we can't yet apply the inductive hypothesis to \(W\), because we need a self-adjoint map on \(W\). What could this map be?
We know the remaining eigenvectors in the basis \(\beta\) need to be eigenvectors of \(T\), so this map should be related to \(T\). The hope is that \(T\) restricts to a map on \(W\); this happens exactly when \(W\) is \(T\)-invariant, so we need to show that \(T(W) \subseteq W\).
To show that \(T(W) \subseteq W\), suppose we're given \(w \in W\); we want to show that \(T(w) \in W\). Since \(W = \{v_1\}^{\perp}\), this amounts to showing that \(\langle T(w), v_1 \rangle = 0\).
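The check uses self-adjointness and the eigenvector equation for \(v_1\):
\[
\langle T(w), v_1 \rangle = \langle w, T^*(v_1) \rangle = \langle w, T(v_1) \rangle = \langle w, \lambda_1 v_1 \rangle = \bar{\lambda}_1 \langle w, v_1 \rangle = 0,
\]
since \(w \in W = \{v_1\}^{\perp}\) (and \(\bar{\lambda}_1 = \lambda_1\) anyway, since \(\lambda_1\) is real).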
So now we know that \(W\) is \(T\)-invariant, and so we have \(T_W: W \rightarrow W\). We still need two things before we can apply the inductive hypothesis: we need \(W\) to be an inner product space, and we need \(T_W\) to be self-adjoint. But \(W\) is a subspace of \(V\), so it inherits the inner product from \(V\). Moreover, \(T_W\) is a self-adjoint map. To see why, notice the following.
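For all \(w_1, w_2 \in W\), since \(T_W\) agrees with \(T\) on \(W\) and \(T\) is self-adjoint on \(V\),
\[
\langle T_W(w_1), w_2 \rangle = \langle T(w_1), w_2 \rangle = \langle w_1, T(w_2) \rangle = \langle w_1, T_W(w_2) \rangle,
\]
which is exactly the statement that \(T_W\) is self-adjoint on \(W\).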
So now we can apply the inductive hypothesis to conclude that \(W\) has an orthonormal basis \(\beta_W = \{v_2,...,v_n\}\) consisting of eigenvectors of \(T_W\). But eigenvectors of the restriction \(T_W\) are also eigenvectors of \(T\) itself, and each of \(v_2, ..., v_n\) lies in \(W\) and is therefore orthogonal to \(v_1\). So \(\beta = \{v_1, v_2, ..., v_n\}\) is an orthonormal basis of \(V\) consisting of eigenvectors of \(T\), and we are done. \(\ \blacksquare\)
References
- Math416 by Ely Kerman