Lecture 28: Invariant and T-cyclic Subspaces
Consider a linear map \(T: V \rightarrow V\). Suppose \(v\) is an eigenvector of \(T\). We know by definition that \(Tv = \lambda v\) and \(v \neq 0\). We also know that \(span\{v\} \subset V\) is a subspace. The observation here is that since \(v\) is an eigenvector, when \(T\) acts on this subspace, it stays inside the subspace, meaning
\[T(w) \in span\{v\} \quad \text{for all } w \in span\{v\}.\]
This is because \(T(cv) = cT(v) = (c\lambda) v\), which is in the span of \(v\). The span of an eigenvector is the simplest example of an invariant subspace.
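To make this concrete, here is a small numerical check (the matrix and eigenvector are my own illustrative choices, not from the lecture): every multiple of an eigenvector is mapped to another multiple of that eigenvector.

```python
def apply(A, x):
    """Apply the linear map given by matrix A to the vector x."""
    return tuple(sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

# Illustrative choice: v = (1, 0) is an eigenvector of A with eigenvalue 2,
# since A(1, 0) = (2, 0) = 2 * (1, 0).
A = [[2, 1], [0, 3]]
v = (1, 0)

# Every w = c*v in span{v} is mapped to (2c, 0), which is again in span{v}.
for c in [1, -2, 5]:
    w = apply(A, (c * v[0], c * v[1]))
    assert w == (2 * c, 0)  # T(cv) = (c * lambda) v stays in span{v}
```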
Definition: A subspace \(W \subset V\) is \(T\)-invariant if \(T(W) \subset W\), i.e. \(T(w) \in W \ \forall w \in W\).
Examples
Example 1: \(W = span\{v\}\) where \(v\) is an eigenvector of \(T\).
Example 2: The identity map \(I_V:V \rightarrow V\). Every subspace is \(I_V\)-invariant.
Example 3: The zero map \(0_V:V \rightarrow V\). Every subspace is \(0_V\)-invariant.
Example 4: Take, for instance, the shear \(T: \mathbf{R}^2 \rightarrow \mathbf{R}^2\), \(T(x,y) = (x+y, y)\) (the lecture's original map was not recorded here; this one is consistent with both claims below).
\(W = span\{(1,0)\}\) is \(T\)-invariant, since \(T(c,0) = (c,0)\)
\(W = span\{(1,1)\}\) is not \(T\)-invariant, since \(T(1,1) = (2,1) \notin span\{(1,1)\}\)
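The specific map in Example 4 was not recorded above; assuming the shear \(T(x,y) = (x+y, y)\), which is consistent with both claims, a quick check looks like this:

```python
def T(x, y):
    # Assumed shear map T(x, y) = (x + y, y); the lecture's actual map was omitted.
    return (x + y, y)

# span{(1,0)}: T(c, 0) = (c, 0), still a multiple of (1, 0) -> T-invariant.
for c in [1, -2, 7]:
    assert T(c, 0) == (c, 0)

# span{(1,1)}: T(1, 1) = (2, 1). A vector (a, b) lies in span{(1,1)} iff a == b,
# and 2 != 1, so the image leaves the subspace -> not T-invariant.
image = T(1, 1)
assert image == (2, 1)
assert image[0] != image[1]
```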
The Characteristic Polynomial of Invariant Subspaces
So what is the point of invariant subspaces? They let us break off pieces of our map. What does that mean? If \(T: V \rightarrow V\) and \(W\) is \(T\)-invariant, then the restriction of \(T\) to \(W\), written \(T_W\), satisfies \(T_W: W \rightarrow W\). So now we have a map on a smaller space instead of the entire vector space \(V\). The following theorem describes the relationship between the characteristic polynomial of \(T: V \rightarrow V\) and the characteristic polynomial of \(T_W: W \rightarrow W\).
Theorem: Let \(T: V \rightarrow V\) be linear with \(\dim V = n\), and let \(W \subset V\) be a \(T\)-invariant subspace. Then the characteristic polynomial of \(T_W\) divides the characteristic polynomial of \(T\).
Proof
Let \(\beta_W = \{v_1,...,v_k\}\) be a basis of \(W\). Extend this to a basis for \(V\), so
\[\beta = \{v_1,...,v_k, v_{k+1},...,v_n\}.\]
We can express \(T\) with respect to \(\beta\) to get \([T]_{\beta}^{\beta}\). We know that \(T(v_i) \in W\) for \(i = 1,...,k\) since \(W\) is \(T\)-invariant. Therefore, we can express \(T(v_i)\) as a linear combination of just the first \(k\) vectors in \(\beta\); the coefficients on the remaining vectors are zero. So in the first \(k\) columns of \([T]_{\beta}^{\beta}\), every entry below the \(k\)th row is zero. Let \(B_1\) be the upper-left \(k \times k\) block holding the coefficients that need not be zero:
\[[T]_{\beta}^{\beta} = \begin{pmatrix} B_1 & B_2 \\ O & B_3 \end{pmatrix},\]
where \(B_2\) is \(k \times (n-k)\), \(B_3\) is \((n-k) \times (n-k)\), and \(O\) is the zero matrix. We claim that \(B_1 = [T_W]_{\beta_W}^{\beta_W}\). Now, we want to determine the characteristic polynomial of \([T]_{\beta}^{\beta}\).
This is a block upper triangular matrix. The determinant has a nice form for such matrices, and we can write
\[\det([T]_{\beta}^{\beta} - tI_n) = \det(B_1 - tI_k)\cdot\det(B_3 - tI_{n-k}),\]
where \(B_3\) denotes the lower-right \((n-k) \times (n-k)\) block of \([T]_{\beta}^{\beta}\).
From this we see that the characteristic polynomial of \([T_W]^{\beta_W}_{\beta_W}\), namely \(\det(B_1 - tI_k)\), divides the characteristic polynomial of \([T]_{\beta}^{\beta}\). \(\blacksquare\)
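We can sanity-check the divisibility claim numerically (the blocks below are my own example, not from the lecture). For a block upper triangular matrix, \(\det(A - tI)\) should equal \(\det(B_1 - tI)\cdot\det(B_3 - tI)\) at every value of \(t\), which forces the polynomial factorization:

```python
def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    result = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            result = -result
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    for i in range(n):
        result *= M[i][i]
    return result

def char_poly_at(A, t):
    """Evaluate the characteristic polynomial det(A - tI) at the point t."""
    n = len(A)
    return det([[A[i][j] - (t if i == j else 0.0) for j in range(n)] for i in range(n)])

# Illustrative blocks: B1 is 2x2, B3 is 1x1, with B2 = [[5], [7]] on the top right.
B1 = [[2.0, 1.0], [1.0, 2.0]]
B3 = [[4.0]]
A = [[2.0, 1.0, 5.0],
     [1.0, 2.0, 7.0],
     [0.0, 0.0, 4.0]]  # block upper triangular

# det(A - tI) = det(B1 - tI) * det(B3 - tI) at every sample point t,
# so char(B1) divides char(A) as polynomials.
for t in [0.0, 0.5, 1.5, -2.0, 3.0]:
    assert abs(char_poly_at(A, t) - char_poly_at(B1, t) * char_poly_at(B3, t)) < 1e-9
```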
T-cyclic Subspaces
Since \(T\)-invariant subspaces are useful, the question is: can we produce them? Is there a tool or mechanism to find them? We start with the following definition.
Definition: Let \(T: V \rightarrow V\) be linear and let \(v \in V\). The \(T\)-cyclic subspace of \(V\) generated by \(v\) is
\[W = span\{v, T(v), T^2(v), ...\}.\]
Observe here that \(W\) is \(T\)-invariant. Why? Take any element \(w \in W\); then \(T(w)\) is still in \(W\). To see this, notice that \(w\) is a finite linear combination \(w = a_0v + a_1T(v) + ... + a_mT^m(v)\), so
\[T(w) = a_0T(v) + a_1T^2(v) + ... + a_mT^{m+1}(v),\]
which is again a combination of the spanning vectors, hence in \(W\).
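As a concrete illustration (the map here is my own choice, not from the lecture), take \(T(x,y,z) = (y, -x, 0)\) on \(\mathbf{R}^3\) and \(v = (1,0,0)\). The \(T\)-cyclic subspace generated by \(v\) turns out to be the \(xy\)-plane, and applying \(T\) to any combination of the spanning vectors lands back in that plane:

```python
def apply(A, x):
    """Apply the matrix A to the vector x."""
    return tuple(sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

# Illustrative map T(x, y, z) = (y, -x, 0) on R^3.
A = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]

v = (1, 0, 0)
Tv = apply(A, v)  # (0, -1, 0)

# T^2(v) = -v, so W = span{v, T(v), T^2(v), ...} = span{v, T(v)} = the xy-plane.
assert apply(A, Tv) == (-1, 0, 0)

# T-invariance: for any w = a*v + b*T(v) in W, T(w) = a*T(v) + b*T^2(v)
# is again a combination of the spanning vectors -- its z-coordinate stays 0.
for a, b in [(1, 0), (2, -3), (5, 4)]:
    w = tuple(a * v[i] + b * Tv[i] for i in range(3))
    assert apply(A, w)[2] == 0
```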
Question: Are all \(T\)-invariant subspaces \(T\)-cyclic?
The answer is no. Suppose \(T = I_{\mathbf{R}^3}: \mathbf{R}^3 \rightarrow \mathbf{R}^3\) is the identity map (the lecture's map was not recorded here; the identity is one map consistent with the claims below). Then
\(W = \{(x,y,0) \ | \ x, y \in \mathbf{R}\}\) is \(T\)-invariant. We just map it to itself. In fact \(T_W = I_W\).
So we see here that \(W\) is not \(T\)-cyclic. Take any \((x, y, 0) \in W\). Since \(T_W = I_W\), the \(T\)-cyclic subspace it generates is
\[span\{(x,y,0), T(x,y,0), T^2(x,y,0), ...\} = span\{(x,y,0)\},\]
which is at most one-dimensional, while \(\dim W = 2\). So no single vector generates \(W\).
Theorem: Let \(T: V \rightarrow V\) be linear and let \(W\) be the \(T\)-cyclic subspace generated by a nonzero \(v \in V\), with \(\dim W = k\). Then
- (a) \(\{v,T(v),...,T^{k-1}(v)\}\) is a basis for \(W\)
- (b) If \(T^k(v) = a_0v + ... + a_{k-1}T^{k-1}(v)\), then the characteristic polynomial of \(T_W\) is \((-1)^{k+1}(a_0 + a_1t + ... + a_{k-1}t^{k-1} - t^k)\)
For (a): since \(\dim W = k\) is finite, the infinitely many spanning vectors \(v, T(v), T^2(v), ...\) cannot all be independent. It is then natural to ask whether the first \(k\) of them already span \(W\), and the answer is yes.
Proof:
We'll start with a proof of (b) given (a). To say anything about the characteristic polynomial, we need to find a basis and then compute the matrix with respect to that basis. A natural choice is the basis given to us in (a), so
\[\beta_W = \{v, T(v), ..., T^{k-1}(v)\}.\]
Next we need to compute \([T_W]_{\beta_W}^{\beta_W}\). The first column holds the coefficients of \(T_W(v) = T(v)\) with respect to \(\beta_W\): a 1 in the second entry and 0 everywhere else. The same shifted pattern holds for each of the first \(k-1\) columns: column \(j\) has a 1 in entry \(j+1\) and 0 elsewhere. But for the last column, we need to represent \(T_W(T^{k-1}(v)) = T^k(v)\). We're given \(T^k(v) = a_0v + ... + a_{k-1}T^{k-1}(v)\), so the entries of the last column are \(a_0, a_1, ..., a_{k-1}\):
\[[T_W]_{\beta_W}^{\beta_W} = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_0 \\ 1 & 0 & \cdots & 0 & a_1 \\ 0 & 1 & \cdots & 0 & a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{k-1} \end{pmatrix}\]
Now, we can compute the determinant by expanding across the first row:
\[\det([T_W]_{\beta_W}^{\beta_W} - tI_k) = -t\det\begin{pmatrix} -t & 0 & \cdots & a_1 \\ 1 & -t & \cdots & a_2 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{k-1}-t \end{pmatrix} + (-1)^{1+k}a_0\det\begin{pmatrix} 1 & -t & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.\]
The last determinant, for the \(a_0\) component, is 1 because that submatrix is upper triangular, so its determinant is the product of the entries on the diagonal, which are all 1.
Next, we want to compute the remaining determinant, but notice that it has the same pattern, just one size smaller and with coefficients \(a_1, ..., a_{k-1}\). Repeating the expansion (formally, by induction on \(k\)) gives
\[\det([T_W]_{\beta_W}^{\beta_W} - tI_k) = (-1)^{k+1}(a_0 + a_1t + ... + a_{k-1}t^{k-1} - t^k),\]
which is the claimed characteristic polynomial of \(T_W\). \(\blacksquare\)
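A numerical check of part (b), with my own choice of coefficients: for \(k = 3\) and \(T^3(v) = a_0v + a_1T(v) + a_2T^2(v)\), the matrix built in the proof should have characteristic polynomial \((-1)^{4}(a_0 + a_1t + a_2t^2 - t^3)\). We compare \(\det(C - tI)\) with this formula at five sample points; since both sides are degree-3 polynomials, agreeing at more than three points means they are equal.

```python
def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    result = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            result = -result
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    for i in range(n):
        result *= M[i][i]
    return result

# Illustrative coefficients: T^3(v) = 2v + 3T(v) + 1*T^2(v).
a0, a1, a2 = 2.0, 3.0, 1.0
C = [[0.0, 0.0, a0],
     [1.0, 0.0, a1],
     [0.0, 1.0, a2]]  # the matrix of T_W from the proof, with k = 3

def formula(t):
    # (-1)^(k+1) * (a0 + a1*t + ... + a_{k-1}*t^{k-1} - t^k) with k = 3
    return (-1) ** 4 * (a0 + a1 * t + a2 * t * t - t ** 3)

# det(C - tI) matches the claimed characteristic polynomial at every sample t.
for t in [0.0, 1.0, -1.0, 2.0, 0.5]:
    CtI = [[C[i][j] - (t if i == j else 0.0) for j in range(3)] for i in range(3)]
    assert abs(det(CtI) - formula(t)) < 1e-9
```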
References
- Math416 by Ely Kerman