Recall the vector space of linear transformations.
Question: For what \(A \in M_{m \times n}\) does the map \(L_A\) have an inverse?
By the Corollary from last lecture (if \(\dim(V)=n\) and \(T\) is invertible, then \(\dim(W)=\dim(V)=n\)), we require \(n = m\). So now we'll restrict our question to only the following map:
\[L_A: \mathbb{R}^n \rightarrow \mathbb{R}^n, \quad A \in M_{n \times n}.\]
When is this map invertible? For this map to be invertible, we need a map
\[(L_A)^{-1}: \mathbb{R}^n \rightarrow \mathbb{R}^n\]
such that
\[(L_A)^{-1} \circ L_A = I_{\mathbb{R}^n} = L_A \circ (L_A)^{-1}.\]
This map is linear since the inverse of a linear map is linear. Since it is linear, we can represent it with a matrix, so let \(L_B = (L_A)^{-1}\) for some \(B \in M_{n \times n}\). Therefore,
\[L_B \circ L_A = L_{BA} = L_{I_n} \quad \text{and} \quad L_A \circ L_B = L_{AB} = L_{I_n},\]
which means \(BA = I_n = AB\).
Based on this, we have the following definition.

Definition: A matrix \(A \in M_{n \times n}\) is invertible if there exists \(B \in M_{n \times n}\) such that \(AB = I_n = BA\).
Remark: The inverse of \(A\) is unique if it exists.
Proof:
Suppose \(BA = I_n = AB\) and \(CA = I_n = AC\). We need to show that \(C = B\). To do this,
\[C = C I_n = C(AB) = (CA)B = I_n B = B.\]
The inverse of \(A\) can be denoted by \(A^{-1}\).
Conditions for An Invertible Matrix
So now, given a matrix \(A\), how do we know whether it's invertible? Previously we asked whether a linear map is invertible, and a matrix is invertible exactly when the corresponding map is. So \(A\) is invertible if and only if
- \(\Leftrightarrow L_A: \mathbb{R}^n \rightarrow \mathbb{R}^n\) is invertible.
- \(\Leftrightarrow L_A\) is 1-1 and onto. (You only need one of these: by the dimension theorem, since the domain and codomain have the same dimension, either one implies the other.)
- \(\Leftrightarrow L_A\) is 1-1. (see above or theorem 2.5)
- \(\Leftrightarrow N(L_A) = \{\bar{0}\}\). Injectivity is equivalent (shown in the homework) to the null space containing only the zero vector.
- \(\Leftrightarrow \{\bar{x} \ | \ A\bar{x} = \bar{0}\} = \{\bar{0}\}\). This is just the null space. We can settle this by putting the matrix in a row echelon form!
- \(\Leftrightarrow\) a REF of \(A\) has leading entries in each column.
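The last equivalence can be checked numerically. The sketch below (not from the lecture; `is_invertible` is a name chosen here) uses NumPy's rank computation, which counts the leading entries a row echelon form would have, with a tolerance since floating-point row reduction is inexact.

```python
import numpy as np

def is_invertible(A, tol=1e-10):
    """A square matrix is invertible iff a REF of A has a leading
    entry in every column, i.e. rank(A) == n for A in M_{n x n}."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    return n == m and np.linalg.matrix_rank(A, tol=tol) == n
```

For example, `is_invertible([[1, 2], [2, 4]])` is `False`, since the second row is a multiple of the first and a REF has no leading entry in the second column.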
Matrix Representation for an Inverse Linear Transformation
We know that linear maps between finite-dimensional vector spaces have matrix representations. So now we want to relate the matrix representations of these maps to the matrices of their inverses.
Suppose \(T: V \rightarrow W\) is linear. Let \(\beta\) be a finite basis for \(V\) and \(\gamma\) be a finite basis for \(W\).
We know that \(T\) has a matrix representation from \(\beta\) to \(\gamma\), \([T]_{\beta}^{\gamma}\). The inverse of \(T\) will instead have a matrix representation from \(\gamma\) to \(\beta\), \([T^{-1}]_{\gamma}^{\beta}\). We want to know what the relationship is between these two matrices. We claim the following:
\[[T^{-1}]_{\gamma}^{\beta} = \left([T]_{\beta}^{\gamma}\right)^{-1}.\]
Proof:
To see this, let’s multiply the two matrices to verify that we get the identity matrix. By Theorem 2.11 (lecture 14), multiplying these matrices is the same as composing the corresponding maps:
\[[T]_{\beta}^{\gamma}\,[T^{-1}]_{\gamma}^{\beta} = [T \circ T^{-1}]_{\gamma}^{\gamma} = [I_W]_{\gamma} = I_n, \qquad [T^{-1}]_{\gamma}^{\beta}\,[T]_{\beta}^{\gamma} = [T^{-1} \circ T]_{\beta}^{\beta} = [I_V]_{\beta} = I_n.\]
Finding the Inverse of a Matrix
If \(A\) is invertible, how do we find its inverse?
We need \(B\) such that \(AB = I_n\). We can think of \(AB\) as \(A\) multiplying the columns of \(B\), and so
\[AB = A(\bar{b}_1 \ \cdots \ \bar{b}_n) = (A\bar{b}_1 \ \cdots \ A\bar{b}_n) = (\bar{e}_1 \ \cdots \ \bar{e}_n).\]
This is equivalent to solving the following \(n\) systems of equations in \(n\) variables:
\[A\bar{b}_1 = \bar{e}_1, \quad A\bar{b}_2 = \bar{e}_2, \quad \ldots, \quad A\bar{b}_n = \bar{e}_n.\]
We can solve these all at once because, if you notice, \(A\) is the same in every system. We do this by grouping them into the augmented matrix
\[(A \mid \bar{e}_1 \ \cdots \ \bar{e}_n) = (A \mid I_n).\]
We then put this augmented matrix in reduced row echelon form to get \((\mathrm{RREF}(A) \mid B)\). If \(\mathrm{RREF}(A) = I_n\), that is, if there is a leading entry in each of the \(n\) columns, then \(B = A^{-1}\). If \(\mathrm{RREF}(A) \neq I_n\), then \(A\) is not invertible.
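This procedure can be turned directly into code. The following is a sketch (the function name `inverse` is chosen here, not from the lecture) that row reduces \((A \mid I_n)\) with exact rational arithmetic, so no floating-point error creeps into the answer.

```python
from fractions import Fraction

def inverse(A):
    """Invert an n x n matrix by row reducing (A | I_n) to (I_n | A^{-1}).
    Returns None if RREF(A) != I_n, i.e. A is not invertible."""
    n = len(A)
    # Build the augmented matrix (A | I_n) with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no leading entry in this column: A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the leading entry is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate all other entries in this column (above and below).
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The left block is now I_n, so the right block is A^{-1}.
    return [row[n:] for row in M]
```

This is the same Gauss-Jordan process done by hand, just with the row operations made explicit.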
Example
Let \(A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\). Determine whether \(A^{-1}\) exists and, if so, find it.
\[\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & -2 & -3 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & \frac{3}{2} & -\frac{1}{2} \end{array}\right).\]
From this we see that \(A\) is invertible and its inverse is \(\begin{pmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{pmatrix}\).
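As a quick sanity check on the example (not part of the lecture), we can confirm numerically that multiplying \(A\) by the inverse we found, in either order, gives the identity matrix:

```python
import numpy as np

# The matrix from the example and the inverse found by row reduction.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_inv = np.array([[-2.0, 1.0], [1.5, -0.5]])

# Multiplying either way should give the 2 x 2 identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```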
Special Rule for 2 by 2 Matrices
In fact, any 2 by 2 matrix \(A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\) is invertible if and only if \(ad - bc \neq 0\), in which case
\[A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.\]
Proof: We will show that this is true by using the same procedure from the last example. We will also assume that \(a \neq 0\).
Note here that if \(ad - bc = 0\), then the system for the last column is inconsistent and \(A\) has no inverse. Otherwise, we can proceed until we get to
\[\left(\begin{array}{cc|cc} 1 & 0 & \frac{d}{ad-bc} & \frac{-b}{ad-bc} \\ 0 & 1 & \frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{array}\right).\]
And so the right block is the inverse. We still need to settle the case where \(a = 0\). (Exercise)
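The 2 by 2 formula is short enough to state as a one-liner. A sketch (the name `inv2x2` is chosen here), again using exact rational arithmetic:

```python
from fractions import Fraction

def inv2x2(a, b, c, d):
    """Inverse of ((a, b), (c, d)) via the ad - bc formula:
    swap a and d, negate b and c, divide by ad - bc."""
    det = Fraction(a) * d - Fraction(b) * c
    if det == 0:
        return None  # ad - bc = 0: the matrix is not invertible
    return ((d / det, -b / det), (-c / det, a / det))
```

Applied to the matrix from the example, `inv2x2(1, 2, 3, 4)` reproduces \(\begin{pmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{pmatrix}\).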
References
- Math416 by Ely Kerman