Last time we proved the theorem that describes exactly how each type of row operation changes the value of the determinant, and we proved that the determinant of an upper (lower) triangular matrix is the product of its diagonal entries. Together these results give a way to compute the determinant: put a matrix \(A\) in REF, multiply the diagonal entries, and account for how each row operation we applied contributes to the determinant. Specifically, we saw that if we apply \(k\) row operations \(\mathcal{R}_1,...,\mathcal{R}_k\) to \(A\) to reach a matrix \(B\) in REF,

$$ \begin{align*} A \xrightarrow{\mathcal{R}_k,...,\mathcal{R}_2,\mathcal{R}_1} B \end{align*} $$

Then,

$$ \begin{align*} B_{11}...B_{nn} &= \mathcal{E}(\mathcal{R}_k)...\mathcal{E}(\mathcal{R}_1)\det(A), \\ \end{align*} $$

where

$$ \begin{align*} \mathcal{E}(\mathcal{R}) &= \begin{cases} -1, \quad \mathcal{R}\text{ type I } \\ c, \quad \mathcal{R}\text{ type II w/ scalar $c$ } \\ 1, \quad \mathcal{R}\text{ type III } \end{cases} \end{align*} $$

Therefore,

$$ \begin{align*} \det(A) &= \frac{B_{11}...B_{nn}}{ \mathcal{E}(\mathcal{R}_k)...\mathcal{E}(\mathcal{R}_1)} \end{align*} $$
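To make this procedure concrete, here is a minimal numerical sketch in Python with NumPy (the \(3 \times 3\) matrix and the helper name `det_via_ref` are made up for illustration). It reduces the matrix to REF while accumulating the \(\mathcal{E}(\mathcal{R})\) factors and then recovers the determinant from the diagonal entries.

```python
import numpy as np

def det_via_ref(A):
    """Reduce A to REF, tracking how each row operation scales the
    determinant, then multiply the diagonal entries of the result."""
    B = np.array(A, dtype=float)
    n = B.shape[0]
    factor = 1.0                       # running product of the E(R) scalars
    for col in range(n):
        pivot = col + np.argmax(np.abs(B[col:, col]))
        if np.isclose(B[pivot, col], 0.0):
            return 0.0                 # no pivot in this column => det(A) = 0
        if pivot != col:
            B[[col, pivot]] = B[[pivot, col]]
            factor *= -1.0             # type I (swap) multiplies det by -1
        for row in range(col + 1, n):  # type III operations leave det unchanged
            B[row] -= (B[row, col] / B[col, col]) * B[col]
    # B is now upper triangular, so B_11 ... B_nn = factor * det(A)
    return np.prod(np.diag(B)) / factor

A = np.array([[2., 1., 0.], [4., 3., 1.], [0., 5., 6.]])
print(det_via_ref(A), np.linalg.det(A))   # both should be approximately 2.0
```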

Since none of the factors \(\mathcal{E}(\mathcal{R})\) is ever zero, row operations never change whether the determinant is zero. In fact, \(\det(A) \neq 0\) is equivalent to each of the following:

  • \(\leftrightarrow \det(B) \neq 0\), where \(B\) is the REF of \(A\)
  • \(\leftrightarrow REF\) of \(A\) has \(n\) leading entries
  • \(\leftrightarrow A\) is invertible




Next, we want to prove the general theorem that \(\det(AB) = \det(A)\det(B)\), but in order to do so we’ll need a couple of preliminary theorems. The first is from lecture 18, where we stated it as a corollary without proof; we’ll now call it Theorem (a) and prove it.

Theorem (a)
\(A\) is invertible if and only if it can be written as a product of elementary matrices.


Proof:
\(A\) is invertible is equivalent to

  • \(\leftrightarrow\) the RREF of \(A\) is \(I_n\). When we computed the inverse, we needed \(A\)'s RREF to be the identity matrix.
  • \(\leftrightarrow\) This meant that we applied a sequence of elementary row operations on \(A\) to get the identity matrix. These elementary row operations can be represented with matrix multiplication (lecture 18). So \(E_k...E_1A = I_n\).
  • \(\leftrightarrow\) So we can solve for \(A\) and get \(A = E_1^{-1}...E_k^{-1}\).
  • \(\leftrightarrow\) The inverse of an elementary matrix is itself an elementary matrix, so \(A = E_1^{'}...E_k^{'}\) is a product of elementary matrices. \(\blacksquare\) (A concrete \(2 \times 2\) sketch of such a factorization is given below.)
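Here is a small numerical sketch of Theorem (a) in Python with NumPy; the \(2 \times 2\) matrix \(A\) and the elementary matrices \(E_1,...,E_4\) below are made up for illustration.

```python
import numpy as np

# Reduce A to I_2 with elementary row operations, each encoded as an
# elementary matrix, then recover A as the product of their inverses
# (which are themselves elementary matrices).
A = np.array([[2., 1.],
              [1., 1.]])

E1 = np.array([[0.5, 0.], [0., 1.]])    # type II:  scale row 1 by 1/2
E2 = np.array([[1., 0.], [-1., 1.]])    # type III: R2 <- R2 - R1
E3 = np.array([[1., 0.], [0., 2.]])     # type II:  scale row 2 by 2
E4 = np.array([[1., -0.5], [0., 1.]])   # type III: R1 <- R1 - (1/2) R2

assert np.allclose(E4 @ E3 @ E2 @ E1 @ A, np.eye(2))   # E_k ... E_1 A = I_n

# A = E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1}, a product of elementary matrices
recovered = np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3) @ np.linalg.inv(E4)
assert np.allclose(recovered, A)
```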




Next, we’ll state the second theorem that we will need.

Theorem (b)
\(A, B\) are invertible if and only if \(AB\) is invertible.



Proof:
\(\Rightarrow\): Suppose \(A\) and \(B\) are invertible. Then \(B^{-1}A^{-1}\) is an inverse of \(AB\), since \((AB)(B^{-1}A^{-1}) = I_n = (B^{-1}A^{-1})(AB)\), so \(AB\) is invertible with \((AB)^{-1} = B^{-1}A^{-1}\). (A quick numerical check of this identity appears after the proof.)

\(\Leftarrow\): Suppose that \(AB\) is invertible. This implies

  • \(\rightarrow L_{AB} = L_A \circ L_B\) is invertible.
  • \(\rightarrow L_B\) is one-to-one and \(L_A\) is onto. (We proved this in HW 5 or 6.)
  • \(\rightarrow\) Both \(L_A\) and \(L_B\) map a vector space to itself. (The statement in the homework was a bit vague here, but I did prove there that both matrices must be \(n \times n\), so the two linear transformations go between spaces of the same dimension.)
  • \(\rightarrow\) A linear map from a finite-dimensional vector space to itself is one-to-one if and only if it is onto, so \(L_A\) and \(L_B\) are both bijective and hence both invertible.
  • \(\rightarrow B, A\) are invertible. \(\blacksquare\)
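As a quick sanity check of the forward direction, here is a minimal NumPy sketch (the random matrices are purely illustrative; a generic random matrix is invertible).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (AB)^{-1} should agree with B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```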




Finally, we’re ready to prove the main theorem:

Theorem
For \(A, B \in M_{n \times n}\), $$ \begin{align*} \det(AB) = \det(A)\det(B). \end{align*} $$



Proof:
By Theorem (b), if either \(A\) or \(B\) fails to be invertible, then \(AB\) is not invertible either. Since a non-invertible matrix has determinant \(0\) (by the equivalence above), we get

$$ \begin{align*} \det(A)\det(B) = 0 = \det(AB) \end{align*} $$

So assume that \(A\) and \(B\) are both invertible. We have two cases:
Case 1: \(A\) is an elementary matrix, so \(A = E(\mathcal{R})\) for some row operation \(\mathcal{R}\). What do we know about the determinant of \(A\)? Since \(E(\mathcal{R})\) is the result of applying \(\mathcal{R}\) to the identity matrix, the type of \(\mathcal{R}\) determines the relationship between \(\det(A)\) and \(\det(I_n)\). So for,

$$ \begin{align*} I_n \xrightarrow{\mathcal{R}} E(\mathcal{R}) \end{align*} $$

We just need to know the type of operation we applied and use the previous theorem to determine \(\mathcal{E}(\mathcal{R})\) (whether it’s \(-1\), \(c\), or \(1\)). So we’ll have

$$ \begin{align*} \det(E(R)) = \mathcal{E}(\mathcal{R}) \det I_n = \mathcal{E}(\mathcal{R}) &= \begin{cases} -1, \quad \mathcal{R}\text{ type I } \\ c, \quad \mathcal{R}\text{ type II w/ scalar $c$ } \\ 1, \quad \mathcal{R}\text{ type III } \end{cases} \end{align*} $$

Putting all of this together

$$ \begin{align*} \det(AB) &= \det(E(\mathcal{R})B) \\ &= \mathcal{E}(\mathcal{R})\det(B) \\ &= \det(A)\det(B). \end{align*} $$
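For instance, here is a quick NumPy check of the three cases, using made-up \(3 \times 3\) elementary matrices of each type.

```python
import numpy as np

swap = np.eye(3); swap[[0, 1]] = swap[[1, 0]]   # type I:   swap rows 1 and 2
scale = np.eye(3); scale[1, 1] = 5.0            # type II:  scale row 2 by c = 5
add = np.eye(3); add[2, 0] = -3.0               # type III: R3 <- R3 - 3 R1

# determinants should be -1, 5, and 1 respectively
print(np.linalg.det(swap), np.linalg.det(scale), np.linalg.det(add))
```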

For the general case, Theorem (a) lets us write \(A\) as a product of elementary matrices, \(A = E_1E_2...E_k\), and so

$$ \begin{align*} \det(AB) &= \det(E_1E_2...E_kB) \\ &= \det(E_1(E_2...E_kB)) \text{ (matrix multiplication is associative)} \\ &= \det(E_1)\det(E_2...E_kB) \text{ (by Case 1)} \\ &= \det(E_1)\det(E_2)...\det(E_k)\det(B) \end{align*} $$

Notice here that the product \(\det(E_1)\det(E_2)\) is just \(\det(E_1E_2)\) because \(E_1\) is an elementary matrix (we’re applying Case 1 in reverse). We can continue in this way until we recover \(\det(A)\) as follows

$$ \begin{align*} \det(AB) &= \det(E_1)\det(E_2)...\det(E_k)\det(B) \\ &= \det(E_1E_2)...\det(E_k)\det(B) \\ &= \det(E_1E_2...E_k)\det(B) \\ &= \det(A)\det(B). \blacksquare \end{align*} $$
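A quick numerical check of the theorem (the random matrices are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# det(AB) should agree with det(A) * det(B) up to floating-point error
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```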


Cramer's Rule

Observation: If \(A \in M_{n \times n}\) is invertible, then any system \(A\bar{x}=\bar{b}\) has a unique solution.

$$ \begin{align*} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} &= \bar{x} = A^{-1}\bar{b}. \end{align*} $$

Cramer’s rule lets us analyze the individual components of the solution in this setting. What does this mean? For each \(k = 1,...,n\),

$$ \begin{align*} x_k = \frac{\det(M_k)}{\det(A)}, \end{align*} $$

where \(M_k\) is obtained from \(A\) by replacing the \(k\)th column with \(\bar{b}\). This allows us to solve for a particular component of the solution without having to solve for everything. It also lets us analyze how the solution depends on \(A\) and \(\bar{b}\). Why is this true?
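Before the proof, here is a small NumPy sketch of the rule on a made-up \(3 \times 3\) system, comparing each \(x_k = \det(M_k)/\det(A)\) against the solution returned by np.linalg.solve.

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [4., 3., 1.],
              [0., 5., 6.]])
b = np.array([1., 2., 3.])

x = np.linalg.solve(A, b)          # the unique solution x = A^{-1} b
for k in range(3):
    M_k = A.copy()
    M_k[:, k] = b                  # replace the k-th column of A with b
    x_k = np.linalg.det(M_k) / np.linalg.det(A)
    assert np.isclose(x_k, x[k])
```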

Proof: Let

$$ \begin{align*} A &= \begin{pmatrix} a_1 & \cdots & a_k & \cdots & a_n \end{pmatrix} \\ M_k &= \begin{pmatrix} a_1 & \cdots & \bar{b} & \cdots & a_n \end{pmatrix}. \end{align*} $$

The claim is that if we divide the determinant of \(M_k\) by the determinant of \(A\), we get the \(k\)th component of the solution. Define the matrix \(X_k\) by taking the identity matrix and replacing its \(k\)th column with \(\bar{x}\).

$$ \begin{align*} I_n &= \begin{pmatrix} e_1 & \cdots & e_k & \cdots & e_n \end{pmatrix} \\ X_k &= \begin{pmatrix} e_1 & \cdots & \bar{x} & \cdots & e_n \end{pmatrix} \end{align*} $$

Now, compute the product of \(A\) and \(X_k\).

$$ \begin{align*} AX_k &= \begin{pmatrix} Ae_1 & \cdots & A\bar{x} & \cdots & Ae_n \end{pmatrix} \text{ (definition of matrix multiplication)} \\ &= \begin{pmatrix} a_1 & \cdots & \bar{b} & \cdots & a_n \end{pmatrix} \text{ (because $Ae_j$ is just the $j$th column of $A$ and $A\bar{x} = \bar{b}$)}\\ &= M_k \end{align*} $$

So now take the determinant of both sides to see that,

$$ \begin{align*} \det(AX_k) &= \det(M_k) \\ \det(A)\det(X_k) &= \det(M_k) \text{ (by the previous theorem)}\\ \det(X_k) &= \frac{\det(M_k)}{\det(A)} \end{align*} $$

Let’s compute \(\det(X_k)\) by cofactor expansion along the \(k\)th row. For \(j \neq k\), the \(j\)th column of \(X_k\) is \(e_j\), so \((X_k)_{kj} = 0\) and the only surviving term is \(j = k\):

$$ \begin{align*} \det(X_k) &= \sum_{j=1}^n(-1)^{k+j}(X_k)_{kj}\det(\tilde{X_k})_{kj} \\ &= (-1)^{k+k}\,x_k\det(I_{n-1}) \\ &= x_k. \ \blacksquare \end{align*} $$
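The mechanism of the proof can also be checked numerically (same made-up system as in the earlier sketch): build \(X_k\) from the identity, verify \(AX_k = M_k\), and check that \(\det(X_k) = x_k\).

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [4., 3., 1.],
              [0., 5., 6.]])
b = np.array([1., 2., 3.])
x = np.linalg.solve(A, b)

for k in range(3):
    X_k = np.eye(3)
    X_k[:, k] = x                  # identity with its k-th column replaced by x
    M_k = A.copy()
    M_k[:, k] = b                  # A with its k-th column replaced by b
    assert np.allclose(A @ X_k, M_k)             # A X_k = M_k
    assert np.isclose(np.linalg.det(X_k), x[k])  # det(X_k) = x_k
```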


Theorem
Suppose \(F: M_{n \times n} \rightarrow \mathbf{R}\) such that
  1. \(F\) is linear in each row.
  2. \(F(A) = 0\) if \(A\) has two identical rows.
  3. \(F(I_n) = 1.\)
Then \(F = \det\).


Any map with those three properties has to be the determinant!



References

  • Video Lectures from Math416 by Ely Kerman.