Theorem 4.1
The function \(\det: M_{2 \times 2}(\mathbf{F}) \rightarrow \mathbf{F}\) is a linear function of each row of a \(2 \times 2\) matrix when the other row is held fixed. That is, if \(u, v\), and \(w\) are in \(\mathbf{F}^2\) and \(k\) is a scalar, then $$ \begin{align*} \det \begin{pmatrix} u + kv \\ w \end{pmatrix} = \det \begin{pmatrix} u \\ w \end{pmatrix} + k \det \begin{pmatrix} v \\ w \end{pmatrix} \end{align*} $$ and $$ \begin{align*} \det \begin{pmatrix} w \\ u + kv \end{pmatrix} = \det \begin{pmatrix} w \\ u \end{pmatrix} + k \det \begin{pmatrix} w \\ v \end{pmatrix} \end{align*} $$


Proof

Let \(u = (a_1, a_2), v = (b_1, b_2)\) and \(w = (c_1, c_2)\) be in \(\mathbf{F}^2\) and let \(k\) be a scalar. Then

$$ \begin{align*} \det \begin{pmatrix} u \\ w \end{pmatrix} + k \det \begin{pmatrix} v \\ w \end{pmatrix} &= \det \begin{pmatrix} a_1 & a_2 \\ c_1 & c_2 \end{pmatrix} + k \det \begin{pmatrix} b_1 & b_2 \\ c_1 & c_2 \end{pmatrix} \\ &= (a_1c_2 - a_2c_1) + k(b_1c_2 - b_2c_1) \\ &= a_1c_2 - a_2c_1 + kb_1c_2 - kb_2c_1 \\ &= c_2(a_1 + kb_1) - c_1(a_2 +kb_2) \\ &= \det \begin{pmatrix} a_1+kb_1 & a_2+kb_2 \\ c_1 & c_2 \end{pmatrix} = \det \begin{pmatrix} u + kv \\ w \end{pmatrix} \end{align*} $$

Linearity in the second row is verified by a similar calculation.
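As a quick numerical check (the vectors and scalar here are chosen purely for illustration), take \(u = (1, 2)\), \(v = (3, 4)\), \(w = (5, 6)\), and \(k = 2\). Then \(u + kv = (7, 10)\), and $$ \begin{align*} \det \begin{pmatrix} 7 & 10 \\ 5 & 6 \end{pmatrix} = 42 - 50 = -8 = (6 - 10) + 2(18 - 20) = \det \begin{pmatrix} 1 & 2 \\ 5 & 6 \end{pmatrix} + 2 \det \begin{pmatrix} 3 & 4 \\ 5 & 6 \end{pmatrix}, \end{align*} $$ as the theorem predicts.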

Theorem 4.2
Let \(A \in M_{2 \times 2}(\mathbf{F})\). Then the determinant of \(A\) is nonzero if and only if \(A\) is invertible. Moreover, if \(A\) is invertible, then $$ \begin{align*} A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{pmatrix} \end{align*} $$


Proof

Suppose first that \(\det(A) \neq 0\). Then we can define \(M = \frac{1}{\det(A)} \begin{pmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{pmatrix}\) and verify directly that \(AM = MA = I\), which proves that \(A\) is invertible and that \(M = A^{-1}\). Conversely, suppose that \(A\) is invertible. Then \(A\) has rank 2, so the first column of \(A\) is nonzero; that is, either \(A_{11} \neq 0\) or \(A_{21} \neq 0\). If \(A_{11} \neq 0\), we can add \(-A_{21}/A_{11}\) times row 1 to row 2 to obtain

$$ \begin{align*} \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} - \frac{A_{12}A_{21}}{A_{11}}\end{pmatrix} \end{align*} $$

Because elementary row operations preserve rank, this matrix also has rank 2. Its \((2,1)\) entry is now 0, so the entry \(A_{22} - \frac{A_{12}A_{21}}{A_{11}}\) cannot be zero (otherwise the second row would be zero and the rank would be less than 2). Hence \(\det(A) = A_{11}A_{22} - A_{12}A_{21} = A_{11}\left(A_{22} - \frac{A_{12}A_{21}}{A_{11}}\right) \neq 0\). In the other case, when \(A_{21} \neq 0\), we can instead add \(-A_{11}/A_{21}\) times row 2 to row 1 and reach the same conclusion.
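To illustrate the inverse formula with a concrete matrix (chosen only for this example), let \(A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\), so \(\det(A) = 1 \cdot 4 - 2 \cdot 3 = -2 \neq 0\). The formula gives $$ \begin{align*} A^{-1} = \frac{1}{-2} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}, \end{align*} $$ and multiplying out confirms that \(AA^{-1} = A^{-1}A = I\).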

Definition
Let \(A \in M_{n \times n}(\mathbf{F})\). If \(n = 1\), so that \(A = (A_{11})\), we define \(\det(A) = A_{11}\). For \(n \geq 2\), we define \(\det(A)\) recursively as $$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \end{align*} $$ where \(\tilde{A_{ij}}\) denotes the \((n-1) \times (n-1)\) matrix obtained from \(A\) by deleting row \(i\) and column \(j\). The scalar \(\det(A)\) is called the determinant of \(A\) and is also denoted by \(|A|\). The scalar $$ \begin{align*} (-1)^{i+j} \det(\tilde{A_{ij}}) \end{align*} $$ is called the cofactor of the entry of \(A\) in row \(i\), column \(j\).
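To illustrate the recursive definition (the entries below are chosen only as an example), expanding a \(3 \times 3\) matrix along its first row gives $$ \begin{align*} \det \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{pmatrix} &= 1 \det \begin{pmatrix} 5 & 6 \\ 8 & 10 \end{pmatrix} - 2 \det \begin{pmatrix} 4 & 6 \\ 7 & 10 \end{pmatrix} + 3 \det \begin{pmatrix} 4 & 5 \\ 7 & 8 \end{pmatrix} \\ &= 1(50 - 48) - 2(40 - 42) + 3(32 - 35) \\ &= 2 + 4 - 9 = -3. \end{align*} $$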



Theorem 4.3
The determinant of an \(n \times n\) matrix is a linear function of each row when the remaining rows are held fixed. That is, for \(1 \leq r \leq n\), we have $$ \begin{align*} \det \begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ u + kv \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix} = \det \begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ u \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix} + k \det \begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ v \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix} \end{align*} $$ where \(k\) is a scalar and \(u\), \(v\), and each \(a_i\) are row vectors in \(\mathbf{F}^n\).


Proof

By induction on \(n\). If \(n = 1\), the result is immediate.
Now suppose that \(n \geq 2\) and assume the hypothesis is true for \(n-1\); that is, the determinant of any \((n - 1) \times (n-1)\) matrix is a linear function of each row when the remaining rows are held fixed.

Let \(A\) be an \(n \times n\) matrix with rows \(a_1, a_2,\ldots,a_n\), and suppose that for some \(r\) \((1 \leq r \leq n)\) we have \(a_r = u + kv\) for some \(u, v \in \mathbf{F}^n\) and scalar \(k\). Write \(u = (b_1,\ldots,b_n)\) and \(v = (c_1,\ldots,c_n)\). Let \(B\) and \(C\) be the matrices obtained from \(A\) by replacing row \(r\) of \(A\) by \(u\) and \(v\), respectively. We want to show that \(\det(A) = \det(B) + k\det(C)\).

Case \(r = 1\): Here \(A\), \(B\), and \(C\) differ only in their first rows, so \(\tilde{A_{1j}} = \tilde{B_{1j}} = \tilde{C_{1j}}\) for every \(j\), while \(A_{1j} = B_{1j} + kC_{1j}\). By the definition of the determinant, $$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{1+j} (B_{1j} + kC_{1j}) \det(\tilde{B_{1j}}) = \sum_{j=1}^{n} (-1)^{1+j} B_{1j} \det(\tilde{B_{1j}}) + k \sum_{j=1}^{n} (-1)^{1+j} C_{1j} \det(\tilde{C_{1j}}) = \det(B) + k\det(C). \end{align*} $$

Case \(r > 1\): Consider the matrices \(\tilde{A_{1j}}\), \(\tilde{B_{1j}}\), and \(\tilde{C_{1j}}\), each obtained by deleting the first row and the \(j\)th column. They agree in every row except row \(r-1\) (the index drops by one because the first row has been deleted, so what was row \(r\) is now row \(r-1\)). Row \(r-1\) of \(\tilde{A_{1j}}\) is

$$ \begin{align*} (b_1 + kc_1,...,b_{j-1} + kc_{j-1},b_{j+1} + kc_{j+1},...,b_n + kc_n) \end{align*} $$

Moreover, these matrices are now of size \((n-1) \times (n-1)\) so we can invoke the inductive hypothesis to conclude that

$$ \begin{align*} \det(\tilde{A_{1j}}) = \det(\tilde{B_{1j}}) + k\det(\tilde{C_{1j}}) \end{align*} $$

Since \(r > 1\), the first rows of \(A\), \(B\), and \(C\) coincide, so \(A_{1j} = B_{1j} = C_{1j}\) for all \(j\). Let's now compute the determinant using the definition above:

$$ \begin{align*} \det(A) &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \left[\det(\tilde{B_{1j}}) + k\det(\tilde{C_{1j}})\right] \\ &= \sum_{j=1}^{n} (-1)^{1+j} B_{1j} \det(\tilde{B_{1j}}) + k \sum_{j=1}^{n} (-1)^{1+j} C_{1j} \det(\tilde{C_{1j}}) \\ &= \det(B) + k\det(C). \end{align*} $$

Therefore the result holds for \(n \times n\) matrices, which completes the induction. \(\ \blacksquare\)
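Here is a small illustration of the theorem with \(n = 3\), \(r = 2\), and \(k = 1\) (the entries are chosen only for convenience), splitting row 2 as \((0, 5, 0) = (0, 2, 0) + 1 \cdot (0, 3, 0)\): $$ \begin{align*} \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix} = 5 = 2 + 3 = \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \end{align*} $$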


Theorem 4.3 (Corollary)
If \(A \in M_{n \times n}(\mathbf{F})\) has a row consisting entirely of zeros, then \(\det(A)=0\).


Proof TODO (Exercise 4.2, 24)


Theorem 4.3 (Corollary)
Let \(B \in M_{n \times n}(\mathbf{F})\) where \(n \geq 2\). If row \(i\) of \(B\) equals \(e_k\) for some \(k \ (1 \leq k \leq n)\), then \(\det(B) = (-1)^{i+k}\det(\tilde{B_{ik}})\)


Proof
By Induction on \(n\).

Base Case \(n = 2\): TODO

Inductive Case: Assume the result holds for \((n - 1) \times (n - 1)\) matrices. We will show it holds for \(n \times n\) matrices. Let \(B\) be an \(n \times n\) matrix whose \(i\)th row is \(e_k\). If \(i = 1\), then we are done by the definition of the determinant, so suppose that \(i \neq 1\).

Now, for each column \(j \neq k\), let \(C_{ij}\) be the \((n-2) \times (n-2)\) matrix obtained from \(B\) by deleting rows 1 and \(i\) and columns \(j\) and \(k\). Then row \(i - 1\) of \(\tilde{B_{1j}}\) is the following vector in \(\mathbf{F}^{n-1}\):

$$ \begin{align*} \begin{cases} e_{k-1} & \text{if } j < k \\ 0 & \text{if } j = k \\ e_{k} & \text{if } j > k \end{cases} \end{align*} $$

TODO ….
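A small instance of the corollary (with placeholder entries chosen only for illustration): take \(n = 3\), \(i = 2\), and \(k = 1\), so that row 2 of \(B\) is \(e_1\). Expanding along the first row, $$ \begin{align*} \det \begin{pmatrix} a & b & c \\ 1 & 0 & 0 \\ d & e & f \end{pmatrix} = a \cdot 0 - b(1 \cdot f - 0 \cdot d) + c(1 \cdot e - 0 \cdot d) = ce - bf = (-1)^{2+1} \det \begin{pmatrix} b & c \\ e & f \end{pmatrix} = (-1)^{2+1}\det(\tilde{B_{21}}), \end{align*} $$ as the corollary asserts.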

Theorem 4.4
The determinant of a square matrix can be evaluated by cofactor expansion along any row. That is, if \(A \in M_{n \times n}(\mathbf{F})\), then for any integer \(i\) \((1 \leq i \leq n)\), $$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{i+j} A_{ij} \det(\tilde{A_{ij}}) \end{align*} $$
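As a check of the statement (reusing the \(3 \times 3\) matrix from the example after the definition, whose first-row expansion gave \(-3\)), expanding along the second row yields the same value: $$ \begin{align*} \det \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{pmatrix} &= -4 \det \begin{pmatrix} 2 & 3 \\ 8 & 10 \end{pmatrix} + 5 \det \begin{pmatrix} 1 & 3 \\ 7 & 10 \end{pmatrix} - 6 \det \begin{pmatrix} 1 & 2 \\ 7 & 8 \end{pmatrix} \\ &= -4(-4) + 5(-11) - 6(-6) \\ &= 16 - 55 + 36 = -3. \end{align*} $$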


Proof




