Definition
The determinant is a map $$ \begin{align*} \det: &M_{n \times n} \rightarrow \mathbf{R} \\ &A \rightarrow \det(A) \end{align*} $$


Properties of the determinant

  1. \(A\) is invertible iff \(\det(A) \neq 0\)
  2. \(\det(A)\) has geometric meaning.
  3. To see this, let
    $$ \begin{align*} [0,1]^n = \{(x_1,...,x_n) \in \mathbf{R}^n \ | \ x_i \in [0,1]\} \end{align*} $$
    What does this represent? In \(\mathbf{R}^2\), this is a unit square. In general, it's a cube determined by the vectors \(\{e_1, e_2,...,e_n\}\).
    What does this have to do with the determinant?
    Consider a matrix \(A \in M_{n \times n}\). When we apply \(A\) to each vector of the standard basis, we get \(Ae_1, Ae_2, \ldots, Ae_n\).
    \(L_A([0,1]^n)\) is the parallelepiped determined by \(\{Ae_1, Ae_2, \ldots, Ae_n\}\) and \(volume(L_A([0,1]^n)) = |\det(A)|\).

  4. The determinant map is not linear except for \(n = 1\). (It is, however, linear in the rows of \(A\), as we will see.)
  5. \(\det(AB) = \det(A)\det(B)\).
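Properties 1 and 5 are easy to illustrate numerically. The following is a quick sketch using NumPy (not part of the lecture; the particular matrices are arbitrary choices):

```python
import numpy as np

# Two arbitrary 3x3 matrices to illustrate the properties numerically.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

# Property 5: det(AB) = det(A) det(B) (up to floating-point error).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Property 1: det(A) != 0, so A should be invertible.
assert not np.isclose(np.linalg.det(A), 0.0)
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(3))
```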




Definition of the Determinant

The definition of \(\det: M_{n \times n} \rightarrow \mathbf{R}\) is inductive on \(n\).

For \(n = 1\):

$$ \begin{align*} &M_{1\times 1} = \{(a)\} \leftrightarrow \mathbf{R} \\ &\det((a)) = a \end{align*} $$

Checking the four properties:

  1. Since the matrix has only one entry, the inverse exists and is \((a)^{-1} = (\frac{1}{a})\) if and only if \(a = \det((a)) \neq 0\).
  2. Checking the second property we see that for \(L_{(a)}: \mathbf{R}^1 \rightarrow \mathbf{R}^1\)
    $$ \begin{align*} L_{(a)} : &\mathbf{R}^1 \rightarrow \mathbf{R}^1 \\ &x \rightarrow ax \\ L_{(a)}([0,1]) &= \begin{cases} [0,a] \quad \text{if } a \geq 0 \\ [a,0] \quad \text{if } a \lt 0\end{cases}\\ volume(L_{(a)}([0,1])) &= |a| = |\det((a))|. \end{align*} $$
  3. The determinant is linear for \(n = 1\)
  4. \( \det((a)(b)) = \det((ab)) = ab = \det((a))\det((b)) \)


For \(n = 2\):

$$ \begin{align*} \det: \ &M_{2 \times 2} \rightarrow \mathbf{R} \\ &\begin{pmatrix}a & b \\ c & d \end{pmatrix} \rightarrow ad - bc \end{align*} $$

Checking the four properties:

  1. We previously proved that \(\begin{pmatrix}a & b \\ c & d \end{pmatrix}\) is invertible if and only if \(ad - bc \neq 0\).
  2. This property takes some work to see. Check the book for a nice description of it.
  3. The determinant is not linear for \(n = 2\)
  4. We want to check that \(\det(AB) = \det(A)\det(B)\). To see this:
    $$ \begin{align*} \begin{pmatrix}a & b \\ c & d \end{pmatrix}\begin{pmatrix}\alpha & \beta \\ \gamma & \delta \end{pmatrix} &= \begin{pmatrix}a\alpha + b\gamma & a\beta + b\delta \\ c\alpha + d\gamma & c\beta + d\delta \end{pmatrix} \\ \det(AB) &= (a\alpha + b\gamma)(c\beta + d\delta) - (a\beta + b\delta)(c\alpha + d\gamma) \\ &= a\alpha d\delta + b\gamma c\beta - a\beta d\gamma - b\delta c\alpha \\ &= (ad - bc)(\alpha\delta - \beta\gamma)\\ &= \det(A) \det(B). \\ \end{align*} $$
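The algebra above can also be double-checked symbolically. Here is a sketch using SymPy, with symbol names mirroring the computation:

```python
from sympy import symbols, Matrix, simplify

a, b, c, d = symbols('a b c d')
alpha, beta, gamma, delta = symbols('alpha beta gamma delta')

A = Matrix([[a, b], [c, d]])
B = Matrix([[alpha, beta], [gamma, delta]])

# det(AB) - det(A)det(B) should simplify to 0 identically.
difference = (A * B).det() - A.det() * B.det()
assert simplify(difference) == 0
```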


So now what about the general case?

$$ \begin{align*} \det: \ &M_{n \times n} \rightarrow \mathbf{R} \end{align*} $$

Define \(\tilde{A_{ij}}\) as the \((n-1)\times(n-1)\) matrix obtained from \(A\) by deleting its \(i\)th row and \(j\)th column.

For example

$$ \begin{align*} A &= \begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \\ \tilde{A_{23}} &= \begin{pmatrix}1 & 2 \\ 7 & 8 \end{pmatrix}, \quad \tilde{A_{31}} = \begin{pmatrix}2 & 3 \\ 5 & 6 \end{pmatrix} \end{align*} $$

So now for \(n \geq 2\) and \(A \in M_{n \times n}\)

$$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \end{align*} $$
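This inductive definition translates directly into a recursive function. Below is a minimal sketch in Python; the helper names `minor` and `det` are my own, and rows/columns are 0-indexed in code rather than 1-indexed as in the formula:

```python
def minor(A, i, j):
    """Return the submatrix of A with row i and column j deleted
    (the matrix written as A-tilde_{ij} above, but 0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:                      # base case: det((a)) = a
        return A[0][0]
    # inductive step: sum over j of (-1)^(1+j) A_{1j} det(A-tilde_{1j});
    # with 0-indexed j the sign (-1)^(1+j) becomes (-1)^j.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 2, 3], [1, 0, 2], [2, 1, 1]]))  # prints 7
```

This is the same matrix as in the worked example further down, and the recursion reproduces its determinant. (The running time is \(O(n!)\), which is exactly the computational problem the later theorems help with.)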

Remark 1:

$$ \begin{align*} (-1)^k &= \begin{cases} 1 & k \text{ even} \\ -1 & k \text{ odd} \end{cases} \end{align*} $$

Remark 2:

$$ \begin{align*} \det\left( \begin{pmatrix}A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}\right) &= (-1)^{1+1}A_{11} \det((A_{22})) + (-1)^{1+2}A_{12}\det((A_{21})) \\ &= A_{11}A_{22} - A_{12}A_{21}. \end{align*} $$




Example

Compute the determinant for

$$ \begin{align*} \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 2 \\ 2 & 1 & 1 \end{pmatrix} \end{align*} $$
$$ \begin{align*} \det(\tilde{A_{11}}) = \det \left( \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix}\right) &= -2 \\ \det(\tilde{A_{12}}) = \det \left( \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}\right) &= -3 \\ \det(\tilde{A_{13}}) = \det \left( \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}\right) &= 1 \\ \end{align*} $$
$$ \begin{align*} \det(A) &= (-1)^{1+1}(1)(-2) + (-1)^{1+2}(2)(-3) + (-1)^{1+3}(3)(1) \\ &= -2 + 6 + 3 = 7. \end{align*} $$


Theorem
\(\det(A)\) is linear in rows of \(A\).


What does this mean? Suppose we have three matrices \(A, B\), and \(C\) in \(M_{n \times n}\) which are equal in all rows except the \(r\)th row, and suppose the \(r\)th rows satisfy \(a_r = b_r + kc_r\).

$$ \begin{align*} B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, C = \begin{pmatrix} \pi & \pi \\ 1 & 1 \end{pmatrix}, A = \begin{pmatrix} 1+k\pi & 1+k\pi \\ 1 & 1 \end{pmatrix} \end{align*} $$

Here, the matrices are all equal except for the \(r\)th row, which is the first row, so \(r = 1\). The \(r\)th row of \(A\) is \(a_r = b_r + kc_r\). In this case,

$$ \begin{align*} \det(A) = \det(B) + k\det(C) \end{align*} $$
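This identity can be checked numerically on random matrices. Below is a sketch using NumPy; the seed, the choice of row \(r = 1\), and \(k = 3\) are arbitrary, and rows are 0-indexed in code:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3.0                          # arbitrary scalar
B = rng.standard_normal((3, 3))
C = B.copy()
C[1] = rng.standard_normal(3)    # B and C agree except in row r = 1
A = B.copy()
A[1] = B[1] + k * C[1]           # a_r = b_r + k c_r

# Linearity in the r-th row: det(A) = det(B) + k det(C).
assert np.isclose(np.linalg.det(A), np.linalg.det(B) + k * np.linalg.det(C))
```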

Proof:
We have two cases: either \(r = 1\), or \(r\) is some row other than the first. Why split this way? Because the current definition of the determinant "favors" the first row, so we treat that row separately.

Case 1 (\(r = 1\)):
Suppose the matrices differ in the first row where the first row of \(A\) is some linear combination of the first row in \(B\) and the first row in \(C\). We know that

$$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \end{align*} $$

However we know that every entry in the first row of \(A\) can be written as a linear combination of the entries in \(B\) and \(C\) and

$$ \begin{align*} A_{1j} = B_{1j} + kC_{1j}. \end{align*} $$

Additionally, by the definition that we have of computing the determinant,

$$ \begin{align*} \tilde{A_{1j}} = \tilde{B_{1j}} = \tilde{C_{1j}}. \end{align*} $$

Therefore,

$$ \begin{align*} \det(A) &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} (B_{1j} + kC_{1j}) \det(\tilde{A_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} B_{1j} \det(\tilde{A_{1j}}) + \sum_{j=1}^{n} (-1)^{1+j} kC_{1j} \det(\tilde{A_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} B_{1j} \det(\tilde{B_{1j}}) + k\sum_{j=1}^{n} (-1)^{1+j} C_{1j} \det(\tilde{C_{1j}}) \\ &= \det(B) + k\det(C). \end{align*} $$

Case 2 (\(r > 1\)): By Induction on \(n\)
Base Case: When \(n = 1\), the determinant is linear.

Inductive Step: Suppose it is true for \(n - 1\). Then,

$$ \begin{align*} \det(A) &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \end{align*} $$

In this case, we know the matrices differ in the \(r\)th row where \(r > 1\), so the first row of each matrix is the same: \(A_{1j} = B_{1j} = C_{1j}\) for all \(j\). What about the determinants of \(\tilde{A_{1j}}, \tilde{B_{1j}}\), and \(\tilde{C_{1j}}\)? Since they are of size \((n-1) \times (n-1)\) and still agree in all rows except the one coming from row \(r\), we can use the inductive hypothesis to conclude that

$$ \begin{align*} \det(\tilde{A_{1j}}) &= \det(\tilde{B_{1j}}) + k \det(\tilde{C_{1j}}) \end{align*} $$

So now we can use all of these facts to compute the determinant,

$$ \begin{align*} \det(A) &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} (\det(\tilde{B_{1j}}) + k \det(\tilde{C_{1j}})) \\ &= \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{B_{1j}}) + k \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{C_{1j}}) \\ &= \sum_{j=1}^{n} (-1)^{1+j} B_{1j} \det(\tilde{B_{1j}}) + k \sum_{j=1}^{n} (-1)^{1+j} C_{1j} \det(\tilde{C_{1j}}) \\ &= \det(B) + k\det(C). \ \blacksquare \end{align*} $$


Theorem
For any \(r = 1,2,\ldots,n\), we have $$ \begin{align*} \det(A) = \sum_{j=1}^{n} (-1)^{r+j} A_{rj} \det(\tilde{A_{rj}}) \end{align*} $$
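In other words, the cofactor expansion can be taken along any row and the answer is unchanged. This can be verified symbolically; here is a sketch using SymPy's `minor_submatrix` (indices are 0-based in code, which leaves the parity of \(r+j\) unchanged):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [1, 0, 2],
            [2, 1, 1]])
n = A.rows

for r in range(n):  # expand along each row in turn
    expansion = sum((-1) ** (r + j) * A[r, j] * A.minor_submatrix(r, j).det()
                    for j in range(n))
    assert expansion == A.det()  # every row gives the same determinant
```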


The proof of this theorem requires the following lemma.

Lemma
Given \(A\). Let \(B\) be the matrix equal to \(A\) with the \(r\)th row replaced by \(e_j\), so $$ \begin{align*} B &= \begin{pmatrix} \bar{b_1} \\ \vdots \\ e_j \\ \vdots \\ \bar{b_n} \end{pmatrix} \end{align*} $$ Then, $$ \begin{align*} \det(B) &= (-1)^{r+j} \det(\tilde{B_{rj}}) \end{align*} $$


The proof of this lemma is in the textbook (page 214). (TODO: check). So now we’ll do the proof for the theorem.

Proof (Using the Lemma)
Given \(A\). Let \(B_j\) be the matrix equal to \(A\) with the \(r\)th row replaced by \(e_j\). By the technical lemma,

$$ \begin{align*} \det(B_j) &= (-1)^{r+j} \det(\tilde{(B_{j})_{rj}}) \end{align*} $$

(Note: this is the same expression as in the technical lemma; the matrix is simply called \(B_j\) here instead of \(B\). Moreover, \(\det(\tilde{(B_{j})_{rj}})\) means: take the matrix \(B_j\), remove row \(r\) and column \(j\), and compute the determinant of the result.)

Now, since \(B_j\) and \(A\) differ only in row \(r\), deleting row \(r\) and column \(j\) from either matrix produces the same \((n-1)\times(n-1)\) matrix. So this determinant is exactly \(\det(\tilde{A_{rj}})\), and we may substitute it in:

$$ \begin{align*} \det(B_j) &= (-1)^{r+j} \det(\tilde{(B_{j})_{rj}}) \\ &= (-1)^{r+j} \det(\tilde{A_{rj}}) \end{align*} $$

Next, notice that \(e_1, \ldots, e_n\) are the vectors of the standard basis for \(\mathbf{R}^n\), so the \(r\)th row of \(A\) can be expressed as a linear combination of them. Moreover, the coefficients of this linear combination are just the entries in the \(r\)th row of \(A\). So by the linearity theorem above,

$$ \begin{align*} \det(A) &= \sum_{j=1}^{n} A_{rj} \det(B_j) \\ &= \sum_{j=1}^{n} A_{rj} (-1)^{r+j} \det(\tilde{A_{rj}}). \end{align*} $$

as we wanted to show. \(\blacksquare\)

This gives the following corollary

Corollary 1
If \(A\) has a row of all zeros, then \(\det A = 0\).


This is all fine, but it still does not solve the problem of the determinant taking too long to compute, since a random matrix will likely not have a row of all zeros. We have another corollary:

Corollary 2
If \(A\) has two identical rows, then \(\det A = 0\).


Proof
By induction on \(n\).

Base Case: \(n = 2\).

$$ \begin{align*} \det\begin{pmatrix} a & b \\ a & b \end{pmatrix} = ab - ab = 0. \end{align*} $$

Inductive Step:
Assume this is true for \(n - 1\).

$$ \begin{align*} A &= \begin{pmatrix} \bar{a_1} \\ \vdots \\ \bar{a_i} \\ \vdots \\ \bar{a_j} \\ \vdots \\ \bar{a_n} \end{pmatrix}, \text{ where $\bar{a_i} = \bar{a_j}$} \end{align*} $$

The goal is to compute the determinant of \(A\) using the inductive hypothesis. If we choose \(r\) so that \(r \neq i\) and \(r \neq j\), then by the previous theorem we may expand along row \(r\) (writing the summation index as \(c\) to avoid clashing with the row index \(j\)):

$$ \begin{align*} \det(A) &= \sum_{c=1}^{n} (-1)^{r+c} A_{rc} \det(\tilde{A_{rc}}). \end{align*} $$

But each \(\det(\tilde{A_{rc}})\) is 0, since \(\tilde{A_{rc}}\) still contains the two identical rows and its size is \((n-1) \times (n-1)\), so the inductive hypothesis applies. Therefore,

$$ \begin{align*} \det(A) &= 0. \end{align*} $$

As we wanted to show. \(\blacksquare\)
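Corollary 2 is also easy to check exactly. A small sketch using SymPy (so there is no floating-point ambiguity; the matrix entries are arbitrary):

```python
from sympy import Matrix

# Rows 0 and 2 are identical, so by Corollary 2 the determinant must be 0.
A = Matrix([[1, 4, 7],
            [2, 5, 8],
            [1, 4, 7]])
assert A.det() == 0
```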



References

  • Video Lectures from Math416 by Ely Kerman.