Determinants


Determinants

The determinant is a scalar value that encodes several important properties of a square matrix. It determines whether a matrix is invertible, measures how the matrix scales volumes under the corresponding linear transformation, and appears in formulas for eigenvalues and matrix inverses.

Definition: Determinants

For a \(1 \times 1\) matrix, \(\det A = a_{11}\). For \(n \geq 2\), the determinant of an \(n \times n\) matrix \(A\) is defined recursively as \[ \begin{align*} \det A &= \sum_{j=1}^n (-1)^{1+j} a_{1j} \det A_{1j} \\\\ &= a_{11}\det A_{11}-a_{12}\det A_{12}+\cdots \\\\ &\quad +(-1)^{1+n}a_{1n}\det A_{1n}\\\\ &= a_{11}C_{11}+a_{12}C_{12} + \cdots + a_{1n}C_{1n}. \end{align*} \] Here, \(A_{1j}\) is the \((n-1) \times (n-1)\) submatrix obtained by deleting the first row and \(j\)-th column of \(A\), and \(C_{ij}=(-1)^{i+j} \det A_{ij}\) is the \((i, j)\)-cofactor of \(A\).
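To make the recursion concrete, here is a minimal Python sketch (the helper name `det_cofactor` is ours, and NumPy is assumed) that computes the determinant by cofactor expansion across the first row, exactly mirroring the definition. It takes \(O(n!)\) time and is meant for illustration, not practical computation.

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion across the first row."""
    n = A.shape[0]
    if n == 1:                        # base case: det [a11] = a11
        return A[0, 0]
    total = 0
    for j in range(n):
        # A_1j: the submatrix with row 0 and column j deleted
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # (-1) ** j equals (-1)^{1+j} with 1-based column index j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[3, 2, 5], [7, 5, 4], [0, 1, 0]])
print(det_cofactor(A))  # 23, matching the worked example below
```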

Example:

Consider the determinant of the matrix: \[ A = \begin{bmatrix} 3 & 2 & 5 \\ 7 & 5 & 4 \\ 0 & 1 & 0 \end{bmatrix}. \] The determinant of \(A\) can be computed using the cofactor expansion across the first row \[ \begin{align*} \det A &= 3\det \begin{bmatrix} 5 & 4 \\ 1 & 0 \end{bmatrix} -2\det \begin{bmatrix} 7 & 4 \\ 0 & 0 \end{bmatrix} \\\\ &\quad +5\det \begin{bmatrix} 7 & 5 \\ 0 & 1 \end{bmatrix} \\\\ &= 3(-4)-2(0)+5(7) \\\\ &= 23. \end{align*} \]

The cofactor expansion can be performed along any row or column. In general, for an entry \(a_{ij}\) in a matrix \(A\) \[ \text{Cofactor of } a_{ij} = C_{ij} = (-1)^{i+j} \det A_{ij} \] where \(A_{ij}\) is the \((n-1) \times (n-1)\) submatrix obtained by removing the \(i\)-th row and \(j\)-th column of \(A\).

In addition, the cofactor matrix of \(A\) is the \(n \times n\) matrix where each entry is the cofactor of the corresponding element of \(A\). \[ \text{cofactor }(A) = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n}\\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}. \]
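Continuing the sketch, the cofactor matrix can be assembled entry by entry from the same minors (this reuses the illustrative `det_cofactor` helper above):

```python
def cofactor_matrix(A):
    """n x n matrix whose (i, j) entry is the cofactor C_ij of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # A_ij: delete row i and column j, then apply the sign (-1)^(i+j)
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * det_cofactor(minor)
    return C
```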

Example:

Using the third row of \(A\): \[ \begin{align*} \det A &= 0 -1\det \begin{bmatrix} 3 & 5 \\ 7 & 4 \end{bmatrix} + 0 \\\\ &= -1(12-35) \\\\ &= 23 \end{align*} \] Alternatively, expanding down the first column of \(A\): \[ \begin{align*} \det A &= 3\det \begin{bmatrix} 5 & 4 \\ 1 & 0 \end{bmatrix} -7\det \begin{bmatrix} 2 & 5 \\ 1 & 0 \end{bmatrix} \\\\ &\quad +0\det \begin{bmatrix} 2 & 5 \\ 5 & 4 \end{bmatrix} \\\\ &= 3(-4)-7(-5)+0 \\\\ &= 23. \end{align*} \]

These computations show that expanding along a row or column containing many zeros greatly reduces the work. Applying this idea repeatedly down the first column of a triangular matrix establishes an important property:
The determinant of a triangular matrix is the product of its diagonal entries.

Example:

\[ \begin{align*} &\det \begin{bmatrix} 1 & 7 & 5 & 4 & 2 \\ 0 & 2 & 9 & 2 & 3 \\ 0 & 0 & 3 & 5 & 7\\ 0 & 0 & 0 & 4 & 7\\ 0 & 0 & 0 & 0 & 5 \end{bmatrix} \\\\ &= 1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \\\\ &= 120. \end{align*} \]
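A quick NumPy check of this property, using `np.linalg.det` as the reference implementation:

```python
U = np.array([[1, 7, 5, 4, 2],
              [0, 2, 9, 2, 3],
              [0, 0, 3, 5, 7],
              [0, 0, 0, 4, 7],
              [0, 0, 0, 0, 5]], dtype=float)
print(np.prod(np.diag(U)))   # 120.0, the product of the diagonal entries
print(np.linalg.det(U))      # 120.0, up to floating-point rounding
```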

Additionally, note that transposing a matrix does not change its determinant. Thus, for any square matrix \(A\): \[ \det A = \det A^T. \]

Example:

Consider the matrix \(A\) again. The transpose of \(A\) is: \[ A^T = \begin{bmatrix} 3 & 7 & 0 \\ 2 & 5 & 1 \\ 5 & 4 & 0 \end{bmatrix}. \] Then \[ \begin{align*} \det A^T &= 3\det \begin{bmatrix} 5 & 1 \\ 4 & 0 \end{bmatrix} -7\det \begin{bmatrix} 2 & 1 \\ 5 & 0 \end{bmatrix} \\\\ &\quad +0\det \begin{bmatrix} 2 & 5 \\ 5 & 4 \end{bmatrix}\\\\ &= 3(-4) -7(-5)+0 = 23 \\\\ &= \det A. \end{align*} \]

This result is supported by the relationship between cofactors in \(A\) and \(A^T\). The cofactor of \(a_{1j}\) in \(A\) is equal to the cofactor of \(a_{j1}\) in \(A^T\). Therefore, the cofactor expansion across the first row of \(A\) is the same as the cofactor expansion down the first column of \(A^T\). This relationship holds for any row or column of any square matrix.
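This is easy to confirm numerically for the running example:

```python
A = np.array([[3, 2, 5], [7, 5, 4], [0, 1, 0]], dtype=float)
print(np.linalg.det(A))    # 23.0, up to rounding
print(np.linalg.det(A.T))  # 23.0 as well
```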

Furthermore, determinants are multiplicative: for \(n \times n\) matrices \(A\) and \(B\), \(\det(AB) = (\det A)(\det B)\).

Example:

Given: \[ \begin{align*} &A = \begin{bmatrix} 1 & 2 \\ 8 & 9 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 7 \\ 4 & 6 \end{bmatrix}, \\\\ &\text{and} \quad AB = \begin{bmatrix} 13 & 19 \\ 76 & 110 \end{bmatrix} \end{align*} \] then \[ \begin{align*} &\det A = 9-16=-7, \\\\ &\det B = 30-28 =2, \\\\ &\det AB = 1430-1444=-14 = (\det A)(\det B). \end{align*} \] Note that in general \(\det (A+B) \neq \det A + \det B\): here \(\det(A+B) = -18\), while \(\det A + \det B = -5\).
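The same computation, checked with NumPy (`@` is matrix multiplication):

```python
A = np.array([[1, 2], [8, 9]], dtype=float)
B = np.array([[5, 7], [4, 6]], dtype=float)
print(np.linalg.det(A @ B))                 # -14.0
print(np.linalg.det(A) * np.linalg.det(B))  # -14.0
print(np.linalg.det(A + B))                 # -18.0, not det A + det B = -5.0
```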

Finally, observe how elementary row operations affect determinants: \[ \begin{align*} \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} &= ad-bc \\\\ \det \begin{bmatrix} c & d \\ a & b \end{bmatrix} &= cb-ad=-(ad-bc) \\\\ \det \begin{bmatrix} a & b \\ kc & kd \end{bmatrix} &= kad-kbc = k(ad-bc) \\\\ \det \begin{bmatrix} a & b \\ c+2a & d+2b \end{bmatrix} &= a(d+2b)-b(c+2a) \\\\ &= ad+2ab-bc-2ab \\\\ &= ad-bc. \end{align*} \] In summary: interchanging two rows reverses the sign of the determinant, scaling a row by \(k\) scales the determinant by \(k\), and adding a multiple of one row to another leaves the determinant unchanged.
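A numerical illustration of all three operations on the \(3 \times 3\) matrix from the earlier examples (values up to floating-point rounding):

```python
A = np.array([[3, 2, 5], [7, 5, 4], [0, 1, 0]], dtype=float)

swap = A[[1, 0, 2]]                    # interchange the first two rows
scale = A.copy(); scale[2] *= 4        # scale the third row by k = 4
repl = A.copy(); repl[1] += 2 * A[0]   # add 2 * (row 1) to row 2

print(np.linalg.det(A))      # 23.0
print(np.linalg.det(swap))   # -23.0: an interchange negates the determinant
print(np.linalg.det(scale))  # 92.0: scaling a row scales the determinant by k
print(np.linalg.det(repl))   # 23.0: a replacement leaves it unchanged
```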

Cramer's Rule

The properties of determinants lead to a direct formula for solving linear systems. While row reduction is more efficient computationally, Cramer's rule provides an explicit expression for each component of the solution vector in terms of determinants.

Theorem 1: Cramer's Rule

Let \(A\) be an invertible \(n \times n\) matrix. \(\forall b \in \mathbb{R}^n\), \(Ax = b\) has a unique solution \(x\) whose entries are given by: \[ x_i = \frac{\det A_i(b)}{\det A}, \quad i = 1, 2, \dots, n. \tag{1} \] Here, \(A_i(b)\) is the matrix obtained from \(A\) by replacing its \(i\)-th column with \(b\).

Proof:

Consider the \(n \times n\) identity matrix \(I\). Replace the \(i\)-th column of \(I\) with \(x\), giving a modified identity matrix \(I_i(x)\): \[ I_i(x) = \begin{bmatrix} e_1 & e_2 & \cdots & x & \cdots & e_n \end{bmatrix}. \] Multiplying \(A\) by \(I_i(x)\), we obtain: \[ \begin{align*} AI_i(x) &= \begin{bmatrix} Ae_1 & Ae_2 & \cdots & Ax & \cdots & Ae_n \end{bmatrix} \\\\ &= \begin{bmatrix} a_1 & a_2 & \cdots & b & \cdots & a_n \end{bmatrix} \\\\ &= A_i(b). \end{align*} \] By the multiplicative property of determinants, \[ (\det A)(\det I_i(x))= \det A_i(b). \] The \(i\)-th row of \(I_i(x)\) is zero except possibly for the entry \(x_i\) in position \((i, i)\), so cofactor expansion across that row gives \(\det I_i(x) = x_i\). Hence \[ (\det A)\,x_i = \det A_i(b). \] Since \(A\) is invertible, \(\det A \neq 0\), and dividing through by \(\det A\) yields (1).
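Formula (1) translates directly into a short NumPy sketch (the name `cramer_solve` is ours; this costs \(n + 1\) determinant evaluations and is meant to mirror the theorem, not to replace `np.linalg.solve`):

```python
def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule; A must be invertible."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                      # A_i(b): replace column i with b
        x[i] = np.linalg.det(Ai) / det_A  # formula (1)
    return x

A = np.array([[3, 2, 5], [7, 5, 4], [0, 1, 0]], dtype=float)
b = np.array([1.0, 2.0, 3.0])
print(cramer_solve(A, b))
print(np.linalg.solve(A, b))  # agrees up to rounding
```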

Inverse Formula

Using similar techniques to those in Cramer's rule, we can derive an explicit formula for the inverse of a matrix in terms of determinants. This formula expresses \(A^{-1}\) using the cofactors of \(A\), providing theoretical insight into the structure of matrix inverses.

For practical computation with large matrices, row reduction methods (such as the algorithm in Elementary Matrices) are significantly more efficient than computing determinants and cofactors. However, the inverse formula is valuable for theoretical analysis and for small matrices where explicit symbolic expressions are desired.

Theorem 2:

Let \(A\) be an invertible matrix. Then the inverse of \(A\) is given by \[ A^{-1} = \frac{1}{\det A} \text{adj }(A) \] where \(\text{adj }A\) is the adjugate of \(A\), which is the transpose of the cofactor matrix of \(A\). \[ \begin{align*} \text{adj }(A) &= \text{cofactor }(A)^T \\\\ &= \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1}\\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}. \end{align*} \]

Example:

Consider \[ A = \begin{bmatrix} -1 & 2 & 3 \\ 2 & 1 & -4 \\ 3 & 3 & 2 \end{bmatrix}. \] To get \(\text{adj }A\), we need the nine cofactors of \(A\): \[ \begin{align*} C_{11} &= +(2+12) = 14, & C_{12} &= -(4+12) = -16, & C_{13} &= +(6-3) = 3\\ C_{21} &= -(4-9) = 5, & C_{22} &= +(-2-9) = -11, & C_{23} &= -(-3-6) = 9\\ C_{31} &= +(-8-3) = -11, & C_{32} &= -(4-6) = 2, & C_{33} &= +(-1-4) = -5 \end{align*} \] and \(\det A = -1(2+12)-2(4+12)+3(6-3) = -37\).

Since \((\text{adj }A)A = (\det A)I\), we can verify the determinant by computing \[ \begin{align*} (\text{adj }A)A &= \begin{bmatrix} 14 & 5 & -11 \\ -16 & -11 & 2 \\ 3 & 9 & -5 \end{bmatrix} \begin{bmatrix} -1 & 2 & 3 \\ 2 & 1 & -4 \\ 3 & 3 & 2 \end{bmatrix} \\\\ &= -37I. \end{align*} \] Thus, \(\det A = -37\), and \[ A^{-1} = \begin{bmatrix} \frac{-14}{37} & \frac{-5}{37} & \frac{11}{37} \\ \frac{16}{37} & \frac{11}{37} & \frac{-2}{37} \\ \frac{-3}{37} & \frac{-9}{37} & \frac{5}{37} \end{bmatrix}. \]
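The formula of Theorem 2, assembled from the illustrative `cofactor_matrix` helper sketched earlier:

```python
def adjugate_inverse(A):
    """A^{-1} = adj(A) / det(A), built from the cofactor matrix."""
    adj = cofactor_matrix(A).T  # adjugate = transpose of the cofactor matrix
    return adj / np.linalg.det(A)

A = np.array([[-1, 2, 3], [2, 1, -4], [3, 3, 2]], dtype=float)
print(adjugate_inverse(A))  # matches the A^{-1} computed above
print(np.linalg.inv(A))     # reference answer
```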

Invertible Matrix Theorem

Throughout our study of linear algebra, we have encountered various characterizations of invertible matrices. The Invertible Matrix Theorem consolidates these into a single powerful statement: for square matrices, properties related to existence and uniqueness of solutions, linear independence, spanning, linear transformations, and determinants are all logically equivalent. This means that establishing any one of these properties automatically implies all the others.

Theorem 3: Invertible Matrix Theorem

Let \(A\) be an \(n \times n\) matrix. Then the following statements are logically equivalent (a numerical sanity check of several of them follows the list).

  1. \(A\) is invertible.
  2. There is an \(n \times n\) matrix \(B\) such that \(AB = I\) and \(BA = I\).
  3. \(Ax = 0\) has only the trivial solution.
  4. \(A\) has \(n\) pivot positions.
  5. \(A\) is row equivalent to \(I_n\).
  6. \(\forall b \in \mathbb{R}^n , Ax = b\) has at least one solution.
  7. The columns of \(A\) span \(\mathbb{R}^n\).
  8. The linear transformation \(x \mapsto Ax\) maps \(\mathbb{R}^n\) onto \(\mathbb{R}^n\).
  9. The columns of \(A\) form a linearly independent set.
  10. The linear transformation \(x \mapsto Ax\) is one-to-one.
  11. \(A^T\) is invertible.
  12. \(\det A \neq 0\).
  13. \(0\) is not an eigenvalue of \(A\).
  14. \((\text{Col }A)^{\perp} = \{0\}.\) (See: Symmetry)
  15. \((\text{Nul }A)^{\perp} = \mathbb{R}^n.\)
  16. \(\text{Row }A = \mathbb{R}^n.\)
  17. \(A\) has \(n\) nonzero singular values.
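As noted above, several of these conditions can be checked numerically for the matrix from the adjugate example. Floating-point tests of this kind are heuristics rather than proofs, since rank and singular-value computations involve tolerances:

```python
A = np.array([[-1, 2, 3], [2, 1, -4], [3, 3, 2]], dtype=float)

print(np.linalg.det(A) != 0)                           # statement 12
print(np.linalg.matrix_rank(A) == A.shape[0])          # n pivot positions (statement 4)
print(np.all(np.linalg.svd(A, compute_uv=False) > 0))  # n nonzero singular values (statement 17)
print(np.all(np.abs(np.linalg.eigvals(A)) > 1e-12))    # 0 is not an eigenvalue (statement 13)
```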