
9: The Determinant

The determinant is a single number that captures something essential about a square matrix: how it scales volume. A determinant of zero means the matrix collapses space: it's singular. A nonzero determinant means the matrix is invertible. This geometric meaning drives everything else.

Geometric Meaning

(What the Determinant Measures)

For a square matrix $A$, the determinant $\det(A)$ measures:

  1. Signed volume scaling: How much $A$ scales the volume of any region
  2. Orientation: Whether $A$ preserves or reverses orientation (sign of det)

If $A$ is $n \times n$ and $R$ is any region in $\mathbb{R}^n$:

$$\text{Volume}(A(R)) = |\det(A)| \cdot \text{Volume}(R)$$

Examples:

  • detโก(A)=2\det(A) = 2: Doubles volumes, preserves orientation
  • detโก(A)=โˆ’3\det(A) = -3: Triples volumes, reverses orientation (reflection)
  • detโก(A)=0\det(A) = 0: Collapses to lower dimension, volume becomes zero

(The Unit Cube Picture)

The columns of $A$ are the images of the standard basis vectors. The determinant equals the signed volume of the parallelepiped spanned by these column vectors.

For a $2 \times 2$ matrix, the columns span a parallelogram. The determinant is its signed area.

For a $3 \times 3$ matrix, the columns span a parallelepiped. The determinant is its signed volume.


The 2×2 Determinant

(Formula)

For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$:

$$\det(A) = ad - bc$$

Derivation from area: The columns $\begin{bmatrix} a \\ c \end{bmatrix}$ and $\begin{bmatrix} b \\ d \end{bmatrix}$ span a parallelogram. Using the cross product formula for area (or direct geometry), we get $|ad - bc|$. The sign tracks orientation.


(Example)

detโก[3124]=3(4)โˆ’1(2)=10\det\begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix} = 3(4) - 1(2) = 10

This transformation scales areas by a factor of 1010.
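The $ad - bc$ formula is two multiplications and a subtraction. A quick sketch checking it against NumPy's `np.linalg.det` (the helper name `det2` is ours):

```python
import numpy as np

def det2(A):
    """Signed area of the parallelogram spanned by the columns: ad - bc."""
    return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
print(det2(A))           # 10.0
print(np.linalg.det(A))  # same value, up to floating-point error
```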


(Singular Case)

detโก[2412]=2(2)โˆ’4(1)=0\det\begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} = 2(2) - 4(1) = 0

The columns [21]\begin{bmatrix} 2 \\ 1 \end{bmatrix} and [42]\begin{bmatrix} 4 \\ 2 \end{bmatrix} are parallel,they span a line, not a parallelogram. Zero area means the matrix is singular.


The 3×3 Determinant

(Formula via Sarrus' Rule)

For $A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$:

$$\det(A) = aei + bfg + cdh - ceg - bdi - afh$$

This can be remembered by the "rule of Sarrus": copy the first two columns to the right, then take products along diagonals (down-right positive, up-right negative).

Note: Sarrus' rule only works for $3 \times 3$. For larger matrices, use cofactor expansion.


(Example)

detโก[123456789]=1(45)+2(42)+3(32)โˆ’3(35)โˆ’2(36)โˆ’1(48)\det\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = 1(45) + 2(42) + 3(32) - 3(35) - 2(36) - 1(48) =45+84+96โˆ’105โˆ’72โˆ’48=0= 45 + 84 + 96 - 105 - 72 - 48 = 0

The determinant is zero: these columns are linearly dependent (the second column is the average of the first and third).
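Sarrus' rule translates directly into code. A sketch (the helper name `sarrus` is ours), cross-checked on the matrix above:

```python
import numpy as np

def sarrus(A):
    """3x3 determinant via Sarrus' rule: aei + bfg + cdh - ceg - bdi - afh."""
    a, b, c = A[0]
    d, e, f = A[1]
    g, h, i = A[2]
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
print(sarrus(A))  # 0.0: the columns are linearly dependent
```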


Cofactor Expansion

For matrices larger than $3 \times 3$, we use cofactor expansion (also called Laplace expansion).

(Minor and Cofactor)

For an $n \times n$ matrix $A$:

  • The $(i,j)$ minor $M_{ij}$ is the determinant of the $(n-1) \times (n-1)$ matrix obtained by deleting row $i$ and column $j$
  • The $(i,j)$ cofactor $C_{ij}$ is the signed minor:

$$C_{ij} = (-1)^{i+j} M_{ij}$$

(The Checkerboard Sign Pattern)

The factor $(-1)^{i+j}$ creates a checkerboard of signs:

$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Position $(1,1)$ is positive, and signs alternate from there.


(Cofactor Expansion Along a Row)

The determinant can be computed by expanding along any row $i$:

$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$

Expanding along row 1:

$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} + \cdots = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13} - \cdots$$

Note that the alternating signs are already built into the cofactors; they appear explicitly only when the expansion is written with minors.

(Cofactor Expansion Along a Column)

Equivalently, expand along any column $j$:

$$\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}$$

Key insight: Choose the row or column with the most zeros to minimize computation.


(Example: 3ร—3 via Cofactor Expansion)

$$A = \begin{bmatrix} 2 & 1 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 2 \end{bmatrix}$$

Expand along row 1:

$$\det(A) = 2 \cdot \det\begin{bmatrix} 4 & 5 \\ 0 & 2 \end{bmatrix} - 1 \cdot \det\begin{bmatrix} 0 & 5 \\ 1 & 2 \end{bmatrix} + 3 \cdot \det\begin{bmatrix} 0 & 4 \\ 1 & 0 \end{bmatrix} = 2(8 - 0) - 1(0 - 5) + 3(0 - 4) = 16 + 5 - 12 = 9$$

(Example: 4ร—4 Determinant)

$$A = \begin{bmatrix} 1 & 0 & 2 & 0 \\ 3 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix}$$

Column 2 has three zeros, so expand along it:

$$\det(A) = 0 \cdot C_{12} + 1 \cdot C_{22} + 0 \cdot C_{32} + 0 \cdot C_{42} = C_{22}$$

$$C_{22} = (-1)^{2+2} \det\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}$$

Expand this $3 \times 3$ along column 3:

$$= (+1)\left( 0 \cdot C_{13} + 0 \cdot C_{23} + 1 \cdot C_{33} \right) = \det\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} = 1$$

So $\det(A) = 1$.


Properties of the Determinant

(Multiplicative Property)

For square matrices $A$ and $B$ of the same size:

$$\det(AB) = \det(A) \det(B)$$

Interpretation: If $A$ scales volume by $\det(A)$ and $B$ scales by $\det(B)$, then $AB$ scales by the product.

Consequence: $\det(A^k) = (\det(A))^k$


(Transpose)

detโก(AT)=detโก(A)\det(A^T) = \det(A)

Rows and columns play symmetric roles in the determinant.


(Inverse)

If $A$ is invertible:

$$\det(A^{-1}) = \frac{1}{\det(A)}$$

Proof: $\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1$


(Scalar Multiplication)

For an $n \times n$ matrix:

$$\det(cA) = c^n \det(A)$$

Each of the $n$ rows is multiplied by $c$, and each contributes one factor of $c$.
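The transpose, inverse, and scaling rules can all be verified in a few lines. A sketch on a random matrix (almost surely invertible; the seed is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)

assert np.isclose(np.linalg.det(A.T), d)                   # det(A^T) = det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / d)  # det(A^-1) = 1/det(A)
assert np.isclose(np.linalg.det(2 * A), 2**3 * d)          # det(cA) = c^n det(A), n = 3
print("all three properties hold")
```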


Row Operations and the Determinant

The determinant responds predictably to row operations:

(Row Swap)

Swapping two rows negates the determinant:

detโก(swapย rowsย i,j)=โˆ’detโก(A)\det(\text{swap rows } i, j) = -\det(A)

Intuition: Swapping reverses orientation.


(Row Scaling)

Multiplying a row by $c$ scales the determinant by $c$:

$$\det(\text{row } i \to c \cdot \text{row } i) = c \cdot \det(A)$$

(Row Replacement)

Adding a multiple of one row to another preserves the determinant:

detโก(rowย iโ†’rowย i+cโ‹…rowย j)=detโก(A)\det(\text{row } i \to \text{row } i + c \cdot \text{row } j) = \det(A)

This is why row reduction is useful for computing determinants.


(Computing via Row Reduction)

To find detโก(A)\det(A):

  1. Row reduce to echelon form, tracking operations
  2. For each row swap, multiply by โˆ’1-1
  3. For each row scaling by cc, divide by cc
  4. The determinant of an echelon matrix is the product of diagonal entries

Example:

$$\begin{bmatrix} 2 & 6 \\ 1 & 4 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & 4 \\ 2 & 6 \end{bmatrix} \xrightarrow{R_2 - 2R_1} \begin{bmatrix} 1 & 4 \\ 0 & -2 \end{bmatrix}$$

Echelon form has diagonal product $1 \times (-2) = -2$. One row swap means $\det(A) = -(-2) = 2$.

Check: $2(4) - 6(1) = 2$ ✓
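The four steps above become a short elimination routine. A sketch (the name `det_by_elimination` is ours); it uses partial pivoting, so its swaps may differ from the hand computation, but the sign bookkeeping is the same:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via row reduction: track swaps, multiply the final pivots."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # choose the largest pivot (stability)
        if np.isclose(U[p, k], 0.0):
            return 0.0                        # no pivot in this column: singular
        if p != k:
            U[[k, p]] = U[[p, k]]             # each row swap negates the determinant
            sign = -sign
        for r in range(k + 1, n):             # row replacement: determinant unchanged
            U[r, k:] -= (U[r, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

print(det_by_elimination([[2, 6],
                          [1, 4]]))  # 2.0, matching ad - bc
```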


Determinant and Invertibility

(The Fundamental Characterization)

For a square matrix $A$:

$$A \text{ is invertible} \iff \det(A) \neq 0$$

Why?

detโก(A)=0\det(A) = 0 means the columns are linearly dependent, which means:

  • The transformation collapses some dimension
  • Ax=0A\mathbf{x} = \mathbf{0} has nontrivial solutions
  • AA cannot be inverted (no way to โ€œuncollapseโ€)

detโก(A)โ‰ 0\det(A) \neq 0 means the columns are linearly independent, which means:

  • The transformation preserves all dimensions
  • The kernel is trivial
  • AA is invertible

(Equivalent Conditions)

For an $n \times n$ matrix $A$, the following are equivalent:

  1. $\det(A) \neq 0$
  2. $A$ is invertible
  3. $\text{rank}(A) = n$
  4. Columns of $A$ are linearly independent
  5. Columns of $A$ span $\mathbb{R}^n$
  6. $A\mathbf{x} = \mathbf{b}$ has a unique solution for every $\mathbf{b}$
  7. $\ker(A) = \{\mathbf{0}\}$
  8. $\text{rref}(A) = I_n$
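A sketch contrasting an invertible matrix with the singular one from earlier, checking conditions 1 and 3 numerically:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])   # det = 10: invertible
S = np.array([[2.0, 4.0],
              [1.0, 2.0]])   # det = 0: columns are parallel

for M in (A, S):
    d = np.linalg.det(M)
    print(f"det = {d:.1f}, rank = {np.linalg.matrix_rank(M)}, "
          f"invertible = {not np.isclose(d, 0.0)}")
```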

Special Matrices

(Triangular Matrices)

For upper or lower triangular matrices, the determinant is the product of diagonal entries:

detโก[a11โˆ—โˆ—0a22โˆ—00a33]=a11a22a33\det\begin{bmatrix} a_{11} & * & * \\ 0 & a_{22} & * \\ 0 & 0 & a_{33} \end{bmatrix} = a_{11} a_{22} a_{33}

This follows from cofactor expansion,each step picks up one diagonal entry.
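For a triangular matrix the whole computation collapses to a product along the diagonal. A quick sketch (the matrix is our own example):

```python
import numpy as np

T = np.array([[2.0, 7.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 4.0]])

print(np.prod(np.diag(T)))  # 24.0: just the product 2 * 3 * 4
print(np.linalg.det(T))     # agrees, up to floating-point roundoff
```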


(Diagonal Matrices)

detโก[d1d2d3]=d1d2d3\det\begin{bmatrix} d_1 & & \\ & d_2 & \\ & & d_3 \end{bmatrix} = d_1 d_2 d_3

The determinant is the product of eigenvalues (for diagonal matrices, the diagonal entries are the eigenvalues).


(Block Triangular Matrices)

If $A = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix}$ where $B$ and $D$ are square:

$$\det(A) = \det(B) \det(D)$$

The Determinant Formula (Advanced)

(Leibniz Formula)

The determinant can be written as a sum over all permutations:

detโก(A)=โˆ‘ฯƒโˆˆSnsgn(ฯƒ)โˆi=1nai,ฯƒ(i)\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^{n} a_{i, \sigma(i)}

where SnS_n is the set of all permutations of {1,2,โ€ฆ,n}\{1, 2, \ldots, n\} and sgn(ฯƒ)=ยฑ1\text{sgn}(\sigma) = \pm 1 is the sign of the permutation.

Interpretation: Each term picks one entry from each row and each column. The sign depends on whether the permutation is even or odd.

For $n = 2$: two permutations give $a_{11}a_{22} - a_{12}a_{21}$.

For $n = 3$: six permutations give the Sarrus formula.

For larger $n$: there are $n!$ terms, which is why direct computation is impractical.
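The Leibniz formula can be implemented directly with `itertools.permutations`. A teaching sketch (the helper names `sgn` and `det_leibniz` are ours), usable only for small $n$ because of the $n!$ terms:

```python
import itertools
import math
import numpy as np

def sgn(perm):
    """Sign of a permutation: +1 if the inversion count is even, -1 if odd."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    """Sum over all n! permutations of sgn(sigma) * prod_i a[i, sigma(i)]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return sum(sgn(p) * math.prod(A[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = [[2, 1, 3],
     [0, 4, 5],
     [1, 0, 2]]
print(det_leibniz(A))  # 9.0, matching the cofactor expansion earlier
```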


Why the Determinant Matters

The determinant answers fundamental questions:

  1. Is this matrix invertible? Check if $\det \neq 0$
  2. How does this transformation scale volume? That's $|\det|$
  3. Does it preserve orientation? Check the sign
  4. Are these vectors linearly independent? Put them as columns and check $\det \neq 0$

The determinant compresses a matrix into a single number, but that number encodes deep geometric and algebraic information about what the matrix does.