9: The Determinant
2025.12.06
The determinant is a single number that captures something essential about a square matrix: how it scales volume. A determinant of zero means the matrix collapses space; it’s singular. A nonzero determinant means the matrix is invertible. This geometric meaning drives everything else.
Geometric Meaning
(What the Determinant Measures)
For a square matrix $A$, the determinant $\det(A)$ measures:
Signed volume scaling: How much $A$ scales the volume of any region
Orientation: Whether $A$ preserves or reverses orientation (the sign of $\det$)
If $A$ is $n \times n$ and $R$ is any region in $\mathbb{R}^n$:
$$\text{Volume}(A(R)) = |\det(A)| \cdot \text{Volume}(R)$$
Examples:
$\det(A) = 2$: Doubles volumes, preserves orientation
$\det(A) = -3$: Triples volumes, reverses orientation (reflection)
$\det(A) = 0$: Collapses to a lower dimension, volume becomes zero
(The Unit Cube Picture)
The columns of $A$ are the images of the standard basis vectors. The determinant equals the signed volume of the parallelepiped spanned by these column vectors.
For a $2 \times 2$ matrix, the columns span a parallelogram. The determinant is its signed area.
For a $3 \times 3$ matrix, the columns span a parallelepiped. The determinant is its signed volume.
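As a concrete check of the volume-scaling claim, here is a minimal NumPy sketch (the helper name `shoelace_area` and the example matrix are mine, not from the text): it maps the corners of the unit square through a $2 \times 2$ matrix and compares the area of the resulting parallelogram with $|\det(A)|$.

```python
import numpy as np

def shoelace_area(vertices):
    """Area of a simple polygon from its vertices in order (shoelace formula)."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Corners of the unit square, listed counterclockwise.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# Image of the unit square under A: apply A to each corner.
image = square @ A.T

print(shoelace_area(image))    # 10.0 -- area of the image parallelogram
print(abs(np.linalg.det(A)))   # ~10.0 -- |det A| predicts the same scaling factor
```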
The 2×2 Determinant
For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$:
$$\det(A) = ad - bc$$
Derivation from area: The columns $\begin{bmatrix} a \\ c \end{bmatrix}$ and $\begin{bmatrix} b \\ d \end{bmatrix}$ span a parallelogram. Using the cross product formula for area (or direct geometry), we get $|ad - bc|$. The sign tracks orientation.
(Example)
$$\det\begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix} = 3(4) - 1(2) = 10$$
This transformation scales areas by a factor of $10$.
(Singular Case)
$$\det\begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} = 2(2) - 4(1) = 0$$
The columns $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 4 \\ 2 \end{bmatrix}$ are parallel; they span a line, not a parallelogram. Zero area means the matrix is singular.
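The $ad - bc$ formula is short enough to spell out directly in code. This is a small sketch (the helper name `det2` is mine); NumPy is used only as a cross-check.

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] via the ad - bc formula."""
    return a * d - b * c

print(det2(3, 1, 2, 4))   # 10 -> invertible, scales areas by 10
print(det2(2, 4, 1, 2))   # 0  -> singular, the columns are parallel

# Cross-check against NumPy (agrees up to floating-point roundoff).
print(np.linalg.det(np.array([[3, 1], [2, 4]])))  # ~10.0
print(np.linalg.det(np.array([[2, 4], [1, 2]])))  # ~0.0
```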
The 3×3 Determinant
For $A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$:
$$\det(A) = aei + bfg + cdh - ceg - bdi - afh$$
This can be remembered by the “rule of Sarrus”: copy the first two columns to the right, then take products along diagonals (down-right positive, up-right negative).
Note: Sarrus’ rule only works for $3 \times 3$. For larger matrices, use cofactor expansion.
(Example)
$$\det\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = 1(45) + 2(42) + 3(32) - 3(35) - 2(36) - 1(48)$$
$$= 45 + 84 + 96 - 105 - 72 - 48 = 0$$
The determinant is zero; these columns are linearly dependent (the middle column is the average of the other two).
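Sarrus’ rule also translates line-for-line into code. A quick sketch (the helper name `sarrus` is mine), checked against NumPy on the example above:

```python
import numpy as np

def sarrus(M):
    """Determinant of a 3x3 matrix via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(sarrus(M))                   # 0 -> the columns are linearly dependent
print(np.linalg.det(np.array(M)))  # ~0 (tiny value, zero up to roundoff)
```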
Cofactor Expansion
For matrices larger than $3 \times 3$, we use cofactor expansion (also called Laplace expansion).
(Minor and Cofactor)
For an $n \times n$ matrix $A$:
The $(i,j)$ minor $M_{ij}$ is the determinant of the $(n-1) \times (n-1)$ matrix obtained by deleting row $i$ and column $j$
The $(i,j)$ cofactor $C_{ij}$ is the signed minor:
$$C_{ij} = (-1)^{i+j} M_{ij}$$
(The Checkerboard Sign Pattern)
The factor $(-1)^{i+j}$ creates a checkerboard of signs:
$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
Position $(1,1)$ is positive, and signs alternate from there.
(Cofactor Expansion Along a Row)
The determinant can be computed by expanding along any row $i$:
$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$
Expanding along row 1:
$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} + \cdots = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13} - \cdots$$
The alternating signs appear only once the cofactors are written out as signed minors; the cofactors themselves already carry the signs.
(Cofactor Expansion Along a Column)
Equivalently, expand along any column $j$:
$$\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}$$
Key insight: Choose the row or column with the most zeros to minimize computation.
(Example: 3×3 via Cofactor Expansion)
$$A = \begin{bmatrix} 2 & 1 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 2 \end{bmatrix}$$
Expand along row 1:
$$\det(A) = 2 \cdot \det\begin{bmatrix} 4 & 5 \\ 0 & 2 \end{bmatrix} - 1 \cdot \det\begin{bmatrix} 0 & 5 \\ 1 & 2 \end{bmatrix} + 3 \cdot \det\begin{bmatrix} 0 & 4 \\ 1 & 0 \end{bmatrix}$$
$$= 2(8 - 0) - 1(0 - 5) + 3(0 - 4) = 16 + 5 - 12 = 9$$
(Example: 4×4 Determinant)
$$A = \begin{bmatrix} 1 & 0 & 2 & 0 \\ 3 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix}$$
Column 2 has three zeros; expand along it:
$$\det(A) = 0 \cdot C_{12} + 1 \cdot C_{22} + 0 \cdot C_{32} + 0 \cdot C_{42} = C_{22}$$
$$C_{22} = (-1)^{2+2} \det\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}$$
Expand this $3 \times 3$ along column 3:
$$= (+1)\left( 0 \cdot C_{13} + 0 \cdot C_{23} + 1 \cdot C_{33} \right) = \det\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} = 1$$
So $\det(A) = 1$.
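Cofactor expansion is naturally recursive. Here is a minimal sketch (the helper name `det_cofactor` is mine) that always expands along the first row rather than picking the row or column with the most zeros; it reproduces both worked examples, with NumPy as a cross-check.

```python
import numpy as np

def det_cofactor(M):
    """Determinant by cofactor expansion along the first row (O(n!), fine for small n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

A3 = [[2, 1, 3], [0, 4, 5], [1, 0, 2]]
A4 = [[1, 0, 2, 0], [3, 1, 0, 1], [0, 0, 1, 0], [2, 0, 0, 1]]

print(det_cofactor(A3))             # 9
print(det_cofactor(A4))             # 1
print(np.linalg.det(np.array(A4)))  # ~1.0, NumPy agrees up to roundoff
```

The factorial cost makes this a pedagogical tool only; row reduction, discussed below, is the practical route for large matrices.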
Properties of the Determinant
(Multiplicative Property)
For square matrices $A$ and $B$ of the same size:
$$\det(AB) = \det(A)\det(B)$$
Interpretation: If $A$ scales volume by $\det(A)$ and $B$ scales by $\det(B)$, then $AB$ scales by the product.
Consequence: $\det(A^k) = (\det(A))^k$
(Transpose)
$$\det(A^T) = \det(A)$$
Rows and columns play symmetric roles in the determinant.
(Inverse)
If $A$ is invertible:
$$\det(A^{-1}) = \frac{1}{\det(A)}$$
Proof: $\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I) = 1$
(Scalar Multiplication)
For an $n \times n$ matrix:
$$\det(cA) = c^n \det(A)$$
Each of the $n$ rows gets multiplied by $c$, contributing a factor of $c$ each.
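All four properties are easy to sanity-check numerically. A sketch assuming NumPy; the matrices are random, so they are invertible with probability 1 and the inverse property applies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
det = np.linalg.det

print(np.isclose(det(A @ B), det(A) * det(B)))        # det(AB) = det(A) det(B)
print(np.isclose(det(A.T), det(A)))                   # det(A^T) = det(A)
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))  # det(A^-1) = 1 / det(A)
c = 2.5
print(np.isclose(det(c * A), c**n * det(A)))          # det(cA) = c^n det(A)
```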
Row Operations and the Determinant
The determinant responds predictably to row operations:
(Row Swap)
Swapping two rows negates the determinant:
$$\det(\text{swap rows } i, j) = -\det(A)$$
Intuition: Swapping reverses orientation.
(Row Scaling)
Multiplying a row by $c$ scales the determinant by $c$:
$$\det(\text{row } i \to c \cdot \text{row } i) = c \cdot \det(A)$$
(Row Replacement)
Adding a multiple of one row to another preserves the determinant:
$$\det(\text{row } i \to \text{row } i + c \cdot \text{row } j) = \det(A)$$
This is why row reduction is useful for computing determinants.
(Computing via Row Reduction)
To find $\det(A)$:
Row reduce to echelon form, tracking operations
For each row swap, multiply by $-1$
For each row scaling by $c$, divide by $c$
The determinant of an echelon matrix is the product of diagonal entries
Example:
$$\begin{bmatrix} 2 & 6 \\ 1 & 4 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & 4 \\ 2 & 6 \end{bmatrix} \xrightarrow{R_2 - 2R_1} \begin{bmatrix} 1 & 4 \\ 0 & -2 \end{bmatrix}$$
The echelon form has diagonal product $1 \times (-2) = -2$.
One row swap means $\det(A) = -(-2) = 2$.
Check: $2(4) - 6(1) = 2$ ✓
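The same bookkeeping works in code. A sketch (the helper name `det_by_elimination` is mine, assuming NumPy) that uses only row swaps and row replacements, so the determinant is just the swap sign times the product of the pivots; the "divide by $c$" step never arises because no row is ever rescaled.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting.

    Only row swaps (each flips the sign) and row replacements (no effect)
    are used, so det(A) = (-1)^(number of swaps) * product of the pivots.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # largest pivot for stability
        if np.isclose(U[p, k], 0.0):
            return 0.0                        # no pivot in this column -> singular
        if p != k:
            U[[k, p]] = U[[p, k]]             # row swap flips the sign
            sign = -sign
        # Row replacement: subtract multiples of row k to zero out column k below it.
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return sign * np.prod(np.diag(U))

A = [[2, 6], [1, 4]]
print(det_by_elimination(A))       # 2.0
print(np.linalg.det(np.array(A)))  # ~2.0, NumPy agrees
```

This elimination route runs in $O(n^3)$, which is why it (via LU-style factorizations) is what practical libraries use instead of the $n!$-term expansions.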
Determinant and Invertibility
(The Fundamental Characterization)
For a square matrix $A$:
$$A \text{ is invertible} \iff \det(A) \neq 0$$
Why?
$\det(A) = 0$ means the columns are linearly dependent, which means:
The transformation collapses some dimension
$A\mathbf{x} = \mathbf{0}$ has nontrivial solutions
$A$ cannot be inverted (no way to “uncollapse”)
$\det(A) \neq 0$ means the columns are linearly independent, which means:
The transformation preserves all dimensions
The kernel is trivial
$A$ is invertible
(Equivalent Conditions)
For an $n \times n$ matrix $A$, the following are equivalent:
$\det(A) \neq 0$
$A$ is invertible
$\text{rank}(A) = n$
Columns of $A$ are linearly independent
Columns of $A$ span $\mathbb{R}^n$
$A\mathbf{x} = \mathbf{b}$ has a unique solution for every $\mathbf{b}$
$\ker(A) = \{\mathbf{0}\}$
$\text{rref}(A) = I_n$
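These equivalences can be observed numerically on the two $2 \times 2$ examples from earlier. One caveat, which is an assumption of this sketch rather than part of the text: in floating point, "$\det \neq 0$" has to be tested with a tolerance rather than exact equality.

```python
import numpy as np

A_inv = np.array([[3.0, 1.0], [2.0, 4.0]])   # det = 10 -> invertible
A_sing = np.array([[2.0, 4.0], [1.0, 2.0]])  # det = 0  -> singular

for A in (A_inv, A_sing):
    d = np.linalg.det(A)
    r = np.linalg.matrix_rank(A)
    print(f"det = {d:.1f}, rank = {r}, invertible: {not np.isclose(d, 0.0)}")

# For the invertible matrix, Ax = b has a unique solution for every b.
b = np.array([1.0, 2.0])
print(np.linalg.solve(A_inv, b))  # the unique x with A_inv @ x = b
```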
Special Matrices
(Triangular Matrices)
For upper or lower triangular matrices, the determinant is the product of the diagonal entries:
$$\det\begin{bmatrix} a_{11} & * & * \\ 0 & a_{22} & * \\ 0 & 0 & a_{33} \end{bmatrix} = a_{11} a_{22} a_{33}$$
This follows from cofactor expansion: expanding along the first column picks up one diagonal entry at each step, leaving a smaller triangular matrix.
(Diagonal Matrices)
$$\det\begin{bmatrix} d_1 & & \\ & d_2 & \\ & & d_3 \end{bmatrix} = d_1 d_2 d_3$$
The determinant is the product of eigenvalues (for diagonal matrices, the diagonal entries are the eigenvalues).
(Block Triangular Matrices)
If $A = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix}$ where $B$ and $D$ are square:
$$\det(A) = \det(B)\det(D)$$
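A quick numerical check of the triangular and block-triangular facts (a sketch assuming NumPy; the matrices are random):

```python
import numpy as np

rng = np.random.default_rng(1)

# Triangular: determinant = product of the diagonal entries.
T = np.triu(rng.standard_normal((4, 4)))
print(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))   # True

# Block triangular: det([[B, C], [0, D]]) = det(B) det(D).
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((3, 3))
A = np.block([[B, C], [np.zeros((3, 2)), D]])
print(np.isclose(np.linalg.det(A), np.linalg.det(B) * np.linalg.det(D)))  # True
```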
The Permutation Formula
The determinant can be written as a sum over all permutations:
$$\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^{n} a_{i, \sigma(i)}$$
where $S_n$ is the set of all permutations of $\{1, 2, \ldots, n\}$ and $\text{sgn}(\sigma) = \pm 1$ is the sign of the permutation.
Interpretation: Each term picks one entry from each row and each column. The sign depends on whether the permutation is even or odd.
For $n = 2$: two permutations give $a_{11}a_{22} - a_{12}a_{21}$.
For $n = 3$: six permutations give the Sarrus formula.
For larger $n$: there are $n!$ terms, which is why direct computation is impractical.
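The formula transcribes directly into code. A sketch (the helper names `sgn` and `det_leibniz` are mine), only practical for small $n$ because of the $n!$ terms:

```python
import numpy as np
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation: +1 if even, -1 if odd (counted via inversions)."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det_leibniz(M):
    """Determinant as a signed sum over all n! permutations."""
    n = len(M)
    return sum(sgn(sigma) * np.prod([M[i][sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

A = [[2, 1, 3], [0, 4, 5], [1, 0, 2]]
print(det_leibniz(A))               # 9, matching the cofactor expansion above
print(np.linalg.det(np.array(A)))   # ~9.0
```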
Why the Determinant Matters
The determinant answers fundamental questions:
Is this matrix invertible? Check if $\det \neq 0$
How does this transformation scale volume? That’s $|\det|$
Does it preserve orientation? Check the sign
Are these vectors linearly independent? Put them as columns and check $\det \neq 0$
The determinant compresses a matrix into a single number, but that number encodes deep geometric and algebraic information about what the matrix does.