6: Subspaces
Subspaces are the building blocks of linear algebra: they are the sets where vector operations stay contained, and understanding them reveals the geometry hidden inside matrices.
Basic Definitions
(Subspace)
A subset $W \subseteq \mathbb{R}^n$ is a subspace of $\mathbb{R}^n$ if and only if it satisfies three conditions:
- Contains the zero vector: $\mathbf{0} \in W$
- Closed under addition: If $\mathbf{u}, \mathbf{v} \in W$, then $\mathbf{u} + \mathbf{v} \in W$
- Closed under scalar multiplication: If $\mathbf{u} \in W$ and $c \in \mathbb{R}$, then $c\mathbf{u} \in W$
In other words, a subspace is a set that “behaves like a vector space” within $\mathbb{R}^n$.
(Why These Three Conditions?)
The intuition: a subspace must be self-contained under the operations that define vector spaces.
- The zero vector ensures you have a natural “origin”
- Closure under addition means you can combine vectors freely
- Closure under scalar multiplication means you can scale vectors freely
Together, these guarantee that all linear combinations of vectors in $W$ stay inside $W$.
Examples
Example 1: The Trivial Subspaces
For any $\mathbb{R}^n$:
- $\{\mathbf{0}\}$ is always a subspace (the trivial subspace)
- $\mathbb{R}^n$ itself is always a subspace (the whole space)
Example 2: Lines Through the Origin
The set
$$L = \{t\mathbf{v} : t \in \mathbb{R}\}$$
where $\mathbf{v} \in \mathbb{R}^n$ is a fixed nonzero vector is a subspace of $\mathbb{R}^n$.
This is the line through the origin in the direction of $\mathbf{v}$.
Why?
- Contains $\mathbf{0}$: Set $t = 0$
- Closed under addition: $s\mathbf{v} + t\mathbf{v} = (s + t)\mathbf{v}$
- Closed under scalar multiplication: $c(t\mathbf{v}) = (ct)\mathbf{v}$
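These closure checks can be confirmed numerically. A minimal NumPy sketch, where the direction vector $\mathbf{v}$ and the scalars are arbitrary illustrative choices:

```python
import numpy as np

# A line through the origin: L = { t*v : t in R } for a fixed direction v.
v = np.array([2.0, -1.0, 3.0])  # arbitrary nonzero direction (illustrative choice)

u = 1.5 * v   # a point on the line (t = 1.5)
w = -4.0 * v  # another point on the line (t = -4.0)

# Closure under addition: u + w = (1.5 + (-4.0)) * v is again a multiple of v.
assert np.allclose(u + w, (1.5 - 4.0) * v)

# Closure under scalar multiplication: c*(t*v) = (c*t)*v.
c = 7.0
assert np.allclose(c * u, (c * 1.5) * v)

# Contains the zero vector: t = 0 gives 0*v = 0.
assert np.allclose(0.0 * v, np.zeros(3))
```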
Example 3: Planes Through the Origin
The set
$$P = \{s\mathbf{u} + t\mathbf{v} : s, t \in \mathbb{R}\}$$
where $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$ are linearly independent is a subspace.
This is the plane spanned by $\mathbf{u}$ and $\mathbf{v}$.
Example 4: Not a Subspace
The set
$$S = \{(x, y) \in \mathbb{R}^2 : x + y = 1\}$$
is not a subspace because:
- It doesn’t contain $\mathbf{0}$ (since $0 + 0 = 0 \neq 1$)
- It’s not closed under addition: $(1, 0)$ and $(0, 1)$ are in $S$, but their sum $(1, 1)$ is not
This is a line, but it doesn’t pass through the origin, so it fails to be a subspace.
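A quick numerical check of this failure, assuming the offending set is the line $x + y = 1$ (a hypothetical concrete instance; any line missing the origin behaves the same way):

```python
import numpy as np

# Assumed concrete instance: S = {(x, y) : x + y = 1}, a line not through the origin.
def in_S(p):
    return bool(np.isclose(p[0] + p[1], 1.0))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

assert in_S(a) and in_S(b)    # both points lie on the line
assert not in_S(a + b)        # but their sum (1, 1) does not
assert not in_S(np.zeros(2))  # and the zero vector is not in S
```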
Key Result: Span is Always a Subspace
Theorem: For any collection of vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k \in \mathbb{R}^n$, the span
$$\operatorname{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \{c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k : c_1, \ldots, c_k \in \mathbb{R}\}$$
is a subspace of $\mathbb{R}^n$.
Proof:
- Contains $\mathbf{0}$: Take all coefficients $c_i = 0$; then $0\mathbf{v}_1 + \cdots + 0\mathbf{v}_k = \mathbf{0}$.
- Closed under addition: If $\mathbf{u} = \sum_i a_i\mathbf{v}_i$ and $\mathbf{w} = \sum_i b_i\mathbf{v}_i$, then $\mathbf{u} + \mathbf{w} = \sum_i (a_i + b_i)\mathbf{v}_i$.
- Closed under scalar multiplication: If $\mathbf{u} = \sum_i a_i\mathbf{v}_i$ and $c \in \mathbb{R}$, then $c\mathbf{u} = \sum_i (ca_i)\mathbf{v}_i$.
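The coefficient bookkeeping in this proof can be verified directly. A sketch with two arbitrary spanning vectors in $\mathbb{R}^3$ (all specific vectors and coefficients here are illustrative choices):

```python
import numpy as np

# Spanning vectors in R^3 (arbitrary illustrative choices); rows are v1, v2.
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

# Two elements of the span: u = a1*v1 + a2*v2 and w = b1*v1 + b2*v2.
a = np.array([2.0, -3.0])
b = np.array([0.5, 4.0])
u = a @ V
w = b @ V

# Their sum is the linear combination with coefficients a + b ...
assert np.allclose(u + w, (a + b) @ V)

# ... and a scalar multiple has coefficients c*a.
c = -2.0
assert np.allclose(c * u, (c * a) @ V)
```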
This is why span is so important: it’s the canonical way to construct subspaces.
Column Space and Null Space
Two fundamental subspaces arise from any $m \times n$ matrix $A$:
(Column Space)
The column space of $A$, denoted $\operatorname{Col}(A)$, is the span of the columns of $A$:
$$\operatorname{Col}(A) = \{A\mathbf{x} : \mathbf{x} \in \mathbb{R}^n\}$$
This is a subspace of $\mathbb{R}^m$.
Interpretation: $\operatorname{Col}(A)$ is the set of all possible outputs $A\mathbf{x}$ as $\mathbf{x}$ ranges over $\mathbb{R}^n$.
(Null Space)
The null space (or kernel) of $A$, denoted $\operatorname{Nul}(A)$ or $\ker(A)$, is the set of all solutions to $A\mathbf{x} = \mathbf{0}$:
$$\operatorname{Nul}(A) = \{\mathbf{x} \in \mathbb{R}^n : A\mathbf{x} = \mathbf{0}\}$$
This is a subspace of $\mathbb{R}^n$.
Why is it a subspace?
- $A\mathbf{0} = \mathbf{0}$, so $\mathbf{0} \in \operatorname{Nul}(A)$
- If $A\mathbf{u} = \mathbf{0}$ and $A\mathbf{v} = \mathbf{0}$, then $A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = \mathbf{0}$
- If $A\mathbf{u} = \mathbf{0}$, then $A(c\mathbf{u}) = c(A\mathbf{u}) = \mathbf{0}$
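These three properties are easy to check numerically. A NumPy sketch on an assumed example matrix (chosen so the null space is nontrivial):

```python
import numpy as np

# Assumed example matrix with a nontrivial null space
# (column 3 = column 1 + column 2, so A has rank 2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

u = np.array([1.0, 1.0, -1.0])  # A @ u = 0, so u is in Nul(A)

assert np.allclose(A @ u, 0)            # u is in the null space
assert np.allclose(A @ np.zeros(3), 0)  # 0 is in the null space
assert np.allclose(A @ (u + u), 0)      # closed under addition
assert np.allclose(A @ (3.5 * u), 0)    # closed under scalar multiplication
```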
Interpretation: The null space captures all the “redundancy” in the matrix: the directions that get collapsed to zero.
Dimension and Basis
(Basis)
A basis for a subspace $W$ is a linearly independent set of vectors whose span equals $W$.
Equivalently, a basis is a minimal spanning set or a maximal independent set.
(Dimension)
The dimension of a subspace $W$, denoted $\dim(W)$, is the number of vectors in any basis for $W$.
Key fact: All bases for a given subspace have the same size.
Examples
- A line through the origin has dimension $1$
- A plane through the origin has dimension $2$
The Rank-Nullity Theorem
For any $m \times n$ matrix $A$:
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n$$
Or equivalently:
$$\dim(\operatorname{Col}(A)) + \dim(\operatorname{Nul}(A)) = n$$
Interpretation: The dimension of the input space ($\mathbb{R}^n$) is split between:
- The dimension of directions that get mapped somewhere non-trivial (the rank)
- The dimension of directions that collapse to zero (the nullity)
This is one of the most important equations in linear algebra.
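The theorem can be checked numerically: compute the rank with NumPy and count the (numerically) zero singular values to get the nullity. A sketch on an assumed $3 \times 3$ example of rank 2:

```python
import numpy as np

# Assumed example: column 3 = column 1 + column 2, so rank(A) = 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)

# Nullity = number of (numerically) zero singular values of A.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10 * s.max()))

assert rank + nullity == n  # rank-nullity: 2 + 1 == 3
```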
Computing Subspaces from RREF
Given a matrix $A$, we can compute both $\operatorname{Col}(A)$ and $\operatorname{Nul}(A)$ from $\operatorname{rref}(A)$.
Finding a Basis for $\operatorname{Col}(A)$
- Compute $\operatorname{rref}(A)$
- Identify the pivot columns in $\operatorname{rref}(A)$
- The corresponding columns from the original matrix $A$ form a basis for $\operatorname{Col}(A)$
Why the original columns? Because row operations preserve the linear dependence relations among the columns but change the column space itself. The pivot positions of $\operatorname{rref}(A)$ tell you which original columns are independent.
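The steps above can be sketched with SymPy, whose `Matrix.rref()` returns both the reduced form and the pivot-column indices (the matrix here is an assumed example):

```python
import numpy as np
from sympy import Matrix

# Assumed example matrix (column 3 = column 1 + column 2, so Col(A) is 2-dimensional).
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])

# Steps 1-2: compute rref(A) and read off the pivot columns.
R, pivot_cols = Matrix(A).rref()

# Step 3: the corresponding columns of the ORIGINAL matrix form a basis for Col(A).
basis = A[:, list(pivot_cols)]

assert tuple(pivot_cols) == (0, 1)  # columns 0 and 1 carry pivots
assert basis.shape == (3, 2)        # dim Col(A) = rank(A) = 2
```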
Finding a Basis for $\operatorname{Nul}(A)$
- Compute $\operatorname{rref}(A)$
- Identify the free variables (columns without pivots)
- For each free variable, set it to $1$ and all other free variables to $0$, then solve for the basic variables
- The resulting solution vectors form a basis for $\operatorname{Nul}(A)$
The dimension of $\operatorname{Nul}(A)$ equals the number of free variables.
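SymPy's `Matrix.nullspace()` carries out exactly this free-variable procedure; a sketch on the same assumed rank-2 example:

```python
from sympy import Matrix

# Assumed example; rank 2, so exactly one free variable is expected.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])

# nullspace() returns one basis vector per free variable.
null_basis = A.nullspace()

assert len(null_basis) == 1                    # nullity = number of free variables = 1
assert A * null_basis[0] == Matrix([0, 0, 0])  # each basis vector solves A x = 0
```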
Why Subspaces Matter
Subspaces are the geometric objects that linear algebra studies.
- Solutions to linear systems live in subspaces
- Eigenvectors span eigenspaces (which are subspaces)
- Projections map onto subspaces
- Least-squares problems minimize distance to subspaces
Understanding subspaces means understanding the structure of $\mathbb{R}^n$, not just individual vectors.