Every linear transformation has two fundamental subspaces attached to it: the kernel (what gets destroyed) and the image (what gets produced). Understanding these reveals what a transformation does geometrically: where it collapses, where it stretches, and what it can reach.
The Image
(Image of a Linear Transformation)
Let T: V → W be a linear transformation. The image (or range) of T is the set of all outputs:

im(T) = { T(v) | v ∈ V }

Key fact: The image is always a subspace of W.

Proof:

T(0) = 0, so 0 ∈ im(T)

If w₁ = T(v₁) and w₂ = T(v₂), then w₁ + w₂ = T(v₁) + T(v₂) = T(v₁ + v₂) ∈ im(T)

If w = T(v), then cw = cT(v) = T(cv) ∈ im(T)
(Image for Matrices: Column Space)
For a matrix transformation T(x) = Ax where A is m×n:

im(T) = col(A) = span{columns of A}

Why? Write A = [a₁ | a₂ | ⋯ | aₙ] in column form. Then:

Ax = x₁a₁ + x₂a₂ + ⋯ + xₙaₙ

Every output is a linear combination of the columns, so the image is exactly the span of the columns.

Interpretation: The column space tells you which vectors can be reached by the transformation. If b ∈ col(A), then Ax = b has a solution. If b ∉ col(A), the system is inconsistent.
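This membership test is easy to run numerically: b lies in col(A) exactly when appending b as an extra column does not increase the rank. A minimal sketch with NumPy (the helper name `in_column_space` and the example matrix are ours):

```python
import numpy as np

def in_column_space(A, b):
    # b is in col(A) iff Ax = b is consistent, i.e. iff
    # appending b as a column does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

A = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])   # col(A) = the xy-plane inside R^3

print(in_column_space(A, np.array([2., 3., 0.])))   # True: Ax = b has a solution
print(in_column_space(A, np.array([0., 0., 1.])))   # False: the system is inconsistent
```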
The Kernel
(Kernel of a Linear Transformation)
Let T:VโW be a linear transformation. The kernel (or null space) of T is the set of all inputs that map to zero:
ker(T) = { v ∈ V | T(v) = 0 }
Key fact: The kernel is always a subspace of V.
Proof:
T(0) = 0, so 0 ∈ ker(T)

If T(v₁) = 0 and T(v₂) = 0, then T(v₁ + v₂) = T(v₁) + T(v₂) = 0

If T(v) = 0, then T(cv) = cT(v) = c·0 = 0
(Kernel for Matrices: Null Space)
For a matrix transformation T(x)=Ax:
ker(T) = null(A) = { x ∈ ℝⁿ | Ax = 0 }
The null space is the solution set to the homogeneous system Ax=0.
Interpretation: The kernel captures all the redundancy in the transformation: the directions that get completely flattened to nothing.
(Computing the Kernel)
To find a basis for ker(A):
Row reduce A to rref(A)
Identify the free variables (columns without pivots)
For each free variable, set it to 1 and all others to 0, then solve
These solutions form a basis for the kernel
Example: Find ker(A) for

A = ⎡ 1  2   1  0 ⎤
    ⎢ 2  4   0  2 ⎥
    ⎣ 1  2  −1  2 ⎦

Row reduce:

rref(A) = ⎡ 1  2  0   1 ⎤
          ⎢ 0  0  1  −1 ⎥
          ⎣ 0  0  0   0 ⎦

Pivots in columns 1 and 3. Free variables: x₂ and x₄.

The equations x₁ + 2x₂ + x₄ = 0 and x₃ − x₄ = 0 give x₁ = −2x₂ − x₄ and x₃ = x₄, so every solution has the form

x = x₂(−2, 1, 0, 0) + x₄(−1, 0, 1, 1)

Setting each free variable to 1 in turn, {(−2, 1, 0, 0), (−1, 0, 1, 1)} is a basis for ker(A), so nullity(A) = 2.
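As a sanity check, a computer algebra system reproduces this computation. A sketch assuming SymPy is available; `Matrix.nullspace()` builds a basis by exactly the free-variable recipe above:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2,  1, 0],
            [2, 4,  0, 2],
            [1, 2, -1, 2]])

basis = A.nullspace()            # one basis vector per free variable
for v in basis:
    print(v.T)                   # print each basis vector as a row
    assert A * v == zeros(3, 1)  # sanity check: each really solves Ax = 0

print(A.rank(), len(basis))      # rank and nullity
```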
(The Rank-Nullity Theorem)

For a linear transformation T: V → W with dim(V) = n:

rank(T) + nullity(T) = n

where rank(T) = dim(im(T)) and nullity(T) = dim(ker(T)).

Proof: Let {v₁, …, vₖ} be a basis for ker(T) and extend it to a basis {v₁, …, vₖ, u₁, …, uᵣ} of V, so n = k + r. We show that {T(u₁), …, T(uᵣ)} is a basis for im(T).

Spanning: Any w ∈ im(T) equals T(v) for some v = c₁v₁ + ⋯ + cₖvₖ + d₁u₁ + ⋯ + dᵣuᵣ, so T(v) = d₁T(u₁) + ⋯ + dᵣT(uᵣ) (since T(vᵢ) = 0).
Linear independence: If d₁T(u₁) + ⋯ + dᵣT(uᵣ) = 0, then T(d₁u₁ + ⋯ + dᵣuᵣ) = 0, so d₁u₁ + ⋯ + dᵣuᵣ ∈ ker(T). This means it's a combination of v₁, …, vₖ, forcing all dᵢ = 0 by independence.

Thus dim(V) = k + r = nullity(T) + rank(T).
Injectivity and Surjectivity Revisited
The kernel and image directly characterize injectivity and surjectivity:
(Kernel and Injectivity)
T is injective (one-to-one) ⟺ ker(T) = {0}

Why? If ker(T) = {0} and T(v₁) = T(v₂), then T(v₁ − v₂) = 0, so v₁ − v₂ ∈ ker(T) = {0}, meaning v₁ = v₂.

For matrices: A is injective iff null(A) = {0} iff there's a pivot in every column iff rank(A) = n.
(Image and Surjectivity)
T: V → W is surjective (onto) ⟺ im(T) = W

For matrices: A (m×n) is surjective iff col(A) = ℝᵐ iff there's a pivot in every row iff rank(A) = m.
(The Counting Argument)
For an m×n matrix A:
Injective requires rank(A)=n (all columns are pivots)
Surjective requires rank(A)=m (all rows have pivots)
Bijective requires rank(A)=m=n (square and full rank)
Since rank(A) ≤ min(m, n):
If n>m: Cannot be injective (not enough room for pivots in every column)
If m>n: Cannot be surjective (not enough columns to pivot in every row)
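These pivot-counting conditions translate directly into rank checks. A sketch with NumPy (the function names and test matrices are ours):

```python
import numpy as np

def is_injective(A):
    # pivot in every column: rank(A) == n
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    # pivot in every row: rank(A) == m
    return np.linalg.matrix_rank(A) == A.shape[0]

wide = np.array([[1., 2., 3.],
                 [4., 5., 6.]])   # 2x3: n > m, so it cannot be injective
tall = wide.T                     # 3x2: m > n, so it cannot be surjective

print(is_injective(wide), is_surjective(wide))   # False True
print(is_injective(tall), is_surjective(tall))   # True False
```

Both example matrices happen to have full rank (rank 2), which is why each is still injective or surjective in the one direction the shape allows.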
The Row Space
Thereโs a fourth fundamental subspace hiding in every matrix.
(Row Space)
The row space of A, denoted row(A), is the span of the rows of A:
row(A) = span{rows of A} = col(Aᵀ)

This is a subspace of ℝⁿ (same as the domain).
(Key Property)
Row operations preserve the row space. So:
row(A)=row(rref(A))
This means the nonzero rows of rref(A) form a basis for row(A).
Note: This is different from column space! Row operations change column relationships but preserve row span.
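Both facts show up in one computation. A sketch assuming SymPy: `Matrix.rref()` returns rref(A) together with the pivot-column indices, so the nonzero rows of rref(A) give a basis for row(A), while the pivot indices point back at which original columns give a basis for col(A):

```python
from sympy import Matrix

A = Matrix([[1, 2,  1, 0],
            [2, 4,  0, 2],
            [1, 2, -1, 2]])

R, pivot_cols = A.rref()

# basis for row(A): the nonzero rows of rref(A)
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(row_basis)

# basis for col(A): the ORIGINAL columns at the pivot positions
# (the columns of rref(A) itself are generally not in col(A))
col_basis = [A.col(j) for j in pivot_cols]
print(pivot_cols)
```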
(Row Space and Null Space are Orthogonal)
For any matrix A:
row(A) ⊥ ker(A)
Every vector in the row space is orthogonal to every vector in the null space.
Why? If x ∈ ker(A), then Ax = 0. This means each row rᵢ satisfies rᵢ · x = 0. So x is orthogonal to every row, hence to every linear combination of rows.
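A quick numerical check of this orthogonality, reusing the matrix and kernel basis from the worked example (a sketch with NumPy; the coefficient vector w is an arbitrary choice of ours):

```python
import numpy as np

A = np.array([[1., 2.,  1., 0.],
              [2., 4.,  0., 2.],
              [1., 2., -1., 2.]])

# kernel basis vectors found earlier: A x = 0 for both
null_basis = [np.array([-2., 1., 0., 0.]),
              np.array([-1., 0., 1., 1.])]

w = np.array([3., -1., 2.])      # arbitrary coefficients for a row combination
row_vec = A.T @ w                # a typical vector in row(A)

for x in null_basis:
    print(A @ x)                 # [0. 0. 0.]: every row of A is orthogonal to x
    print(np.dot(row_vec, x))    # 0.0: so is every combination of rows
```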
The Four Fundamental Subspaces
Every m×n matrix A defines four fundamental subspaces:
Subspace                 Notation    Lives in    Dimension
Column space (image)     col(A)      ℝᵐ          r
Null space (kernel)      ker(A)      ℝⁿ          n − r
Row space                row(A)      ℝⁿ          r
Left null space          ker(Aᵀ)     ℝᵐ          m − r
where r=rank(A).
(Orthogonal Complements)
These subspaces pair up as orthogonal complements:
In ℝⁿ: row(A) ⊥ ker(A), and together they span ℝⁿ

In ℝᵐ: col(A) ⊥ ker(Aᵀ), and together they span ℝᵐ

Dimensions check:

dim(row(A)) + dim(ker(A)) = r + (n − r) = n ✓

dim(col(A)) + dim(ker(Aᵀ)) = r + (m − r) = m ✓
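These dimension counts are easy to verify numerically. A sketch with NumPy, reusing the 3×4 example matrix (so m = 3, n = 4):

```python
import numpy as np

A = np.array([[1., 2.,  1., 0.],
              [2., 4.,  0., 2.],
              [1., 2., -1., 2.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

dim_col   = r                            # col(A), a subspace of R^m
dim_null  = n - r                        # ker(A), a subspace of R^n
dim_row   = np.linalg.matrix_rank(A.T)   # row rank = column rank = r
dim_lnull = m - dim_row                  # ker(A^T), a subspace of R^m

print(dim_col, dim_null, dim_row, dim_lnull)   # 2 2 2 1
assert dim_row + dim_null == n and dim_col + dim_lnull == m
```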
Geometric Summary
The kernel and image tell you exactly how a transformation reshapes space:
The image is the "shadow" of the domain in the codomain: all the places you can reach. Its dimension (the rank) is how many independent directions survive the transformation.

The kernel is the "blind spot" in the domain: all the inputs that become invisible. Its dimension (the nullity) is how many directions get flattened.