8: Orthogonality and Projections
2025.12.08
Orthogonality is the geometric idea of “perpendicularity” extended to any dimension. Two vectors are orthogonal when their dot product is zero: they point in completely independent directions. This concept unlocks powerful tools: projections decompose vectors into components, orthogonal bases simplify computations, and the Gram-Schmidt process converts any basis into an orthonormal one. Orthogonality turns complicated geometric problems into simple, coordinate-wise calculations.
The Dot Product
(Definition)
For vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$:

$$\mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n = \sum_{i=1}^{n} u_iv_i$$
Matrix notation: $\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T\mathbf{v}$ (row times column).
Geometric interpretation: The dot product measures how much two vectors “align”:
Large positive value: vectors point in similar directions
Zero: vectors are perpendicular (orthogonal)
Large negative value: vectors point in opposite directions
(Properties)
The dot product is:
Commutative: $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$
Distributive: $\mathbf{u} \cdot (\mathbf{v} + \mathbf{w}) = \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \cdot \mathbf{w}$
Homogeneous: $(c\mathbf{u}) \cdot \mathbf{v} = c(\mathbf{u} \cdot \mathbf{v})$
Positive definite: $\mathbf{v} \cdot \mathbf{v} \geq 0$, with equality iff $\mathbf{v} = \mathbf{0}$
(Length and Angle)
The length (or norm) of a vector:

$$\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$

The angle between nonzero vectors $\mathbf{u}$ and $\mathbf{v}$:

$$\cos\theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|}$$

Key insight: The dot product is $\|\mathbf{u}\| \|\mathbf{v}\| \cos\theta$; it captures both magnitude and directional alignment.
(Example)
$$\mathbf{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

$$\mathbf{u} \cdot \mathbf{v} = 3(1) + 4(2) = 11$$

$$\|\mathbf{u}\| = \sqrt{9 + 16} = 5, \quad \|\mathbf{v}\| = \sqrt{1 + 4} = \sqrt{5}$$

$$\cos\theta = \frac{11}{5\sqrt{5}} = \frac{11\sqrt{5}}{25} \approx 0.9839$$

The angle is $\theta \approx 10.3°$; the vectors are nearly aligned.
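As a sanity check, here is the same computation in a few lines of NumPy (a minimal sketch; it assumes NumPy is available and simply recomputes the numbers above):

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

dot = u @ v                      # 3*1 + 4*2 = 11
norm_u = np.linalg.norm(u)       # 5.0
norm_v = np.linalg.norm(v)       # sqrt(5)

cos_theta = dot / (norm_u * norm_v)
theta_deg = np.degrees(np.arccos(cos_theta))

print(dot, norm_u, norm_v)       # 11.0 5.0 2.236...
print(cos_theta, theta_deg)      # ~0.9839, ~10.3 degrees
```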
Orthogonality
(Definition)
Vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal (written $\mathbf{u} \perp \mathbf{v}$) if:

$$\mathbf{u} \cdot \mathbf{v} = 0$$

Geometric meaning: Orthogonal vectors are perpendicular; they point in completely independent directions.

Note: The zero vector $\mathbf{0}$ is orthogonal to every vector (by convention).
(Orthogonal Sets)
A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}$ is orthogonal if:

$$\mathbf{v}_i \cdot \mathbf{v}_j = 0 \quad \text{for all } i \neq j$$

Key fact: Nonzero orthogonal vectors are automatically linearly independent.

Proof: Suppose $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$. Dot both sides with $\mathbf{v}_i$:

$$c_1(\mathbf{v}_1 \cdot \mathbf{v}_i) + \cdots + c_i(\mathbf{v}_i \cdot \mathbf{v}_i) + \cdots + c_k(\mathbf{v}_k \cdot \mathbf{v}_i) = 0$$

All terms vanish except $c_i\|\mathbf{v}_i\|^2 = 0$. Since $\mathbf{v}_i \neq \mathbf{0}$, we have $c_i = 0$. This holds for all $i$, so the set is independent. ✓
(Orthonormal Sets)
A set is orthonormal if it’s orthogonal and every vector has length 1:
$$\mathbf{v}_i \cdot \mathbf{v}_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

This is written compactly as $\mathbf{v}_i \cdot \mathbf{v}_j = \delta_{ij}$ (Kronecker delta).

Example: The standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ is orthonormal.
(Why Orthonormal Bases Are Perfect)
If $\{\mathbf{q}_1, \ldots, \mathbf{q}_n\}$ is an orthonormal basis, then for any vector $\mathbf{v}$:

$$\mathbf{v} = (\mathbf{v} \cdot \mathbf{q}_1)\mathbf{q}_1 + (\mathbf{v} \cdot \mathbf{q}_2)\mathbf{q}_2 + \cdots + (\mathbf{v} \cdot \mathbf{q}_n)\mathbf{q}_n$$

The coefficients are just dot products; no need to solve a system of equations!

Why this works: Dot both sides with $\mathbf{q}_i$:

$$\mathbf{v} \cdot \mathbf{q}_i = (\mathbf{v} \cdot \mathbf{q}_1)(\mathbf{q}_1 \cdot \mathbf{q}_i) + \cdots + (\mathbf{v} \cdot \mathbf{q}_i)(\mathbf{q}_i \cdot \mathbf{q}_i) + \cdots$$

All terms vanish except $(\mathbf{v} \cdot \mathbf{q}_i) \cdot 1 = \mathbf{v} \cdot \mathbf{q}_i$. ✓
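A small illustration of this shortcut, sketched in NumPy with a made-up orthonormal basis of $\mathbb{R}^2$ (the 45° rotation of the standard basis): the coordinates come straight from dot products, and no linear system is solved.

```python
import numpy as np

# An orthonormal basis of R^2 (standard basis rotated by 45 degrees)
q1 = np.array([1.0, 1.0]) / np.sqrt(2)
q2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([2.0, 3.0])

c1, c2 = v @ q1, v @ q2                  # coefficients are just dot products
reconstruction = c1 * q1 + c2 * q2       # v = c1*q1 + c2*q2

print(np.allclose(reconstruction, v))    # True
```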
Projections
(Scalar Projection)
The scalar projection of $\mathbf{v}$ onto $\mathbf{u}$ is:

$$\text{comp}_{\mathbf{u}} \mathbf{v} = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|}$$

This is the signed length of the projection: positive if $\mathbf{v}$ points roughly in the direction of $\mathbf{u}$, negative otherwise.
(Vector Projection)
The vector projection (or orthogonal projection) of $\mathbf{v}$ onto $\mathbf{u}$ is:

$$\text{proj}_{\mathbf{u}} \mathbf{v} = \frac{\mathbf{u} \cdot \mathbf{v}}{\mathbf{u} \cdot \mathbf{u}} \mathbf{u} = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|^2} \mathbf{u}$$

Geometric meaning: The component of $\mathbf{v}$ that points in the direction of $\mathbf{u}$.

Key property: $\text{proj}_{\mathbf{u}} \mathbf{v}$ is parallel to $\mathbf{u}$.
(Orthogonal Component)
The orthogonal component (or rejection) is:

$$\mathbf{v} - \text{proj}_{\mathbf{u}} \mathbf{v}$$

This is the part of $\mathbf{v}$ that’s perpendicular to $\mathbf{u}$.

Decomposition: Every vector $\mathbf{v}$ splits into:

$$\mathbf{v} = \underbrace{\text{proj}_{\mathbf{u}} \mathbf{v}}_{\parallel \text{ to } \mathbf{u}} + \underbrace{(\mathbf{v} - \text{proj}_{\mathbf{u}} \mathbf{v})}_{\perp \text{ to } \mathbf{u}}$$
(Example: Projection)
Project $\mathbf{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ onto $\mathbf{u} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

$$\mathbf{u} \cdot \mathbf{v} = 1(2) + 1(3) = 5$$

$$\mathbf{u} \cdot \mathbf{u} = 1 + 1 = 2$$

$$\text{proj}_{\mathbf{u}} \mathbf{v} = \frac{5}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2.5 \\ 2.5 \end{bmatrix}$$

Orthogonal component:

$$\mathbf{v} - \text{proj}_{\mathbf{u}} \mathbf{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} - \begin{bmatrix} 2.5 \\ 2.5 \end{bmatrix} = \begin{bmatrix} -0.5 \\ 0.5 \end{bmatrix}$$

Verify orthogonality:

$$\mathbf{u} \cdot \begin{bmatrix} -0.5 \\ 0.5 \end{bmatrix} = 1(-0.5) + 1(0.5) = 0 \quad ✓$$
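The same example in code (a NumPy sketch; `proj` is a throwaway helper defined here for illustration, not a library function):

```python
import numpy as np

def proj(u, v):
    """Orthogonal projection of v onto u: (u·v / u·u) u."""
    return (u @ v) / (u @ u) * u

u = np.array([1.0, 1.0])
v = np.array([2.0, 3.0])

p = proj(u, v)                  # [2.5, 2.5], parallel to u
r = v - p                       # [-0.5, 0.5], the rejection

print(p, r)
print(np.isclose(u @ r, 0.0))   # True: rejection is orthogonal to u
```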
Orthogonal Complements
(Definition)
For a subspace $W \subseteq \mathbb{R}^n$, the orthogonal complement $W^\perp$ is:

$$W^\perp = \{\mathbf{v} \in \mathbb{R}^n \mid \mathbf{v} \cdot \mathbf{w} = 0 \text{ for all } \mathbf{w} \in W\}$$

Interpretation: $W^\perp$ contains all vectors perpendicular to everything in $W$.
(Key Properties)
$W^\perp$ is always a subspace
$\dim(W) + \dim(W^\perp) = n$
$W \cap W^\perp = \{\mathbf{0}\}$
$(W^\perp)^\perp = W$
Every vector $\mathbf{v} \in \mathbb{R}^n$ decomposes uniquely as $\mathbf{v} = \mathbf{w} + \mathbf{w}^\perp$ where $\mathbf{w} \in W$ and $\mathbf{w}^\perp \in W^\perp$

Direct sum notation: $\mathbb{R}^n = W \oplus W^\perp$
(Finding $W^\perp$)

If $W = \text{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$, then $\mathbf{x} \in W^\perp$ iff:

$$\begin{cases} \mathbf{v}_1 \cdot \mathbf{x} = 0 \\ \mathbf{v}_2 \cdot \mathbf{x} = 0 \\ \vdots \\ \mathbf{v}_k \cdot \mathbf{x} = 0 \end{cases}$$

This is a homogeneous system: $A\mathbf{x} = \mathbf{0}$ where the rows of $A$ are $\mathbf{v}_1^T, \ldots, \mathbf{v}_k^T$.

So: $W^\perp = \text{null}(A)$.
(Example: Orthogonal Complement)
Find $W^\perp$ where $W = \text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\right\}$ in $\mathbb{R}^3$.

We need $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ such that:

$$x_1 + 2x_2 + x_3 = 0$$

This is a plane through the origin. The general solution:

$$\mathbf{x} = x_2\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$

So $W^\perp = \text{span}\left\{\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}\right\}$.

Dimension check: $\dim(W) + \dim(W^\perp) = 1 + 2 = 3 = n$ ✓
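Numerically, $W^\perp = \text{null}(A)$ can be computed directly. A sketch using an SVD-based null space (assumes NumPy; `scipy.linalg.null_space` would give an equivalent answer):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0]])          # rows of A span W

# Null space = right singular vectors whose singular values are (numerically) zero
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
W_perp_basis = Vt[rank:]                 # each row is a basis vector of W-perp

print(W_perp_basis.shape)                        # (2, 3): dim(W-perp) = 3 - 1 = 2
print(np.allclose(A @ W_perp_basis.T, 0.0))      # True: every basis vector is orthogonal to W
```

The basis produced this way is orthonormal but spans the same plane as the hand-computed one above.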
The Gram-Schmidt Process
(The Problem)
Given a basis $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ for a subspace, we want to construct an orthogonal (or orthonormal) basis $\{\mathbf{u}_1, \ldots, \mathbf{u}_k\}$ for the same subspace.
(The Algorithm)
Start with the first vector:

$$\mathbf{u}_1 = \mathbf{v}_1$$

For each subsequent vector, subtract off the projections onto all previous orthogonal vectors:

$$\mathbf{u}_2 = \mathbf{v}_2 - \text{proj}_{\mathbf{u}_1} \mathbf{v}_2$$

$$\mathbf{u}_3 = \mathbf{v}_3 - \text{proj}_{\mathbf{u}_1} \mathbf{v}_3 - \text{proj}_{\mathbf{u}_2} \mathbf{v}_3$$

In general:

$$\mathbf{u}_k = \mathbf{v}_k - \sum_{j=1}^{k-1} \text{proj}_{\mathbf{u}_j} \mathbf{v}_k = \mathbf{v}_k - \sum_{j=1}^{k-1} \frac{\mathbf{u}_j \cdot \mathbf{v}_k}{\mathbf{u}_j \cdot \mathbf{u}_j} \mathbf{u}_j$$

Result: $\{\mathbf{u}_1, \ldots, \mathbf{u}_k\}$ is an orthogonal basis.

To make it orthonormal: Normalize each vector:

$$\mathbf{q}_i = \frac{\mathbf{u}_i}{\|\mathbf{u}_i\|}$$
(Why This Works)
At each step, we take $\mathbf{v}_k$ and remove its components along all previous orthogonal directions. What remains ($\mathbf{u}_k$) is guaranteed to be orthogonal to $\mathbf{u}_1, \ldots, \mathbf{u}_{k-1}$.

Key insight: The span of $\{\mathbf{u}_1, \ldots, \mathbf{u}_k\}$ equals the span of $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ at each stage; we’re just changing the basis vectors, not the subspace.
(Example: Gram-Schmidt in $\mathbb{R}^3$)

Orthogonalize the basis:

$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$$

Step 1: $\mathbf{u}_1 = \mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$

Step 2: Compute $\mathbf{u}_2$:

$$\text{proj}_{\mathbf{u}_1} \mathbf{v}_2 = \frac{\mathbf{u}_1 \cdot \mathbf{v}_2}{\mathbf{u}_1 \cdot \mathbf{u}_1} \mathbf{u}_1 = \frac{1 + 0}{1 + 1}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$

$$\mathbf{u}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \frac{1}{2}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix}$$

(Can scale by 2: $\mathbf{u}_2 = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$)

Step 3: Compute $\mathbf{u}_3$:

$$\text{proj}_{\mathbf{u}_1} \mathbf{v}_3 = \frac{0 + 1}{2}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$

$$\text{proj}_{\mathbf{u}_2} \mathbf{v}_3 = \frac{0 - 1 + 2}{1 + 1 + 4}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} = \frac{1}{6}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$$

$$\mathbf{u}_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \frac{1}{2}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} - \frac{1}{6}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} -1/2 - 1/6 \\ 1/2 + 1/6 \\ 1 - 1/3 \end{bmatrix} = \begin{bmatrix} -2/3 \\ 2/3 \\ 2/3 \end{bmatrix}$$

(Can scale by 3: $\mathbf{u}_3 = \begin{bmatrix} -2 \\ 2 \\ 2 \end{bmatrix}$, or by $-3/2$: $\mathbf{u}_3 = \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$)

Verify orthogonality: Check all pairs have dot product zero.

Normalize for orthonormal basis:

$$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{q}_2 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}, \quad \mathbf{q}_3 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$$
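A compact implementation applied to this example (a classical Gram-Schmidt sketch, assuming NumPy and linearly independent input columns; production QR routines use a more numerically stable variant):

```python
import numpy as np

def gram_schmidt(V):
    """Columns of V -> orthonormal columns spanning the same subspace."""
    Q = []
    for v in V.T:
        u = v.astype(float)
        for q in Q:
            u = u - (q @ v) * q          # subtract projection onto each previous q
        Q.append(u / np.linalg.norm(u))  # normalize
    return np.column_stack(Q)

V = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)   # columns are v1, v2, v3

Q = gram_schmidt(V)
print(np.round(Q, 4))                    # columns match q1, q2, q3 above
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns are orthonormal
```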
QR Factorization
(The Connection)
The Gram-Schmidt process gives the QR factorization of a matrix.
If $A = [\mathbf{v}_1 \mid \cdots \mid \mathbf{v}_n]$ has linearly independent columns, then:

$$A = QR$$

where:

$Q = [\mathbf{q}_1 \mid \cdots \mid \mathbf{q}_n]$ has orthonormal columns (from Gram-Schmidt)
$R$ is upper triangular (encodes the projection coefficients)
Why this matters: QR factorization is numerically stable and used for solving least squares problems, computing eigenvalues (QR algorithm), and more.
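For comparison, a sketch using NumPy's built-in QR on the same columns as the Gram-Schmidt example (column signs may differ from the hand computation, which is fine: each choice is still a valid orthonormal basis):

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

Q, R = np.linalg.qr(A)

print(np.allclose(Q @ R, A))             # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: orthonormal columns
print(np.round(R, 4))                    # upper triangular
```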
Applications
(Least Squares)
To solve the inconsistent system $A\mathbf{x} = \mathbf{b}$ (more equations than unknowns), find the $\mathbf{x}$ that minimizes $\|A\mathbf{x} - \mathbf{b}\|$.

The best approximation $A\mathbf{\hat{x}}$ is the projection of $\mathbf{b}$ onto $\text{col}(A)$, and $\mathbf{\hat{x}}$ solves the normal equations $A^TA\mathbf{\hat{x}} = A^T\mathbf{b}$:

$$\mathbf{\hat{x}} = (A^TA)^{-1}A^T\mathbf{b}$$

If $A = QR$ (where $Q$ has orthonormal columns), this simplifies dramatically:

$$\mathbf{\hat{x}} = R^{-1}Q^T\mathbf{b}$$

Why? $A^TA = (QR)^T(QR) = R^TQ^TQR = R^TR$ (since $Q^TQ = I$), so the normal equations reduce to $R\mathbf{\hat{x}} = Q^T\mathbf{b}$.
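A least-squares sketch on made-up data, solving the same problem both ways (assumes NumPy; the points below are invented purely for illustration):

```python
import numpy as np

# Fit y ≈ c0 + c1*t to 5 noisy points: columns of A are [1, t]
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t])

# QR route: solve R x = Q^T y (i.e. x = R^{-1} Q^T y)
Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ y)

# Normal equations route (same answer, less numerically stable in general)
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

print(x_hat, np.allclose(x_hat, x_normal))   # True
```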
(Orthogonal Decomposition)
Any vector space splits naturally into orthogonal complements. For example:
$$\mathbb{R}^n = \text{row}(A) \oplus \text{null}(A)$$

$$\mathbb{R}^m = \text{col}(A) \oplus \text{null}(A^T)$$

These are the four fundamental subspaces of an $m \times n$ matrix $A$, paired as orthogonal complements.
(Signal Processing)
In Fourier analysis, sine and cosine waves form an orthogonal basis for periodic functions. The Fourier coefficients are just inner products: projections onto each frequency component.
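A tiny numerical illustration of this idea on a sampled grid (an illustrative sketch with made-up amplitudes, not real signal-processing code): different frequency components have dot product zero, and the projection formula recovers each amplitude.

```python
import numpy as np

N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)

signal = 3.0 * np.sin(2 * t) + 0.5 * np.cos(5 * t)

basis_sin2 = np.sin(2 * t)
basis_cos5 = np.cos(5 * t)

# Different frequency components are orthogonal on the full-period grid
print(np.isclose(basis_sin2 @ basis_cos5, 0.0, atol=1e-9))        # True

# Projection coefficient (signal·basis)/(basis·basis) recovers each amplitude
print(signal @ basis_sin2 / (basis_sin2 @ basis_sin2))            # ~3.0
print(signal @ basis_cos5 / (basis_cos5 @ basis_cos5))            # ~0.5
```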
Summary: Why Orthogonality Simplifies Everything
Orthogonality turns geometry into algebra:
Dot products compute angles and lengths without trigonometry
Orthogonal vectors are automatically independent (no redundancy)
Orthonormal bases make coordinates trivial (just dot products)
Projections decompose vectors into parallel and perpendicular parts
Gram-Schmidt converts any basis into an orthonormal one
Orthogonal matrices preserve structure (lengths, angles, volume)
When vectors are orthogonal, you can work component-wise: no cross-terms, no interactions, just clean decomposition. This is why orthonormal bases are the gold standard: they make every calculation as simple as possible while preserving all the geometry.