
Section 6.2 Orthogonal Complements

Objectives
  1. Understand the basic properties of orthogonal complements.
  2. Learn to compute the orthogonal complement of a subspace.
  3. Recipes: shortcuts for computing the orthogonal complements of common subspaces.
  4. Picture: orthogonal complements in $\mathbb{R}^2$ and $\mathbb{R}^3$.
  5. Theorem: row rank equals column rank.
  6. Vocabulary words: orthogonal complement, row space.

It will be important to compute the set of all vectors that are orthogonal to a given set of vectors. It turns out that a vector is orthogonal to a set of vectors if and only if it is orthogonal to the span of those vectors, which is a subspace, so we restrict ourselves to the case of subspaces.
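For instance, here is a quick check of this fact in Python with SymPy (the specific vectors below are made up purely for illustration): a vector orthogonal to each of two spanning vectors is automatically orthogonal to every linear combination of them.

from sympy import Matrix

# Two spanning vectors in R^3, and a vector v orthogonal to both of them
w1 = Matrix([1, 0, 2])
w2 = Matrix([0, 1, -1])
v = Matrix([-2, 1, 1])

print(v.dot(w1), v.dot(w2))      # 0 0: v is orthogonal to each spanning vector

# v is then orthogonal to every vector in Span{w1, w2},
# i.e. to every linear combination c1*w1 + c2*w2:
c1, c2 = 3, -5                   # arbitrary coefficients
print(v.dot(c1*w1 + c2*w2))      # 0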

Subsection 6.2.1 Definition of the Orthogonal Complement

Taking the orthogonal complement is an operation that is performed on subspaces.

Definition

Let $W$ be a subspace of $\mathbb{R}^n$. Its orthogonal complement is the subspace

$$W^\perp = \{\, v \text{ in } \mathbb{R}^n \mid v \cdot w = 0 \text{ for all } w \text{ in } W \,\}.$$

The symbol $W^\perp$ is sometimes read “$W$ perp.”

This is the set of all vectors $v$ in $\mathbb{R}^n$ that are orthogonal to all of the vectors in $W$. We will show below that $W^\perp$ is indeed a subspace.

Note

We now have two similar-looking pieces of notation:

$A^T$ is the transpose of a matrix $A$.
$W^\perp$ is the orthogonal complement of a subspace $W$.

Try not to confuse the two.

Pictures of orthogonal complements

The orthogonal complement of a line $W$ through the origin in $\mathbb{R}^2$ is the perpendicular line $W^\perp$.


The orthogonal complement of a line $W$ in $\mathbb{R}^3$ is the perpendicular plane $W^\perp$.


The orthogonal complement of a plane $W$ in $\mathbb{R}^3$ is the perpendicular line $W^\perp$.


We see in the above pictures that $(W^\perp)^\perp = W$.

Example

The orthogonal complement of $\mathbb{R}^n$ is $\{0\}$, since the zero vector is the only vector that is orthogonal to all of the vectors in $\mathbb{R}^n$.

For the same reason, we have $\{0\}^\perp = \mathbb{R}^n$.

Subsection 6.2.2 Computing Orthogonal Complements

Since any subspace is a span, the following proposition gives a recipe for computing the orthogonal complement of any subspace. However, below we will give several shortcuts for computing the orthogonal complements of other common kinds of subspaces, in particular null spaces. To compute the orthogonal complement of a general subspace, usually it is best to rewrite the subspace as the column space or null space of a matrix, as in this important note in Section 2.6.

Since column spaces are the same as spans, we can rephrase the proposition as follows. Let $v_1, v_2, \ldots, v_m$ be vectors in $\mathbb{R}^n$, and let $W = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}$. Then

$$W^\perp = \{\text{all vectors orthogonal to each } v_1, v_2, \ldots, v_m\} = \operatorname{Nul}\begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}.$$

Again, it is important to be able to go easily back and forth between spans and column spaces. If you are handed a span, you can apply the proposition once you have rewritten your span as a column space.

By the proposition, computing the orthogonal complement of a span means solving a system of linear equations. For example, if

$$v_1 = \begin{pmatrix} 1 \\ 7 \\ 2 \end{pmatrix} \qquad v_2 = \begin{pmatrix} 2 \\ 3 \\ 1 \end{pmatrix}$$

then $\operatorname{Span}\{v_1, v_2\}^\perp$ is the solution set of the homogeneous linear system associated to the matrix

$$\begin{pmatrix} v_1^T \\ v_2^T \end{pmatrix} = \begin{pmatrix} 1 & 7 & 2 \\ 2 & 3 & 1 \end{pmatrix}.$$

This is the solution set of the system of equations

$$\left\{\begin{aligned} x_1 + 7x_2 + 2x_3 &= 0 \\ 2x_1 + 3x_2 + x_3 &= 0. \end{aligned}\right.$$
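As a sanity check, we can carry out this computation with SymPy: the orthogonal complement is the null space of the matrix whose rows are $v_1^T$ and $v_2^T$ (the matrix below is the one from the example above).

from sympy import Matrix

# Rows are v1^T and v2^T from the example
A = Matrix([[1, 7, 2],
            [2, 3, 1]])

# Span{v1, v2}^perp = Nul(A)
print(A.nullspace())             # one basis vector: Matrix([-1/11, -3/11, 1])

# Clearing denominators gives (-1, -3, 11); check that it is orthogonal to v1 and v2:
u = Matrix([-1, -3, 11])
print(u.dot(Matrix([1, 7, 2])), u.dot(Matrix([2, 3, 1])))    # 0 0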

In order to find shortcuts for computing orthogonal complements, we need the following basic facts. Looking back at the above examples, all of these facts should be believable.

Facts: Let $W$ be a subspace of $\mathbb{R}^n$. Then:

  1. $W^\perp$ is also a subspace of $\mathbb{R}^n$.
  2. $(W^\perp)^\perp = W$.
  3. $\dim W + \dim W^\perp = n$.

See these paragraphs for pictures of the second property. As for the third: for example, if $W$ is a (2-dimensional) plane in $\mathbb{R}^4$, then $W^\perp$ is another (2-dimensional) plane. Explicitly, we have

$$\operatorname{Span}\{e_1, e_2\}^\perp = \left\{ \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} \text{ in } \mathbb{R}^4 \;\middle|\; \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = 0 \text{ and } \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = 0 \right\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ z \\ w \end{pmatrix} \text{ in } \mathbb{R}^4 \right\} = \operatorname{Span}\{e_3, e_4\}:$$

the orthogonal complement of the $xy$-plane is the $zw$-plane.
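A minimal SymPy verification of this example, which also illustrates the third fact (here $\dim W + \dim W^\perp = 4$):

from sympy import Matrix

# Rows are e1^T and e2^T, so the null space is Span{e1, e2}^perp
A = Matrix([[1, 0, 0, 0],
            [0, 1, 0, 0]])

perp_basis = A.nullspace()
print(perp_basis)                        # e3 and e4: a basis for the zw-plane
print(A.rank() + len(perp_basis))        # 4 = dim W + dim W^perp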

Definition

The row space of a matrix $A$ is the span of the rows of $A$, and is denoted $\operatorname{Row}(A)$.

If $A$ is an $m \times n$ matrix, then the rows of $A$ are vectors with $n$ entries, so $\operatorname{Row}(A)$ is a subspace of $\mathbb{R}^n$. Equivalently, since the rows of $A$ are the columns of $A^T$, the row space of $A$ is the column space of $A^T$:

$$\operatorname{Row}(A) = \operatorname{Col}(A^T).$$

We showed in the above proposition that if $A$ has rows $v_1^T, v_2^T, \ldots, v_m^T$, then

$$\operatorname{Row}(A)^\perp = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}(A).$$

Taking orthogonal complements of both sides and using the second fact gives

$$\operatorname{Row}(A) = \operatorname{Nul}(A)^\perp.$$

Replacing $A$ by $A^T$ and remembering that $\operatorname{Row}(A) = \operatorname{Col}(A^T)$ gives

$$\operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T) \quad\text{and}\quad \operatorname{Col}(A) = \operatorname{Nul}(A^T)^\perp.$$

To summarize:

Recipes: Shortcuts for computing orthogonal complements

For any vectors $v_1, v_2, \ldots, v_m$, we have

$$\operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}\begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}.$$

For any matrix $A$, we have

$$\operatorname{Row}(A)^\perp = \operatorname{Nul}(A) \qquad \operatorname{Nul}(A)^\perp = \operatorname{Row}(A) \qquad \operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T) \qquad \operatorname{Nul}(A^T)^\perp = \operatorname{Col}(A).$$

As mentioned in the beginning of this subsection, in order to compute the orthogonal complement of a general subspace, usually it is best to rewrite the subspace as the column space or null space of a matrix.
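These identities are easy to spot-check numerically. The following SymPy sketch uses a hypothetical $3 \times 4$ matrix chosen only for illustration: every null space vector of $A$ is orthogonal to every row of $A$, and every null space vector of $A^T$ is orthogonal to every column of $A$.

from sympy import Matrix

# A hypothetical example matrix (its third row is the sum of the first two)
A = Matrix([[1, 2, 0, -1],
            [0, 1, 1,  3],
            [1, 3, 1,  2]])

# Row(A)^perp = Nul(A): each null space vector is orthogonal to each row of A
for u in A.nullspace():
    print([A.row(i).dot(u) for i in range(A.rows)])      # [0, 0, 0]

# Col(A)^perp = Nul(A^T): each null space vector of A^T is orthogonal to each column of A
for u in A.T.nullspace():
    print([A.col(j).dot(u) for j in range(A.cols)])      # [0, 0, 0, 0]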

Subsection 6.2.3 Row rank and column rank

Suppose that $A$ is an $m \times n$ matrix. Let us refer to the dimension of $\operatorname{Row}(A)$ as the row rank of $A$, and the dimension of $\operatorname{Col}(A)$ as the column rank of $A$ (note that the column rank of $A$ is the same as the rank of $A$). The next theorem says that the row and column ranks are the same. This is surprising for a couple of reasons. First, $\operatorname{Row}(A)$ lies in $\mathbb{R}^n$ while $\operatorname{Col}(A)$ lies in $\mathbb{R}^m$. Also, the theorem implies that $A$ and $A^T$ have the same number of pivots, even though the reduced row echelon forms of $A$ and $A^T$ have nothing to do with each other otherwise.

Theorem

The row rank of a matrix equals its column rank: for any matrix $A$, we have $\dim\operatorname{Row}(A) = \dim\operatorname{Col}(A)$.

Proof

By the rank theorem (rank plus nullity equals the number of columns), the column rank satisfies $\dim\operatorname{Col}(A) = \operatorname{rank}(A) = n - \dim\operatorname{Nul}(A)$. On the other hand, $\operatorname{Row}(A) = \operatorname{Nul}(A)^\perp$, so the third fact above gives $\dim\operatorname{Row}(A) = n - \dim\operatorname{Nul}(A)$. Hence the row rank and the column rank are both equal to $n - \dim\operatorname{Nul}(A)$.

In particular, by this corollary in Section 2.7 both the row rank and the column rank are equal to the number of pivots of A .
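As a quick illustration of the theorem (and of the fact that both ranks equal the number of pivots), here is a SymPy sketch on a random integer matrix; the particular size and entry range are arbitrary.

from sympy import randMatrix

# A random 4x6 integer matrix; any matrix would do
A = randMatrix(4, 6, min=-3, max=3)

print(A.rank(), A.T.rank())      # rank of A and rank of A^T: always equal
print(len(A.rref()[1]))          # number of pivot columns: the same number again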