It will be important to compute the set of all vectors that are orthogonal to a given set of vectors. It turns out that a vector is orthogonal to a set of vectors if and only if it is orthogonal to the span of those vectors, which is a subspace, so we restrict ourselves to the case of subspaces.
Subsection 6.2.1 Definition of the Orthogonal Complement
Taking the orthogonal complement is an operation that is performed on subspaces.
Let $W$ be a subspace of $\mathbb{R}^n$. Its orthogonal complement is the subspace
$$W^\perp = \{\, v \text{ in } \mathbb{R}^n \mid v \cdot w = 0 \text{ for all } w \text{ in } W \,\}.$$
The symbol $W^\perp$ is sometimes read “$W$ perp.”
This is the set of all vectors $v$ in $\mathbb{R}^n$ that are orthogonal to all of the vectors in $W$. We will show below that $W^\perp$ is indeed a subspace.
We now have two similar-looking pieces of notation:
$A^T$ is the transpose of a matrix $A$.
$W^\perp$ is the orthogonal complement of a subspace $W$.
Try not to confuse the two.
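To make the definition concrete, here is a small computational sketch in Python (using NumPy; the subspace and the test vectors are illustrative choices, not taken from the text). It tests whether a vector lies in $W^\perp$ by checking orthogonality against each spanning vector of $W$:

```python
import numpy as np

# Illustrative spanning vectors for a subspace W of R^3.
w1 = np.array([1.0, 0.0, 2.0])
w2 = np.array([0.0, 1.0, -1.0])

def in_W_perp(v, spanning_vectors, tol=1e-12):
    # v lies in W-perp exactly when v . w = 0 for every spanning vector w;
    # orthogonality to the spanning vectors extends to their whole span.
    return all(abs(np.dot(v, w)) < tol for w in spanning_vectors)

print(in_W_perp(np.array([-2.0, 1.0, 1.0]), [w1, w2]))  # True
print(in_W_perp(np.array([1.0, 1.0, 1.0]), [w1, w2]))   # False
```

Note that the function only checks the spanning vectors: this is justified by the observation in the introduction that a vector is orthogonal to a set of vectors if and only if it is orthogonal to their span.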
Pictures of orthogonal complements
The orthogonal complement of a line $W$ through the origin in $\mathbb{R}^2$ is the perpendicular line $W^\perp$.
The orthogonal complement of $\mathbb{R}^n$ is $\{0\}$, since the zero vector is the only vector that is orthogonal to all of the vectors in $\mathbb{R}^n$.
For the same reason, we have $\{0\}^\perp = \mathbb{R}^n$.
Subsection 6.2.2 Computing Orthogonal Complements
Since any subspace is a span, the following proposition gives a recipe for computing the orthogonal complement of any subspace. However, below we will give several shortcuts for computing the orthogonal complements of other common kinds of subspaces, in particular null spaces. To compute the orthogonal complement of a general subspace, usually it is best to rewrite the subspace as the column space or null space of a matrix, as in this important note in Section 2.6.
Proposition (The orthogonal complement of a column space). Let $A$ be a matrix with columns $v_1, v_2, \ldots, v_m$ in $\mathbb{R}^n$, and let $W = \operatorname{Col}(A)$. Then
$$W^\perp = \{\, x \text{ in } \mathbb{R}^n \mid x \cdot v_i = 0 \text{ for all } i = 1, 2, \ldots, m \,\} = \operatorname{Nul}(A^T).$$
To justify the first equality, we need to show that a vector $x$ is perpendicular to all of the vectors in $W$ if and only if it is perpendicular to $v_1, v_2, \ldots, v_m$ themselves. Since the $v_i$ are contained in $W$, we really only have to show that if $x \cdot v_1 = x \cdot v_2 = \cdots = x \cdot v_m = 0$, then $x$ is perpendicular to every vector $v$ in $W$. Indeed, any vector in $W$ has the form $v = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m$ for suitable scalars $c_1, c_2, \ldots, c_m$, so
$$x \cdot v = x \cdot (c_1 v_1 + c_2 v_2 + \cdots + c_m v_m) = c_1 (x \cdot v_1) + c_2 (x \cdot v_2) + \cdots + c_m (x \cdot v_m) = 0.$$
Therefore, $x$ is in $W^\perp$ if and only if $x$ is perpendicular to each vector $v_1, v_2, \ldots, v_m$. The second equality follows because $x \cdot v_i = v_i^T x$ is the $i$th entry of $A^T x$, so $x \cdot v_i = 0$ for all $i$ exactly when $A^T x = 0$, that is, when $x$ is in $\operatorname{Nul}(A^T)$.
Since column spaces are the same as spans, we can rephrase the proposition as follows. Let $v_1, v_2, \ldots, v_m$ be vectors in $\mathbb{R}^n$, and let $W = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}$. Then
$$W^\perp = \operatorname{Nul}\begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}.$$
Again, it is important to be able to go easily back and forth between spans and column spaces. If you are handed a span, you can apply the proposition once you have rewritten your span as a column space.
By the proposition, computing the orthogonal complement of a span means solving a system of linear equations. For example, if
$$W = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 7 \\ 2 \end{pmatrix},\ \begin{pmatrix} -2 \\ 3 \\ 1 \end{pmatrix} \right\},$$
then $W^\perp$ is the solution set of the homogeneous linear system associated to the matrix
$$\begin{pmatrix} 1 & 7 & 2 \\ -2 & 3 & 1 \end{pmatrix}.$$
This is the solution set of the system of equations
$$\left\{ \begin{aligned} x_1 + 7x_2 + 2x_3 &= 0 \\ -2x_1 + 3x_2 + x_3 &= 0. \end{aligned} \right.$$
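If a computer algebra system is available, this computation can be checked mechanically. The sketch below (Python with SymPy) finds $W^\perp$ as the null space of the matrix whose rows are the spanning vectors:

```python
from sympy import Matrix

# The matrix whose rows are the spanning vectors of W.
A = Matrix([[ 1, 7, 2],
            [-2, 3, 1]])

# W-perp is the null space of this matrix.
print(A.nullspace())  # [Matrix([[1/17], [-5/17], [1]])]
```

Scaling the basis vector by $17$ shows that $W^\perp$ is the line spanned by $(1, -5, 17)$; one can check directly that this vector is orthogonal to both spanning vectors of $W$.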
Proposition. Let $W$ be a subspace of $\mathbb{R}^n$. Then:
1. $W^\perp$ is also a subspace of $\mathbb{R}^n$.
2. $(W^\perp)^\perp = W$.
3. $\dim(W) + \dim(W^\perp) = n$.

Proof. For the first assertion, we verify the three defining properties of subspaces. The zero vector is in $W^\perp$ because the zero vector is orthogonal to every vector in $\mathbb{R}^n$.
Let $u$ and $v$ be in $W^\perp$, so that $u \cdot x = 0$ and $v \cdot x = 0$ for every vector $x$ in $W$. We must verify that $(u + v) \cdot x = 0$ for every $x$ in $W$. Indeed, we have
$$(u + v) \cdot x = u \cdot x + v \cdot x = 0 + 0 = 0.$$
Let $u$ be in $W^\perp$, so that $u \cdot x = 0$ for every $x$ in $W$, and let $c$ be a scalar. We must verify that $(cu) \cdot x = 0$ for every $x$ in $W$. Indeed, we have
$$(cu) \cdot x = c\,(u \cdot x) = c \cdot 0 = 0.$$
Next we prove the third assertion. Let $v_1, v_2, \ldots, v_m$ be a basis for $W$, so that $m = \dim(W)$, and let $w_1, w_2, \ldots, w_k$ be a basis for $W^\perp$, so that $k = \dim(W^\perp)$. We need to show that $m + k = n$. First we claim that $\{v_1, v_2, \ldots, v_m, w_1, w_2, \ldots, w_k\}$ is linearly independent. Suppose that
$$c_1 v_1 + \cdots + c_m v_m + d_1 w_1 + \cdots + d_k w_k = 0.$$
Let $v = c_1 v_1 + \cdots + c_m v_m$ and $w = d_1 w_1 + \cdots + d_k w_k$, so that $v$ is in $W$, $w$ is in $W^\perp$, and $v + w = 0$. Then $v = -w$ is in both $W$ and $W^\perp$, which implies that $v$ is perpendicular to itself. In particular, $v \cdot v = 0$, so $v = 0$, and hence $w = 0$. Therefore, all of the coefficients $c_i$ and $d_j$ are equal to zero, because $v_1, \ldots, v_m$ and $w_1, \ldots, w_k$ are linearly independent.
It follows from the previous paragraph that $m + k \leq n$. Suppose that $m + k < n$. Then the matrix
$$A = \begin{pmatrix} v_1^T \\ \vdots \\ v_m^T \\ w_1^T \\ \vdots \\ w_k^T \end{pmatrix}$$
has more columns than rows (it is “wide”), so its null space is nonzero by this note in Section 3.2. Let $u$ be a nonzero vector in $\operatorname{Nul}(A)$. Then
$$u \cdot v_1 = \cdots = u \cdot v_m = u \cdot w_1 = \cdots = u \cdot w_k = 0.$$
Since $u$ is perpendicular to each $v_i$, it is in $W^\perp$, so $u$ is a linear combination of $w_1, \ldots, w_k$; but $u$ is also perpendicular to each $w_j$, so $u \cdot u = 0$ and hence $u = 0$. This is a contradiction, so $m + k = n$.
Finally, we prove the second assertion. Clearly $W$ is contained in $(W^\perp)^\perp$: this says that everything in $W$ is perpendicular to the set of all vectors perpendicular to everything in $W$. Let $m = \dim(W)$. By 3, we have $\dim(W^\perp) = n - m$, so $\dim\big((W^\perp)^\perp\big) = n - (n - m) = m$. The only $m$-dimensional subspace of the $m$-dimensional space $(W^\perp)^\perp$ is all of $(W^\perp)^\perp$, so $(W^\perp)^\perp = W$.
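The second and third assertions can also be illustrated numerically. The following sketch (Python with SymPy; the subspace $W$ is an arbitrary illustrative choice) checks that $\dim(W) + \dim(W^\perp) = n$ and that taking the orthogonal complement twice recovers $W$:

```python
from sympy import Matrix

# W is the row space of V: a 2-dimensional subspace of R^4 (illustrative).
V = Matrix([[1, 0, 2, -1],
            [0, 1, 1,  3]])

# A basis of W-perp = Nul(V), assembled into a matrix with those vectors as rows.
W_perp = Matrix([x.T for x in V.nullspace()])

# Assertion 3: dim(W) + dim(W-perp) = n.
print(V.rank() + W_perp.rank() == V.cols)  # True

# Assertion 2: (W-perp)-perp = W. Two matrices have the same row space
# exactly when they have the same reduced row echelon form.
W_again = Matrix([x.T for x in W_perp.nullspace()])
print(W_again.rref() == V.rref())  # True
```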
See these paragraphs for pictures of the second property. As for the third: for example, if $W$ is a ($2$-dimensional) plane in $\mathbb{R}^4$, then $W^\perp$ is another ($2$-dimensional) plane. Explicitly, we have
$$\operatorname{Span}\{e_1, e_2\}^\perp = \operatorname{Span}\{e_3, e_4\}:$$
the orthogonal complement of the $xy$-plane is the $zw$-plane.
The row space of a matrix $A$ is the span of the rows of $A$, and is denoted $\operatorname{Row}(A)$.
If $A$ is an $m \times n$ matrix, then the rows of $A$ are vectors with $n$ entries, so $\operatorname{Row}(A)$ is a subspace of $\mathbb{R}^n$. Equivalently, since the rows of $A$ are the columns of $A^T$, the row space of $A$ is the column space of $A^T$: $\operatorname{Row}(A) = \operatorname{Col}(A^T)$.
We showed in the above proposition that if $A$ has rows $v_1^T, v_2^T, \ldots, v_m^T$, then
$$\operatorname{Row}(A)^\perp = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}(A).$$
Taking orthogonal complements of both sides and using the second fact gives
$$\operatorname{Row}(A) = \operatorname{Nul}(A)^\perp.$$
Replacing $A$ by $A^T$ and remembering that $\operatorname{Row}(A^T) = \operatorname{Col}(A)$ gives
$$\operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T) \quad\text{and}\quad \operatorname{Col}(A) = \operatorname{Nul}(A^T)^\perp.$$
Recipes: Shortcuts for computing orthogonal complements
For any vectors $v_1, v_2, \ldots, v_m$, we have
$$\operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}\begin{pmatrix} v_1^T \\ \vdots \\ v_m^T \end{pmatrix}.$$
For any matrix $A$, we have
$$\operatorname{Row}(A)^\perp = \operatorname{Nul}(A), \qquad \operatorname{Nul}(A)^\perp = \operatorname{Row}(A), \qquad \operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T), \qquad \operatorname{Nul}(A^T)^\perp = \operatorname{Col}(A).$$
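As a sanity check on these recipes, the following sketch (Python with SymPy; the matrix is an arbitrary illustrative choice) verifies that the null space vectors are orthogonal to the rows, and that taking the complement of $\operatorname{Nul}(A)$ recovers $\operatorname{Row}(A)$:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [0, 1, 1]])

# Row(A)-perp = Nul(A): each null space vector is orthogonal to each row.
null_basis = A.nullspace()
for x in null_basis:
    print([A.row(i).dot(x) for i in range(A.rows)])  # [0, 0]

# Nul(A)-perp = Row(A): taking the complement twice recovers the row space.
N = Matrix([x.T for x in null_basis])        # rows span Nul(A)
back = Matrix([y.T for y in N.nullspace()])  # rows span Nul(A)-perp
print(back.rref() == A.rref())  # True: Nul(A)-perp equals Row(A)
```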
As mentioned at the beginning of this subsection, in order to compute the orthogonal complement of a general subspace, it is usually best to rewrite the subspace as the column space or null space of a matrix.
Suppose that $A$ is an $m \times n$ matrix. Let us refer to the dimensions of $\operatorname{Row}(A)$ and $\operatorname{Col}(A)$ as the row rank and the column rank of $A$ (note that the column rank of $A$ is the same as the rank of $A$). The next theorem says that the row and column ranks are the same. This is surprising for a couple of reasons. First, $\operatorname{Row}(A)$ lies in $\mathbb{R}^n$ and $\operatorname{Col}(A)$ lies in $\mathbb{R}^m$. Also, the theorem implies that $A$ and $A^T$ have the same number of pivots, even though the reduced row echelon forms of $A$ and $A^T$ have nothing to do with each other otherwise.
Theorem. Let $A$ be a matrix. Then the row rank of $A$ is equal to the column rank of $A$.
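A quick numerical illustration of the theorem (Python with NumPy; the random matrix is an arbitrary choice): computing $\dim \operatorname{Col}(A)$ and $\dim \operatorname{Col}(A^T) = \dim \operatorname{Row}(A)$ independently always gives the same number:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 7)).astype(float)  # a random 4x7 matrix

col_rank = np.linalg.matrix_rank(A)    # dim Col(A), a subspace of R^4
row_rank = np.linalg.matrix_rank(A.T)  # dim Col(A^T) = dim Row(A), in R^7
print(col_rank == row_rank)  # True, as the theorem guarantees
```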