
Section 2.7 Basis and Dimension

Objectives
  1. Understand the definition of a basis of a subspace.
  2. Understand the basis theorem.
  3. Recipes: basis for a column space, basis for a null space, basis of a span.
  4. Picture: basis of a subspace of R 2 or R 3 .
  5. Theorem: basis theorem.
  6. Essential vocabulary words: basis, dimension.

Subsection 2.7.1 Basis of a Subspace

As we discussed in Section 2.6, a subspace is the same as a span, except we do not have a set of spanning vectors in mind. There are infinitely many choices of spanning sets for a nonzero subspace; to avoid redundancy, it is usually most convenient to choose a spanning set with the minimal number of vectors in it. This is the idea behind the notion of a basis.

Definition

Let V be a subspace of R n . A basis of V is a set of vectors { v 1 , v 2 ,..., v m } in V such that:

  1. V = Span { v 1 , v 2 ,..., v m } , and
  2. the set { v 1 , v 2 ,..., v m } is linearly independent.

Recall that a set of vectors is linearly independent if and only if, when you remove any vector from the set, the span shrinks (Theorem 2.5.12). In other words, if { v 1 , v 2 ,..., v m } is a basis of a subspace V , then no proper subset of { v 1 , v 2 ,..., v m } will span V : it is a minimal spanning set. Any subspace admits a basis by this theorem in Section 2.6.

A nonzero subspace has infinitely many different bases, but they all contain the same number of vectors.

We leave it as an exercise to prove that any two bases have the same number of vectors; one might want to wait until after learning the invertible matrix theorem in Section 3.5.

Definition

Let V be a subspace of R n . The number of vectors in any basis of V is called the dimension of V , and is written dim V .

Example

Since the standard basis { e 1 , e 2 ,..., e n } is a basis of R n , we have dim R n = n ; it follows that any basis of R n has n vectors in it. Let v 1 , v 2 ,..., v n be vectors in R n , and let A be the n × n matrix with columns v 1 , v 2 ,..., v n .

  1. To say that { v 1 , v 2 ,..., v n } spans R n means that A has a pivot position in every row: see this theorem in Section 2.3.
  2. To say that { v 1 , v 2 ,..., v n } is linearly independent means that A has a pivot position in every column: see this important note in Section 2.5.

Since A is a square matrix, it has a pivot in every row if and only if it has a pivot in every column. We will see in Section 3.5 that the above two conditions are equivalent to the invertibility of the matrix A .
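
As a quick, optional illustration of this criterion, here is a minimal sketch using Python with SymPy (an assumption; the text itself does no computation, and the vectors are hypothetical choices). For an n × n matrix, a pivot in every row and a pivot in every column both amount to the condition rank A = n.

    from sympy import Matrix

    # Hypothetical vectors in R^3, chosen only for illustration.
    v1, v2, v3 = Matrix([1, 0, 2]), Matrix([0, 1, 1]), Matrix([1, 1, 0])

    A = Matrix.hstack(v1, v2, v3)   # 3 x 3 matrix with columns v1, v2, v3
    n = A.rows

    # A pivot in every row and a pivot in every column both say rank A = n.
    print(A.rank() == n)    # True: {v1, v2, v3} is a basis of R^3
    print(A.det() != 0)     # equivalently, A is invertible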

Subsection 2.7.2 Computing a Basis for a Subspace

Now we show how to find bases for the column space of a matrix and the null space of a matrix. In order to find a basis for a given subspace, it is usually best to rewrite the subspace as a column space or a null space first: see this important note in Section 2.6.

A basis for the column space

First we show how to compute a basis for the column space of a matrix.

Theorem. The pivot columns of a matrix A form a basis for its column space Col ( A ).

The above theorem is referring to the pivot columns in the original matrix, not its reduced row echelon form. Indeed, a matrix and its reduced row echelon form generally have different column spaces. For example, in the matrix A below:

A = \begin{pmatrix} 1 & 2 & 0 & -1 \\ -2 & -3 & 4 & 5 \\ 2 & 4 & 0 & -2 \end{pmatrix} \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 0 & -8 & -7 \\ 0 & 1 & 4 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}

(pivot columns of A = basis of Col ( A ); pivot columns in the RREF)

the pivot columns are the first two columns, so a basis for Col ( A ) is

\left\{ \begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix} , \begin{pmatrix} 2 \\ -3 \\ 4 \end{pmatrix} \right\} .

The first two columns of the reduced row echelon form certainly span a different subspace, as

\text{Span} \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} a \\ b \\ 0 \end{pmatrix} : a , b \text{ in } \mathbb{R} \right\} = ( \text{the } xy\text{-plane} ) ,

but Col ( A ) contains vectors whose last coordinate is nonzero.
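
For readers who want to check the recipe by machine, here is a minimal sketch using Python with SymPy (an assumption; the text itself does no computation). It computes the reduced row echelon form, reads off the pivot column indices, and takes the corresponding columns of the original matrix A.

    from sympy import Matrix

    # The matrix from the example above.
    A = Matrix([[ 1,  2, 0, -1],
                [-2, -3, 4,  5],
                [ 2,  4, 0, -2]])

    R, pivots = A.rref()     # RREF of A and the tuple of pivot column indices
    print(pivots)            # (0, 1): the first two columns are pivot columns

    # A basis for Col(A): the pivot columns of the ORIGINAL matrix A, not of R.
    basis = [A.col(i) for i in pivots]
    print(basis)

    # dim Col(A) = number of pivot columns = rank of A.
    print(len(basis) == A.rank())   # True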

A basis of a span

Computing a basis for a span is the same as computing a basis for a column space. Indeed, the span of finitely many vectors v 1 , v 2 ,..., v m is the column space of a matrix, namely, the matrix A whose columns are v 1 , v 2 ,..., v m :

A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_m \\ | & | & & | \end{pmatrix} .
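
For instance, here is a short, hypothetical SymPy sketch of this recipe (the vectors are made up for illustration): stack the vectors as columns and keep the pivot columns, exactly as in the column space recipe above.

    from sympy import Matrix

    # Hypothetical spanning vectors; v3 happens to be a combination of v1 and v2.
    v1, v2, v3 = Matrix([1, -2, 2]), Matrix([2, -3, 4]), Matrix([0, 4, 0])

    A = Matrix.hstack(v1, v2, v3)            # the matrix whose columns are v1, v2, v3
    basis = [A.col(i) for i in A.rref()[1]]  # pivot columns of A
    print(len(basis))                        # 2 = dim Span{v1, v2, v3}
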
A basis for the null space

In order to compute a basis for the null space of a matrix, one finds the parametric vector form of the solutions of the homogeneous equation Ax = 0 : the vectors attached to the free variables in that parametric vector form then form a basis for Nul ( A ).

The proof of this fact has two parts. The first part is that every solution lies in the span of the given vectors. This is automatic: the vectors are chosen exactly so that every solution is a linear combination of those vectors. The second part is that the vectors are linearly independent. This part was discussed in this example in Section 2.5.
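
As an optional computational check (again a SymPy sketch, not part of the text), SymPy's nullspace() method returns a basis of Nul ( A ) computed from the free variables of the reduced row echelon form:

    from sympy import Matrix

    # Reusing the matrix A from the column space example above.
    A = Matrix([[ 1,  2, 0, -1],
                [-2, -3, 4,  5],
                [ 2,  4, 0, -2]])

    basis = A.nullspace()          # one basis vector per free variable
    for v in basis:
        print(v.T, (A * v).T)      # each vector really solves A x = 0

    # dim Nul(A) = number of free variables = (number of columns) - rank(A).
    print(len(basis) == A.cols - A.rank())   # True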

A basis for a general subspace

As mentioned at the beginning of this subsection, when given a subspace written in a different form, in order to compute a basis it is usually best to rewrite it as a column space or null space of a matrix.

Subsection 2.7.3 The Basis Theorem

Recall that { v 1 , v 2 ,..., v n } forms a basis for R n if and only if the matrix A with columns v 1 , v 2 ,..., v n has a pivot in every row and column (see this example). Since A is an n × n matrix, these two conditions are equivalent: the vectors span if and only if they are linearly independent. The basis theorem is an abstract version of the preceding statement, that applies to any subspace.

Basis Theorem. Let V be a subspace of dimension m . Then any m linearly independent vectors in V form a basis for V , and any m vectors that span V form a basis for V .

In other words, if you already know that dim V = m , and if you have a set of m vectors B = { v 1 , v 2 ,..., v m } in V , then you only have to check one of:

  1. B is linearly independent, or
  2. B spans V ,

in order for B to be a basis of V . If you did not already know that dim V = m , then you would have to check both properties.

To put it yet another way, suppose we have a set of vectors B = { v 1 , v 2 ,..., v m } in a subspace V . Then if any two of the following statements are true, the third must also be true:

  1. B is linearly independent,
  2. B spans V , and
  3. dim V = m .

For example, if V is a plane, then any two noncollinear vectors in V form a basis.
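
To make the plane example concrete, here is a hypothetical SymPy sketch (the specific vectors are illustrative choices, not from the text): V is the plane spanned by v1 and v2, so dim V = 2, and the basis theorem guarantees that any two linearly independent vectors in V automatically span V.

    from sympy import Matrix

    # A plane V in R^3: V = Span{v1, v2}, so dim V = 2.
    v1, v2 = Matrix([1, 0, 1]), Matrix([0, 1, 1])

    # Two noncollinear vectors lying in V (hypothetical choices).
    w1 = v1 + v2     # (1, 1, 2)
    w2 = v1 - v2     # (1, -1, 0)

    W = Matrix.hstack(w1, w2)
    print(W.rank() == 2)   # True: {w1, w2} is linearly independent

    # By the basis theorem, {w1, w2} must also span V. Check directly:
    # appending v1 and v2 does not increase the rank, so v1 and v2
    # already lie in Span{w1, w2}.
    print(Matrix.hstack(w1, w2, v1, v2).rank() == W.rank())   # True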