- Learn to use the rank theorem and the basis theorem.
- Picture: the rank theorem.
- Theorems: rank theorem, basis theorem.
- Vocabulary words: rank, nullity.
In this section we present two important general facts about dimensions and bases.
With the rank theorem, we can finally relate the dimension of the solution set of a matrix equation to the dimension of the column space.
The rank of a matrix $A$, written $\operatorname{rank}(A)$, is the dimension of the column space $\operatorname{Col}(A)$.
The nullity of a matrix $A$, written $\operatorname{nullity}(A)$, is the dimension of the null space $\operatorname{Nul}(A)$.
According to this theorem in Section 3.7, $\operatorname{rank}(A)$ is equal to the number of columns of $A$ with pivots. On the other hand, this theorem in Section 3.7 implies that $\operatorname{nullity}(A)$ equals the number of free variables, which is the number of columns without pivots. To summarize:

$$\operatorname{rank}(A) = \dim\operatorname{Col}(A) = \text{the number of columns with pivots},$$
$$\operatorname{nullity}(A) = \dim\operatorname{Nul}(A) = \text{the number of columns without pivots}.$$
Clearly (the number of columns with pivots) plus (the number of columns without pivots) equals (the number of columns of $A$), so we have proved the following theorem.
If $A$ is an $m \times n$ matrix, then
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$$
In other words, for any consistent system of linear equations,
$$(\text{dimension of the column span}) + (\text{dimension of the solution set}) = (\text{number of variables}).$$
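As a concrete check of the rank theorem, the following standard-library Python sketch (the `rref` helper and the sample matrix are our own, not from the text) row-reduces a matrix over exact fractions, then counts pivot columns (the rank) and free columns (the nullity):

```python
from fractions import Fraction

def rref(A):
    """Row-reduce a matrix (list of rows) to reduced row echelon form.
    Returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    pivots = []
    r = 0
    for c in range(n):
        # Find a row at or below r with a nonzero entry in column c.
        pr = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pr is None:
            continue  # no pivot in this column: c is a free column
        R[r], R[pr] = R[pr], R[r]
        p = R[r][c]
        R[r] = [x / p for x in R[r]]              # scale the pivot to 1
        for i in range(m):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# A 3x4 example: rank = number of pivot columns, nullity = number of free columns.
A = [[1, 2, 0, 1],
     [2, 4, 1, 3],
     [3, 6, 1, 4]]
_, pivots = rref(A)
n = len(A[0])
rank, nullity = len(pivots), n - len(pivots)
print(rank, nullity)  # prints 2 2, and rank + nullity = n = 4
```

Exact `Fraction` arithmetic is used so that pivot detection is not confused by floating-point roundoff.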
Recall from this example in Section 3.7 that $\{v_1, v_2, \ldots, v_n\}$ forms a basis for $\mathbb{R}^n$ if and only if the matrix $A$ with columns $v_1, v_2, \ldots, v_n$ has a pivot in every row and every column. Since $A$ is an $n \times n$ matrix, these two conditions are equivalent: the vectors span $\mathbb{R}^n$ if and only if they are linearly independent. The basis theorem is an abstract version of the preceding statement, that applies to any subspace.
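To illustrate the criterion above, this standard-library sketch (the helper names and example vectors are our own) forms the square matrix whose columns are the given vectors and counts pivots by Gaussian elimination; the vectors form a basis of $\mathbb{R}^n$ exactly when there are $n$ pivots:

```python
from fractions import Fraction

def pivot_count(rows):
    """Count pivots via forward elimination over exact fractions."""
    R = [[Fraction(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pr is None:
            continue
        R[r], R[pr] = R[pr], R[r]
        for i in range(r + 1, m):
            f = R[i][c] / R[r][c]
            R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

def is_basis_of_Rn(vectors):
    """n vectors form a basis of R^n iff the n x n matrix with those
    columns has a pivot in every row and column, i.e. n pivots."""
    n = len(vectors)
    if any(len(v) != n for v in vectors):   # need n vectors in R^n
        return False
    columns_matrix = [[v[i] for v in vectors] for i in range(n)]
    return pivot_count(columns_matrix) == n

print(is_basis_of_Rn([[1, 0, 0], [1, 1, 0], [1, 1, 1]]))  # True
print(is_basis_of_Rn([[1, 2, 3], [2, 4, 6], [0, 0, 1]]))  # False: first two are collinear
```

For a square matrix, a pivot in every row is the same as a pivot in every column, which is why counting pivots checks spanning and independence at once.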
Let $V$ be a subspace of dimension $m$. Then:

- Any $m$ linearly independent vectors in $V$ form a basis for $V$.
- Any $m$ vectors that span $V$ form a basis for $V$.
Suppose that $\{v_1, v_2, \ldots, v_m\}$ is a set of linearly independent vectors in $V$. In order to show that it is a basis for $V$, we must prove that $V = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}$. If not, then there exists some vector $v_{m+1}$ in $V$ that is not contained in $\operatorname{Span}\{v_1, v_2, \ldots, v_m\}$. By the increasing span criterion in Section 3.5, the set $\{v_1, v_2, \ldots, v_{m+1}\}$ is also linearly independent. Continuing in this way, we keep choosing vectors until we eventually do have a linearly independent spanning set: say $V = \operatorname{Span}\{v_1, v_2, \ldots, v_{m+k}\}$ for some $k > 0$. Then $\{v_1, v_2, \ldots, v_{m+k}\}$ is a basis for $V$, which implies that $\dim V = m + k > m$. But we were assuming that $V$ has dimension $m$, so $\{v_1, v_2, \ldots, v_m\}$ must have already been a basis.
Now suppose that $\{v_1, v_2, \ldots, v_m\}$ spans $V$. If it is not linearly independent, then by this theorem in Section 3.5, we can remove some number of vectors from it without shrinking its span. After reordering, we can assume that we removed the last $k$ vectors without shrinking the span, and that we cannot remove any more. Now $V = \operatorname{Span}\{v_1, v_2, \ldots, v_{m-k}\}$, and $\{v_1, v_2, \ldots, v_{m-k}\}$ is a basis for $V$ because it is linearly independent. This implies that $\dim V = m - k < m$. But we were assuming that $\dim V = m$, so $\{v_1, v_2, \ldots, v_m\}$ must have already been a basis.
In other words, if you already know that $\dim V = m$, and if you have a set of $m$ vectors $\mathcal{B} = \{v_1, v_2, \ldots, v_m\}$ in $V$, then you only have to check one of:

1. $\mathcal{B}$ is linearly independent, or
2. $\mathcal{B}$ spans $V$,

in order for $\mathcal{B}$ to be a basis of $V$. If you did not already know that $\dim V = m$, then you would have to check both properties.
For example, if $V$ is a plane, then any two noncollinear vectors in $V$ form a basis for $V$.
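As a small check of the plane example (a standard-library sketch; the `pivot_count` helper and the particular plane are our choices, not from the text): take $V = \operatorname{Span}\{(1,0,1), (0,1,1)\}$ in $\mathbb{R}^3$ and two noncollinear vectors in $V$. Since $\dim V = 2$, the basis theorem says it suffices to verify linear independence, i.e. two pivots:

```python
from fractions import Fraction

def pivot_count(rows):
    """Count pivots via forward elimination over exact fractions."""
    R = [[Fraction(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pr is None:
            continue
        R[r], R[pr] = R[pr], R[r]
        for i in range(r + 1, m):
            f = R[i][c] / R[r][c]
            R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

# Two noncollinear vectors in the plane V = Span{(1,0,1), (0,1,1)}:
a = [1, 0, 1]   # = 1*(1,0,1) + 0*(0,1,1)
b = [1, 1, 2]   # = 1*(1,0,1) + 1*(0,1,1)
# dim V = 2, so by the basis theorem independence alone makes {a, b} a basis.
print(pivot_count([a, b]) == 2)  # True
```

Checking the single condition (independence) is exactly the shortcut the basis theorem licenses once the dimension is known.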