Section 4.1 Determinants: Definition

  1. Learn the definition of the determinant.
  2. Learn some ways to eyeball a matrix with zero determinant, and how to compute determinants of upper- and lower-triangular matrices.
  3. Learn the basic properties of the determinant, and how to apply them.
  4. Recipe: compute the determinant using row and column operations.
  5. Theorems: existence theorem, invertibility property, multiplicativity property, transpose property.
  6. Vocabulary words: diagonal, upper-triangular, lower-triangular, transpose.
  7. Essential vocabulary word: determinant.

In this section, we define the determinant, and we present one way to compute it. Then we discuss some of the many wonderful properties the determinant enjoys.

Subsection 4.1.1 The Definition of the Determinant

The determinant of a square matrix A is a real number det ( A ) . It is defined via its behavior with respect to row operations; this means we can use row reduction to compute it. We will give a recursive formula for the determinant in Section 4.2. We will also show in this subsection that the determinant is related to invertibility, and in Section 4.3 that it is related to volumes.


The determinant is a function

det: { square matrices } → R

satisfying the following properties:

  1. Doing a row replacement on A does not change det ( A ) .
  2. Scaling a row of A by a scalar c multiplies the determinant by c .
  3. Swapping two rows of a matrix multiplies the determinant by −1.
  4. The determinant of the identity matrix I n is equal to 1.

In other words, to every square matrix A we assign a number det ( A ) in a way that satisfies the above properties.

In each of the first three cases, doing a row operation on a matrix scales the determinant by a nonzero number. (Multiplying a row by zero is not a row operation.) Therefore, doing row operations on a square matrix A does not change whether or not the determinant is zero.

The main motivation behind using these particular defining properties is geometric: see Section 4.3. Another motivation for this definition is that it tells us how to compute the determinant: we row reduce and keep track of the changes.


Let us compute $\det\begin{pmatrix}2&1\\1&4\end{pmatrix}$. First we row reduce, then we compute the determinant in the opposite order:

\[
\underset{\det = 7}{\begin{pmatrix}2&1\\1&4\end{pmatrix}}
\;\xrightarrow{R_1 \leftrightarrow R_2}\;
\underset{\det = -7}{\begin{pmatrix}1&4\\2&1\end{pmatrix}}
\;\xrightarrow{R_2 = R_2 - 2R_1}\;
\underset{\det = -7}{\begin{pmatrix}1&4\\0&-7\end{pmatrix}}
\;\xrightarrow{R_2 = R_2 \div (-7)}\;
\underset{\det = 1}{\begin{pmatrix}1&4\\0&1\end{pmatrix}}
\;\xrightarrow{R_1 = R_1 - 4R_2}\;
\underset{\det = 1}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}
\]

The reduced row echelon form of the matrix is the identity matrix I 2 , so its determinant is 1. The last step in the row reduction was a row replacement, so the second-to-last matrix also has determinant 1. The step before that scaled a row by −1/7; since the determinant of the matrix before that scaling, multiplied by −1/7, equals 1, that matrix must have determinant −7. The row replacement before it did not change the determinant either, so the matrix produced by the first step also has determinant −7. The first step in the row reduction was a row swap, so the determinant of the original matrix is the negative of −7. Thus, the determinant of the original matrix is 7.

Note that our answer agrees with the formula ad − bc for the determinant of a 2 × 2 matrix from Section 3.5.
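The bookkeeping in this example can be replayed mechanically. Here is a short pure-Python sketch (our own, not part of the text) that starts from det(I 2) = 1 and undoes each row operation's effect in reverse order:

```python
# Walk the row reduction backwards, starting from det(I_2) = 1 and
# undoing each operation's effect on the determinant.
det = 1.0        # the final matrix is I_2, whose determinant is 1
det *= 1         # R1 = R1 - 4*R2 was a row replacement: no change
det *= -7        # R2 was scaled by -1/7, so multiply back by -7
det *= 1         # R2 = R2 - 2*R1 was a row replacement: no change
det *= -1        # R1 <-> R2 was a swap: flip the sign
assert det == 7.0  # agrees with det [[2, 1], [1, 4]] = 2*4 - 1*1 = 7
```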

Here is the general method for computing determinants using row reduction.

Recipe: Computing determinants by row reducing

Let A be a square matrix. Suppose that you do some number of row operations on A to obtain a matrix B in row echelon form. Then

\[
\det(A) = (-1)^r \cdot \frac{\text{(product of the diagonal entries of } B\text{)}}{\text{(product of scaling factors used)}},
\]

where r is the number of row swaps performed.

In other words, the determinant of A is the product of diagonal entries of the row echelon form B , times a factor of ± 1 coming from the number of row swaps you made, divided by the product of the scaling factors used in the row reduction.
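The recipe translates directly into code. The sketch below (function name ours; a minimal illustration with no attention to numerical stability) row reduces using only row replacements and swaps, never scaling, so the determinant is (−1)^r times the product of the pivots:

```python
def det_by_row_reduction(A):
    """Compute det(A) by Gaussian elimination, tracking row swaps.

    Row replacements leave the determinant unchanged, each swap flips
    its sign, and we never scale a row, so the answer is (-1)^swaps
    times the product of the diagonal of the echelon form.
    """
    M = [row[:] for row in A]           # work on a copy
    n = len(M)
    sign = 1
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return 0.0                  # no pivot in this column: det = 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign                # a swap multiplies det by -1
        for r in range(col + 1, n):     # clear the entries below the pivot
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    prod = 1.0
    for i in range(n):
        prod *= M[i][i]
    return sign * prod

assert det_by_row_reduction([[2, 1], [1, 4]]) == 7.0   # the example above
assert det_by_row_reduction([[0, 1], [1, 0]]) == -1.0  # one swap
assert det_by_row_reduction([[1, 2], [2, 4]]) == 0.0   # singular matrix
```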

Example (The determinant of a 2 × 2 matrix)

Let us use the recipe to compute the determinant of a general 2 × 2 matrix $A = \begin{pmatrix}a&b\\c&d\end{pmatrix}$.

  • If a = 0, then
    \[ \det\begin{pmatrix}a&b\\c&d\end{pmatrix} = \det\begin{pmatrix}0&b\\c&d\end{pmatrix} = -\det\begin{pmatrix}c&d\\0&b\end{pmatrix} = -bc. \]
  • If a ≠ 0, then
    \[ \det\begin{pmatrix}a&b\\c&d\end{pmatrix} = a\cdot\det\begin{pmatrix}1&b/a\\c&d\end{pmatrix} = a\cdot\det\begin{pmatrix}1&b/a\\0&d - c\cdot b/a\end{pmatrix} = a\cdot 1\cdot(d - bc/a) = ad - bc. \]

In either case, we recover the formula in Section 3.5:

\[ \det\begin{pmatrix}a&b\\c&d\end{pmatrix} = ad - bc. \]
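Both branches of the case analysis collapse to this one closed formula, which is easy to sanity-check (the helper name det2 and the test values are ours):

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] via the closed formula ad - bc."""
    return a * d - b * c

# The a = 0 branch of the derivation gives det [[0, b], [c, d]] = -bc.
assert det2(0, 5, 3, 7) == -15
# The a != 0 branch gives a * (d - b*c/a) = ad - bc.
assert det2(2, 1, 1, 4) == 7
```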

If a matrix is already in row echelon form, then you can simply read off the determinant as the product of the diagonal entries. It turns out this is true for a slightly larger class of matrices called triangular.


  • The diagonal entries of a matrix A are the entries $a_{11}, a_{22}, \ldots$:
    \[
    \begin{pmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34} \end{pmatrix}
    \qquad
    \begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\\ a_{41}&a_{42}&a_{43} \end{pmatrix}
    \]
    In both cases the diagonal entries are $a_{11}, a_{22}, a_{33}$.
  • A square matrix is called upper-triangular if its nonzero entries all lie on or above the diagonal, and it is called lower-triangular if its nonzero entries all lie on or below the diagonal. It is called diagonal if all of its nonzero entries lie on the diagonal, i.e., if it is both upper-triangular and lower-triangular.
    \[
    \underset{\text{upper-triangular}}{\begin{pmatrix} \star&\star&\star&\star\\ 0&\star&\star&\star\\ 0&0&\star&\star\\ 0&0&0&\star \end{pmatrix}}
    \qquad
    \underset{\text{lower-triangular}}{\begin{pmatrix} \star&0&0&0\\ \star&\star&0&0\\ \star&\star&\star&0\\ \star&\star&\star&\star \end{pmatrix}}
    \qquad
    \underset{\text{diagonal}}{\begin{pmatrix} \star&0&0&0\\ 0&\star&0&0\\ 0&0&\star&0\\ 0&0&0&\star \end{pmatrix}}
    \]

  1. Suppose that A has a zero row. Let B be the matrix obtained by negating the zero row. Then det(A) = −det(B) by the second defining property (scaling a row by c = −1). But A = B, so det(A) = −det(B) = −det(A):

    \[ \begin{pmatrix}1&2&3\\0&0&0\\7&8&9\end{pmatrix} \xrightarrow{R_2 = -R_2} \begin{pmatrix}1&2&3\\0&0&0\\7&8&9\end{pmatrix}. \]

    Putting these together yields det(A) = −det(A), so det(A) = 0.

    Now suppose that A has a zero column. Then A is not invertible by the invertible matrix theorem in Section 3.6, so its reduced row echelon form has a zero row. Since row operations do not change whether the determinant is zero, we conclude det ( A )= 0.

  2. First suppose that A is upper-triangular, and that one of the diagonal entries is zero, say a ii = 0. We can perform row operations to clear the entries above the nonzero diagonal entries:

    \[
    \begin{pmatrix} a_{11}&\star&\star&\star\\ 0&a_{22}&\star&\star\\ 0&0&0&\star\\ 0&0&0&a_{44} \end{pmatrix}
    \longrightarrow
    \begin{pmatrix} a_{11}&0&\star&0\\ 0&a_{22}&\star&0\\ 0&0&0&0\\ 0&0&0&a_{44} \end{pmatrix}
    \]

    In the resulting matrix, the i th row is zero, so det ( A )= 0 by the first part.

    Still assuming that A is upper-triangular, now suppose that all of the diagonal entries of A are nonzero. Then A can be transformed to the identity matrix by scaling the diagonal entries and then doing row replacements:

    \[
    \underset{\det = abc}{\begin{pmatrix} a&\star&\star\\ 0&b&\star\\ 0&0&c \end{pmatrix}}
    \;\xrightarrow{\text{scale by } a^{-1},\, b^{-1},\, c^{-1}}\;
    \underset{\det = 1}{\begin{pmatrix} 1&\star&\star\\ 0&1&\star\\ 0&0&1 \end{pmatrix}}
    \;\xrightarrow{\text{row replacements}}\;
    \underset{\det = 1}{\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}}
    \]

    Since det ( I n )= 1 and we scaled by the reciprocals of the diagonal entries, this implies det ( A ) is the product of the diagonal entries.

    The same argument works for lower-triangular matrices, except that the row replacements go down instead of up.

A matrix can always be transformed into row echelon form by a series of row operations, and a matrix in row echelon form is upper-triangular. Therefore, we have completely justified the recipe for computing the determinant.
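For triangular matrices, then, no row reduction is needed at all: the determinant is simply the product of the diagonal entries. A minimal sketch (function name and example matrices are ours):

```python
def det_triangular(M):
    """Determinant of an upper- or lower-triangular matrix:
    the product of its diagonal entries."""
    prod = 1
    for i in range(len(M)):
        prod *= M[i][i]
    return prod

# Upper- and lower-triangular examples with the same diagonal 2, 3, 6.
assert det_triangular([[2, 5, 1], [0, 3, 4], [0, 0, 6]]) == 36
assert det_triangular([[2, 0, 0], [5, 3, 0], [1, 4, 6]]) == 36
# If any diagonal entry is zero, the determinant is zero.
assert det_triangular([[2, 5], [0, 0]]) == 0
```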

The determinant is characterized by its defining properties, since we can compute the determinant of any matrix using row reduction, as in the above recipe. However, we have not yet proved the existence of a function satisfying the defining properties! Row reducing will compute the determinant if it exists, but we cannot use row reduction to prove existence, because we do not yet know that you compute the same number by row reducing in two different ways.

We will prove the existence theorem in Section 4.2, by exhibiting a recursive formula for the determinant. Again, the real content of the existence theorem is:

No matter which row operations you do, you will always compute the same value for the determinant.

Subsection 4.1.2 Magical Properties of the Determinant

In this subsection, we will discuss a number of the amazing properties enjoyed by the determinant: the invertibility property, the multiplicativity property, and the transpose property.

By the invertibility property, a matrix that does not satisfy any of the properties of the invertible matrix theorem in Section 3.6 has zero determinant.

In particular, if two rows/columns of A are multiples of each other, then det ( A )= 0. We also recover the fact that a matrix with a row or column of zeros has determinant zero.

The proofs of the multiplicativity property and the transpose property below, as well as the cofactor expansion theorem in Section 4.2 and the determinants and volumes theorem in Section 4.3, use the following strategy: define another function d : { n × n matrices }→ R , and prove that d satisfies the same four defining properties as the determinant. By the existence theorem, the function d is equal to the determinant. This is an advantage of defining a function via its properties: in order to prove it is equal to another function, one only has to check the defining properties.

Recall that taking a power of a square matrix A means taking products of A with itself:

\[ A^2 = AA, \qquad A^3 = AAA, \qquad \text{etc.} \]

If A is invertible, then we define

\[ A^{-2} = A^{-1}A^{-1}, \qquad A^{-3} = A^{-1}A^{-1}A^{-1}, \qquad \text{etc.} \]

For completeness, we set $A^0 = I_n$ if $A \neq 0$.

Here is another application of the multiplicativity property.


The determinant of the product is the product of the determinants by the multiplicativity property:

\[ \det(A_1A_2\cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k). \]

By the invertibility property, this is nonzero if and only if $A_1A_2\cdots A_k$ is invertible. On the other hand, $\det(A_1)\det(A_2)\cdots\det(A_k)$ is nonzero if and only if each $\det(A_i) \neq 0$, which means each $A_i$ is invertible.
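The multiplicativity property is easy to spot-check for 2 × 2 matrices using the formula ad − bc (the helper names and example matrices below are ours):

```python
def det2(M):
    """Determinant of a 2x2 matrix via ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 4]]   # det = 7
B = [[0, 3], [5, 2]]   # det = -15
assert det2(matmul2(A, B)) == det2(A) * det2(B)   # both equal -105

# A product is invertible exactly when every factor has nonzero determinant:
C = [[1, 2], [2, 4]]   # det = 0, so any product involving C is singular
assert det2(matmul2(A, C)) == 0
```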

In order to state the transpose property, we need to define the transpose of a matrix.


The transpose of an m × n matrix A is the n × m matrix A T whose rows are the columns of A . In other words, the ij entry of A T is a ji .

\[
A = \begin{pmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34} \end{pmatrix}
\qquad
A^T = \begin{pmatrix} a_{11}&a_{21}&a_{31}\\ a_{12}&a_{22}&a_{32}\\ a_{13}&a_{23}&a_{33}\\ a_{14}&a_{24}&a_{34} \end{pmatrix}
\]

Like inversion, transposition reverses the order of matrix multiplication.

The transpose property is very useful. For concreteness, we note that det ( A )= det ( A T ) means, for instance, that

\[ \det\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} = \det\begin{pmatrix}1&4&7\\2&5&8\\3&6&9\end{pmatrix}. \]

This implies that the determinant has the curious feature that it also behaves well with respect to column operations. Indeed, a column operation on A is the same as a row operation on A T , and det ( A )= det ( A T ) .

The previous corollary makes it easier to compute the determinant: one is allowed to do row and column operations when simplifying the matrix. (Of course, one still has to keep track of how the row and column operations change the determinant.)
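The transpose property, and the column-operation rule that follows from it, can likewise be spot-checked for 2 × 2 matrices (helper names and the example matrix are ours):

```python
def det2(M):
    """Determinant of a 2x2 matrix via ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def transpose(M):
    """The rows of the transpose are the columns of M."""
    return [list(row) for row in zip(*M)]

A = [[2, 7], [1, 4]]
assert det2(A) == det2(transpose(A)) == 1   # det(A) = det(A^T)

# A column swap on A is a row swap on A^T: both flip the sign.
swapped_cols = [[7, 2], [4, 1]]
assert det2(swapped_cols) == -det2(A)
```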

Summary: Magical Properties of the Determinant

  1. There is one and only one function det: { n × n matrices }→ R satisfying the four defining properties.
  2. The determinant of an upper-triangular or lower-triangular matrix is the product of the diagonal entries.
  3. A square matrix A is invertible if and only if det(A) ≠ 0; in this case,
    \[ \det(A^{-1}) = \frac{1}{\det(A)}. \]
  4. If A and B are n × n matrices, then
    det ( AB )= det ( A ) det ( B ) .
  5. For any square matrix A , we have
    det ( A T )= det ( A ) .
  6. The determinant can be computed by performing row and/or column operations.