
Section 6.6 Stochastic Matrices

Objectives
  1. Learn examples of stochastic matrices and applications to difference equations.
  2. Recipe: find the steady state of a positive stochastic matrix.
  3. Picture: dynamics of a positive stochastic matrix.
  4. Theorem: the Perron–Frobenius theorem.
  5. Vocabulary words: difference equation, (positive) stochastic matrix, steady state.

This section is devoted to one common kind of application of eigenvalues: to the study of difference equations, in particular to Markov chains. We will introduce stochastic matrices, which encode this type of difference equation, and will cover in detail the most famous example of a stochastic matrix: the Google Matrix.

Subsection 6.6.1 Difference Equations

Suppose that we are studying a system whose state can be described by a list of numbers: for instance, the numbers of rabbits aged 0, 1, and 2 years, respectively, or the number of copies of Prognosis Negative in each of the Red Box kiosks in Atlanta. In each case, we can represent the state at time $t$ by a vector $v_t$. Suppose in addition that the state at time $t+1$ is related to the state at time $t$ in a linear way: $v_{t+1} = Av_t$ for some matrix $A$. This is the situation we will consider in this subsection.

Definition

A difference equation is an equation of the form

$$v_{t+1} = Av_t$$

for an $n \times n$ matrix $A$ and vectors $v_0, v_1, v_2, \ldots$ in $\mathbb{R}^n$.

In other words:

  • $v_t$ is the “state at time $t$,”
  • $v_{t+1}$ is the “state at time $t+1$,” and
  • $v_{t+1} = Av_t$ means that $A$ is the “change of state matrix.”

Note that

$$v_t = Av_{t-1} = A^2 v_{t-2} = \cdots = A^t v_0,$$

which should hint to you that the long-term behavior of a difference equation is an eigenvalue problem.

We will use the following example in this subsection and the next. Understanding this section amounts to understanding this example.

Example

Red Box has kiosks all over Atlanta where you can rent movies. You can return them to any other kiosk. For simplicity, pretend that there are three kiosks in Atlanta, and that every customer returns their movie the next day. Let $v_t$ be the vector whose entries $x_t, y_t, z_t$ are the number of copies of Prognosis Negative at kiosks 1, 2, and 3, respectively. Let $A$ be the matrix whose $(i,j)$-entry is the probability that a customer renting Prognosis Negative from kiosk $j$ returns it to kiosk $i$. For example, the matrix

$$A = \begin{pmatrix} .3 & .4 & .5 \\ .3 & .4 & .3 \\ .4 & .2 & .2 \end{pmatrix}$$

encodes a 30% probability that a customer renting from kiosk 3 returns the movie to kiosk 2, and a 40% probability that a movie rented from kiosk 1 gets returned to kiosk 3. The second row (for instance) of the matrix A says:

The number of movies returned to kiosk 2 will be (on average):

30% of the movies from kiosk 1
+ 40% of the movies from kiosk 2
+ 30% of the movies from kiosk 3.

Applying this to all three rows, this means

$$A \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} = \begin{pmatrix} .3x_t + .4y_t + .5z_t \\ .3x_t + .4y_t + .3z_t \\ .4x_t + .2y_t + .2z_t \end{pmatrix}.$$

Therefore, $Av_t$ represents the number of movies in each kiosk the next day:

$$Av_t = v_{t+1}.$$

This system is modeled by a difference equation.
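To make this concrete, here is a minimal sketch of one step of this difference equation in Python with numpy (our own illustration, not part of the text; the starting counts 30, 50, 20 are the ones used later in this section):

```python
import numpy as np

# Change-of-state matrix from the Red Box example: the (i, j) entry is the
# probability that a movie rented from kiosk j is returned to kiosk i.
A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

v0 = np.array([30.0, 50.0, 20.0])  # copies at kiosks 1, 2, 3 today
v1 = A @ v0                        # v_{t+1} = A v_t: copies tomorrow
print(v1)                          # [39. 35. 26.]
```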

An important question to ask about a difference equation is: what is its long-term behavior? Where will the movies be after 100 days? In the next subsection, we will answer this question for a particular type of difference equation.

Subsection 6.6.2 Stochastic Matrices and the Steady State

In this subsection, we discuss difference equations representing probabilities, like the Red Box example. Such systems are called Markov chains. The most important result in this section is the Perron–Frobenius theorem, which describes the long-term behavior of a Markov chain.

Definition

A square matrix A is stochastic if all of its entries are nonnegative, and the entries of each column sum to one.

A matrix is positive if all of its entries are positive numbers.

Example

Continuing with the Red Box example, the matrix

$$A = \begin{pmatrix} .3 & .4 & .5 \\ .3 & .4 & .3 \\ .4 & .2 & .2 \end{pmatrix}$$

is a positive stochastic matrix. The fact that the columns sum to one says that all of the movies rented from a particular kiosk must be returned to some kiosk (remember that every customer returns their movie the next day). For instance, the first column says:

Of the movies rented from kiosk 1,

30% will be returned to kiosk 1
30% will be returned to kiosk 2
40% will be returned to kiosk 3.

The sum is 100%, as all of the movies are returned to one of the three kiosks.

The matrix A represents the change of state from one day to the next:

$$\begin{pmatrix} x_{t+1} \\ y_{t+1} \\ z_{t+1} \end{pmatrix} = A \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} = \begin{pmatrix} .3x_t + .4y_t + .5z_t \\ .3x_t + .4y_t + .3z_t \\ .4x_t + .2y_t + .2z_t \end{pmatrix}.$$

If we sum the entries of $v_{t+1}$, we obtain

$$(.3x_t + .4y_t + .5z_t) + (.3x_t + .4y_t + .3z_t) + (.4x_t + .2y_t + .2z_t) = (.3 + .3 + .4)x_t + (.4 + .4 + .2)y_t + (.5 + .3 + .2)z_t = x_t + y_t + z_t.$$

This says that the total number of copies of Prognosis Negative in the three kiosks does not change from day to day, as we expect.

The fact that the entries of the vectors $v_t$ and $v_{t+1}$ sum to the same number is a consequence of the fact that the columns of a stochastic matrix sum to one.

Let $A$ be a stochastic matrix, let $v_t$ be a vector, and let $v_{t+1} = Av_t$. Then the sum of the entries of $v_t$ equals the sum of the entries of $v_{t+1}$.
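Here is a quick numerical check of this fact for the Red Box matrix (an illustrative sketch, not part of the text):

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# Each column of the stochastic matrix A sums to one...
print(A.sum(axis=0))              # [1. 1. 1.]

# ...so multiplication by A preserves the sum of the entries of a vector.
v = np.array([30.0, 50.0, 20.0])
print(v.sum(), (A @ v).sum())     # 100.0 100.0
```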

Computing the long-term behavior of a difference equation turns out to be an eigenvalue problem. The eigenvalues of stochastic matrices have very special properties.

In fact, for a positive stochastic matrix $A$, one can show that if $\lambda \neq 1$ is a (real or complex) eigenvalue of $A$, then $|\lambda| < 1$. The 1-eigenspace of a stochastic matrix is very important.
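For the Red Box matrix, one can check this eigenvalue property numerically (illustrative sketch):

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# One eigenvalue equals 1; every other eigenvalue is strictly smaller
# in absolute value (here the eigenvalues are 1, -0.2, and 0.1).
print(np.linalg.eigvals(A))
```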

Definition

A steady state of a stochastic matrix A is an eigenvector w with eigenvalue 1, such that the entries are positive and sum to 1.

The Perron–Frobenius theorem describes the long-term behavior of a difference equation represented by a stochastic matrix. Its proof is beyond the scope of this text.

Perron–Frobenius Theorem

If $A$ is a positive stochastic matrix, then it admits a unique steady state vector $w$, which spans the 1-eigenspace. Moreover, for any vector $v_0$ whose entries sum to some number $c$, the iterates $v_1 = Av_0$, $v_2 = Av_1, \ldots$ approach $cw$ as $t$ gets large.

Translation: The Perron–Frobenius theorem makes the following assertions:

  • The 1 -eigenspace of a positive stochastic matrix is a line.
  • The 1 -eigenspace contains a vector with positive entries.
  • All vectors approach the 1 -eigenspace upon repeated multiplication by A .

One should think of a steady state vector $w$ as a vector of percentages. For example, if the movies are distributed according to these percentages today, then they will have the same distribution tomorrow, since $Aw = w$. And no matter the starting distribution of movies, the long-term distribution will always be the steady state vector.

The sum $c$ of the entries of $v_0$ is the total number of things in the system being modeled. The total number does not change, so the long-term state of the system must approach $cw$: it is a multiple of $w$ because it is contained in the 1-eigenspace, and the entries of $cw$ sum to $c$.

Recipe 1: Compute the steady state vector

Let A be a positive stochastic matrix. Here is how to compute the steady-state vector of A .

  1. Find any eigenvector $v$ of $A$ with eigenvalue 1 by solving $(A - I_n)v = 0$.
  2. Divide v by the sum of the entries of v to obtain a vector w whose entries sum to 1.
  3. This vector automatically has positive entries. It is the unique steady-state vector.
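Here is a sketch of this recipe in numpy, assuming scipy is available for the null-space computation:

```python
import numpy as np
from scipy.linalg import null_space  # assumes scipy is installed

A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# Step 1: solve (A - I_3)v = 0 to get an eigenvector with eigenvalue 1.
v = null_space(A - np.eye(3))[:, 0]

# Step 2: divide by the sum of the entries so that they sum to 1.
w = v / v.sum()
print(w)   # approx [0.38888889 0.33333333 0.27777778], i.e. (7, 6, 5)/18
```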

The above recipe is suitable for calculations by hand, but it does not take advantage of the fact that A is a stochastic matrix. In practice, it is generally faster to compute a steady state vector by computer as follows:

Recipe 2: Compute the steady state vector by computer

Let A be a positive stochastic matrix. Here is how to compute the steady-state vector of A with a computer.

  1. Choose any vector $v_0$ whose entries sum to 1 (e.g., a standard coordinate vector).
  2. Compute $v_1 = Av_0$, $v_2 = Av_1$, $v_3 = Av_2$, etc.
  3. These converge to the steady state vector $w$.
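A sketch of this recipe in numpy (the convergence is guaranteed by the Perron–Frobenius theorem):

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# Step 1: any vector whose entries sum to 1, e.g. a standard coordinate vector.
v = np.array([1.0, 0.0, 0.0])

# Step 2: repeatedly multiply by A; the iterates converge to the steady state.
for _ in range(50):
    v = A @ v
print(v)   # approx (7, 6, 5)/18
```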
Example

Continuing with the Red Box example, we can illustrate the Perron–Frobenius theorem explicitly. The matrix

$$A = \begin{pmatrix} .3 & .4 & .5 \\ .3 & .4 & .3 \\ .4 & .2 & .2 \end{pmatrix}$$

has characteristic polynomial

$$f(\lambda) = -\lambda^3 + 0.9\lambda^2 + 0.12\lambda - 0.02 = -(\lambda - 1)(\lambda + 0.2)(\lambda - 0.1).$$

Notice that 1 is strictly greater in absolute value than the other eigenvalues, and that it has algebraic (hence, geometric) multiplicity 1. One computes eigenvectors for the eigenvalues $1, -0.2, 0.1$ to be, respectively,

$$u_1 = \begin{pmatrix} 7 \\ 6 \\ 5 \end{pmatrix} \qquad u_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \qquad u_3 = \begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix}.$$

The eigenvector $u_1$ necessarily has positive entries; the steady-state vector is

$$w = \frac{1}{7+6+5}\, u_1 = \frac{1}{18} \begin{pmatrix} 7 \\ 6 \\ 5 \end{pmatrix}.$$

The eigenvectors $u_1, u_2, u_3$ form a basis $\mathcal{B}$ for $\mathbb{R}^3$; for any vector $x = a_1u_1 + a_2u_2 + a_3u_3$ in $\mathbb{R}^3$, we have

$$Ax = A(a_1u_1 + a_2u_2 + a_3u_3) = a_1Au_1 + a_2Au_2 + a_3Au_3 = a_1u_1 - 0.2\,a_2u_2 + 0.1\,a_3u_3.$$

Iterating multiplication by $A$ in this way, we have

$$A^t x = a_1u_1 + (-0.2)^t a_2u_2 + (0.1)^t a_3u_3 \longrightarrow a_1u_1$$

as $t \to \infty$. This shows that $A^t x$ approaches $a_1u_1$, which is an eigenvector with eigenvalue 1, as guaranteed by the Perron–Frobenius theorem.

What do the above calculations say about the number of copies of Prognosis Negative in the Atlanta Red Box kiosks? Suppose that the kiosks start with 100 copies of the movie, with 30 copies at kiosk 1, 50 copies at kiosk 2, and 20 copies at kiosk 3. Let $v_0 = (30, 50, 20)$ be the vector describing this state. Then there will be $v_1 = Av_0$ movies in the kiosks the next day, $v_2 = Av_1$ the day after that, and so on. We let $v_t = (x_t, y_t, z_t)$.

t    x_t          y_t          z_t
0    30.000000    50.000000    20.000000
1    39.000000    35.000000    26.000000
2    38.700000    33.500000    27.800000
3    38.910000    33.350000    27.740000
4    38.883000    33.335000    27.782000
5    38.889900    33.333500    27.776600
6    38.888670    33.333350    27.777980
7    38.888931    33.333335    27.777734
8    38.888880    33.333333    27.777786
9    38.888891    33.333333    27.777776
10   38.888889    33.333333    27.777778

(Of course it does not make sense to have a fractional number of movies; the decimals are included here to illustrate the convergence.) The steady-state vector says that eventually, the movies will be distributed in the kiosks according to the percentages

$$w = \frac{1}{18} \begin{pmatrix} 7 \\ 6 \\ 5 \end{pmatrix} = \begin{pmatrix} 38.888888\% \\ 33.333333\% \\ 27.777778\% \end{pmatrix},$$

which agrees with the above table. Moreover, this distribution is independent of the beginning distribution of movies in the kiosks.
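The table above can be reproduced with a few lines of numpy (an illustrative sketch):

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

v = np.array([30.0, 50.0, 20.0])   # 30, 50, 20 copies at kiosks 1, 2, 3
for t in range(11):
    print(t, np.round(v, 6))       # rows of the table above
    v = A @ v                      # next day's distribution
```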

Now we turn to visualizing the dynamics of (i.e., repeated multiplication by) the matrix $A$. This matrix is diagonalizable; we have $A = CDC^{-1}$ for

$$C = \begin{pmatrix} 7 & -1 & 1 \\ 6 & 0 & -3 \\ 5 & 1 & 2 \end{pmatrix} \qquad D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -0.2 & 0 \\ 0 & 0 & 0.1 \end{pmatrix}.$$

The matrix $D$ leaves the $x$-coordinate unchanged, scales the $y$-coordinate by $-1/5$, and scales the $z$-coordinate by $1/10$. Repeated multiplication by $D$ makes the $y$- and $z$-coordinates very small, so it “sucks all vectors into the $x$-axis.”

The matrix $A$ does the same thing as $D$, but with respect to the coordinate system defined by the columns $u_1, u_2, u_3$ of $C$. This means that $A$ “sucks all vectors into the 1-eigenspace,” without changing the sum of the entries of the vectors.
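One can verify this diagonalization numerically (illustrative sketch):

```python
import numpy as np

# Columns of C are the eigenvectors u1, u2, u3; D holds the eigenvalues.
C = np.array([[7.0, -1.0,  1.0],
              [6.0,  0.0, -3.0],
              [5.0,  1.0,  2.0]])
D = np.diag([1.0, -0.2, 0.1])

# A = C D C^{-1} recovers the Red Box matrix.
A = C @ D @ np.linalg.inv(C)
print(np.round(A, 10))

# Repeated multiplication by D kills the y- and z-coordinates:
print(np.linalg.matrix_power(D, 20) @ np.array([1.0, 1.0, 1.0]))
```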

Figure 16: Dynamics of the stochastic matrix $A$. In the interactive version of this figure, multiplying the colored points by $D$ (left) and by $A$ (right) shows all vectors being “sucked into the 1-eigenspace” (the green line) on both sides.

The picture of a positive stochastic matrix is always the same, whether or not it is diagonalizable: all vectors are “sucked into the 1 -eigenspace,” which is a line, without changing the sum of the entries of the vectors. This is the geometric content of the Perron–Frobenius theorem.

Subsection 6.6.3 Google’s PageRank Algorithm

Internet searching in the 1990s was very inefficient. Yahoo or AltaVista would scan pages for your search text, and simply list the results with the most occurrences of those words. Not surprisingly, the more unsavory websites soon learned that by putting the words “Alanis Morissette” a million times in their pages, they could show up first every time an angsty teenager tried to find Jagged Little Pill on Napster.

Larry Page and Sergey Brin invented a way to rank pages by importance. They founded Google based on their algorithm. Here is how it works. (Roughly.)

Each web page has an associated importance, or rank. This is a positive number. This rank is determined by the following rule.

The Importance Rule

If a page $P$ links to $n$ other pages $Q_1, Q_2, \ldots, Q_n$, then each page $Q_i$ inherits $\frac{1}{n}$ of $P$’s importance.

In practice, this means:

  • If a very important page links to your page (and not to a zillion other ones as well), then your page is considered important.
  • If a zillion unimportant pages link to your page, then your page is still important.
  • If only one unknown page links to yours, your page is not important.

Alternatively, there is the random surfer interpretation. A “random surfer” just sits at his computer all day, randomly clicking on links. The pages he spends the most time on should be the most important. So, the important (high-ranked) pages are those where a random surfer will end up most often. This measure turns out to be equivalent to the rank.

The Importance Matrix

Consider an Internet with $n$ pages. The importance matrix is the $n \times n$ matrix $A$ whose $(i,j)$-entry is the importance that page $j$ passes to page $i$.

Observe that the importance matrix is a stochastic matrix, assuming every page contains a link: if page $i$ has $m$ links, then the $i$th column contains the number $1/m$, $m$ times, and the number zero in the other entries.

Example

Consider the following Internet with only four pages. Links are indicated by arrows.

[Figure: a four-page Internet with pages A, B, C, D. Arrows indicate links: A links to B, C, D (each labeled 1/3); B links to C, D (each labeled 1/2); C links to A (labeled 1); D links to A, C (each labeled 1/2).]

The Importance Rule says:

  • Page A has 3 links, so it passes $\frac{1}{3}$ of its importance to pages B, C, D.
  • Page B has 2 links, so it passes $\frac{1}{2}$ of its importance to pages C, D.
  • Page C has one link, so it passes all of its importance to page A.
  • Page D has 2 links, so it passes $\frac{1}{2}$ of its importance to pages A, C.

In terms of matrices, if $v = (a, b, c, d)$ is the vector containing the ranks $a, b, c, d$ of the pages A, B, C, D, then

$$\begin{pmatrix} 0 & 0 & 1 & \frac{1}{2} \\ \frac{1}{3} & 0 & 0 & 0 \\ \frac{1}{3} & \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{2} & 0 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} c + \frac{1}{2}d \\ \frac{1}{3}a \\ \frac{1}{3}a + \frac{1}{2}b + \frac{1}{2}d \\ \frac{1}{3}a + \frac{1}{2}b \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}.$$

The matrix on the left is the Importance Matrix, and the final equality expresses the Importance Rule.
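Here is a sketch of how one might build this Importance Matrix from the link structure and recover the rank vector by iteration (the `links` encoding and variable names are our own, not from the text):

```python
import numpy as np

# Links on each page of the four-page Internet (0 = A, 1 = B, 2 = C, 3 = D).
links = {0: [1, 2, 3],   # A links to B, C, D
         1: [2, 3],      # B links to C, D
         2: [0],         # C links to A
         3: [0, 2]}      # D links to A, C

# Column j holds 1/m in each row that page j links to (m = number of links).
n = len(links)
A = np.zeros((n, n))
for j, targets in links.items():
    for i in targets:
        A[i, j] = 1.0 / len(targets)

# The rank vector is an eigenvector with eigenvalue 1; find it by iteration.
v = np.ones(n) / n
for _ in range(100):
    v = A @ v
print(v)   # approx (12, 4, 9, 6)/31
```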

The above example illustrates the key observation.

Key Observation

The rank vector is an eigenvector of the importance matrix with eigenvalue one.

In light of the key observation, we would like to use the Perron–Frobenius theorem to find the rank vector. Unfortunately, the Importance Matrix is not always a positive stochastic matrix.

Here is Page and Brin’s solution. First we fix the importance matrix by replacing each zero column with a column of $1/n$’s, where $n$ is the number of pages:

$$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix} \quad\text{becomes}\quad A' = \begin{pmatrix} 0 & 0 & 1/3 \\ 0 & 0 & 1/3 \\ 1 & 1 & 1/3 \end{pmatrix}.$$

The modified Importance Matrix $A'$ is always stochastic.
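A sketch of this fix in numpy (the helper name `modify_importance_matrix` is hypothetical):

```python
import numpy as np

# Replace each zero column of the importance matrix with a column of 1/n's.
def modify_importance_matrix(A):
    A = A.copy()
    n = A.shape[0]
    A[:, A.sum(axis=0) == 0] = 1.0 / n
    return A

A = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])
print(modify_importance_matrix(A))   # third column becomes (1/3, 1/3, 1/3)
```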

Now we choose a number $p$ in $(0,1)$, called the damping factor. (A typical value is $p = 0.15$.)

The Google Matrix

Let $A$ be the Importance Matrix for an Internet with $n$ pages, and let $A'$ be the modified Importance Matrix. The Google Matrix is the matrix

$$M = (1-p)\cdot A' + p\cdot B \quad\text{where}\quad B = \frac{1}{n}\begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}.$$

In the random surfer interpretation, this matrix M says: with probability p , our surfer will surf to a completely random page; otherwise, he'll click a random link on the current page, unless the current page has no links, in which case he'll surf to a completely random page in either case.

The reader can verify the following important fact.

The Google Matrix is a positive stochastic matrix.

If we declare that the ranks of all of the pages must sum to one, then we find:

The 25 Billion Dollar Eigenvector

The PageRank vector is the steady state of the Google Matrix.

This exists and has positive entries by the Perron–Frobenius theorem. The hard part is calculating it: in real life, the Google Matrix has zillions of rows.
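To close the loop, here is a sketch computing the PageRank vector of the four-page example via the Google Matrix (since that example’s importance matrix has no zero columns, $A' = A$ here; the variable names are our own):

```python
import numpy as np

# PageRank of the four-page example by power iteration on the Google Matrix,
# with damping factor p = 0.15.
p = 0.15
A = np.array([[0,   0,   1, 1/2],
              [1/3, 0,   0, 0  ],
              [1/3, 1/2, 0, 1/2],
              [1/3, 1/2, 0, 0  ]])
n = A.shape[0]
M = (1 - p) * A + p * np.ones((n, n)) / n   # positive stochastic matrix

v = np.ones(n) / n          # ranks declared to sum to one
for _ in range(100):
    v = M @ v
print(v)                    # the PageRank vector: the steady state of M
```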