We will now study stochastic processes: experiments in which the outcome at each stage depends on the outcome of the previous stage and is described by probabilities. Such a process is called a Markov chain or Markov process. It was first studied by the Russian mathematician Andrei A. Markov in the early 1900s.
About 600 cities worldwide have bike share programs. Typically a person pays a fee to join the program, borrows a bicycle from any bike share station, and returns it to the same or another station in the system. Each day, the distribution of bikes at the stations changes, as bikes get returned to stations different from those where they were borrowed.
For simplicity, let’s consider a very simple bike share program with only 3 stations: A, B, C. Suppose that all bicycles must be returned to the station at the end of the day, so that each day there is a time, let’s say midnight, that all bikes are at some station, and we can examine all the stations at this time of day, every day. We want to model the movement of bikes from midnight of a given day to midnight of the next day. We find that over a 1 day period,
We can draw an arrow diagram to show this. The arrows indicate the station where the bicycle was started, called its initial state, and the stations at which it might be located one day later, called the terminal states. The numbers on the arrows show the probability for being in each of the indicated terminal states.
Because our bike share example is simple and has only 3 stations, the arrow diagram, also called a directed graph, helps us visualize the information. But if we had an example with 10, or 20, or more bike share stations, the diagram would become so complicated that it would be difficult to understand the information in the diagram.
We can use a transition matrix to organize the information,
Each row in the matrix represents an initial state. Each column represents a terminal state.
We will assign the rows in order to stations A, B, C, and the columns in the same order to stations A, B, C. Therefore the matrix must be a square matrix, with the same number of rows as columns. The entry in row 2 column 3, for example, shows the probability that a bike that is initially at station B will be at station C one day later: that entry is 0.30, which is the probability in the diagram for the arrow that points from B to C. We use the letter T for the transition matrix.
Looking at the first row that represents bikes initially at station A, we see that 30% of the bikes borrowed from station A are returned to station A, 50% end up at station B, and 20% end up at station C, after one day.
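The transition matrix can also be entered and sanity-checked with a few lines of code. The sketch below uses plain Python lists (no external libraries), with rows and columns in the station order A, B, C described above.

```python
# Transition matrix for the bike-share example; rows and columns in order A, B, C.
# Entry T[i][j] = probability that a bike at station i is at station j one day later.
T = [
    [0.3, 0.5, 0.2],  # from A: 30% stay at A, 50% end up at B, 20% at C
    [0.1, 0.6, 0.3],  # from B
    [0.1, 0.1, 0.8],  # from C
]

# Each row lists all possible destinations, so every row must sum to 1.
for row in T:
    assert abs(sum(row) - 1.0) < 1e-9

print(T[1][2])  # row 2, column 3: probability of moving from B to C -> 0.3
```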
We note some properties of the transition matrix:
A city is served by two cable TV companies, BestTV and CableCast.
The two states in this example are BestTV and CableCast. Express the information above as a transition matrix which displays the probabilities of going from one state into another state.
Solution
The transition matrix is:
As previously noted, the reader should observe that a transition matrix is always a square matrix because every possible state must be represented by both a row and a column. All entries in a transition matrix are non-negative because they represent probabilities. And, since all possible outcomes are considered in the Markov process, the entries in each row always sum to 1.
With a larger transition matrix, the ideas in Example \(\PageIndex\) could be expanded to represent a market with more than 2 cable TV companies. The concepts of brand loyalty and switching between brands demonstrated in the cable TV example apply to many types of products, such as cell phone carriers, brands of regular purchases such as food or laundry detergent, brands of major purchases such as cars or appliances, airlines that travelers choose when booking flights, or hotel chains that travelers choose to stay in.
The transition matrix shows the probabilities for transitions between states at two consecutive times. We also need a way to represent the distribution among the states at a particular point in time. To do this we use a row matrix called a state vector: it has one row, with one column for each state. The entries show the distribution by state at a given point in time. All entries are between 0 and 1 inclusive, and the sum of the entries is 1.
For the bike share example with 3 bike share stations, the state vector is a \(1 \times 3\) matrix with 1 row and 3 columns. Suppose that when we start observing our bike share program, 30% of the bikes are at station A, 45% of the bikes are at station B, and 25% are at station C. The initial state vector is
The subscript 0 indicates that this is the initial distribution, before any transitions occur.
If we want to determine the distribution after one transition, we’ll need to find a new state vector that we’ll call V1. The subscript 1 indicates this is the distribution after 1 transition has occurred.
We find V1 by multiplying V0 by the transition matrix T, as follows:
After 1 day (1 transition), 16% of the bikes are at station A, 44.5% are at station B, and 39.5% are at station C.
We showed the step by step work for the matrix multiplication above. In the future we’ll generally use technology, such as the matrix capabilities of our calculator, to perform any necessary matrix multiplications, rather than showing the step by step work.
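As a sketch of what such technology does, the state-vector update \(V_1 = V_0 T\) can be computed in a few lines of plain Python (no libraries), using the bike-share numbers above.

```python
# Initial distribution: 30% of bikes at A, 45% at B, 25% at C.
V0 = [0.30, 0.45, 0.25]

# Transition matrix from the bike-share example (rows/columns in order A, B, C).
T = [[0.3, 0.5, 0.2],
     [0.1, 0.6, 0.3],
     [0.1, 0.1, 0.8]]

def step(v, T):
    """Multiply a 1 x N row vector by an N x N matrix: one transition."""
    n = len(v)
    return [sum(v[i] * T[i][j] for i in range(n)) for j in range(n)]

V1 = step(V0, T)
print([round(x, 4) for x in V1])  # [0.16, 0.445, 0.395]
```

This matches the distribution stated above: 16% at A, 44.5% at B, 39.5% at C.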
Suppose now that we want to know the distribution of bicycles at the stations after two days. We need to find \(V_2\), the state vector after two transitions. To find \(V_2\), we multiply the state vector after one transition, \(V_1\), by the transition matrix \(T\).
\[\mathrm{V}_{2}=\mathrm{V}_{1} \mathrm{T}=\left[\begin{array}{lll}
.16 & .445 & .395
\end{array}\right]\left[\begin{array}{lll}
0.3 & 0.5 & 0.2 \\
0.1 & 0.6 & 0.3 \\
0.1 & 0.1 & 0.8
\end{array}\right]=\left[\begin{array}{lll}
.132 & .3865 & .4815
\end{array}\right] \nonumber \]
We note that \(\mathrm{V}_{1}=\mathrm{V}_{0} \mathrm{T}\), so \(\mathrm{V}_{2}=\mathrm{V}_{1} \mathrm{T}=\left(\mathrm{V}_{0} \mathrm{T}\right) \mathrm{T}=\mathrm{V}_{0} \mathrm{T}^{2}\).
This gives an equivalent method to calculate the distribution of bicycles on day 2:
After 2 days (2 transitions), 13.2% of the bikes are at station A, 38.65% are at station B, and 48.15% are at station C.
We need to examine the following question: what is the meaning of the entries in the matrix \(T^2\)?
\[\mathrm{T}^{2}=\mathrm{T} \mathrm{T}=\left[\begin{array}{lll}
0.3 & 0.5 & 0.2 \\
0.1 & 0.6 & 0.3 \\
0.1 & 0.1 & 0.8
\end{array}\right]\left[\begin{array}{lll}
0.3 & 0.5 & 0.2 \\
0.1 & 0.6 & 0.3 \\
0.1 & 0.1 & 0.8
\end{array}\right]=\left[\begin{array}{lll}
0.16 & 0.47 & 0.37 \\
0.12 & 0.44 & 0.44 \\
0.12 & 0.19 & 0.69
\end{array}\right] \nonumber \]
The entries in \(T^2\) tell us the probability of a bike being at a particular station after two transitions, given its initial station.
Similarly, if we raise the transition matrix \(T\) to the \(n\)th power, the entries in \(T^n\) tell us the probability of a bike being at a particular station after \(n\) transitions, given its initial station.
And if we multiply the initial state vector \(V_0\) by \(T^n\), the resulting row matrix \(V_n = V_0 T^n\) gives the distribution of bicycles after \(n\) transitions.
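The identity \(V_n = V_0 T^n\) can be checked directly. This plain-Python sketch computes \(T^2\) by matrix multiplication and verifies that \(V_0 T^2\) reproduces the day-2 distribution found above.

```python
# Bike-share transition matrix and initial distribution from the text.
T = [[0.3, 0.5, 0.2],
     [0.1, 0.6, 0.3],
     [0.1, 0.1, 0.8]]
V0 = [0.30, 0.45, 0.25]

def matmul(A, B):
    """Multiply two conformable matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

T2 = matmul(T, T)           # probabilities over two transitions
V2 = matmul([V0], T2)[0]    # V2 = V0 * T^2
print([round(x, 4) for x in V2])  # [0.132, 0.3865, 0.4815]
```

This agrees with multiplying \(V_1\) by \(T\): 13.2% at A, 38.65% at B, 48.15% at C.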
Refer to Example \(\PageIndex\) for the transition matrix for market shares for subscribers to two cable TV companies.
Solution
a. The initial distribution, given by the initial state vector, is a \(1 \times 2\) matrix
and the transition matrix is
After 1 year, the distribution of customers is
After 1 year, 37.5% of customers subscribe to BestTV and 62.5% to CableCast.
b. The initial distribution is given by the initial state vector \(\mathrm{V}_{0}=\left[\begin{array}{ll}
.8 & .2
\end{array}\right]\). Then
In this case, after 1 year, 54% of customers subscribe to BestTV and 46% to CableCast.
Note that the distribution after one transition depends on the initial distribution; the distributions in parts (a) and (b) are different because of the different initial state vectors.
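The cable TV transition matrix and part (a)'s initial vector are not reproduced in this excerpt, so the values in the sketch below are assumptions, chosen to be consistent with the percentages quoted above (they reproduce both the 37.5%/62.5% and the 54%/46% results). With that caveat, the one-transition computation looks like this in plain Python:

```python
# ASSUMED transition matrix (states in order: BestTV, CableCast), reconstructed
# to match the results quoted in the text: 60% of BestTV subscribers keep BestTV
# and 40% switch; 30% of CableCast subscribers switch to BestTV and 70% keep it.
T = [[0.60, 0.40],
     [0.30, 0.70]]

def step(v, T):
    """One transition: multiply a row vector by the transition matrix."""
    return [sum(v[i] * T[i][j] for i in range(len(v))) for j in range(len(v))]

# Part (b): initial shares of 80% BestTV, 20% CableCast.
year_b = step([0.80, 0.20], T)
print([round(x, 4) for x in year_b])  # [0.54, 0.46]

# Part (a): an assumed initial vector of [0.25, 0.75].
year_a = step([0.25, 0.75], T)
print([round(x, 4) for x in year_a])  # [0.375, 0.625]
```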
Professor Symons either walks to school or rides his bicycle. If he walks to school one day, then the next day he will walk or cycle with equal probability. But if he bicycles one day, then the probability that he will walk the next day is 1/4. Express this information in a transition matrix.
Solution
We obtain the following transition matrix by properly placing the row and column entries. Note that if, for example, Professor Symons bicycles one day, then the probability that he will walk the next day is 1/4, and therefore, the probability that he will bicycle the next day is 3/4.
In Example \(\PageIndex\), if it is assumed that the initial day is Monday, write a matrix that gives probabilities of a transition from Monday to Wednesday.
Solution
If today is Monday, then Wednesday is two days from now, representing two transitions. We need to find the square, \(T^2\), of the original transition matrix \(T\), using matrix multiplication.
\[T=\left[\begin{array}{ll}
1 / 2 & 1 / 2 \\
1 / 4 & 3 / 4
\end{array}\right] \nonumber \]
\[\begin{aligned}
\mathrm{T}^{2}=\mathrm{T} \times \mathrm{T} &=\left[\begin{array}{ll}
1 / 2 & 1 / 2 \\
1 / 4 & 3 / 4
\end{array}\right]\left[\begin{array}{ll}
1 / 2 & 1 / 2 \\
1 / 4 & 3 / 4
\end{array}\right] \\
&=\left[\begin{array}{ll}
1 / 4+1 / 8 & 1 / 4+3 / 8 \\
1 / 8+3 / 16 & 1 / 8+9 / 16
\end{array}\right] \\
&=\left[\begin{array}{ll}
3 / 8 & 5 / 8 \\
5 / 16 & 11 / 16
\end{array}\right]
\end{aligned} \nonumber \]
Recall that we do not obtain \(T^2\) by squaring each entry in matrix \(T\), but obtain \(T^2\) by multiplying matrix \(T\) by itself using matrix multiplication.
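This distinction can be checked exactly with Python's `fractions` module: matrix-multiplying \(T\) by itself gives the \(T^2\) computed above, while squaring each entry gives a different matrix whose rows do not even sum to 1.

```python
from fractions import Fraction as F

# Professor Symons' transition matrix (states in order: Walk, Bicycle).
T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    """Multiply two conformable matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Correct: T^2 by matrix multiplication -> entries 3/8, 5/8, 5/16, 11/16.
T2 = matmul(T, T)
assert T2 == [[F(3, 8), F(5, 8)], [F(5, 16), F(11, 16)]]

# Incorrect: squaring each entry -> 1/4, 1/4, 1/16, 9/16 (rows don't sum to 1).
entrywise = [[x * x for x in row] for row in T]
assert sum(entrywise[0]) != 1
```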
We represent the results in the following matrix.
We interpret the probabilities from the matrix \(T^2\) as follows:
The transition matrix for Example \(\PageIndex\) is given below.
Write the transition matrix from
a) In writing a transition matrix from Monday to Thursday, we are moving from one state to another in three steps. That is, we need to compute \(T^3\).
\[\mathrm{T}^{3}=\left[\begin{array}{ll}
11 / 32 & 21 / 32 \\
21 / 64 & 43 / 64
\end{array}\right] \nonumber \]
b) To find the transition matrix from Monday to Friday, we are moving from one state to another in 4 steps. Therefore, we compute \(T^4\).
\[\mathrm{T}^{4}=\left[\begin{array}{ll}
43 / 128 & 85 / 128 \\
85 / 256 & 171 / 256
\end{array}\right] \nonumber \]
It is important that the student is able to interpret the above matrix correctly. For example, the entry 85/128 states that if Professor Symons walked to school on Monday, then there is an 85/128 probability that he will bicycle to school on Friday.
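These powers can be verified with exact rational arithmetic; a sketch using Python's stdlib `fractions` module:

```python
from fractions import Fraction as F

# Professor Symons' transition matrix (states in order: Walk, Bicycle).
T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    """Multiply two conformable matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(M, n):
    """Compute M^n (n >= 1) by repeated matrix multiplication."""
    result = M
    for _ in range(n - 1):
        result = matmul(result, M)
    return result

T3 = matpow(T, 3)
T4 = matpow(T, 4)
assert T3 == [[F(11, 32), F(21, 32)], [F(21, 64), F(43, 64)]]
assert T4 == [[F(43, 128), F(85, 128)], [F(85, 256), F(171, 256)]]

# Walk on Monday -> probability of bicycling on Friday:
print(T4[0][1])  # 85/128
```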
There are certain Markov chains that tend to stabilize in the long run. We will examine these more deeply later in this chapter. The transition matrix we have used in the above example is just such a Markov chain. The next example deals with the long term trend or steady-state situation for that matrix.
Suppose Professor Symons continues to walk and bicycle according to the transition matrix given in Example \(\PageIndex\). In the long run, how often will he walk to school, and how often will he bicycle?
Solution
If we examine higher powers of the transition matrix T, we will find that it stabilizes.
\[\mathrm{T}^{5}=\left[\begin{array}{ll}
.333984 & .666015 \\
.333007 & .666992
\end{array}\right] \quad \mathrm{T}^{10}=\left[\begin{array}{ll}
.33333397 & .66666603 \\
.33333301 & .66666698
\end{array}\right] \nonumber \]
\[\text { And } \quad \mathrm{T}^{20}=\left[\begin{array}{ll}
1 / 3 & 2 / 3 \\
1 / 3 & 2 / 3
\end{array}\right] \quad \text { and } \quad \mathrm{T}^{\mathrm{n}}=\left[\begin{array}{ll}
1 / 3 & 2 / 3 \\
1 / 3 & 2 / 3
\end{array}\right] \text { for } \mathrm{n}>20 \nonumber \]
The matrix shows that in the long run, Professor Symons will walk to school 1/3 of the time and bicycle 2/3 of the time.
When this happens, we say that the system is in steady-state or state of equilibrium. In this situation, all row vectors are equal. If the original matrix is an \(n \times n\) matrix, we get \(n\) row vectors that are all the same. We call this vector a fixed probability vector or the equilibrium vector \(E\). In the above problem, the fixed probability vector \(E\) is \(\left[\begin{array}{ll} 1/3 & 2/3 \end{array}\right]\). Furthermore, if the equilibrium vector \(E\) is multiplied by the original matrix \(T\), the result is the equilibrium vector \(E\). That is,
ET = E, or \(\left[\begin{array}{ll}
1 / 3 & 2 / 3
\end{array}\right]\left[\begin{array}{ll}
1 / 2 & 1 / 2 \\
1 / 4 & 3 / 4
\end{array}\right]=\left[\begin{array}{ll}
1 / 3 & 2 / 3
\end{array}\right]\)
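Both steady-state claims can be checked by computation: the equilibrium vector \([1/3, 2/3]\) is unchanged by a transition, and high powers of \(T\) approach the matrix whose rows are both equal to it. A sketch using exact fractions:

```python
from fractions import Fraction as F

# Professor Symons' transition matrix (states in order: Walk, Bicycle).
T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    """Multiply two conformable matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E T = E: the equilibrium vector is fixed by a transition.
E = [[F(1, 3), F(2, 3)]]
assert matmul(E, T) == E

# High powers of T approach the matrix with both rows equal to E.
Tn = T
for _ in range(19):          # compute T^20 exactly
    Tn = matmul(Tn, T)
for row in Tn:
    assert abs(float(row[0]) - 1 / 3) < 1e-9
    assert abs(float(row[1]) - 2 / 3) < 1e-9
```

Note that \(T^{20}\) equals \([1/3 \;\; 2/3]\) in each row only to calculator precision; the exact entries differ from 1/3 and 2/3 by less than \(10^{-12}\).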
This page titled 10.1: Introduction to Markov Chains is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Rupinder Sekhon and Roberta Bloom via source content that was edited to the style and standards of the LibreTexts platform.