Mathematics Prelims

December 20, 2008

Introduction to Markov Chains

Filed under: Probability Theory, Stochastics — cjohnson @ 11:15 am

A stochastic process is an indexed collection of random variables, where the index is normally thought of as time.  Generally we will assume that this time index is either discrete (i.e., from the set \mathbb{N} or \mathbb{N}_0) or continuous ({}[0, \infty)).  Unless stated otherwise, we will take a Markov chain to be a discrete-time process indexed by \mathbb{N}_0.  This is a process where each X_n is a random variable whose range is a countable set, which we will generally take to be \mathbb{N} or the first however-many natural numbers.  We then say that X_n is the state of the process at time n, and the range of the X_n is called the state space.

The defining property of a Markov chain is called the Markov property, which says that the distribution of X_{n + 1}, given X_n, does not depend on any of X_0, X_1, ..., X_{n-1}.  That is, the next state we transition to depends only on the current state, and not on any of the previous ones.  Formally,

\displaystyle P(X_{n + 1} = j \mid X_n = i, X_{n-1} = i_{n-1}, X_{n-2} = i_{n-2}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)

For example, imagine that you were to start flipping a coin and you keep a running total as you flip, where each time you flip a head you add one to your total and each time you flip a tail you subtract one.  That is, if you were to flip HHTHTT your total would be zero.  This is an example of a Markov chain because the distribution of your total after the next flip depends only on what your total currently is, and not on the “path” you took to get to that total.  In our example, starting from a total of zero, the probability the total is one after the next flip is 1/2 (we flip heads) and the probability it is negative one is 1/2 (we flip tails).  This is equally true whether we reached zero by flipping HT or TTTTTTHHHHHH or any other string with the same number of heads as tails.  A process such as the one we’ve described is called a random walk.
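
To make this concrete, here is a small Python sketch of the random walk; it is only an illustration of the idea above, and the names step and walk are ones chosen for this write-up rather than anything standard.

    import random

    def step(total, p_heads=0.5, rng=random):
        """One coin flip: add 1 with probability p_heads (heads), subtract 1 otherwise (tails)."""
        return total + (1 if rng.random() < p_heads else -1)

    def walk(num_flips, p_heads=0.5, seed=None):
        """Run the walk for num_flips flips and return the path of running totals."""
        rng = random.Random(seed)
        path = [0]
        for _ in range(num_flips):
            path.append(step(path[-1], p_heads, rng))
        return path

    # From a total of 0 the next total is +1 or -1 with probability 1/2 each,
    # regardless of whether we reached 0 via HT, TTTTTTHHHHHH, or any other
    # balanced string of flips -- that is the Markov property in action.
    print(walk(10, seed=42))

Running walk a few times produces different paths, but notice that the rule for the next step never looks at anything beyond the current total.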

Because of this Markov property, we can describe the probabilities of transitioning between states with a matrix.  Starting off with an example with finitely many states, suppose that if it’s raining today it will rain tomorrow with probability 1/2, snow with probability 1/4, or be sunny with probability 1/4.  If it’s snowing today, it will rain tomorrow with probability 1/8, snow with probability 1/2, and be sunny with probability 3/8.  If it’s sunny today, it will rain tomorrow with probability 1/4, snow with probability 0, and be sunny with probability 3/4.  The (one-step) probability transition matrix is then

\displaystyle \left[ \begin{array}{ccc} 1/2 & 1/4 & 1/4 \\ 1/8 & 1/2 & 3/8 \\ 1/4 & 0 & 3/4 \end{array} \right]

where, if we call this matrix P, the entry P_{ij} is the probability that we transition to state j given that we are currently in state i.  Also note that each row corresponds to a discrete probability distribution, so the entries of any given row must sum to one.
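
As a rough sketch of how this matrix might be used (again my own illustration, not from the post), the Python below stores P as a list of rows, checks that each row sums to one, and samples tomorrow’s weather from the row for today’s state.

    import random

    # States indexed as 0 = rain, 1 = snow, 2 = sunny; P[i][j] is the
    # probability of moving from state i today to state j tomorrow.
    P = [
        [1/2, 1/4, 1/4],  # raining today
        [1/8, 1/2, 3/8],  # snowing today
        [1/4, 0.0, 3/4],  # sunny today
    ]

    # Each row is a discrete probability distribution, so it must sum to one.
    assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

    def next_state(i, rng=random):
        """Sample tomorrow's weather given that today is in state i."""
        u, cumulative = rng.random(), 0.0
        for j, p in enumerate(P[i]):
            cumulative += p
            if u < cumulative:
                return j
        return len(P[i]) - 1  # guard against floating-point round-off

    # Simulate a week of weather starting from a rainy day (state 0).
    states = [0]
    for _ in range(7):
        states.append(next_state(states[-1]))
    print(states)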


2 Comments »

  1. […] with the weather example from last time, we have that the two-step probability transition matrix […]

    Pingback by The Chapman-Kolmogorov Equations « Mathematics Prelims — December 20, 2008 @ 12:46 pm | Reply

  2. Since we talk about the probabilities of the next day I think it’s a one-step transition matrix…

    Comment by Batuhan — March 9, 2009 @ 11:01 am | Reply

