Mathematics Prelims

October 18, 2008

Naive Probability, Part 3

Filed under: Probability Theory — cjohnson @ 3:37 pm

Often we don’t care very much about the particular outcome of an experiment per se, but rather about some result related to that outcome.  In the case of rolling multiple dice in a board game, for instance, we generally only care about how many total pips we’ve rolled, not how we got those pips.  That is, we don’t care whether we roll a four and a one, or a three and a two; we’ve rolled five either way.  In cases such as these we use “random variables” to strip away the details we don’t care about.  A random variable is simply a function from the sample space (the set of all possible outcomes) to the real numbers: a way of assigning every outcome a number.  There are some technical details about this, but this definition will do for the time being.
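To make that concrete, here is a minimal sketch in Python (the names here are just for illustration) that treats the two-dice example literally, with the random variable written as an ordinary function on the sample space:

from itertools import product

# Sample space for rolling two six-sided dice: all 36 ordered pairs of faces.
sample_space = list(product(range(1, 7), repeat=2))

# The random variable X assigns each outcome a single real number,
# in this case the total number of pips showing.
def X(outcome):
    first_die, second_die = outcome
    return first_die + second_die

# (4, 1) and (3, 2) are different outcomes, but X hides that detail: both map to 5.
print(X((4, 1)), X((3, 2)))  # prints: 5 5

# The values X can take on: 2 through 12.
print(sorted({X(omega) for omega in sample_space}))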

The probabilities with which a random variable takes on its values give the distribution of the random variable.  We generally talk about cumulative distribution functions when discussing distributions.  The cumulative distribution function (CDF) F of a random variable X is the function F(x) = P(X \leq x).  In the case of a discrete random variable (one that takes on only countably many values), we may also talk about the probability mass function, which simply gives the probability that the random variable takes on a given value: p(x) = P(X = x).  (For continuous random variables, the analogous idea is the probability density function.)
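For a single fair six-sided die, for example, the probability mass function is p(k) = P(X = k) = 1/6 for each k = 1, 2, \ldots, 6, and so the CDF at, say, x = 3 is F(3) = P(X \leq 3) = p(1) + p(2) + p(3) = 1/2.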

Given a random variable X, we define the expected value of X, denoted E[X].  The expected value is, in essence, a weighted average of the values the random variable may take on.  That is (in the discrete case),

\displaystyle E[X] = \sum_{x} x p(x)

where the sum is over all values X may take on.  This, in some intuitive sense, tells us what value we may expect the random variable to take.  A result known as the strong law of large numbers tells us that if we have a sequence of random variables (X_n)_{n \in \mathbb{N}} that are independent and identically distributed (IID) with finite expected value, then the average of the first n terms of the sequence converges (with probability one) to that expected value: \frac{1}{n} \sum_{i=1}^{n} X_i \to E[X_1].  (The choice of X_1 instead of X_2, or X_3, or any other term doesn’t matter, since the random variables are identically distributed and so all have the same expected value.)
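For a fair six-sided die, for instance, each face has probability 1/6, so

\displaystyle E[X] = \sum_{k=1}^{6} k \cdot \frac{1}{6} = \frac{21}{6} = 3.5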

Notice that the expected value of a random variable X may not be a value the random variable can actually take on, however: 3.5 is obviously not something we can roll with a single die.  All this means is that if we were to roll our die lots and lots of times, record the value each time, then sum up our values and divide by the number of rolls, we’d get something near 3.5, and the more rolls we made, the closer that average would tend to get to 3.5.
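To see that averaging behaviour in action, here is a minimal simulation sketch in Python, assuming a fair die (the roll counts printed are an arbitrary choice):

import random

random.seed(0)  # fix the seed so the run is repeatable

total = 0
for i in range(1, 100_001):
    total += random.randint(1, 6)  # one roll of a fair six-sided die
    if i in (10, 100, 1_000, 10_000, 100_000):
        # running average of the first i rolls; it drifts toward E[X] = 3.5
        print(i, total / i)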

I think this is it for the “naive” probability stuff, as I’m ready to start going into measure-theoretic terms.
