Often we don’t care very much about the particular outcome of an experiment per se, but rather about some result related to that outcome. When rolling multiple dice in a board game, for instance, we generally only care about how many total pips we’ve rolled, not how we got those pips. That is, we don’t care if we roll a four and a one versus a three and a two; we’ve rolled five either way. In cases like these we use “random variables” to, in essence, strip away the details we don’t care about. A random variable is simply a function from the set of all possible outcomes to the real numbers; that is, a way of assigning every outcome a number. There are some technical details lurking here, but this definition will do for the time being.

The probabilities with which a random variable takes on its particular values give the distribution of the random variable. We generally talk about cumulative distribution functions when discussing distributions. The cumulative distribution function (CDF) of a random variable $X$ is the function $F_X(x) = P(X \le x)$. In the case of a discrete random variable (one that takes on only countably many values), we may also talk about a function that simply gives the probability the random variable takes on a given value: the probability mass function $p_X(x) = P(X = x)$. (For continuous random variables, the analogous idea is that of the probability density function.)
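As a concrete sketch (in Python, using the fair six-sided die from the dice example; the names `pmf` and `cdf` are just illustrative), the PMF and CDF of a single roll can be written out directly:

```python
from fractions import Fraction

# PMF of a fair six-sided die: each face 1..6 has probability 1/6.
# Fractions keep the probabilities exact instead of approximate floats.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """F(x) = P(X <= x): accumulate the pmf over all values not exceeding x."""
    return sum(p for k, p in pmf.items() if k <= x)

print(cdf(3))  # P(X <= 3) = 1/2
print(cdf(0))  # 0 -- no face is 0 or less
print(cdf(6))  # 1 -- every face is 6 or less
```

Note that the CDF of a discrete random variable is a step function: it jumps by $p_X(k)$ at each value $k$ the variable can take on.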

Given a random variable $X$ we define the expected value of $X$, denoted $E[X]$. The expected value is, in essence, a weighted average of the values the random variable may take on. That is (in the discrete case),

$$E[X] = \sum_x x \, P(X = x),$$

where the sum is over all values $x$ that $X$ may take on. This, in some intuitive sense, tells us what we may expect the random variable to be. A result known as the strong law of large numbers tells us that, in fact, if we were to have a sequence $X_1, X_2, \ldots$ of random variables that are all independent and identically distributed (IID), the average of the first $n$ items in the sequence approaches the expected value: $\frac{1}{n}\sum_{i=1}^{n} X_i \to E[X_1]$ as $n \to \infty$, with probability one. (Note the choice of $X_1$ instead of $X_2$, or $X_3$, or so on, doesn’t matter, as we’ve assumed all of the random variables are *independent and identically distributed*.)
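The weighted-average formula can be sketched for the die example (a minimal Python illustration, reusing a hand-written PMF for a fair six-sided die):

```python
from fractions import Fraction

# PMF of a fair six-sided die: each face 1..6 has probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# E[X] = sum over all values x of x * P(X = x),
# i.e. (1 + 2 + 3 + 4 + 5 + 6) / 6 = 21/6 = 7/2.
expected = sum(x * p for x, p in pmf.items())
print(expected)  # 7/2
```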

The expected value of a random variable need not be a value the random variable can actually take on, however. For example, if we were to take the expected value of the number of pips we’d get on a roll of a six-sided die, we’d get 3.5, which is obviously a value we can’t roll. All this means is that if we were to roll our die lots and lots of times, record the value each time, then sum up our values and divide by the number of rolls, we’d get something near 3.5, and the more rolls we made, the closer to 3.5 we’d tend to get.

I think this is it for the “naive” probability stuff, as I’m ready to start getting into measure-theoretic terms.
