(In studying for my upcoming prelim. in stochastic processes, I decided to start at the beginning, probability theory. In particular, I wanted to read up on probability from a more formal, measure theoretic point of view, and I intend to start writing posts about formal probability theory. Before that, though, I thought it might be worthwhile to introduce some of the more basic ideas from an intuitive point of view. I will refer to this view of probability theory as “naive” to distinguish it from the formal theory.)

It’s often the case that we’re interested in the outcome of some experiment whose result we can’t know in advance. We might flip a coin to determine who goes first in a debate; roll a pair of dice to determine how many steps to move our pieces in backgammon; or be dealt cards from a shuffled deck when playing poker. In all of these cases, what will actually happen is not known in advance, but instead relies on some random phenomenon. Despite this, we may still use mathematics to determine how likely a particular outcome is (e.g., to determine how likely it is we’re dealt three-of-a-kind in poker). The branch of mathematics that attempts to answer such questions is known as probability theory.

When performing some “experiment” whose outcome can’t be known for sure, we refer to the set of all possible outcomes as the “sample space.” For example, in flipping a coin the sample space is simply {H, T} (H for heads, T for tails); in rolling two dice and taking their sum, the sample space is {2, 3, 4, …, 10, 11, 12} as these are the only possible outcomes for rolling two dice and adding the pips (the dots) we get; in the case of a poker game like five-card stud, the sample space is all possible five-card hands we might be dealt.
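For finite experiments like these, we can simply enumerate the sample space. A minimal sketch in Python (the choice of language is mine, not the post’s):

```python
from itertools import product

# Sample space for a single coin flip.
coin = {"H", "T"}

# Sample space for the sum of two dice: form every ordered pair of
# faces (1 through 6) and collect the distinct sums.
dice_sums = {a + b for a, b in product(range(1, 7), repeat=2)}

print(sorted(dice_sums))  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```

Using a set here matches the definition exactly: a sample space is a set of outcomes, with no duplicates and no ordering.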

A collection of items in our sample space is referred to as an “event.” Rolling an even number, when rolling two dice, is then the event {2, 4, 6, 8, 10, 12}. Notice that the empty set and the entire sample space are both events, since each is a collection of items in the sample space (the empty set is the collection of no items, and the entire sample space is the collection of all possible items). We associate with each event a number called a “probability” that represents the likelihood that the event will occur. These probabilities must obey certain rules: namely, a probability must be no less than zero and no greater than one. That is, a number like 1.2 or -0.7 is not a valid probability, while a number such as 0.2, 0.5, 0.75, or 1.0 is.
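Since an event is just a subset of the sample space, these observations can be checked directly with Python sets; the `is_valid_probability` helper below is a hypothetical name of my own, sketching the zero-to-one rule:

```python
sample_space = set(range(2, 13))  # sums of two dice
even = {s for s in sample_space if s % 2 == 0}  # the event "roll an even sum"

# Every subset of the sample space is an event, including the extremes:
print(set().issubset(sample_space))         # True  (the empty event)
print(sample_space.issubset(sample_space))  # True  (the whole sample space)

def is_valid_probability(p):
    """A probability must be no less than zero and no greater than one."""
    return 0 <= p <= 1

print(is_valid_probability(0.75))  # True
print(is_valid_probability(1.2))   # False
print(is_valid_probability(-0.7))  # False
```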

If A is an event (e.g., A corresponds to rolling an even number), then we let P(A) denote the probability of A. So, if A is the event that we get heads when flipping a fair coin, we’d have P(A) = 0.5: there is a 50% chance we flip heads. To calculate the probability of an event, assuming all items in the sample space are equally likely, we simply take the number of items in the event and divide by the total number of items in the sample space.
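This counting rule is easy to write down in code. Here is a small sketch (the `prob` helper is my own name, and it assumes every outcome in the sample space is equally likely) using exact fractions rather than floats:

```python
from fractions import Fraction

def prob(event, sample_space):
    """P(event) = |event| / |sample space|, valid only when every
    outcome in the sample space is equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

coin = {"H", "T"}
heads = {"H"}
print(prob(heads, coin))  # 1/2
```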

So, if we wanted the probability of rolling a multiple of three, which we’ll refer to as the event B (that is, the event B = {3, 6, 9, 12}), the counting technique would suggest P(B) = 4/11 = 0.363636…, since there are four points in our event (three, six, nine and twelve) and eleven possible outcomes (two through twelve). But beware: the counting technique required all outcomes to be equally likely, and the sums of two dice are not (there is only one way to roll a two, but six ways to roll a seven). The technique does apply to the 36 ordered pairs of die faces, which are equally likely; counting the twelve pairs whose sum is a multiple of three gives the correct answer, P(B) = 12/36 = 1/3.
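One way to see the role of the equally-likely assumption: the eleven sums are not equally likely, but the 36 ordered pairs of die faces are, so the counting rule can be applied at the level of pairs instead. A quick Python sketch:

```python
from fractions import Fraction
from itertools import product

# All 36 ordered (die1, die2) pairs; these ARE equally likely.
rolls = list(product(range(1, 7), repeat=2))
multiple_of_three = [r for r in rolls if sum(r) % 3 == 0]

print(len(multiple_of_three), "of", len(rolls))      # 12 of 36
print(Fraction(len(multiple_of_three), len(rolls)))  # 1/3
```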

A question that quickly arises though is what happens if there are infinitely many points in our sample space? The technique that we’ve described doesn’t really work anymore as, for one thing, we can’t divide a number by infinity (see Infinity is NOT a Number for a discussion of why we can’t do arithmetic with infinity, and why we can’t divide by zero). In this case we have to use concepts from measure theory to calculate probabilities.
