# Mathematics Prelims

## October 15, 2008

### Naive Probability, Part 2

Filed under: Probability Theory — cjohnson @ 8:59 am

Last time we mentioned that a probability must be a number no less than zero and no greater than one; for every event $A$, we have $P(A) \in [0, 1]$.  Some other important properties of probabilities are that $P(\emptyset) = 0$ and $P(S) = 1$, where $S$ is the sample space.  This means the probability that nothing happens (the empty set event) is zero, and the probability that something happens (the sample space event) is one.

Given two events, A and B, we can talk about the probability of both events occurring by taking "A and B" to mean $A \cap B$, the intersection of the events.  People generally write the probability of A and B as just $P(AB)$ instead of $P(A \cap B)$.

Sometimes one event can influence the likelihood of another event.  For example, suppose you are dealt two cards from a shuffled deck.  Let A be the event that the first card you are dealt is a heart, and B be the event that the second card you're dealt is a heart.  When you're dealt that second card, the chance you actually get a heart depends on the first card you were dealt: if you did receive a heart on the first card, then there is one less heart in the deck when you're given your second card.
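To see this dependence concretely, we can enumerate every equally likely ordered two-card deal and compare the overall chance of a heart on the second card with that chance given the first card was a heart.  This is just a quick sketch (not part of the original post); the `Fraction` type keeps the arithmetic exact:

```python
from fractions import Fraction
from itertools import permutations

# Model the deck by suit only: 13 hearts ('H') and 39 non-hearts ('N').
deck = ['H'] * 13 + ['N'] * 39

# All equally likely ordered two-card deals (52 * 51 = 2652 of them).
deals = list(permutations(range(52), 2))

def prob(event):
    return Fraction(sum(1 for d in deals if event(d)), len(deals))

first_heart  = lambda d: deck[d[0]] == 'H'
second_heart = lambda d: deck[d[1]] == 'H'
both_hearts  = lambda d: first_heart(d) and second_heart(d)

p_b = prob(second_heart)                              # 13/52 = 1/4 overall
p_b_given_a = prob(both_hearts) / prob(first_heart)   # drops to 12/51
print(p_b, p_b_given_a)
```

Unconditionally, the second card is a heart with probability 1/4; given the first card was a heart, that drops to 12/51, so the two events really do influence each other.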

This idea of two events influencing (or not influencing) each other is captured by the notions of independence and dependence.  We say two events are independent if they don't influence one another.  Mathematically, this means A and B are independent if

$\displaystyle P(AB) = P(A) P(B)$

and A and B are dependent if they're not independent.  We can also define the probability of A given that B occurs (assuming $P(B) > 0$) as

$\displaystyle P(A|B) = \frac{P(AB)}{P(B)}$

Here we take the probability that A and B occur together and divide out the probability that B occurs.  We can see then that if A and B are independent, we'll have $P(A|B) = P(A)P(B)/P(B) = P(A)$, which is what we should expect since A and B don't influence one another.

An example of independent events would be if we flip a coin and roll a die simultaneously: our outcome from the coin flip has no impact on what we roll on the die.  Suppose A is the event we get a head on the coin flip, and B is the event we roll a six.  Our sample space is now actually the set of ordered pairs (C, D) where C is the outcome of the coin flip, and D is what we roll on the die.  So A is the event {(H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6)}, as these are all the ways we could get a head on the coin.  Similarly, B is the event {(H, 6), (T, 6)}.  The intersection of A and B is just {(H, 6)}, and P(AB) = 1/12 as there are twelve total ordered pairs in the sample space.  Sure enough, if we plug these probabilities into our formula for conditional probability above (that is, the $P(A|B)$ formula), we'll see that

$\displaystyle P(A|B) = \frac{1/12}{1/6} = \frac{6}{12} = \frac{1}{2} = P(A)$
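We can double-check this computation by brute-force enumeration of the twelve ordered pairs, again as a small Python sketch:

```python
from fractions import Fraction
from itertools import product

# Sample space: all ordered pairs (coin outcome, die outcome), 12 in total.
space = list(product(['H', 'T'], [1, 2, 3, 4, 5, 6]))

def prob(event):
    return Fraction(sum(1 for s in space if event(s)), len(space))

A  = lambda s: s[0] == 'H'          # head on the coin
B  = lambda s: s[1] == 6            # six on the die
AB = lambda s: A(s) and B(s)        # both at once

p_a, p_b, p_ab = prob(A), prob(B), prob(AB)

assert p_ab == p_a * p_b            # P(AB) = P(A)P(B): independent
assert p_ab / p_b == p_a            # equivalently, P(A|B) = P(A)
```

Both checks pass: $P(AB) = 1/12 = (1/2)(1/6)$, and $P(A|B) = P(A) = 1/2$.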

So A and B are indeed independent events.  Conditional probability is actually a very powerful tool, especially when combined with the law of total probability and Bayes' formula, which we'll discuss in the next post.