If $X$ is a random variable on the probability space $(\Omega, \mathcal{F}, P)$, then the expected value (aka expectation) of $X$ is, in essence, a weighted average of the values $X$ may take on. In the case of simple random variables this weighted average is precisely the expectation. For a general random variable we're actually taking a limit of the expectations of simple random variables that converge to the random variable of interest.

Suppose $X$ is a non-negative simple random variable of the form

$$X = \sum_{i=1}^{n} a_i \mathbf{1}_{A_i},$$

where $a_i \ge 0$ and $A_i \in \mathcal{F}$, and, for the sake of simplicity, that the $A_i$ are disjoint. The expected value of $X$, denoted $E[X]$, is

$$E[X] = \sum_{i=1}^{n} a_i P(A_i).$$
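As a concrete illustration (a sketch of my own, not from the original post), take $X$ to be the face value of a fair six-sided die, so $a_i = i$ and $P(A_i) = 1/6$ for $i = 1, \dots, 6$:

```python
# Hypothetical illustration: E[X] for a simple random variable
# X = sum_i a_i * 1_{A_i}, where the A_i partition the sample space.
# Here X is the face value of a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]   # the a_i
probs = [1 / 6] * 6           # P(A_i) for each A_i = {X = i}

# E[X] = sum_i a_i * P(A_i)
expectation = sum(a * p for a, p in zip(values, probs))
print(expectation)  # 3.5
```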

When $X$ is non-negative but not a simple random variable, we take $E[X]$ to be the supremum of the expectations of simple random variables less than or equal to $X$. That is,

$$E[X] = \sup \{\, E[Y] : 0 \le Y \le X \,\},$$

where each $Y$ is a simple random variable. When $X$ is not non-negative, we break $X$ into its positive and negative parts, $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$, and, *provided $E[X^+]$ and $E[X^-]$ are not both infinite*, we take the expectation to be

$$E[X] = E[X^+] - E[X^-].$$
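To see the supremum definition in action, here is a rough numerical sketch of my own (assumptions mine: an exponential(1) random variable, for which $E[X] = 1$, and the standard "staircase" approximations $X_n = \min(n, \lfloor 2^n X \rfloor / 2^n)$, each of which is simple and satisfies $X_n \le X$):

```python
import math

def F(x):
    """Distribution function of an exponential(1) random variable (assumed example)."""
    return 1 - math.exp(-x) if x >= 0 else 0.0

def simple_expectation(n):
    """E[X_n] for the simple staircase approximation X_n = min(n, floor(2**n * X) / 2**n)."""
    total = 0.0
    for k in range(n * 2**n):
        a = k / 2**n
        # X_n takes the value a on the event {a <= X < a + 2**-n}
        total += a * (F(a + 2**-n) - F(a))
    # X_n takes the value n on the event {X >= n}
    total += n * (1 - F(n))
    return total

for n in (1, 2, 4, 8):
    print(n, simple_expectation(n))  # expectations increase toward E[X] = 1
```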

Notice that our definition of $E[X]$ is simply the Lebesgue integral of $X$ with respect to the probability measure $P$ over all of $\Omega$; that is, $E[X] = \int_\Omega X \, dP$.

As such, useful properties of Lebesgue integrals, such as linearity, and powerful theorems like monotone and dominated convergence, hold for expectations. One slightly less obvious nice property is that if $F$ is the distribution function of $X$, then we have

$$E[X] = \int_{-\infty}^{\infty} x \, dF(x),$$

where the integral on the right refers to the Lebesgue-Stieltjes integral using the probability measure obtained from $F$, or simply the usual Riemann-Stieltjes integral in those cases where it exists. To see this, simply show that the result holds for (non-negative) simple random variables, note that for a general non-negative random variable there exists an increasing sequence of non-negative simple random variables that converges to it, and apply the monotone convergence theorem. For general random variables, break $X$ into its positive and negative parts and apply the previous result.
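As a quick numerical check of the identity (again a sketch of my own, with an exponential(1) random variable assumed), approximating $\int_{-\infty}^{\infty} x \, dF(x)$ by a Riemann-Stieltjes sum recovers $E[X] = 1$:

```python
import math

def F(x):
    """Distribution function of an exponential(1) random variable (assumed example)."""
    return 1 - math.exp(-x) if x >= 0 else 0.0

def stieltjes_mean(lo=0.0, hi=50.0, steps=200_000):
    """Riemann-Stieltjes sum approximating the integral of x dF(x) over [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + i * h
        total += x * (F(x + h) - F(x))  # x * dF on [x, x + h)
    return total

print(stieltjes_mean())  # ~ 1.0, matching E[X] for exponential(1)
```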
