If $X_n$ is a random variable on the probability space $(\Omega, \mathcal{F}, P)$ for each $n$, then one way to define the convergence of the sequence $(X_n)$ of random variables is to simply define the limit pointwise, as you would do with any old function. That is, $X_n \to X$ pointwise if there exists a random variable $X$ such that for every $\omega \in \Omega$ we have

$$\lim_{n \to \infty} X_n(\omega) = X(\omega).$$
Similarly, we say that $X_n \to X$ almost everywhere (almost surely, in probability language) if the measure (probability) of the set of points $\omega$ where $X_n(\omega) \not\to X(\omega)$ is zero.

Another way to define convergence of measurable functions in general is to talk about convergence in measure. This is a notion of convergence where, for each tolerance $\varepsilon > 0$, the measure of the set of points at which $f_n$ differs from the limit $f$ by at least $\varepsilon$ shrinks to zero. That is, $f_n \to f$ in measure if for each $\varepsilon > 0$,

$$\lim_{n \to \infty} \mu\left(\{x : |f_n(x) - f(x)| \geq \varepsilon\}\right) = 0.$$
(Here we’re using random variables and a probability space, but the same definition applies to measurable functions on general measure spaces.) To say that $X_n$ converges to $X$ in probability (denoted $X_n \xrightarrow{P} X$) is to say that for any $\varepsilon > 0$ and $\delta > 0$, there exists an $N$ such that for every $n \geq N$, the probability of the set of points where $X_n$ isn’t “close” to $X$ (here “close” means within distance $\varepsilon$ of $X$) is less than $\delta$; that is, $P(|X_n - X| \geq \varepsilon) < \delta$.
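To see this definition in action, here is a small Monte Carlo sketch. The example ($X$ uniform, $X_n = X + Z_n/n$ with $Z_n$ standard normal) is my own illustration, not from Cohn: the estimated probability $P(|X_n - X| \geq \varepsilon)$ visibly drops toward zero as $n$ grows.

```python
import numpy as np

# Illustrative example (not from Cohn): X ~ Uniform(0, 1) and
# X_n = X + Z_n / n with Z_n ~ N(0, 1), so |X_n - X| = |Z_n| / n
# and P(|X_n - X| >= eps) -> 0, i.e. X_n -> X in probability.
rng = np.random.default_rng(0)
samples = 100_000
eps = 0.01

X = rng.uniform(size=samples)
probs = []
for n in [1, 10, 100, 1000]:
    X_n = X + rng.standard_normal(samples) / n
    # Monte Carlo estimate of P(|X_n - X| >= eps)
    p = np.mean(np.abs(X_n - X) >= eps)
    probs.append(p)
    print(f"n={n:5d}  P(|X_n - X| >= {eps}) = {p:.4f}")
```

For $n = 1000$ the event $|X_n - X| \geq 0.01$ requires $|Z_n| \geq 10$, so the estimated probability is essentially zero.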

As noted in Cohn, convergence in measure does not imply convergence pointwise (not even almost everywhere!), or vice versa. For instance, the sequence of functions $f_n = \chi_{[n, n+1)}$ on $\mathbb{R}$ with Lebesgue measure converges pointwise to the zero function, but does not converge in measure, since $\mu(\{x : |f_n(x)| \geq \varepsilon\}) = 1$ for every $n$ and every $\varepsilon \in (0, 1]$. However, in the case of a finite measure (e.g., a probability measure) we have the following proposition.
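A quick numerical sanity check of the "escaping interval" counterexample, assuming the indicator functions $f_n = \chi_{[n, n+1)}$ on the real line with Lebesgue measure (a standard example of this phenomenon):

```python
# Sketch of the escaping-mass counterexample: f_n is the indicator of [n, n+1)
# on the real line with Lebesgue measure.
def f(n, x):
    return 1.0 if n <= x < n + 1 else 0.0

# Pointwise convergence to zero: for any fixed x, f(n, x) = 0 once n > x.
for x in [0.5, 3.7, 42.0]:
    assert f(1000, x) == 0.0

# But {x : |f_n(x)| >= eps} = [n, n+1) for any eps in (0, 1], an interval of
# Lebesgue measure 1 for every n -- so the measure never tends to 0.
def measure_of_exceedance(n, eps):
    # The set is exactly the interval [n, n+1), whose length is 1.
    return 1.0 if 0 < eps <= 1 else 0.0

print([measure_of_exceedance(n, 0.5) for n in range(5)])
```

The "bad set" never shrinks; it just slides off to the right, which is only possible because the total measure of the space is infinite.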

**Proposition** If $(X, \mathcal{A}, \mu)$ is a measure space with $\mu(X)$ finite, and $(f_n)$ is a sequence of measurable functions with $f_n \to f$ a.e., then $f_n \to f$ in measure.

**Proof**: Let $\varepsilon > 0$ be given. Define $A_n = \{x : |f_n(x) - f(x)| \geq \varepsilon\}$ and $B_n = \bigcup_{k=n}^{\infty} A_k$. Clearly $(B_n)$ forms a decreasing sequence of sets whose intersection is a subset of the set of points for which $f_n(x) \not\to f(x)$; by assumption this set has measure zero. Since $\mu$ is finite, continuity from above applies, so $\lim_{n \to \infty} \mu(B_n) = \mu\left(\bigcap_{n=1}^{\infty} B_n\right) = 0$. Since $A_n \subseteq B_n$, we get $\mu(A_n) \leq \mu(B_n) \to 0$, which gives that the functions converge in measure to $f$. $\blacksquare$
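The proposition can be watched in action on a concrete finite measure space. The example below ($f_n(x) = x^n$ on $[0,1]$ with Lebesgue measure, my choice, not Cohn's) converges to $0$ almost everywhere (everywhere except $x = 1$), and the exceedance set $\{x : x^n \geq \varepsilon\} = [\varepsilon^{1/n}, 1]$ has measure $1 - \varepsilon^{1/n} \to 0$, exactly as the proposition guarantees:

```python
# Illustration (my example, not Cohn's): on ([0, 1], Lebesgue measure),
# f_n(x) = x**n converges to 0 a.e. (everywhere except x = 1).  Here
# {x : x**n >= eps} = [eps**(1/n), 1], so its measure is computable in
# closed form and tends to 0, confirming convergence in measure.
eps = 0.1
for n in [1, 10, 100, 1000]:
    m = 1 - eps ** (1 / n)
    print(f"n={n:5d}  mu({{x : x^n >= {eps}}}) = {m:.4f}")
```

Note how the finiteness of the space matters: unlike the escaping-interval example, the bad set here has nowhere to run and is forced to shrink.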

(These examples and proofs are from Donald L. Cohn’s “Measure Theory.”)
