Mathematics Prelims

October 22, 2008

Total and Bounded Variation

Filed under: Analysis,Real Analysis — cjohnson @ 5:57 pm

The total variation of a function f over an interval \left[a, b\right], which we’ll denote V_{[a, b]}(f), measures how much a function varies over that interval.  That is, suppose P = \{a = x_0 < x_1 < ... < x_n = b\} is a partition of \left[a, b\right].  With respect to this particular P, the variation is simply

\displaystyle \sum_{k=1}^n |f(x_k) - f(x_{k-1})|.

The total variation of f is then the supremum of sums such as the above, but over all partitions of \left[a, b \right]:

\displaystyle V_{[a, b]}(f) = \sup_{P \vdash [a, b]} \sum_{k=1}^n |f(x_k) - f(x_{k-1})|.
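To make the definition concrete, here is a small Python sketch (the helper name variation_sum is mine, not standard) that computes the variation sum for a given partition and approximates the total variation by refining a uniform partition.  For f(x) = \sin(x) on [0, 2\pi], the function rises by 1, falls by 2, and rises by 1 again, so the total variation is 4.

```python
import math

def variation_sum(f, partition):
    """Sum |f(x_k) - f(x_{k-1})| over consecutive partition points."""
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

# Uniform partition of [0, 2*pi] with n subintervals.
n = 100_000
partition = [2 * math.pi * k / n for k in range(n + 1)]

# sin rises by 1, falls by 2, rises by 1: total variation 4.
approx = variation_sum(math.sin, partition)
print(approx)  # close to 4
```

Refining the partition only increases the sum (by the triangle inequality), which is why the supremum recovers the true value.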

In the case of f : \mathbb{R} \to \mathbb{C}, with f continuous, V_{[a, b]}(f) gives the length of the curve in \mathbb{C} traced out by f.  We take the interval and break it into lots of little pieces, then look at the corresponding points of the function in the complex plane.  Since the absolute value in the complex plane gives the distance between two points, we’re essentially connecting the dots on our curve and measuring the lengths of the connecting line segments.  As we make our partition finer and finer, the pieces of the curve get smaller and smaller, giving a better and better approximation to the length of the curve.  Taking the supremum gives the “best” approximation: the true length.
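As a sanity check on the curve-length interpretation (a sketch under the setup above; variation_sum is again a name of my choosing), take f(t) = e^{it} on [0, 2\pi], which traces the unit circle, so its total variation should be the circumference 2\pi.

```python
import cmath
import math

def variation_sum(f, partition):
    """Sum |f(t_k) - f(t_{k-1})|: the length of the inscribed polygon."""
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

n = 10_000
partition = [2 * math.pi * k / n for k in range(n + 1)]

# The inscribed regular n-gon has perimeter 2n*sin(pi/n), which tends to 2*pi.
length = variation_sum(lambda t: cmath.exp(1j * t), partition)
print(length)  # close to 2*pi
```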

Notice that the total variation doesn’t give us the length of a curve for f : \mathbb{R} \to \mathbb{R}, though.  For instance, consider the interval [0, 1] and the function f(x) = x.  Obviously the length of the graph in \mathbb{R}^2 is \sqrt{2}.  However, if we partition the interval, each term |f(x_k) - f(x_{k-1})| only gives the vertical component of the corresponding segment’s length.  Summing these terms over any partition, we get 1, not \sqrt{2}.
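The discrepancy is easy to see numerically (a sketch; the variable names are my own).  For f(x) = x on [0, 1], the variation sum is exactly 1 for every partition, while the arc-length sum, which also accounts for the horizontal component of each segment, gives \sqrt{2}.

```python
import math

f = lambda x: x
n = 1_000
xs = [k / n for k in range(n + 1)]

# Variation: vertical components only.
variation = sum(abs(f(b) - f(a)) for a, b in zip(xs, xs[1:]))

# Arc length of the graph: includes the horizontal component too.
arc_length = sum(math.hypot(b - a, f(b) - f(a)) for a, b in zip(xs, xs[1:]))

print(variation)   # 1.0
print(arc_length)  # sqrt(2), about 1.41421
```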

We say that f is of bounded variation if its total variation is finite.  That is, if there exists an M \in \mathbb{R} such that for any partition P of \left[a, b\right] we choose,

\displaystyle \sum_{k=1}^n |f(x_k) - f(x_{k-1})| \leq M.
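Not every bounded function is of bounded variation.  A standard example, sketched below in Python (helper names are mine), is f(x) = \sin(1/x) on (0, 1] with f(0) = 0: taking partition points at the peaks x = 2/((2m+1)\pi), where \sin(1/x) = \pm 1, each consecutive pair of peaks contributes 2 to the variation sum, so the sums grow without bound as we include more peaks and no single M can work.

```python
import math

def f(x):
    return math.sin(1 / x) if x != 0 else 0.0

def variation_sum(f, partition):
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

def peak_partition(m):
    """Partition of [0, 1] through the first m peaks of sin(1/x)."""
    peaks = sorted(2 / ((2 * k + 1) * math.pi) for k in range(m))
    return [0.0] + peaks + [1.0]

for m in (10, 100, 1000):
    # Consecutive peaks alternate between +1 and -1, contributing 2 each.
    print(m, variation_sum(f, peak_partition(m)))
# The sums grow roughly like 2*m, so the variation is unbounded.
```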

Update: We have to be a little bit careful with the discussion of the length of the curve in the above.  A better way to state it is in terms of the trace of the curve (the range of the function).  If the function is continuous, then the total variation gives the length of the trace, which in the case f : \mathbb{R} \to \mathbb{R} is the length “travelled” along the path when projected onto the y-axis.  There’s a nice animation of this idea on Wikipedia’s page on total variation.


October 19, 2008

The Riemann-Stieltjes Integral

Filed under: Analysis,Calculus,Real Analysis — cjohnson @ 10:58 am

The technique of integration that is generally taught in a second semester calculus course is called Riemann integration.  This is given by taking a closed, bounded interval \left[a, b\right] and partitioning it into a set of points P = \{x_0, x_1, ..., x_n\} where a = x_0 < x_1 < ... < x_{n-1} < x_n = b.  Given a function f : [a, b] \to \mathbb{R} we then consider sums of the form

\displaystyle \sum_{k=1}^n f(y_k) [x_k - x_{k-1}]

where y_k \in [x_{k-1}, x_k].  Sums such as this are called Riemann sums.  In particular, we’d like to consider Riemann sums where, on each sub-interval of the partition, we use the largest (and smallest) value f approaches over that subinterval; since a supremum or infimum need not actually be attained, we define M_k and m_k as follows.

\displaystyle M_k = \sup_{x \in [x_{k-1}, x_k]} f(x)

\displaystyle m_k = \inf_{x \in [x_{k-1}, x_k]} f(x)

We can then define the upper and lower Riemann sums of f with respect to the partition P, denoted U(P, f) and L(P, f), respectively, as follows.

\displaystyle U(P, f) = \sum_{k=1}^n M_k [x_k - x_{k-1}]

\displaystyle L(P, f) = \sum_{k=1}^n m_k [x_k - x_{k-1}]

Now, we can take the infimum and supremum of these over all partitions of \left[a, b\right] to get the upper and lower Riemann integrals of f over the interval \left[a, b\right]:

\displaystyle \overline{\int_a^b f} = \inf_{P \vdash [a, b]} U(P, f)

\displaystyle \underline{\int_a^b f} = \sup_{P \vdash [a, b]} L(P, f)

(Where P \vdash [a, b] means that P is a partition of \left[a, b\right].)

In the event that \overline{\int_a^b f} = \underline{\int_a^b f}, we say that the function f is Riemann integrable, and simply write \int_a^b f for this common value.
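To see the definitions in action, here is a sketch (helper names are mine) computing U(P, f) and L(P, f) for f(x) = x^2 on [0, 1] with a uniform partition.  Since f is increasing there, the supremum and infimum over each subinterval sit at the right and left endpoints, and both sums approach \int_0^1 x^2 \, dx = 1/3 as the partition is refined.

```python
def upper_lower_sums(f, xs):
    """Upper and lower Riemann sums for f increasing on the interval,
    so sup and inf on each subinterval are the endpoint values."""
    U = sum(f(b) * (b - a) for a, b in zip(xs, xs[1:]))
    L = sum(f(a) * (b - a) for a, b in zip(xs, xs[1:]))
    return U, L

n = 10_000
xs = [k / n for k in range(n + 1)]
U, L = upper_lower_sums(lambda x: x * x, xs)

print(L, U)  # both close to 1/3, with L <= 1/3 <= U
```

Here U - L = (f(1) - f(0))/n, so the gap shrinks to zero and the upper and lower integrals coincide.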

While the Riemann integral is certainly a useful tool, it has some severe restrictions.  It is only defined for bounded intervals, though that is easily fixed by taking a limit as one (or both) of the endpoints goes to infinity.  More seriously, it behaves poorly with respect to sequences of functions: the integral of the limit may not equal the limit of the integrals.  In those cases we have to either impose some pretty severe restrictions on how the sequence converges (i.e., require uniform convergence), or use more advanced tools from measure theory (namely the monotone and dominated convergence theorems with the Lebesgue integral).

Another, less serious, limitation is that it’s not immediately clear how to extend the Riemann integral to integration in other spaces, such as \mathbb{R}^2 or \mathbb{C}.  The Riemann-Stieltjes integral is an important, though very simple, extension of the Riemann integral that helps rectify those problems (by letting us consider contour integrals, for instance) and also makes notation in probability theory a bit more compact.

The Riemann-Stieltjes integral is defined almost exactly like the Riemann integral is above, except that instead of multiplying by the factor \left[x_k - x_{k-1}\right] in our Riemann sum, we multiply by \left[g(x_k) - g(x_{k-1})\right].  That is, given two functions f, g : [a, b] \to \mathbb{R} we can define,

\displaystyle U(P, f, g) = \sum_{k=1}^n M_k [g(x_k) - g(x_{k-1})]

\displaystyle L(P, f, g) = \sum_{k=1}^n m_k [g(x_k) - g(x_{k-1})]

And we define the upper and lower integrals of f with respect to g as

\displaystyle \overline{\int_a^b f \, dg} = \inf_{P \vdash [a, b]} U(P, f, g)

\displaystyle \underline{\int_a^b f\, dg} = \sup_{P \vdash [a, b]} L(P, f, g)

Again, if these values coincide, we refer to this value as \int_a^b f \, dg.  We call f the integrand and g the integrator.
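As a quick numerical illustration (a sketch, with helper names of my own choosing), take f(x) = x as the integrand and g(x) = x^2 as the integrator on [0, 1].  Since g is differentiable here, \int_0^1 x \, d(x^2) = \int_0^1 x \cdot 2x \, dx = 2/3, and the upper and lower Riemann-Stieltjes sums should squeeze down to that value.

```python
def stieltjes_sums(f, g, xs):
    """Upper and lower Riemann-Stieltjes sums, assuming f and g are both
    increasing, so sup/inf of f on each subinterval are endpoint values."""
    U = sum(f(b) * (g(b) - g(a)) for a, b in zip(xs, xs[1:]))
    L = sum(f(a) * (g(b) - g(a)) for a, b in zip(xs, xs[1:]))
    return U, L

n = 10_000
xs = [k / n for k in range(n + 1)]
U, L = stieltjes_sums(lambda x: x, lambda x: x * x, xs)

print(L, U)  # both close to 2/3
```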

Of course, now we may ask if the Riemann-Stieltjes integral has all of the properties of the traditional Riemann integral, and what new properties it may have that the Riemann integral does not.  One property that’s easy to check, though, is that of linearity.

Thanks to properties of the supremum and infimum, we know that if \alpha \geq 0 is a constant and S is a set, \sup (\alpha S) = \alpha \sup S (for \alpha < 0 the supremum and infimum trade places, but the same conclusion follows).  Carrying this into our definition of the Riemann-Stieltjes integral, we have that if \alpha is a constant, and f, g are functions such that \int_a^b f \, dg exists, then \int_a^b (\alpha f) \, dg = \alpha \int_a^b f \, dg.

Additivity in the integrand takes slightly more care, because \sup_x (f(x) + h(x)) \leq \sup_x f(x) + \sup_x h(x) (the two functions need not approach their suprema at the same points), with the reverse inequality for infima.  So if g is increasing, U(P, f + h, g) \leq U(P, f, g) + U(P, h, g) and L(P, f + h, g) \geq L(P, f, g) + L(P, h, g).  If \int_a^b f \, dg and \int_a^b h \, dg both exist, the upper and lower sums of f + h are squeezed between these bounds, and we can conclude that \int_a^b (f + h) \, dg = \int_a^b f \, dg + \int_a^b h \, dg.
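Both linearity properties can be checked numerically.  Below is a sketch (the functions, partition size, and helper name are arbitrary choices of mine) using Riemann-Stieltjes sums with midpoint evaluation: with f(x) = x, h(x) = x^2, and g(x) = x^2 on [0, 1], we have \int_0^1 x \, d(x^2) = 2/3 and \int_0^1 x^2 \, d(x^2) = 1/2, so both sides should be near 3 \cdot 2/3 + 1/2 = 2.5.

```python
def rs_midpoint_sum(f, g, n, a=0.0, b=1.0):
    """Riemann-Stieltjes sum of f dg over [a, b], f evaluated at midpoints."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(f((lo + hi) / 2) * (g(hi) - g(lo))
               for lo, hi in zip(xs, xs[1:]))

g = lambda x: x * x
f = lambda x: x          # integral of x d(x^2) over [0, 1] is 2/3
h = lambda x: x * x      # integral of x^2 d(x^2) over [0, 1] is 1/2
alpha = 3.0
n = 10_000

lhs = rs_midpoint_sum(lambda x: alpha * f(x) + h(x), g, n)
rhs = alpha * rs_midpoint_sum(f, g, n) + rs_midpoint_sum(h, g, n)

print(lhs, rhs)  # both close to 2.5
```

For a fixed partition and fixed evaluation points the sum itself is exactly linear in the integrand; the subtlety in the argument above arises only because upper and lower sums take sups and infs.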
