Mathematics Prelims

July 21, 2008

The Banach Fixed Point Theorem

Filed under: Analysis,Functional Analysis — cjohnson @ 10:06 pm

A mapping T : X \to X from a metric space X to itself is called a contraction if there exists an \alpha \in (0, 1) such that for every x, y \in X we have d(Tx, Ty) \leq \alpha d(x, y).  The Banach fixed point theorem (aka the contraction theorem) says that every contraction on a non-empty complete metric space has exactly one fixed point.

Proof:  Suppose X is a complete metric space and that T : X \to X is a contraction.  Let x_0 \in X be any point.  Define x_1 = T x_0, x_2 = T x_1 = T^2 x_0, x_3 = T x_2 = T^3 x_0, and so forth.  Writing d(x_{n+1}, x_n) in terms of Tx_n and Tx_{n-1} and applying the contraction property repeatedly, we see that d(x_{n+1}, x_n) \leq \alpha^n d(x_1, x_0).  Now consider d(x_n, x_m) where we assume, without loss of generality, that n > m.  By the triangle inequality we have

\displaystyle d(x_n, x_m) \leq (\alpha^{n-1} + \alpha^{n-2} + ... + \alpha^m) d(x_1, x_0)

\displaystyle \qquad = \frac{\alpha^m (1 - \alpha^{n - m})}{1 - \alpha} d(x_1, x_0)

\displaystyle \qquad \leq \frac{\alpha^m}{1 - \alpha} d(x_0, x_1)

Since \alpha \in (0, 1), the factor \alpha^m / (1 - \alpha) tends to zero as m grows, so we can make this bound as small as we’d like by picking a large enough m.  The sequence (x_n)_{n \in \mathbb{N}} is therefore Cauchy, and since X is complete it must converge.  Call the limit of this sequence x.  Now consider the distance between x and Tx.

\displaystyle d(x, Tx) \leq d(x, x_m) + d(x_m, Tx) \leq d(x, x_m) + \alpha d(x_{m-1}, x)

By picking a large enough m, we can make this as small as we’d like as well, so d(x, Tx) = 0 and x is a fixed point of T.  For uniqueness, suppose y is also a fixed point.

\displaystyle d(x, y) = d(Tx, Ty) \leq \alpha d(x, y)

The only way this holds with \alpha \in (0, 1) is if d(x, y) = 0, so x = y.  Note that the x we constructed didn’t depend on our initial value of x_0, so every sequence we construct by picking a point in X and iterating T will converge to the same limit.
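The iteration in the proof is also a practical algorithm.  Here is a minimal numerical sketch (the map \cos, the starting point, and the tolerance are illustrative choices, not part of the theorem): \cos is a contraction on [0, 1], since |\cos'(x)| = |\sin x| \leq \sin 1 < 1 there and \cos maps [0, 1] into itself.

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until successive terms are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1] (|sin x| <= sin 1 < 1 there), so the
# theorem guarantees a unique fixed point, reached from any starting point.
fixed = banach_iterate(math.cos, 0.5)
print(fixed)  # roughly 0.7390851332
```

Because d(x_{n+1}, x_n) \leq \alpha^n d(x_1, x_0), the error shrinks geometrically, which is why so few iterations suffice.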

July 20, 2008

The Continuous Dual Space

Filed under: Analysis,Functional Analysis — cjohnson @ 8:35 pm

An important subspace of the algebraic dual space of a normed space X is the space which consists only of the bounded linear functionals on X.  This space is called the continuous dual space (or just the dual space, and the conjugate space in some older texts, such as Kolmogorov), and is denoted X'.  Note that we can define the algebraic dual space for any vector space, but require a norm for the (continuous) dual.  This space is itself a normed space where functionals are given the operator norm.  In fact, regardless of the normed space X, the dual X' is a Banach space.

Let (f_n)_{n \in \mathbb{N}} be a Cauchy sequence in X'.  Then for each \epsilon > 0 there exists a N \in \mathbb{N} such that \| f_n - f_m \| < \epsilon for all m, n > N.  This means that

\displaystyle \| f_n - f_m \| = \sup_{x \in X, \, \|x\| = 1} |f_n(x) - f_m(x)| < \epsilon

So for a given x \in X with \|x \| = 1, (f_n(x)) forms a Cauchy sequence in \mathbb{R}, and so converges.  We will then define a functional f pointwise as follows.

\displaystyle f(x) = \lim_{n \to \infty} f_n(x).

We must show that this functional is linear and bounded.  Linearity follows easily from the linearity of each f_n and properties of limits.

\displaystyle f(\alpha x + \beta y) = \lim_{n \to \infty} f_n(\alpha x + \beta y) = \alpha \lim_{n \to \infty} f_n(x) + \beta \lim_{n \to \infty} f_n(y).

For boundedness,

\displaystyle |f(x)| = \lim_{n \to \infty} |f_n(x)| \leq \lim_{n \to \infty} \| f_n \| \|x\| \leq M \|x\|

for some M (this is because every Cauchy sequence is bounded).

So f is a bounded linear functional on X, and we just need to show f_n \to f in the norm of X'.

Let \epsilon > 0 be given.  As (f_n) is Cauchy, there is an N \in \mathbb{N} such that for all m, n > N we have \|f_m - f_n\| < \epsilon.  Letting m \to \infty, we have \|f - f_n\| \leq \epsilon for all n > N, so f_n \to f.

And so the dual of any normed space is Banach, regardless of whether the original space was or not.

Note that for every finite dimensional normed space, the continuous and algebraic duals are in fact the same, as all linear operators are bounded in finite dimensions.
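In finite dimensions the pointwise-limit construction above is easy to watch numerically.  A small sketch (the vector v, the point x, and the sequence f_n are all illustrative assumptions): each f_n is given by an inner product against a vector, and the pointwise limits assemble into the limiting functional.

```python
import numpy as np

# Illustrative functionals on R^3: f_n(x) = <(1 - 1/n) v, x>, which converge
# pointwise (indeed in operator norm) to f(x) = <v, x>.
v = np.array([1.0, -2.0, 3.0])

def f_n(n, x):
    return float(np.dot((1 - 1/n) * v, x))

x = np.array([0.5, 0.5, 1.0])
print([f_n(n, x) for n in (10, 100, 1000)])  # values approach f(x) = <v, x> = 2.5
```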

(This proof is a modified version of a proof in Eidelman.)

July 18, 2008

The Algebraic Dual Space

Filed under: Analysis,Functional Analysis — cjohnson @ 10:27 pm

Suppose X is a vector space over a field K.  We say a function f : X \to K is a linear functional if for every \alpha, \beta \in K and every x, y \in X, we have f(\alpha x + \beta y) = \alpha f(x) + \beta f(y).  We will always assume that K is either \mathbb{R} or \mathbb{C}.  Properties and theorems associated with “traditional” linear operators apply since \mathbb{R} and \mathbb{C} can be thought of as normed spaces with the “traditional” norms (absolute values).  Note that the set of all linear functionals on X, which is denoted X^*, is itself a vector space if we allow scalar multiplication and addition of functions in the traditional way ((\alpha f)(x) = \alpha f(x) and (f + g)(x) = f(x) + g(x)) .

The vector space X^* is referred to as the algebraic dual of X.  Since X^* is itself a vector space, we can define its algebraic dual, (X^*)^* = X^{**}, which is called the second algebraic dual of X.  An important property of X^{**} is that there exists an injective mapping C : X \to X^{**} called the canonical mapping of X into X^{**}.  This mapping is given by taking an x \in X and considering the functional g_x : X^* \to K such that for each f \in X^* we have g_x(f) = f(x).

Since C is a linear map from X to X^{**} (this follows from the fact that each f \in X^* is linear), we have X is isomorphic to a subspace of X^{**} (recall that the range of a linear operator is a subspace of the operator’s codomain).  For this reason, C is sometimes called the canonical embedding of X into X^{**}.  (A space A is said to be embeddable in B if A is isomorphic to a subspace of B.)  In the event that C is also surjective, so that X is isomorphic to all of X^{**}, we say that X is algebraically reflexive.

Every finite dimensional vector space is algebraically reflexive.

Proof: Suppose X is an n-dimensional vector space with basis \{ e_1, ..., e_n \}.  Let f \in X^* and x \in X with x = \alpha_1 e_1 + ... + \alpha_n e_n.  As f is linear, we have f(x) = f(\alpha_1 e_1 + ... + \alpha_n e_n) = \alpha_1 f(e_1) + ... + \alpha_n f(e_n).  This implies that f is uniquely determined by the values f(e_1), ..., f(e_n), which means we can view f as an n-tuple of scalars.  That in turn means that \dim X^* = n, and that the functionals f_1, ..., f_n defined by f_i(e_j) = 1 if i = j and f_i(e_j) = 0 otherwise form a basis for X^*, called the dual basis of \{e_1, ..., e_n\}.  Applying the same procedure to X^*, we see that \dim X^{**} = n.  Now, since C is an injective linear map from X into X^{**}, we have that X is isomorphic to an n-dimensional subspace of X^{**}, and since the only n-dimensional subspace of X^{**} is X^{**} itself, we have that X is isomorphic to X^{**}.  So every finite dimensional vector space is algebraically reflexive.
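The coordinate view of X^* used in the proof can be sketched directly (the dimension and the numbers below are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: with a fixed basis of an n-dimensional space, a functional f is just
# the tuple (f(e_1), ..., f(e_n)), and evaluating f at
# x = alpha_1 e_1 + ... + alpha_n e_n is a dot product of tuples.
n = 3
f_values = np.array([2.0, -1.0, 0.5])  # the values f(e_1), f(e_2), f(e_3)

def f(alpha):
    return float(np.dot(f_values, alpha))

# The dual basis functional f_i picks out the i-th coordinate: f_i(e_j) = delta_ij.
dual_basis = np.eye(n)
x = np.array([1.0, 2.0, 3.0])
print(f(x))                                          # 1.5
print([float(np.dot(fi, x)) for fi in dual_basis])   # [1.0, 2.0, 3.0]
```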

Linear Operators

Filed under: Analysis,Functional Analysis — cjohnson @ 4:22 pm

If X and Y are normed spaces and \mathcal{D}_T \subseteq X, then a mapping T : \mathcal{D}_T \to Y is called a linear operator if for every x,y \in \mathcal{D}_T and \alpha, \beta \in K (the underlying field),

\displaystyle T(\alpha x + \beta y) = \alpha Tx + \beta Ty

If T is a linear operator, then T is injective if and only if Tx = 0 implies x = 0.

Proof: Suppose T is injective, so that Tx = Ty \Rightarrow x = y.  Since T is linear it maps zero to zero, so Tx = 0 = T0 implies x = 0.  Now suppose that Tx = 0 only for x = 0.  If x,y \in \mathcal{D}_T and Tx = Ty, then Tx - Ty = 0 and so T(x - y) = 0 (by linearity), and this means that x - y = 0, so x = y.

Now, we say that T : \mathcal{D}_T \to Y is bounded if there exists a c \in \mathbb{R} such that for every x \in \mathcal{D}_T we have \| Tx \| \leq c \|x\|.  If such a c exists (i.e., if T is bounded), then the least such c is called the operator norm of T, written \|T\|.  It can be computed as follows.

\displaystyle \|T\| = \sup_{x \in \mathcal{D}_T, \, x \neq 0} \frac{\|Tx\|}{\|x\|}

(This follows from rewriting the defining inequality, for x \neq 0, as \frac{\|Tx\|}{\|x\|} \leq c.)  Note that by letting c = \|T\|, we have \|Tx\| \leq \|T\|\|x\|.
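For a matrix operator on \mathbb{R}^n with Euclidean norms on both sides, the supremum above is the largest singular value.  A quick sanity check (the matrix and sample sizes are arbitrary choices) that sampled ratios \|Tx\|/\|x\| never exceed it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a linear operator on R^4, as a matrix

# Exact operator norm for Euclidean norms: the largest singular value of A.
exact = np.linalg.norm(A, 2)

# Monte Carlo check of ||T|| = sup ||Tx|| / ||x||: sampled ratios approach,
# but never exceed, the exact operator norm.
xs = rng.standard_normal((10000, 4))
ratios = np.linalg.norm(xs @ A.T, axis=1) / np.linalg.norm(xs, axis=1)
print(exact, ratios.max())
```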

In a finite dimensional normed space, every linear operator is bounded.  Suppose that \dim X = n and that \{ e_1, ..., e_n \} is a basis for X.  If we let x \in X with x = \sum_{i=1}^n \alpha_i e_i, then we have the following.

\displaystyle \|Tx\| = \left\| T\left( \sum_{i=1}^n \alpha_i e_i \right) \right\|

\displaystyle \qquad = \left\|\sum_{i=1}^n \alpha_i Te_i \right\|

\displaystyle \qquad \leq \sum_{i=1}^n | \alpha_i | \|Te_i\| \leq k \sum_{i=1}^n |\alpha_i|

\displaystyle \qquad \leq \frac{k}{c} \|x\|

where k = \max \{ \|T e_1\|, ..., \|T e_n \| \} and c > 0 is the constant (from an earlier lemma) such that \| \alpha_1 e_1 + ... + \alpha_n e_n \| \geq c (|\alpha_1| + ... + |\alpha_n|).  Thus \|Tx\| \leq \frac{k}{c} \|x\| for every x, so T is bounded.

Another important property is that a linear operator is bounded if and only if it is continuous.

Proof: Suppose T is bounded and let \epsilon > 0 be given.  If \|T\| = 0 then T is the zero operator, which is trivially continuous, so assume \|T\| > 0 and let \delta = \frac{\epsilon}{\|T\|}.  Let x_0 be a fixed point in \mathcal{D}_T, and x some other point in \mathcal{D}_T such that \| x - x_0 \| < \delta.

\displaystyle \|Tx - Tx_0\| = \|T(x - x_0)\|

\displaystyle \qquad \leq \|T\| \|x - x_0\|

\displaystyle \qquad \leq \|T\| \delta

\displaystyle \qquad \leq \|T\| \frac{\epsilon}{\|T\|}

\displaystyle \qquad = \epsilon

This shows that T is continuous at x_0.  However, our x_0 was arbitrary, and \delta didn’t depend on our choice of x_0, so T is uniformly continuous.

For the converse I’m going to copy the proof from Wikipedia, since I think it’s a bit clearer than Kreyszig’s proof.

Suppose that T is continuous.  Since \mathcal{D}_T is a subspace of X, it contains the zero vector, and T is in particular continuous there.  This means there exists a \delta > 0 such that for all x \in \mathcal{D}_T satisfying \|x\| \leq \delta we have \|Tx\| = \|T(x - 0)\| = \|Tx - T0\| < 1.  Now let y be any nonzero point in \mathcal{D}_T (for y = 0 the bound below holds trivially).

\displaystyle \|Ty\| = \left\| \frac{\|y\|}{\delta} T\left( \frac{\delta}{\|y\|} y\right )\right\|

\displaystyle \qquad = \frac{\|y\|}{\delta} \left\| T \left( \frac{\delta}{\|y\|} y \right) \right\|

\displaystyle \qquad \leq \frac{1}{\delta} \| y \|

And so T is bounded.

Compact Closed Unit Ball Implies Finite Dimension

Filed under: Analysis,Functional Analysis — cjohnson @ 2:32 pm

If X is a normed space and the closed unit ball centered at zero is compact, then X is finite dimensional.

Proof: Suppose X is an infinite dimensional normed space and let x_1 be any point in X with \|x_1\| = 1, and let Y_1 be the one-dimensional subspace of X generated by x_1.  Recall that a finite dimensional subspace is always closed, and since X is infinite dimensional, Y_1 is a closed proper subspace of X.  By Riesz’s Lemma, there exists an x_2 \in X \setminus Y_1 with \|x_2\| = 1 and \|x_2 - y\| \geq \frac{1}{2} for all y \in Y_1.  Let Y_2 be the two-dimensional subspace generated by x_1, x_2.  Again by Riesz’s Lemma, there exists an x_3 \in X \setminus Y_2 such that \|x_3\| = 1 and \|x_3 - y\| \geq \frac{1}{2} for all y \in Y_2.  Since X is infinite dimensional, we can repeat this procedure indefinitely, generating a sequence (x_n) in the closed unit ball with \|x_n\| = 1 but \|x_m - x_n\| \geq \frac{1}{2} for all m \neq n.  Since all points in the sequence are at least distance 1/2 from one another, no subsequence can be Cauchy, so no subsequence can be convergent, and the closed unit ball can not be compact.  Hence, if the closed unit ball is compact, then the space is finite dimensional.
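A concrete instance of such a sequence is easy to exhibit in \ell^2: the standard basis vectors are unit vectors at pairwise distance \sqrt{2} \geq \frac{1}{2}, so no subsequence is Cauchy.  The sketch below only checks finitely many of them (the dimension 6 is an arbitrary choice), which is enough to see the pattern.

```python
import numpy as np

# The standard basis vectors (viewed inside l^2) are unit vectors with
# pairwise distance sqrt(2) -- the obstruction the proof builds in any
# infinite dimensional space.
n = 6
E = np.eye(n)
dists = {round(float(np.linalg.norm(E[i] - E[j])), 10)
         for i in range(n) for j in range(n) if i != j}
print(dists)  # every pair is exactly sqrt(2) apart
```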

July 17, 2008

Riesz’s Lemma

Filed under: Analysis,Functional Analysis — cjohnson @ 1:35 pm

If X is a normed space (of any dimension), Z is a subspace of X and Y is a closed proper subspace of Z, then for every \theta \in (0, 1) there exists a z \in Z such that \|z\| = 1 and \|z - y\| \geq \theta for every y \in Y.

Proof: Let v \in Z \setminus Y and let a = \inf_{y \in Y} \| v - y\|.  As Y is closed and v \notin Y, we have a > 0.  Now let \theta \in (0, 1) and note that there exists a y_0 \in Y such that a\leq \| v - y_0 \| \leq \frac{a}{\theta} (as \theta < 1, we have \frac{a}{\theta} > a).  Let

\displaystyle z = \frac{v - y_0}{\| v - y_0 \|}

Obviously z \in Z and \| z \| = 1.  Let y be any element of Y.  Since Y is a subspace, y_0 + \|v - y_0\| y \in Y, and so \|v - (y_0 + \|v - y_0\| y)\| \geq a.  We then have the following.

\displaystyle \|z-y\| = \left\| \frac{1}{\|v - y_0\|} (v - y_0) - y\right\|

\displaystyle \qquad = \frac{1}{\|v - y_0\|} \left\| v - \left( y_0 + \|v - y_0\| y \right) \right\|

\displaystyle \qquad \geq \frac{a}{\| v - y_0 \|}

\displaystyle \qquad \geq \frac{a}{a/\theta}

\displaystyle \qquad = \theta
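As a two-dimensional sanity check (with Y the x-axis, a closed proper subspace of Z = \mathbb{R}^2; the sample points are an arbitrary illustrative choice): the unit vector z = (0, 1) keeps distance at least 1 from every point of Y, so in finite dimensions even \theta = 1 can be attained.

```python
import numpy as np

# Y is the x-axis in R^2; z = (0, 1) is a unit vector with ||z - y|| >= 1
# for every y in Y, checked here against a finite sample of Y.
z = np.array([0.0, 1.0])
ys = [np.array([float(t), 0.0]) for t in range(-5, 6)]
closest = min(float(np.linalg.norm(z - y)) for y in ys)
print(closest)  # 1.0, attained at y = 0
```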

July 16, 2008

l^\infty is not separable

Filed under: Analysis,Functional Analysis — cjohnson @ 6:21 pm

Recall that a subset M of a metric space X is called dense if the closure of M is the entire space; \overline{M} = X.  If M is a dense subset of X, then for every x \in X and r > 0, the open ball B_r(x) must contain a point of M.  We say that a space is separable if it has a countable dense subset.

The space \ell^\infty is the set of all bounded sequences of real (or complex) numbers where the metric is given by d(x, y) = \sup_{n \in \mathbb{N}} |x_n - y_n| where x = (x_n)_{n \in \mathbb{N}} and similarly for y.

Consider the set of all sequences whose entries are zeroes and ones.  Obviously this is a subset of \ell^\infty.  Furthermore, every number in (0, 1) has a binary expansion, which gives an injection from (0, 1) into our set (choosing one expansion when two exist), so our set is uncountable.  Note that because of the metric on \ell^\infty, any two distinct elements of the set are distance one apart.  If we place a ball of radius r < \frac{1}{2} around each point, then none of these balls will intersect.  Since any dense subset of \ell^\infty must have an element in each of these balls, any dense subset of \ell^\infty must be uncountable, so \ell^\infty is not separable.
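The claim that distinct 0–1 sequences sit at sup-distance exactly one can be checked mechanically on finite truncations (the length 4 below is an arbitrary choice):

```python
from itertools import product

# Distinct 0-1 tuples differ in some coordinate, where |a - b| = 1, and no
# coordinate differs by more than 1, so the sup-distance is exactly 1.
def d_sup(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

points = list(product([0, 1], repeat=4))
distances = {d_sup(x, y) for x in points for y in points if x != y}
print(distances)  # {1}
```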

Equivalence of Norms in n dimensions

Filed under: Analysis,Functional Analysis — cjohnson @ 4:08 pm

We say that two norms, \| \cdot \|_1 and \| \cdot \|_2, on the same vector space X are equivalent if there exist \alpha, \beta > 0 such that for every x \in X,

\displaystyle \alpha \| x \|_1 \leq \| x \|_2 \leq \beta \| x \|_1.

In a finite dimensional normed space, all norms are equivalent.

Proof: Suppose X is an n-dimensional space with basis \{ e_1, ..., e_n \} and that \| \cdot \|_1 and \| \cdot \|_2 are norms on X.  We know (from an earlier lemma) there exists a c > 0 such that for every x = \alpha_1 e_1 + ... + \alpha_n e_n in X,

\displaystyle \| x \|_1 \geq c (| \alpha_1 | + ... + | \alpha_n |)

Now consider \| x \|_2 and apply the triangle inequality.

\displaystyle \| x \|_2 \leq k \sum_{i=1}^n | \alpha_i |

where k = \max \{ \| e_1 \|_2, ..., \| e_n \|_2 \}.  If we then apply our earlier inequality we’re left with

\displaystyle \| x \|_2 \leq \frac{k}{c} \| x \|_1.

If we repeat this process but with \| \cdot \|_1 and \| \cdot \|_2 reversed, we achieve the other inequality.  Since our choice of x \in X was arbitrary, and since our k and c don’t depend on our choice of x, we see that \| \cdot \|_1 and \| \cdot \|_2 are equivalent.  Since these were arbitrary norms, all norms on X are equivalent.
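A quick numerical check of this for two concrete norms on \mathbb{R}^n (the choice of the \ell^1 and \ell^2 norms, with the known constants \|x\|_{\ell^2} \leq \|x\|_{\ell^1} \leq \sqrt{n}\, \|x\|_{\ell^2}, is an illustrative special case):

```python
import numpy as np

# Check ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 on random vectors in R^5.
n = 5
rng = np.random.default_rng(1)
xs = rng.standard_normal((1000, n))
one = np.abs(xs).sum(axis=1)          # l^1 norms
two = np.sqrt((xs ** 2).sum(axis=1))  # l^2 norms
print(bool(np.all(two <= one)), bool(np.all(one <= np.sqrt(n) * two)))  # True True
```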

Every Finite Dimensional Normed Space is Banach

Filed under: Analysis,Functional Analysis — cjohnson @ 2:45 pm

Every finite dimensional normed space (over a complete field, namely \mathbb{R} or \mathbb{C}) is Banach (complete in the metric induced by the norm).

Proof: Let Y be an n-dimensional normed space with basis \{ e_1, ..., e_n \} and let (y_n)_{n \in \mathbb{N}} be any Cauchy sequence in Y, where y_m = \alpha_1^{(m)} e_1 + ... + \alpha_n^{(m)} e_n.  As (y_n) is Cauchy, for any given \epsilon > 0 there exists an N \in \mathbb{N} such that for all m, n > N we have \| y_n - y_m \| < \epsilon.  However, by a previous lemma, there exists a c > 0 such that

\displaystyle \| y_n - y_m \| = \left\| \sum_{i=1}^n \left( \alpha_i^{(n)} - \alpha_i^{(m)} \right) e_i \right\| \geq c \sum_{i=1}^n \left| \alpha_i^{(n)} - \alpha_i^{(m)} \right|

which means that \sum_{i=1}^n \left| \alpha_i^{(n)} - \alpha_i^{(m)} \right| \leq \epsilon/c.  This in turn implies that for each fixed i, the sequence (\alpha_i^{(m)}) is Cauchy in the complete field \mathbb{R} (or \mathbb{C}), so it converges.  Let the limit of this sequence be \alpha_i and let y = \alpha_1 e_1 + ... + \alpha_n e_n.  Now we have

\displaystyle \| y_m - y \| = \left\| \sum_{i=1}^n \left( \alpha_i^{(m)} - \alpha_i \right) e_i \right\| \leq \sum_{i=1}^n \left| \alpha_i^{(m)} - \alpha_i \right| \|e_i\|.

Since \alpha_i^{(m)} \to \alpha_i, we can make this right-hand side arbitrarily small by picking a large enough N \in \mathbb{N} and considering m > N.  This means that \| y_m - y \| \to 0, so y_m \to y, and the space is complete.

Some Compactness Properties

Filed under: Analysis,Functional Analysis — cjohnson @ 2:32 pm

In a metric space, a subset is called compact if every sequence in the subset contains a subsequence converging to a point of the subset.  (In a more general setting this is called “sequential compactness,” and compactness refers to a set where every open cover has a finite subcover, but in the case of a metric space these two definitions are equivalent.)  As it turns out, every compact subset of a metric space will be both closed and bounded.

Suppose M is a compact subset of some metric space (X, d).  Suppose that M were not bounded, then for a fixed m \in M we could find a sequence (y_n)_{n \in \mathbb{N}} such that for each n \in \mathbb{N}, d(m, y_n) > n.  Note that such a sequence has no convergent subsequence.  Since unboundedness implies a set can’t be compact, every compact set must be bounded.  For closedness, let x \in \overline{M}.  There exists a sequence in M, (x_n)_{n \in \mathbb{N}} such that x_n \to x.  By compactness of M, this sequence has a convergent subsequence (x_{p_n}) which converges to an element of M.  However, every subsequence of a convergent sequence must have the same limit as the “original” sequence, so x_{p_n} \to x, and this implies x \in M, so \overline{M} = M and M is closed.

In a finite dimensional normed space, closedness and boundedness are no longer merely necessary conditions for compactness; they are in fact also sufficient.  That is, in a finite dimensional normed space, a set is compact if and only if it is both closed and bounded.

We’ve already shown that compactness implies closed and bounded, so we now show the converse.  Suppose (X, \|\cdot\|) is an n-dimensional normed space with basis vectors \{ e_1, e_2, ..., e_n \}.  Let M \subseteq X be both closed and bounded, and let (x_m) be any sequence in M, with x_m = \sum_{i=1}^n \alpha_i^{(m)} e_i.  Since M is bounded, there exists a K > 0 such that \| x_m \| \leq K for each m \in \mathbb{N}.  We know (from an earlier lemma) there exists a c > 0 such that

\displaystyle K \geq \| x_m \| = \left\| \sum_{i=1}^n \alpha_i^{(m)} e_i \right\| \geq c \sum_{i=1}^n |\alpha_i^{(m)}|

This tells us that for each fixed i, the sequence (\alpha_i^{(m)}) is bounded, so by Bolzano–Weierstrass it has a convergent subsequence.  Extracting subsequences successively for i = 1, ..., n, we obtain a single subsequence of (x_m) along which every coordinate converges; call the limits \alpha_1, ..., \alpha_n.  This subsequence of (x_m) then converges to x = \alpha_1 e_1 + ... + \alpha_n e_n, and since M is closed, x \in M.  So M is compact.
