Mathematics Prelims

December 1, 2009

Submodules and Homomorphisms

Filed under: Abstract Algebra,Algebra — cjohnson @ 8:57 pm

Just as a subgroup is a group within a group or a subfield is a field within a field, a submodule is a module within a module.  That is, if $M$ is an $R$-module, we say that $N$ is a submodule of $M$ if $N$ is a subgroup of $M$ and is closed under multiplication by elements of $R$.  This can be summarized by saying that for every $x, y \in N$ and every $r \in R$, the following two properties hold.

1. $x - y \in N$
2. $rx \in N$
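As a concrete sketch (the sample sets below are my own), the even integers $2\mathbb{Z}$ form a submodule of $\mathbb{Z}$ viewed as a module over itself, and the two conditions can be spot-checked numerically:

```python
# Spot-check the two submodule conditions for N = 2Z inside the Z-module Z:
#   (1) x - y in N,  (2) r*x in N  for all x, y in N and r in R = Z.
def in_N(z):
    return z % 2 == 0  # membership test for the even integers

sample_N = [-4, -2, 0, 2, 6]   # some elements of N
sample_R = [-3, 0, 1, 5]       # some scalars from R = Z

assert all(in_N(x - y) for x in sample_N for y in sample_N)  # closed under subtraction
assert all(in_N(r * x) for r in sample_R for x in sample_N)  # closed under scalar mult.
```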

In the case of groups, we could only form a quotient group if the subgroup we were modding out by was sufficiently “nice” (i.e., was a normal subgroup); likewise, in the case of rings we required that a subring be nice (an ideal) in order to form a quotient ring.  With modules, however, we can always form the quotient module by a submodule.  We can do this since, as $M$ is an abelian group under addition, all subgroups are normal, so we can form the quotient group $M/N$.  This is naturally an abelian group, so in order to turn it into an $R$-module we have to define multiplication of elements of $R$ and $M/N$, which we do in the most obvious way: $r(x + N) = rx + N$.  (This is well-defined: if $x + N = y + N$ then $x - y \in N$, so $rx - ry = r(x - y) \in N$ and hence $rx + N = ry + N$.)  To check that this is in fact an $R$-module, we simply verify that the three axioms of a module hold.

1. $r((x + N) + (y + N)) = r(x + y + N) = r(x + y) + N = rx + ry + N = (rx + N) + (ry + N) = r(x + N) + r(y + N)$
2. $(r + s)(x + N) = (r + s)x + N = rx + sx + N = (rx + N) + (sx + N) = r(x + N) + s(x + N)$
3. $(rs)(x + N) = (rs)x + N = r(sx) + N = r(sx + N)$

Finally, if $M$ is a unitary module, then so too is $M/N$: $1(x + N) = 1x + N = x + N$.
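A minimal computational sketch of such a quotient (representative-based helper names are my own): the quotient of the $\mathbb{Z}$-module $\mathbb{Z}$ by the submodule $N = 5\mathbb{Z}$, modeled by reducing representatives mod 5.

```python
# The quotient Z-module Z/5Z: the coset x + N (N = 5Z) is represented by x % 5.
n = 5

def coset(x):       # canonical representative of the coset x + N
    return x % n

def add(a, b):      # (x + N) + (y + N) = (x + y) + N
    return coset(a + b)

def smul(r, x):     # r(x + N) = rx + N
    return coset(r * x)

# Well-defined: representatives differing by an element of N give the same coset.
assert smul(3, 2) == smul(3, 2 + 5 * 4)
# Unitary: 1(x + N) = x + N.
assert all(smul(1, x) == coset(x) for x in range(n))
# One module axiom, checked exhaustively: (r + s)(x + N) = r(x + N) + s(x + N).
assert all(smul(r + s, x) == add(smul(r, x), smul(s, x))
           for r in range(n) for s in range(n) for x in range(n))
```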

We say a map $\phi : M \to N$ between two $R$-modules is a homomorphism if $\phi$ is a group homomorphism with the additional property that for each $r \in R$, $\phi(rx) = r \phi(x)$.  As you would expect, the kernel of this homomorphism is a submodule of $M$, and the image is a submodule of $N$.

Supposing that $S \subseteq M$, we define $R\langle S \rangle$ to be the smallest submodule of $M$ containing $S$, which is naturally the intersection of all submodules containing $S$:

$\displaystyle R\langle S \rangle = \bigcap \left\{ N : N \text{ a submodule of } M \text{ and } S \subseteq N \right\}$

If $S$ is a finite set, we may write $R\langle s_1, s_2, \ldots, s_n \rangle$ in place of $R \langle S \rangle$, and in the event that $S$ is a singleton, we say that $R\langle s_1 \rangle$ is a cyclic submodule of $M$.  We call $S$ a generating set of the submodule $R \langle S \rangle$, and the elements of $S$ the generators of this submodule.
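As a sketch in a small example (the brute-force search is my own), the submodule of the $\mathbb{Z}$-module $\mathbb{Z}/12\mathbb{Z}$ generated by $S = \{4, 6\}$ consists of all $\mathbb{Z}$-linear combinations of 4 and 6:

```python
from math import gcd

# In the Z-module Z/12Z, the submodule generated by S = {4, 6} is the set of
# Z-linear combinations r1*4 + r2*6 (mod 12); it equals the multiples of
# gcd(4, 6, 12) = 2.
n, S = 12, [4, 6]
generated = {(r1 * S[0] + r2 * S[1]) % n for r1 in range(n) for r2 in range(n)}
d = gcd(gcd(*S), n)
assert generated == set(range(0, n, d))   # the multiples of 2 in Z/12Z
print(sorted(generated))                  # [0, 2, 4, 6, 8, 10]
```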

November 28, 2009

Modules

Filed under: Abstract Algebra,Algebra — cjohnson @ 9:59 pm

Anyone who has spent any amount of time in algebra or analysis has come across vector spaces: a set of elements, vectors, which we can add together or multiply by a scalar from some fixed field.  This idea is simple and natural enough that students in high-school physics classes become familiar with the basic idea of vectors, if only in an informal way.  However, in the axioms of a vector space there’s nothing that particularly requires that we pull our scalars from a field (though other results in vector space theory depend on our having an underlying field).  Indeed, we could just require that the scalars we multiply by simply be elements of a ring; this gives us a structure known as a module.  Though this seems like a simple enough generalization, module theory has some quirks that those of us more accustomed to vector spaces will find odd.  For instance, though every vector space has a basis, there are modules which do not; and even if a module has a basis, two different bases may have different cardinalities.

To be more precise, given a ring $R$ and an abelian group $M$ (written additively), we say that $M$ is a (left) $R$-module if there exists a ring homomorphism $\phi : R \to \text{End}(M)$.  (Recall that if $M$ is an abelian group, the collection of endomorphisms of $M$ forms a ring under pointwise addition and function composition.)  This says simply that given an $r \in R$ and an $x \in M$, we can define the multiplication of $x$ by $r$ as $rx = \phi(r)(x)$.  Given any $r, s \in R$ and any $x, y \in M$, the following are immediate consequences of our definition.

1. $(r + s)x = rx + sx$
2. $r(x + y) = rx + ry$
3. $(rs)x = r(sx)$

If in addition the ring $R$ has an identity, we may also require that $1x = x$ for all $x \in M$.  We call such a module unitary.  Though this isn’t strictly required, we will assume that all modules we deal with are unitary if the ring has an identity.  Notice that we only perform multiplication on the left.  If we define multiplication on the right instead, we have a right $R$-module.  We will assume that all of our modules are left modules unless otherwise specified.

The three properties listed above aren’t simply consequences of our definition of a module, but can actually be used as an alternative definition.  That is, if we have an abelian group $M$, a ring $R$, and some multiplication $R \times M \to M$ satisfying the above properties, we then have a module.  To see this, define for each $r \in R$ an endomorphism $\phi_r : M \to M$ by $x \mapsto rx$.  Notice that the second property above guarantees that $\phi_r$ is indeed an endomorphism: $\phi_r(x + y) = r(x + y) = rx + ry = \phi_r(x) + \phi_r(y)$.  Now defining a map $\phi : R \to \text{End}(M)$ as $r \mapsto \phi_r$, properties one and three guarantee that this is in fact a ring homomorphism.

As stated earlier, a module can be thought of as a generalization of a vector space.  In fact, if our ring $R$ is a field, then any unitary $R$-module is simply a vector space over $R$.  A module can also be thought of as a generalization of an abelian group, in the sense that every abelian group is in fact a $\mathbb{Z}$-module.  Suppose that $A$ is an abelian group.  For each positive $n \in \mathbb{Z}$ and $a \in A$, define $na$ as $\sum_{i=1}^n a$.  If $n = 0$, define $na = 0$.  Finally, if $n < 0$, define $na$ as the inverse (in $A$) of $(-n)a$.  One can check that this multiplication makes $A$ a $\mathbb{Z}$-module.
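A minimal sketch of this $\mathbb{Z}$-module structure, taking $A = \mathbb{Z}/7\mathbb{Z}$ under addition (the function name is my own):

```python
# Any abelian group is a Z-module: define n*a by repeated addition.
# Here the group is Z/7Z under addition (identity 0, inverse of a is -a mod 7).
def zmul(n, a, mod=7):
    if n == 0:
        return 0
    if n < 0:
        return (-zmul(-n, a, mod)) % mod     # the inverse of (-n)a
    return sum(a for _ in range(n)) % mod    # a + a + ... + a  (n times)

assert zmul(3, 5) == (5 + 5 + 5) % 7         # 15 mod 7 = 1
assert zmul(-2, 3) == (-(3 + 3)) % 7         # -6 mod 7 = 1
assert zmul(0, 4) == 0
```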

Notice that every ring can be viewed as a module over itself; $R$ is an $R$-module where scalar multiplication is simply the ring’s usual multiplication.  Additionally, if $S$ is a subring of $R$, then $R$ can be viewed as an $S$-module, as the three properties of an $S$-module are satisfied by usual multiplication in the ring.  Similarly, if $I$ is a left ideal of $R$, then $I$ is a left $R$-module; if $I$ is a right ideal, then it’s a right $R$-module.

As we may pull our scalars from a ring instead of a field, we can treat some more “interesting” objects as scalars.  For instance, suppose that $V$ is an $F$-vector space and $T : V \to V$ a linear transformation.  Then $F[x]$, the set of all polynomials with coefficients from $F$, forms a ring.  For any $f(x) \in F[x]$ with $f(x) = \sum_{i=0}^n f_i x^i$, we define $f(T) = \sum_{i=0}^n f_i T^i$, where $T^i$ means the $i$-fold composition of $T$ with itself.  For any $v \in V$ we define $f(x) \cdot v$ as $f(T)(v) = \sum_{i=0}^n f_i T^i(v)$.  Notice this satisfies the properties of an $F[x]$-module.

1. $(f(x) + g(x)) \cdot v = (f(T) + g(T)) \cdot v = f(T)(v) + g(T)(v) = f(x) \cdot v + g(x) \cdot v$
2. $f(x) \cdot (u + v) = f(T)(u + v) = f(T)(u) + f(T)(v) = f(x) \cdot u + f(x) \cdot v$
3. $f(x) \cdot (g(x) \cdot v) = f(T)(g(T)(v)) = (f(T) \circ g(T))(v) = (f(T) g(T))(v) = (f(x) g(x)) \cdot v$

Thus this module, which we’ll denote $V_T$, has polynomials as its scalars.
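A small numerical sketch of $V_T$ (plain lists standing in for vectors; the helper names are my own): take $V = \mathbb{R}^2$ and let $T$ be the swap $(a, b) \mapsto (b, a)$.

```python
# Sketch of the F[x]-module V_T: V = R^2, T a fixed linear map (2x2 matrix),
# and f(x).v defined as f(T)(v).
def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def poly_act(coeffs, T, v):
    """Apply f(T) to v, where coeffs = [f_0, f_1, ...] and T^i is i-fold composition."""
    result = [0.0, 0.0]
    power = list(v)                    # T^0(v) = v
    for c in coeffs:
        result = [result[0] + c*power[0], result[1] + c*power[1]]
        power = mat_vec(T, power)      # advance to T^(i+1)(v)
    return result

T = [[0, 1], [1, 0]]                   # the swap map (a, b) -> (b, a)
v = [2, 3]
# f(x) = 1 + 2x, so f(T)(v) = v + 2*T(v) = (2, 3) + 2*(3, 2) = (8, 7)
assert poly_act([1, 2], T, v) == [8.0, 7.0]
```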

February 3, 2009

Cyclic Groups Part I – The Order of an Element

Filed under: Abstract Algebra,Algebra — cjohnson @ 3:14 pm

We mentioned last time that if we have a group $G$ and a subset (not necessarily a subgroup) of elements $S \subseteq G$, then there was a unique minimal subgroup of $G$ which contained $S$.  This subgroup is the intersection of all subgroups which contain $S$ and is denoted $\langle S \rangle$.  In the case when $S$ is a singleton (i.e., contains only one element), say $S = \{ x \}$, we just write $\langle x \rangle$ for the smallest subgroup containing $x$.  Groups generated by a single element like this are referred to as cyclic.

Recalling the order of a group is simply the cardinality of the underlying set, we define the order of an element of the group to be the order of the subgroup generated by that element; $|x| = |\langle x \rangle|$.  Also recall that an alternative way of describing a group generated by a set was to talk about all of the ways to multiply (apply the group operation) to elements of the set.  In the case of a cyclic group, where the group is generated by a single element, this means that every element in the group is actually a power of the generator.  That is, if $y \in \langle x \rangle$, then there exists a $k \in \mathbb{Z}$ such that $y = x^k$.

If $|x| = n < \infty$, then $n$ is actually the smallest natural number such that $x^n = 1$.  To see this, suppose there were a smaller number $m < n$ with $x^m = 1$.  Then for every $k$ we could write $k = mq + r$, where $0 \leq r < m$ (by the division algorithm).  This gives $x^k = x^{mq + r} = x^{mq} x^r = (x^m)^q x^r = 1^q x^r = x^r$.  Every power of $x$ would then be one of $x^0, x^1, \ldots, x^{m-1}$, so $\langle x \rangle$ would have at most $m < n$ elements, contradicting $|x| = n$.

Likewise, if $|x| = n < \infty$ and $x^m = 1$, then $n | m$.  Again, use the division algorithm to write $m = nq + r$ where $0 \leq r < n$.  We have again $1 = x^m = x^{nq + r} = x^{nq} x^r = (x^n)^q x^r = x^r$, but this contradicts the minimality of $n$ if $r \neq 0$, so we conclude $r = 0$ and $n | m$.
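Both facts can be spot-checked in a small concrete group, say the multiplicative group of nonzero residues mod 7 (the helper below is my own sketch):

```python
# Order of an element: |x| is the least n >= 1 with x^n = 1.  Here the group is
# the multiplicative group mod 7, whose identity is 1.
def order(x, mod=7):
    n, power = 1, x % mod
    while power != 1:
        power = (power * x) % mod
        n += 1
    return n

assert order(2) == 3          # 2^3 = 8 ≡ 1 (mod 7)
assert order(3) == 6          # 3 generates the whole group of order 6

# If x^m = 1, then the order of x divides m.
x, n = 2, order(2)
for m in range(1, 20):
    if pow(x, m, 7) == 1:
        assert m % n == 0
```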

January 26, 2009

Generators and the Dihedral Groups

Filed under: Abstract Algebra,Algebra — cjohnson @ 10:34 pm

Consider the dihedral group of order 2n, denoted $D_{2n}$.  This group represents the “symmetries” of a regular n-sided polygon in the plane, where a “symmetry” is a way of moving the n-gon around with rigid body transformations in 3-space, and laying it back down perfectly on top of a copy of the original n-gon.  (Note that we’re using $D_{2n}$ to mean the symmetries of an n-gon, not a 2n-gon, consistent with Dummit & Foote.  Other authors, however, will use the notation $D_n$ for this group.  The reason for the 2n will become apparent soon.)

For instance, imagine an equilateral triangle in the plane.  Call one vertex A, another B, and the third C.  We want to pick this triangle up and move it around so that when we lay it back down it completely covers the space occupied by the original triangle.  In doing so, though, the position of our A, B, C vertices may change.  For instance, if we picked the triangle up and placed it back down so that it had been rotated clockwise 120 degrees, instead of seeing the vertices A, B, C in their original order, we may now see C, A, B in their place.  We could also “pinch” one of the vertices, keeping it still, and rotating the triangle so that the other two vertices swapped.  In this case we may go from A, B, C to A, C, B if A was the vertex we kept still.

Of course, we could also rotate the triangle some multiple of 120 degrees clockwise or counter-clockwise, or we could choose to “pinch” another vertex when we flip the triangle over as discussed above.  For instance, if we pinched B we might go from A, B, C to C, B, A.  Supposing we always pinch the top-most vertex, though, we could rotate, flip, then rotate back and achieve the same effect.  Any counter-clockwise rotation could likewise be represented as repeating the clockwise rotation an appropriate number of times.  For instance, if we rotate counter-clockwise we’d go from A, B, C to B, C, A.  If we just rotated clockwise twice though we’d go from A, B, C to C, A, B on the first rotation, then from there to B, C, A.  In general, any of these symmetries could be represented entirely as a combination of flips with the top-most vertex pinched, and clockwise rotations by 120 degrees.

What we’ve described for a triangle could easily be extended to polygons with more sides.  Suppose that we label the n vertices of our n-gon as 1, 2, 3, …, n clockwise around the figure.  Any of the symmetries we could perform is uniquely determined by two things:  which vertex we send 1 to, and whether 2 is “in front of” or “behind” 1 (meaning, if we were to traverse the figure’s vertices clockwise, whether we’d get to 2 before we got to 1, or after).  This means that there are 2n possible symmetries:  pick a place to send 1 (n possibilities), then pick whether 2 comes before or after (2 possibilities).  This is the reason we use $D_{2n}$ instead of $D_n$ for this group.
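This count can be verified computationally.  A sketch (the encoding is my own) represents each symmetry as a permutation of the vertex labels $0, \ldots, n-1$ and closes a rotation and a flip under composition:

```python
from itertools import product

# D_2n as permutations of the vertices {0, ..., n-1}: r = rotation, f = a flip.
# Closing {r, f} under composition should yield exactly 2n symmetries.
n = 5
r = tuple((i + 1) % n for i in range(n))       # rotation by one position
f = tuple((-i) % n for i in range(n))          # reflection fixing vertex 0

def compose(p, q):                             # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

group = {r, f}
while True:
    new = {compose(p, q) for p, q in product(group, repeat=2)} - group
    if not new:
        break
    group |= new

assert len(group) == 2 * n                     # |D_2n| = 2n
assert tuple(range(n)) in group                # the identity appears as r^n
```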

Now, we’ve said that every element of $D_{2n}$ could be thought of as a combination of flips and rotations.  In algebra-speak, we say that $D_{2n}$ is generated by flips and rotations.  If we denote a flip by $f$ and a rotation by $r$, we would then write $D_{2n} = \langle f, r \rangle$ for this.  We can make this notion of a group being generated by a subset of its elements more precise, but before we do that we’ll need the following:

Let $G$ be a group, $\Lambda$ some index set, and suppose $H_\alpha \leq G$ for every $\alpha \in \Lambda$.  We will show that $\bigcap_{\alpha \in \Lambda} H_\alpha \leq G$ by applying the subgroup test previously described.  First note that the identity is in $\bigcap H_\alpha$ since each $H_\alpha \leq G$ means the identity is in each $H_\alpha$.  Now let $x,y \in \bigcap H_\alpha$.  Since $y \in \bigcap H_\alpha$, we must have $y \in H_\alpha$ for each $\alpha \in \Lambda$.  Since each of these is a subgroup we must have $y^{-1} \in H_\alpha$ for each $\alpha$, so $y^{-1} \in \bigcap H_\alpha$.  Now note $xy^{-1} \in H_\alpha$ for each $\alpha$, so $xy^{-1} \in \bigcap_{\alpha \in \Lambda} H_\alpha$, and $\bigcap H_\alpha \leq G$.  In short: an intersection of subgroups of some larger group is itself a subgroup of that group.

Now suppose $S$ is a general, non-empty subset of elements of $G$; $S$ need not be a subgroup of $G$.  The smallest subgroup containing $S$ (where “smallest” means with respect to the inclusion relation), which we call the subgroup generated by $S$ and denote $\langle S \rangle$, is the intersection of all subgroups of $G$ which contain $S$:

$\displaystyle \langle S \rangle = \bigcap_{\substack{S \subseteq H \\ H \leq G}} H$

We’ve shown that arbitrary intersections of subgroups form another subgroup, and by construction $S$ is included in this subgroup.  If we have another subgroup containing $S$, though, it will be one of the $H$ subgroups in the above intersection, and so it will contain the intersection.  That is, if $S \subseteq H$, then $\langle S \rangle \leq H$.

Alternatively, we could describe the subgroup as the collection of all products, powers, and inverses of elements of $S$.  That is,

$\displaystyle \langle S \rangle = \{ t_1^{a_1} t_2^{a_2} \cdots t_n^{a_n} : n \in \mathbb{N}, \, t_i \in S, \, a_i \in \mathbb{Z} \, \text{ for each } 1 \leq i \leq n \}$

This tends to be an easier way of thinking about subgroups generated by a set: just mix up all the elements of the set however you want, and the collection of everything you could possibly get is the subgroup generated by the set.
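This description suggests a direct computation: start from $S$ and repeatedly apply the operation and inverses until nothing new appears.  A sketch (the helper names are my own) in the additive group $\mathbb{Z}/12\mathbb{Z}$:

```python
# Subgroup generated by a set, computed by closing under the group operation
# and inverses.  Group here: Z/12Z under addition (identity 0, inverse -x mod 12).
def generated(S, op, inv, identity):
    H = set(S) | {identity}
    while True:
        new = {op(a, b) for a in H for b in H} | {inv(a) for a in H}
        if new <= H:
            return H
        H |= new

H = generated({9}, lambda a, b: (a + b) % 12, lambda a: -a % 12, 0)
assert H == {0, 3, 6, 9}      # <9> = the multiples of gcd(9, 12) = 3
```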

In the case $S = \{s_1, s_2, \ldots, s_n\}$ we will normally write $\langle s_1, s_2, \ldots, s_n \rangle$ instead of $\langle \{ s_1, s_2, \ldots, s_n \} \rangle$, and if $S = \{s\}$ is a singleton set, we’ll just write $\langle s \rangle$.  A group generated by a single element is called a cyclic group, but that’s a topic for another post.

January 24, 2009

Subgroups and Permutations

Filed under: Abstract Algebra,Algebra — cjohnson @ 6:35 pm

Given a group $G$, a subset $H \subseteq G$ is called a subgroup if $H$ is itself a group under the same operation as $G$.  This relationship between $H$ and $G$ is denoted $H \leq G$; if $H$ is additionally a proper subset of $G$ we write $H < G$ and say $H$ is a proper subgroup of $G$.  Going through the “usual” process of verifying $H$ is a group isn’t necessary to see that a subset $H$ is a subgroup, though, as some of these properties are inherited from $G$.  For instance, associativity of the group operation doesn’t need to be checked: if the operation were not associative on $H$, there would be elements of $G$ for which the operation wasn’t associative, and $G$ wouldn’t be a group.  The only three things we need to check for $H$ to be a subgroup are that it contains the identity (which we’ll just call 1), that it’s closed under the group operation (i.e., $\forall x,y \in H : xy \in H$), and that it’s closed under inverses ($\forall x \in H : x^{-1} \in H$).

In fact we don’t even have to show all of those properties.  The subgroup criterion (as Dummit & Foote call it), aka the two-step subgroup test (Gallian), says that all we need to do is show that $H$ is non-empty, and for every $x, y \in H$ that $xy^{-1} \in H$.  Obviously these hold if $H \leq G$, so we need to show that if these properties hold then $H \leq G$.  Since $H$ is non-empty, pick some $x \in H$; letting $y = x$, we have $xy^{-1} = xx^{-1} = 1 \in H$.  Now let $x = 1$ (which we’ve just shown is in $H$); for any $y \in H$ we have $xy^{-1} = 1 \cdot y^{-1} = y^{-1} \in H$.  Finally, since we’ve shown that $H$ is closed under inversion, for any $x, y \in H$ we have $x(y^{-1})^{-1} = xy \in H$, and so $H$ is a subgroup of $G$.
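For a finite candidate subset this test can be checked exhaustively; a minimal sketch (names my own) using $\mathbb{Z}/12\mathbb{Z}$ under addition:

```python
# The two-step subgroup test, checked exhaustively for a finite candidate H:
# H is non-empty, and x * y^{-1} stays in H for all x, y in H.
def passes_two_step(H, op, inv):
    return bool(H) and all(op(x, inv(y)) in H for x in H for y in H)

# In Z/12Z under addition, {0, 4, 8} is a subgroup but {0, 4, 7} is not.
op  = lambda a, b: (a + b) % 12
inv = lambda a: -a % 12
assert passes_two_step({0, 4, 8}, op, inv)
assert not passes_two_step({0, 4, 7}, op, inv)   # 4 - 7 = 9 is not in the set
```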

As an example, let $G$ be $\mathbb{R}^2$ under vector addition.  We will show that the set of all points lying on the line $y = x$ forms a subgroup of $G$.  Let $H = \{ (x, y) \in \mathbb{R}^2 : y = x \}$.  This is clearly a non-empty set, as $(1, 1) \in H$.  Now let $u, v \in H$ where $u = (a, a)$ and $v = (b, b)$.  Note the inverse of $(b, b)$ is $(-b, -b)$.  Adding our vectors together we have $(a, a) + (-b, -b) = (a - b, a - b) \in H$, and so $H$ is a subgroup of $G$.

If $A$ is any set, we denote by $S_A$ the set of all bijective functions from $A$ to itself (normally just called permutations of $A$).  We claim that $S_A$ is a group under function composition.  Note that function composition is associative, as for any $f, g, h \in S_A$ and $a \in A$ we have $[(f \circ g) \circ h](a) = [f \circ g](h(a)) = f(g(h(a))) = f([g \circ h](a)) = [f \circ (g \circ h)](a)$.  There is an identity element given by the map $a \mapsto a$, and for any $f \in S_A$ we have an inverse by mapping $f(a)$ back to $a$: by assumption $f$ is bijective, so there are no problems with multiple things mapping to $f(a)$ under $f$, or with mapping multiple things to $a$ under $f^{-1}$.

In the case $A = \{1, 2, \ldots, n\}$ we generally write $S_n$ in place of $S_A$.  We also generally write these permutations in cycle notation.  A cycle is a parenthesized list of distinct numbers where the first number maps to the second, the second to the third, the third to the fourth, and so on.  The last number in the list maps back to the first number, hence the name “cycle.”  For example, (1 4 5 3 2) means $1 \mapsto 4$, $4 \mapsto 5$, $5 \mapsto 3$, $3 \mapsto 2$ and finally $2 \mapsto 1$.  In the cycle decomposition of a function we list all such cycles.  If we see (1 4 5 3 2)(6 7)(8), for instance, we have the cycle previously discussed, but in addition we have $6 \mapsto 7$ and $7 \mapsto 6$ by the second cycle, and the third cycle tells us $8 \mapsto 8$ (elements that map to themselves are called fixed points).  The convention, though, is not to list any fixed points in the cycle decomposition, with the understanding that any points not appearing in the decomposition map to themselves, so the above permutation would normally be written (1 4 5 3 2)(6 7).  Of course, this permutation is equivalently given by (7 6)(3 2 1 4 5).  By convention, though, we list the cycle containing 1 first, the cycle containing the lowest number not in the first cycle next, and so forth.  In a given cycle we will normally list the least element first.
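These conventions can be sketched as a short routine (my own) that computes the cycle decomposition of a permutation given as a dictionary $i \mapsto f(i)$:

```python
# Compute the cycle decomposition of a permutation given as a dict i -> f(i).
# Fixed points are dropped, each cycle starts at its least element, and cycles
# are listed by their least elements, matching the conventions above.
def cycle_decomposition(perm):
    seen, cycles = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in cycle:         # follow the orbit until it closes up
            cycle.append(i)
            seen.add(i)
            i = perm[i]
        if len(cycle) > 1:            # omit fixed points
            cycles.append(tuple(cycle))
    return cycles

# The permutation (1 4 5 3 2)(6 7) with 8 as a fixed point:
f = {1: 4, 4: 5, 5: 3, 3: 2, 2: 1, 6: 7, 7: 6, 8: 8}
assert cycle_decomposition(f) == [(1, 4, 5, 3, 2), (6, 7)]
```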

Now let $H$ be the set of all permutations in $S_{10}$ that fix the elements 3 and 7.  We will show that $H \leq S_{10}$.  Obviously $H \neq \emptyset$, as the identity fixes every point.  Let $f, g \in H$.  Note that if 3 and 7 are fixed by $g$, they are fixed by $g^{-1}$ as well.  Applying $f$ after $g^{-1}$ still keeps 3 and 7 fixed.  This gives us that $fg^{-1} \in H$.

January 18, 2009

Introductory Algebra and Definitions

Filed under: Abstract Algebra,Algebra — cjohnson @ 10:21 am

If $S$ is a set, a binary operation on $S$ is a function $\circ : S \times S \to S$, where we generally write $s \circ t$ for $\circ(s, t)$.  We say that this binary operation is associative if for every $s,t,u \in S$ we have that $(s \circ t) \circ u = s \circ (t \circ u)$.  A pair $(S, \circ)$ where $S$ is a set and $\circ$ an associative binary operation on $S$ is then called a semigroup.  For example, $(\mathbb{N}, +)$ is a semigroup.

If $(S, \circ)$ is a semigroup and there exists an $e \in S$ such that for every $s \in S$ we have that $s \circ e = e \circ s = s$, then we say that $e$ is an identity (of $\circ$) and we call $(S, \circ)$ a monoid.  The natural numbers together with zero, which we’ll denote $\mathbb{N}_0$, is then a monoid under addition.

Generally we’ll let juxtaposition denote the binary operation, writing $st$ for $s \circ t$.  In cases where $+$ is the “natural” symbol to use for the binary operation, however, (e.g., the natural numbers with addition), then we’ll still use $+$.

When $s,t \in S$ and $st = ts = e$ (where $e$ still denotes the identity), we say that $s$ is the inverse of $t$ (and vice versa).  We will normally write $s^{-1}$ to mean the inverse of $s$, though when $+$ is the operation we’ll write $-s$ and write $t - s$ to mean $t + (-s)$.  A monoid where every element has an inverse is called a group.  The integers under addition form the group $(\mathbb{Z}, +)$.  To check this, simply note that addition of integers is associative, addition of two integers yields another integer, zero is the identity element, and for any $z \in \mathbb{Z}$ we have that $-z$ is its inverse.
