Wednesday, March 31, 2010

New LaTeX Formatting

I have hopefully found a new (and better!) way to format math and equations in my blog using $\LaTeX$. I'm going to try to integrate this as seamlessly as possible, but I can't really imagine it happening without some formatting hiccups, so please bear with me. To demonstrate my newfound power I'm going to write out one of my old examples.
Example: Homomorphism
Choose $n \in \mathbf{N}$ and let $\phi:\mathbf{Z}\to \mathbf{Z}$ be given by $\phi(x)=x\; \text{mod}\; n$. Now, suppose $x,y \in \mathbf{Z}$. We can write $x = h n + p$ and $y = k n + q$ where $p,q \in \mathbf{Z}_n$ and $h,k \in \mathbf{Z}$. Then
\[ \phi(x + y) = x + y\; \text{mod}\; n = (h n + p) + (k n + q)\; \text{mod}\; n = p + q\; \text{mod}\; n \]
and also
\[ \phi(x) + \phi(y) = (h n + p\; \text{mod}\; n) + (k n + q\; \text{mod}\; n) = p + q\; \text{mod}\; n \]
so $\phi(x + y) = \phi(x) + \phi(y)$. Thus $\phi$ is operation preserving and is a homomorphism.
The image of $\phi$ should be fairly obvious. Indeed, $\mathbf{Z}_n \subseteq \mathbf{Z}$ and if $x \in \mathbf{Z}_n$ then $\phi(x) = x$, giving that $\mathbf{Z}_n \subseteq \text{Im}(\phi)$; but by the definition of reduction mod $n$, nothing in $\text{Im}(\phi)$ can lie outside of $\mathbf{Z}_n$, and so $\text{Im}(\phi) = \mathbf{Z}_n$. The kernel of $\phi$, however, is a little more interesting. Suppose that $x \in \text{Ker}(\phi)$. We can write $x = h n + p$ as before, but then $\phi(x) = p = 0$, so $x = h n$, giving that the elements in the kernel of $\phi$ are the multiples of $n$, or $\text{Ker}(\phi) = n \mathbf{Z} = \{k n : k \in \mathbf{Z}\}$.
If you're wondering what's going on, I added some javascript (that I found thanks to WatchMath) that renders standard LaTeX into the images you see above. If you'd like to learn how to do it (it's very easy), you can read about it in this journal entry on my personal blog.

Tuesday, March 30, 2010

Dr. Perelman and an Introduction to Topology

I know I promised an entry on group actions today, and we'll get to that soon enough, but I think for a couple of posts I'm going to go in a slightly different direction - partially because group actions are hard to explain, partially because I'm getting a little bored with writing about group theory, and mostly because opportunity has struck in the form of Dr. Perelman's idiocy.
Grigori Perelman is a Russian topologist (from what I can tell) and quite the eccentric, to say the least. In 2002 he solved a very old and very difficult math problem called the Poincaré Conjecture, which happens to be one of the seven Millennium Problems. In 2000 the Clay Mathematics Institute established a $1 million award for a solution to any of seven of the most difficult and important problems that mathematicians were facing at the time, which it called the seven Millennium Problems. Then, two years later, Perelman presented a solution to the Poincaré Conjecture, which has taken until now to verify. As promised, the Clay Institute awarded Perelman the $1 million prize. Also, in 2006 he was awarded the Fields Medal, which is the most prestigious award offered to mathematicians and comes with a $15,000 prize.
Now, the most interesting part of these accomplishments is that Perelman refused both the $15,000 Fields Medal award and the $1 million Millennium Prize. He says it's because he hates the intellectual and moral shortcomings of the mathematical community and does not want to be seen as its figurehead. Since his proof in 2002, Perelman taught briefly at American universities, but has ultimately given up math completely. He now lives in a flat with his mother and enjoys playing table tennis against the wall in his basement. When a journalist asked him to comment he replied, "You are disturbing me. I am picking mushrooms."
The problem that Perelman solved, the Poincaré Conjecture, is extremely difficult but surprisingly accessible. Suppose you stretch a rubber band around the circumference of a sphere - or really around the sphere in any way. You can then slowly and carefully (the math terms are continuously and uniformly) slide the rubber band along the surface of the sphere so that it shrinks down to a single point. Now, suppose that we try to do the same thing with a doughnut (mathematically, a torus) instead of a sphere. If we imagine that the rubber band has been stretched around the doughnut in an appropriate way - say, through the hole - then the rubber band cannot be shrunk to a point without either tearing the rubber band or breaking the doughnut. In topological terms, the sphere is "simply connected" and the doughnut is not. The surface of the sphere is the two-dimensional case, and the Poincaré Conjecture, in essence, asked whether simple-connectedness characterizes the three-dimensional sphere in the same way - a space that is hard to picture because it naturally sits in four-dimensional space.
This brings us to a nice way of introducing topology. The difference between a doughnut and a sphere is more than simple-connectedness. If you imagine that a sphere were made of clay, then there is no way to deform that sphere into a doughnut without poking a hole in the clay. However, if you had a clay coffee cup, you can imagine how it is possible to deform the coffee cup into a doughnut because the coffee cup already has a hole. If you're having trouble visualizing that, Wikipedia has a nice animation of this transformation. In topological terms, we call the coffee cup and the doughnut homeomorphic (note that the words homomorphism and homeomorphism are not the same). Topology, from my point of view, is basically an in-depth study of what it means, and how to tell, when two things are homeomorphic. That being said, though, if you took a first semester in topology or read an introductory topology text, you would find yourself pretty far removed from that idea of homeomorphism and, in fact, the definition of a topological space does not really resemble anything I've mentioned here.
Topology is extremely interesting and it often forms a gateway to other ideas in other branches of mathematics. I think I might start to work through some topology during breaks in the group theory, even though topology does not help me toward my character theory goals. Topology will most likely appear in a more limited capacity, since I do not understand it as well as I would like, but sometimes I feel like the barrage of group theory has been pretty intense lately and topology might provide a nice break from it at times.

Friday, March 19, 2010

More About Homomorphisms

Before we talk about the first group isomorphism theorem, I suppose a few examples of homomorphisms are in order. They are often very similar to isomorphisms, as is to be expected, but pay special attention to their kernels and images.
Example:
Choose n∈N and let φ:Z→Z be given by φ(x) = x mod n ∀x∈Z. Now, suppose x,y∈Z. We can write x = hn + p and y = kn + q where p,q∈Zₙ and h,k∈Z. Then
φ(x + y) = x + y mod n = (hn + p) + (kn + q) mod n = p + q mod n
and also
φ(x) + φ(y) = (hn + p mod n) + (kn + q mod n) = p + q mod n
so φ(x+y) = φ(x)+φ(y). Thus φ is operation preserving and is a homomorphism.
The image of φ should be fairly obvious. Indeed, Zₙ⊆Z, and if x∈Zₙ then φ(x) = x, giving that Zₙ⊆Imφ; but by the definition of reduction mod n, nothing in Imφ can lie outside of Zₙ, and so Imφ = Zₙ. The kernel of φ, however, is a little more interesting. Suppose that x∈Kerφ. We can write x = hn + p as before, but then φ(x) = p = 0, so x = hn, giving that the elements in the kernel of φ are the multiples of n, or Kerφ = nZ = {kn : k∈Z}.
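If you'd like to see this concretely, here is a small Python sketch of my own (not part of the original example) that checks the operation-preserving property and computes the image and kernel of φ over a range of integers, with n = 5 chosen arbitrarily.

# A quick numerical sanity check of the example above, with n = 5 chosen arbitrarily.
n = 5

def phi(x):
    return x % n

# Operation preserving: phi(x + y) equals phi(x) + phi(y) computed inside Z_n.
assert all(phi(x + y) == (phi(x) + phi(y)) % n
           for x in range(-60, 60) for y in range(-60, 60))

# Image: phi only takes values in Z_n = {0, 1, ..., n-1}, and it hits all of them.
assert {phi(x) for x in range(-60, 60)} == set(range(n))

# Kernel: the integers sent to 0 are exactly the multiples of n.
assert [x for x in range(-60, 60) if phi(x) == 0] == list(range(-60, 60, n))
print("phi is operation preserving with image Z_n and kernel nZ")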
Example:
Let G be any group and let N be a normal subgroup of G. Then we can form the factor group, G/N. Let σ:G→G/N be defined by σ(g) = gN ∀g∈G. Choose a,b∈G and note that
σ(ab) = abN = aNbN = σ(a)σ(b)
so σ is operation preserving and thus a homomorphism.
We see that σ is surjective (if K∈G/N, simply choose k∈K and then σ(k) = K), so Imσ = G/N. The kernel of σ is also not hard to see. Since σ sends g to gN, the only elements of G that map to the coset N are the elements inside of N, giving that Kerσ = N.
This particular homomorphism crops up quite a bit and as such is given its own special name.
Definition: Canonical Homomorphism
Let G be a group and N a normal subgroup of G. The map σ:G→G/N defined by σ(g) = gN ∀g∈G is called the canonical homomorphism.
Example:
You might recall that the determinant is a homomorphism. Let G = GL(n,R) and let θ:G→R* be defined by θ(A) = det(A) ∀A∈G. (R* is the multiplicative group of real numbers without zero.) You should recall from linear algebra that det(AB) = det(A)det(B) when A and B are square matrices of the same size. Now, choose A,B∈G and observe that
θ(AB) = det(AB) = det(A)det(B) = θ(A)θ(B),
so θ is a homomorphism.
It is not hard to see that θ is surjective. Choose α∈R* and let A be the diagonal matrix whose first diagonal entry is α and whose remaining diagonal entries are all 1. Then θ(A) = det(A) = α. Thus θ is surjective and Imθ = R*. The kernel of θ is a little more complicated. Kerθ consists of all the matrices with determinant equal to one. There is no nice, explicit form for these matrices, but they are used quite a lot and as such this group has its own name.
Definition: Special Linear Group
Let n be a positive integer and define the set SL(n,R) = {A∈GL(n,R) : det(A) = 1}. Then SL(n,R) forms a subgroup of GL(n,R) and is called the special linear group.
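Here is a short numerical illustration of this example in Python (assuming numpy is available); it checks the multiplicative property of the determinant on random matrices, the surjectivity construction from the example, and one easy way of producing an element of SL(n,R).

# Numerical illustration of the determinant homomorphism (assumes numpy is installed).
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Operation preserving: det(AB) = det(A)det(B), up to floating-point error.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Surjectivity: diag(alpha, 1, ..., 1) has determinant alpha for any nonzero alpha.
for alpha in (7.3, -2.5):
    D = np.eye(n)
    D[0, 0] = alpha
    assert np.isclose(np.linalg.det(D), alpha)

# Landing in the kernel: dividing one row of A by det(A) gives a matrix with
# determinant 1, i.e., an element of SL(n, R).  (A random Gaussian matrix is
# invertible with probability 1, so det(A) is not zero here.)
S = A.copy()
S[0] /= np.linalg.det(A)
assert np.isclose(np.linalg.det(S), 1.0)

print("determinant homomorphism checks pass")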
The first group isomorphism theorem gives a relationship between isomorphisms, homomorphisms, kernels, and images. It is an interesting way to find an isomorphism between often unrelated groups.
Theorem: First Group Isomorphism Theorem
Let φ:G→G* be a group homomorphism. Then G/Kerφ ≈ Imφ.
Proof:
Let φ:G→G* be any group homomorphism, let K = Kerφ, and define θ:G/K→Imφ by θ(gK) = φ(g) ∀gK∈G/K. Also, let e denote the identity element of G*.
First we must be sure that this is actually a function - that is, that the mapping of θ is independent of the choice of coset representative. Choose H∈G/K and then choose g,h∈H. Since g,h∈H, we have H = hK, so g∈hK and ∃k∈K such that g = hk. Now observe that
θ(gK) = φ(g) = φ(hk) = φ(h)φ(k) = φ(h)e = φ(h) = θ(hK)
so finally θ(gK) = θ(hK) and θ is independent of the choice of coset representative, therefore making it a well-defined function.
To show that θ is surjective, choose q∈Imφ. Then clearly ∃p∈G such that φ(p) = q. Finally, θ(pK) = φ(p) = q giving that θ is surjective.
To show that θ is injective, it suffices to show that Kerθ is trivial, or that it contains only the identity of G/K. If we recall, this identity is simply K. Choose gK∈Kerθ. Observe that
e = θ(gK) = φ(g)
which means that g∈K and gK = K. Finally, Kerθ = {K} and θ is injective.
Choose aK,bK∈G/K and observe that
θ(aKbK) = θ(abK) = φ(ab) = φ(a)φ(b) = θ(aK)θ(bK)
so that θ is operation preserving.
Finally, we see that θ is an isomorphism and that G/Kerφ ≈ Imφ.
This is a pretty cool theorem and from our examples we arrive at a couple of interesting results. First, Z/nZ ≈ Zₙ, which can actually be shown without the use of the first isomorphism theorem. The second example gives us that for N⊳G, G/N ≈ G/N, which is obvious. The first isomorphism theorem, however, does give a very interesting result from the third example that we saw - that is that GL(n,R)/SL(n,R) ≈ R*, which is pretty cool and rather unexpected.
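To watch the theorem do its work on a completely finite example, here is a small Python sketch of my own (the groups Z₁₂ and Z₄ are my choice, not ones from the post): take φ:Z₁₂→Z₄ defined by φ(x) = x mod 4, whose kernel is {0,4,8}, and check that θ(gK) = φ(g) is well defined, bijective onto Imφ, and operation preserving.

# First isomorphism theorem on a finite example: G = Z_12, phi(x) = x mod 4.
G = range(12)

def phi(x):
    return x % 4

K = frozenset(g for g in G if phi(g) == 0)          # kernel = {0, 4, 8}

def coset(a):
    return frozenset((a + k) % 12 for k in K)       # the coset a + K

cosets = {coset(a) for a in G}                      # the factor group G/K

# theta(gK) = phi(g) must not depend on which representative of gK we pick.
assert all(len({phi(g) for g in C}) == 1 for C in cosets)

# theta is a bijection from G/K onto Im(phi) = Z_4 ...
theta = {C: phi(min(C)) for C in cosets}
assert sorted(theta.values()) == [0, 1, 2, 3]

# ... and it is operation preserving: theta(aK + bK) = theta(aK) + theta(bK) in Z_4.
assert all(theta[coset(a + b)] == (theta[coset(a)] + theta[coset(b)]) % 4
           for a in G for b in G)
print("Z_12/Ker(phi) is isomorphic to Z_4, just as the theorem promises")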
There is one last thing that I should mention about homomorphisms for the time being, and that is that they can be used as a sort of subgroup test. From the way I introduced and defined SL(n,R), it is very obvious that it's a normal subgroup of GL(n,R). However, suppose I'd defined SL(n,R) immediately after defining GL(n,R) and asked you to prove that SL(n,R)⊳GL(n,R) (instead of inventing the special linear group from the kernel of the determinant homomorphism like I did above). It can be done using the normal subgroup test, but it's a giant pain with tons of pointless symbol chasing. The easiest way to prove it is to find a homomorphism (namely the determinant homomorphism) for which GL(n,R) is the domain and SL(n,R) is the kernel, which automatically gives that SL(n,R)⊳GL(n,R) since all kernels are normal. This is a very common technique for finding normal subgroups.
In the next post, I'm going to be taking a detour from homomorphisms and subgroups and talk about group actions.
References
Previous Related Post: Homomorphisms
Text Reference: Gallian, Chapter 10
The Unapologetic Mathematician: The First Isomorphism Theorem

Friday, March 12, 2010

Homomorphisms

Group homomorphisms are very closely related to group isomorphisms. In fact, it turns out that every isomorphism is a homomorphism and as such, homomorphisms can be viewed as a generalization of isomorphisms. These very important functions are fundamental in the world of algebra and are one of the most important tools that we have. As we'll see in this and in upcoming posts, the application of homomorphisms to various groups gives a lot of useful results.
Definition: Group Homomorphism
A function, φ, from a group G to a group G* is a group homomorphism if it preserves the group operation - that is if ∀a,b∈G, φ(ab) = φ(a)φ(b).
A homomorphism is a function that has the operation-preserving property that we described for isomorphisms, without regard to bijectiveness. Of course, homomorphisms can be bijective, explaining why each isomorphism is a homomorphism. This means that it is easier for a function to be a homomorphism than an isomorphism, but it also means that homomorphisms have more interesting properties. Isomorphisms are used to identify groups as isomorphic, but not for much else, whereas homomorphisms provide us with a lot of useful information, mostly as a result of the following sets.
Definition: Kernel
The kernel of a homomorphism, φ:G→G*, is the set {g∈G : φ(g)=e} and it is denoted by Ker(φ) or Kerφ.
Definition: Image
The image of a homomorphism, φ:G→G*, is the set {φ(x) : x∈G} and is denoted by Im(φ) or Imφ.
The first thing to notice here is that if φ:G→G* is a homomorphism, then Kerφ⊆G and Imφ⊆G*. The kernel is all of the things in G that are mapped to the identity, and the image is all of the things in G* that can be attained by a mapping of φ. It is important to note what these sets look like in the case that φ is actually an isomorphism: in this case, Kerφ = {e} and Imφ = G*. This gives another convenient way to check whether a function is an isomorphism. Instead of proving that φ is bijective, it suffices to show that Kerφ = {e} and Imφ = G* (and, of course, that φ is operation preserving). However, we get much more than that. It turns out that both the image and the kernel are subgroups.
Theorem: Kernels are Subgroups
Let φ:G→G* be a group homomorphism. Then Kerφ is a subgroup of G.
Proof:
Let φ:G→G* be a group homomorphism and choose a,b∈Kerφ. We then have that φ(ab⁻¹) = φ(a)φ(b)⁻¹ = e∙e⁻¹ = e and thus ab⁻¹∈Kerφ. Therefore, by the one-step subgroup test, Kerφ is a subgroup of G.
Theorem: Images are Subgroups
Let φ:G→G* be a group homomorphism. Then Imφ is a subgroup of G*.
Proof:
Let φ:G→G* be a group homomorphism and choose x,y∈Imφ. Then ∃a,b∈G such that φ(a) = x and φ(b) = y. Now, φ(ab⁻¹) = φ(a)φ(b)⁻¹ = xy⁻¹ and thus xy⁻¹∈Imφ. Therefore, by the one-step subgroup test, Imφ is a subgroup of G*.
In those proofs I used a fact that I have not yet proved - specifically that φ(b⁻¹) = φ(b)⁻¹. This is true, though, and is an easy thing to prove. The fact that these two sets are subgroups is important and very convenient, but we get even more than that.
Theorem: Kernels are Normal
Let φ:G→G* be a group homomorphism. Then Kerφ⊳G.
Proof:
Let φ:G→G* be a group homomorphism and choose g∈G and n∈Kerφ. Then φ(gng⁻¹) = φ(g)φ(n)φ(g)⁻¹ = φ(g)eφ(g)⁻¹ = φ(g)φ(g)⁻¹ = e, thus giving that gng⁻¹∈Kerφ. Since the choice of n∈Kerφ was arbitrary, gKerφg⁻¹⊆Kerφ ∀g∈G. Thus, by the normal subgroup test, Kerφ⊳G.
It turns out that images need not be normal subgroups of G*. This is not all that surprising when you really think about what an image is in relation to a codomain. However, the fact that kernels are normal subgroups is a surprisingly wonderful fact and we get a lot of mileage out of it. In the next post, we'll see a very useful result called the first group isomorphism theorem that uses the normality of kernels in a very natural way.
References
Previous Related Post: Properties of Isomorphisms
Text Reference: Gallian, Chapter 10
Wolfram Mathworld: Group Homomorphism
Planet Math: Group Homomorphism

Wednesday, March 10, 2010

A Fork in the Road

This is the point where we have enough background to go in a lot of different directions. There is a lot more to be learned about groups. From here we could move on to group actions, Sylow theory, and the various different ways to combine groups and make new groups. However, we have also learned a sufficient amount of group theory to move on to the theory of integral domains, rings, and then fields, which are all new structures that are similar to groups but with more rules. This would be the direction that you'd want to go if you were interested in studying polynomials and algebraic topology. A third option would be to work on linear algebra, which is the study of vector spaces, matrices, and linear functions.
My personal area of research (at this point) is in character theory, which is a very cool and useful way of looking at groups and extracting their information. Character theory, at least from what I can tell so far, is a tool that mathematicians developed to keep getting at groups once the well of elementary group-theoretic techniques started to run dry. I think I'm going to try to move in a direction that gets us toward understanding character theory. The problem with that, though, is that the path to character theory is nowhere near a straight line. When I learned it myself it was a frustrating process of patching together bits and pieces of a lot of different topics, and it is my goal to keep you from that experience. The problem is, though, that even though I'm going to try to keep you away from the frustrating process, it's still going to take the combination of a couple of different branches of mathematics. First, we need a significant amount of group theory - including homomorphisms, group actions, conjugacy classes, and simple groups. Then we need to learn something called representation theory, which is the study of functions that take arbitrary groups into groups of matrices. That means that before we can learn representation theory we need to understand these matrix groups, which requires some linear algebra.
So, as you can see, there is quite a lot of work ahead of us before we get to character theory. And on top of that, I did not choose this path because it's better or more important than any of the other topics - I only chose it because it's what I'm familiar with, so even once we get to character theory there will still be quite a bit beyond it. For now, though, I'm going to continue pushing along with group theory.

Saturday, March 6, 2010

Lagrange's Theorem

A big part of the hard work and the research in group theory is in finding specific types of subgroups inside of finite groups. If you have an arbitrary finite group of order n, then the number of distinct subsets is 2ⁿ. For reference, consider |G| = 9. Then the number of subsets of G is 2⁹ = 512, and each of these is a possible subgroup of G. As you can see, this gets very large very quickly. Lagrange's Theorem, however, gives us a way to drastically cut down the number of possible subgroups of any group. You should read the proof of Lagrange's Theorem, because it makes use of some of the things we've already learned.
Theorem: Lagrange's Theorem
If G is a finite group and H is a subgroup of G, then |H| divides |G|.
Proof:
Let G be a finite group and let H be a subgroup of G. Now, consider the set of all left cosets of H in G, {gH : g∈G}. Note that this set does not necessarily form a factor group because H need not be normal, but we still have the cosets. Now choose one representative from each coset so that {g₁H, g₂H, ..., gₖH} is the set of distinct left cosets. Then we have that each element of G is a member of one of these cosets, or that
G = g₁H∪g₂H∪⋯∪gₖH.
We know that this union is disjoint - that is, if i≠j then gᵢH∩gⱼH = ∅. From this we get that
|G| = |g₁H|+|g₂H|+⋯+|gₖH|.
Also, we know that every coset contains the same number of elements, or |gᵢH| = |H|. Finally, this gives us that |G| = k|H|, so then k = |G|/|H| and |H| divides |G| as desired.
Lagrange's Theorem gives us that the order of every single subgroup must be a divisor of the order of the original group. What this does is immediately cut down the number of possible subgroups of any finite group. Earlier I mentioned that if |G| = 9 then there are 512 distinct subsets of G. Lagrange's Theorem, though, tells us that of these 512 choices, the only subsets that could possibly be proper, nontrivial subgroups are the ones of order 3, which cuts the choices down to only 84. This might not seem like that big of a deal at this point, but Lagrange's Theorem is definitely one of the most important and most useful theorems in all of group theory.
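If you don't believe the count, here is a brute-force Python sketch of my own that runs through every subset of Z₉ (under addition mod 9), keeps the ones that are actually subgroups, and confirms that their orders all divide 9.

# Brute-force check of Lagrange's Theorem for G = Z_9 under addition mod 9.
from itertools import combinations

n = 9
G = range(n)

def is_subgroup(S):
    # A subset is a subgroup if it contains 0 and is closed under addition
    # mod 9 and under negation mod 9.
    S = set(S)
    return (0 in S
            and all((a + b) % n in S for a in S for b in S)
            and all((-a) % n in S for a in S))

subgroups = [set(S) for r in range(1, n + 1)
             for S in combinations(G, r) if is_subgroup(S)]

# Every subgroup's order divides |G| = 9; here the orders are exactly 1, 3, and 9.
assert all(n % len(H) == 0 for H in subgroups)
print("subgroup orders found:", sorted(len(H) for H in subgroups))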
I must make a point to emphasise what it is that Lagrange's Theorem does not say. It is easy to think that if k is some divisor of the order of a group G, then G has some subgroup of order k. This is not at all the case. Lagrange's Theorem only gives us a list of possible subgroup orders and a criterion for when a subset cannot be a subgroup; it is in no way a subgroup test. It would be convenient if the converse of Lagrange's Theorem were true, but it is not. In fact, there are lots of groups that fail the converse. For example, A₄ has order 12 but has no subgroup of order 6.
I've written a lot of group theory proofs and I think about 90% of them use Lagrange's Theorem in one form or another - there are definitely a lot of places where we'd get stuck without it. For now, though, I'm going to give some of the immediate results of Lagrange's Theorem without proving them.
Theorem: |G/N|
Let G be a group and N⊳G. Then |G/N| = |G|/|N|.
Theorem: |a| divides |G|
In a finite group, the order of each element divides the order of the group.
Theorem: Prime Order Groups
Any group of a prime order is cyclic.
Theorem: a^|G| = e
If G is a finite group and a∈G then a^|G| = e.
Theorem: Fermat's Little Theorem
Let a be an integer and p be any prime number. Then aᵖ mod p = a mod p.
Theorem: Groups of order 2p
Let G be any finite group such that |G| = 2p where p is a prime number greater than 2. Then either G ≈ Z₂ₚ or G ≈ Dₚ.
A few of these things seem to be very far away from Lagrange's Theorem, and maybe they are, but I promise that they're all direct consequences. I didn't give the proofs because some of them are quite complicated, but I can write them up if someone needs them. There are a lot of proofs that start with "Suppose H is a subgroup with the property that..." and end with "...but then the order of H does not divide the order of G, contradicting Lagrange's Theorem." It is a very useful, very powerful tool.
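Two of these corollaries are easy to poke at numerically. The little Python sketch below checks Fermat's Little Theorem directly, and checks a^|G| = e inside the multiplicative groups of units mod n (a family of finite groups I'm pulling in just for this check; it isn't something we've defined in these posts).

# Numerical spot checks of two corollaries of Lagrange's Theorem.
from math import gcd

# Fermat's Little Theorem: a^p mod p = a mod p for every prime p.
for p in (2, 3, 5, 7, 11, 13):
    for a in range(-30, 31):
        assert pow(a, p, p) == a % p

# a^|G| = e, checked in U(n) = {a : gcd(a, n) = 1}, the group of units mod n.
for n in range(2, 40):
    U = [a for a in range(1, n) if gcd(a, n) == 1]
    assert all(pow(a, len(U), n) == 1 for a in U)

print("Fermat's Little Theorem and a^|G| = e check out on these examples")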
References
Previous Related Post: Examples of Factor Groups
Text Reference: Gallian, Chapter 7
Wolfram Mathworld: Lagrange's Theorem

Examples of Factor Groups

Today we're going to beat to death an example of a factor group. One of the most common (albeit boring) examples of a factor group is a factor group of a cyclic group. I'm going to use the cyclic group of order 6, Z₆. Note that N = {0,3} is a subgroup of Z₆. As we will see later, N is automatically normal in Z₆ because Z₆ is abelian. As we learned last time, Z₆/N forms a group since N⊳Z₆, and it is this group that we're going to inspect. When I defined a factor group, I did it all in terms of a multiplicative group. Z₆ is an additive group, though, so it doesn't really make any sense to use multiplicative notation. Instead of writing cosets as aN, I'm going to be writing them as a+N.
By definition we have that Z₆/N = {k+N : k∈Z₆}. Normally it is sufficient just to think of these abstractly, but since Z₆ is finite and we are working out an example, we can write all of them out. That is, Z₆/N = {0+N, 1+N, 2+N, 3+N, 4+N, 5+N}. However, not all of these cosets are distinct. For instance, 1+N = {1+0, 1+3} = {1,4} and 4+N = {4+0, 4+3} = {4,1}, so we see that 1+N = 4+N. However, using one of the first theorems we learned about cosets, we can know when two cosets are equal without actually calculating them. That is, h+N = k+N if and only if h-k∈N. We see that 5-2 = 3∈N, so then 5+N = 2+N. Similarly, 0+N = 3+N. So then we get that Z₆/N = {0+N, 1+N, 2+N}. Let's look at one example calculation, namely (1+N)+(2+N). The group operation gives us that (1+N)+(2+N) = (1+2)+N = 3+N. This is a perfectly fine calculation, but it would be nicer if we could get our answer as one of the cosets in {0+N, 1+N, 2+N}. As we already noted, 0+N = 3+N, so we can also write (1+N)+(2+N) = 0+N. Through similar calculations we can work out the Cayley Table of Z₆/N.
[Image: Cayley Table of the factor group Z₆/N]
There are some pretty interesting things about the Cayley Table of the factor group relative to the Cayley Table of the original group. Below is a modified Cayley Table for Z₆. You'll notice that the elements along the left side and the top are in a strange order and that I've separated certain blocks of elements with black lines.
[Image: Cayley Table of Z₆ with rows and columns grouped by coset]
What is interesting about the particular groupings in this table is that it shows how a factor group is related to the group itself. What I did was arrange the headings along the top and along the side so that elements in the same coset are adjacent to one another. Then, by blocking off the table as above, what we've basically done is create the Cayley Table for the factor group within the Cayley Table for the original group. If you choose a coset along the side and the top, the corresponding block in the table is the corresponding coset according to the operation in the factor group. Another thing to notice about this particular factor group is that it shows pretty clearly that Z₆/N is isomorphic to Z₃. Indeed, if you take the Cayley Table that we gave for Z₆/N and remove "+N" from every entry, you get precisely the Cayley Table of Z₃. For rigor, if φ:Z₆/N→Z₃ is defined by φ(k+N) = k mod 3 ∀k+N∈Z₆/N, then you can check that φ is the desired isomorphism. Interestingly enough, if n is a positive integer and k is a positive divisor of n, then Zₙ/<k> ≈ Zₖ.
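If you want to reproduce the Cayley Table above without doing the arithmetic by hand, here is a short Python sketch of my own that recomputes the factor group Z₆/N with N = {0,3} and prints its table; stripping the "+N" from every entry leaves exactly the table of Z₃.

# Recomputing the factor group Z_6/N for N = {0, 3} and printing its Cayley Table.
N = frozenset({0, 3})

def coset(a):
    return frozenset((a + x) % 6 for x in N)

cosets = sorted({coset(a) for a in range(6)}, key=min)   # 0+N, 1+N, 2+N

def add(C, D):
    # Coset addition: add any representatives and take the resulting coset.
    return coset(min(C) + min(D))

for C in cosets:
    row = [f"{min(add(C, D))}+N" for D in cosets]
    print(f"{min(C)}+N :", row)

# The printed table is the Cayley Table of Z_3 with "+N" appended to each entry,
# which is exactly the isomorphism Z_6/N ≈ Z_3 described above.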
I'm going to give one more example of a factor group. Consider D₄ and β = {e, R₁₈₀}. It is easy to verify that β is a subgroup of D₄, but in fact β is a normal subgroup of D₄. Thus, D₄/β forms a factor group. The Cayley Table for this factor group is given below.
[Image: Cayley Table of the factor group D₄/β]
It might be useful to work your way through the calculations for that Cayley Table yourself, for practice. When you do that, keep in mind that eβ = R₁₈₀β, R₉₀β = R₂₇₀β, Fₕβ = Fᵥβ, and Fₗβ = Fᵣβ (which are also some calculations you can do yourself). The most interesting thing that can be extracted from this Cayley Table is that D₄/β is abelian, which is surprising since D₄ is non-abelian. It is not the case, however, that all factor groups are abelian. (If you'd like an example of a non-abelian factor group, D₆/<σ³> ≈ S₃, which is non-abelian, but that's a pretty complicated example.)
I hope that by now we've gained some understanding of factor groups, but these normal subgroups might still be a little bit of a mystery. You'll notice that in both of the examples that I gave you, I didn't really do anything to convince you that the subgroups were normal and at this point, if I had wanted to prove it then all I could have done is calculated all of the right and left cosets and shown their equality. This is okay, but is only efficient for very small groups and is completely ineffective for arbitrary groups. Because of this, we've developed a convenient criterion to guarantee normality. I'm not sure that I've introduced the notation aHb yet, but if H⊆G and a,b∈G, then aHb = {ahb : h∈H} (which is very similar to the notation of cosets).
Theorem: Normal Subgroup Test
Let G be a group and N a subgroup of G. N⊳G if and only if ∀x∈G, xNx⁻¹⊆N.
Proof:
Suppose N⊳G. Then ∀x∈G and ∀n∈N, ∃n'∈N such that xn = n'x, since xN = Nx. Thus xnx⁻¹ = n'∈N and therefore xNx⁻¹⊆N.
Now suppose that ∀x∈G, xNx⁻¹⊆N. Choose g∈G. Letting x = g we get gNg⁻¹⊆N, or gN⊆Ng. Similarly, letting x = g⁻¹ we get g⁻¹N(g⁻¹)⁻¹⊆N, or g⁻¹Ng⊆N, so Ng⊆gN and finally gN = Ng. Thus N⊳G.
This normal subgroup test is the most useful theorem that we have for showing that an arbitrary subgroup is normal. For example, it just so happens that the center of a group is always a normal subgroup, and this is very easy to prove with the normal subgroup test. Also, as it turns out, every subgroup of an abelian group is normal. The reason is easy: if H is a subgroup of an abelian group G, then for g∈G and h∈H, gh = hg, which gives coset equality.
Theorem: Normal Subgroups of Abelian Groups
Every subgroup of an abelian group is a normal subgroup.
Proof:
Let G be an abelian group and let H be a subgroup of G. Choose g∈G and h∈H. Since G is abelian we get that ghg⁻¹ = hgg⁻¹ = he = h. Thus ghg⁻¹∈H ∀h∈H, which gives that gHg⁻¹⊆H ∀g∈G, and H⊳G by the normal subgroup test.
Next time we'll look at Lagrange's Theorem - a very important and useful result in group theory.
References
Previous Related Post: Factor Groups
Text Reference: Gallian, Chapter 9
Wikipedia: Quotient Group
Wolfram Mathworld: Quotient Group
Planet Math: Quotient Group

More Notation

I'm going to interrupt our regularly-scheduled discussion of factor groups to introduce a few notational conventions I've been avoiding that I now think are going to be necessary. First, I've been using Z(n) to denote the cyclic group of order n, but the standard notation is Zₙ, so that's what I'll be using from now on. (The answer to your next question is that I have no idea why I didn't just introduce it as Zₙ to begin with.) Also, in multiplicative groups, I'm going to start leaving out the ∙ most of the time. That is, instead of a∙b I will usually just write ab from now on. Along those same lines, I'm going to start leaving out a lot of parentheses - especially in cosets. Where I would have written (aN)∙(bN) = (a∙b)N I will generally write aNbN = abN in the future. These conventions are not just my laziness - I do it that way because that's the way everyone else does it. I will admit, though, that the reason these things have become convention - and the reason behind most mathematical convention - is that mathematicians before me were lazy.

Friday, March 5, 2010

Factor Groups

Last time we learned about cosets and at the end I alluded to an extremely important group called a factor group. I mentioned that if H is a subgroup of a group, G, then the set of all left cosets of H in G is a group only when H has some mysterious property. The condition that we require is that H is normal, and this condition of normality is invented precisely to allow this set of cosets to form a group. I not only want to convince you that this factor group exists, but also that this normality condition is required. I'm going to start by defining and discussing normal subgroups.
Definition: Normal Subgroup
Let G be a group and N be a subgroup of G. Then N is a normal subgroup of G if aN = Na ∀a∈G and is written N⊳G.
Basically, a subgroup is normal when its left cosets are the same as its right cosets. I know that it seems like a strange condition, but it turns out to work nicely. Now, we first should note what this is not saying. The condition that aN = Na does not say anything about commutativity. New students often think that this means that for n∈N, a∙n = n∙a, or something like that - it says nothing about the commutativity of elements - it only says that the sets are equal. I'm not going to work out an entire example, but I do want to show how the cosets can be equal without commutativity. Consider S₄ and N = {ε, (12)(34), (13)(24), (14)(23)}. I'm going to compute (123)N and N(123). Observe that (123)(12)(34) = (134) = (14)(23)(123), (123)(13)(24) = (243) = (12)(34)(123), and (123)(14)(23) = (142) = (13)(24)(123). Because of this, we then get that (123)N = N(123), even though none of the elements of N (except for ε) commute with (123). We haven't yet proven that N is normal - that would require showing that πN = Nπ ∀π∈S₄. However, it does turn out to be true that N is normal in S₄. We are now prepared to make factor groups.
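If you'd like to check this computation without trusting my cycle arithmetic, here is a small Python sketch of my own that represents permutations of four symbols as tuples and verifies that (123)N = N(123) as sets while (123) commutes with none of the non-identity elements of N. (Which product corresponds to "apply (123) first" depends on your composition convention, but the coset equality doesn't care.)

# Verifying (123)N = N(123) in S_4, with permutations stored as tuples p
# where p[i] is the image of i (labels 0,1,2,3 standing in for 1,2,3,4).
def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
N = {e,
     (1, 0, 3, 2),   # (12)(34)
     (2, 3, 0, 1),   # (13)(24)
     (3, 2, 1, 0)}   # (14)(23)
a = (1, 2, 0, 3)     # the 3-cycle (123)

left = {compose(a, x) for x in N}
right = {compose(x, a) for x in N}

assert left == right                                             # (123)N = N(123)
assert all(compose(a, x) != compose(x, a) for x in N if x != e)  # yet nothing commutes
print("the cosets agree even though (123) commutes with no non-identity element of N")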
Theorem: Factor Groups form Groups
Let G be a group and let N⊳G. Then the set G/N = {aN : a∈G} is a group under the operation of (aN)∙(bN) = (a∙b)N. We call G/N a factor group.
I'm not going to formally prove it, but it's important to talk about why the normality of N is necessary. G/N easily passes closure and associativity without normality - that is, (aN)∙(bN) = (a∙b)N∈G/N and [(aN)∙(bN)]∙(cN) = ((a∙b)∙c)N = (a∙(b∙c))N = (aN)∙[(bN)∙(cN)] ∀aN,bN,cN∈G/N. Also, it is pretty clear that eN (or just N) is the identity, whereas a⁻¹N is the inverse of aN. However, we defined the operation on G/N by (aN)∙(bN) = (a∙b)N, and the problem is that G/N is a set of cosets and we defined the operation between these cosets based on their coset representatives. But, as we already learned, any element of a coset can serve as its coset representative, so we must ensure that the operation on G/N is independent of the choice of coset representative - that is, if a₁N = a₂N and b₁N = b₂N, then we should ensure that (a₁∙b₁)N = (a₂∙b₂)N. I am going to write that part out formally in an effort to make it as clear as I can.
Proof:
Choose a₁,a₂,b₁,b₂∈G such that a₁N = a₂N and b₁N = b₂N. Notice that a₁∈a₁N = a₂N and also that b₁∈b₁N = b₂N. The first of these gives us that ∃n∈N such that a₁ = a₂∙n, and similarly, the second gives us that ∃m∈N such that b₁ = b₂∙m. Also, remember that when x∈N, xN = N = Nx. Now we are prepared to show that (a₁∙b₁)N = (a₂∙b₂)N. Observe that (a₁∙b₁)N = (a₂∙n∙b₂∙m)N = (a₂∙n∙b₂)(mN) = (a₂∙n∙b₂)N = (a₂∙n)(b₂N) = (a₂∙n)(Nb₂) = a₂(nN)b₂ = a₂Nb₂ = a₂(Nb₂) = a₂(b₂N) = (a₂∙b₂)N. Finally, this gives us that coset multiplication does not depend on the choice of coset representative.
You'll see that I only barely needed normality in that proof, but twice I used the fact that b₂N = Nb₂. And you can try as hard as you like, but there is no way to prove this without the normality of the subgroup. In fact, I'm not going to prove it, but it turns out that if N is a subgroup of G and G/N forms a group under the above-defined operation, then it is guaranteed that N is normal. This idea of a factor group is a bit of an abstract thing and can be hard to understand, but next time I'm going to give an example to help clarify.
References
Previous Related Post: Cosets
Text Reference: Gallian, Chapter 9
Wikipedia: Quotient Group
Wolfram Mathworld: Quotient Group
Planet Math: Quotient Group

Tuesday, March 2, 2010

Cosets

Up until my last post, I feel like the things I've talked about have flowed pretty linearly. However, math doesn't usually work that way. That often seems counter-intuitive because, since first grade, everything that we learn builds off of each previous topic, but usually math is all over the place and different topics influence each other in many different ways. This is usually why it's so hard to read a math book: books, by nature, are linear, but math is not. Because of this, I'm going to step back away from isomorphisms and the symmetric groups and talk about cosets, which will eventually get us to normal subgroups and factor groups. There are some properties of cosets that are important to understand that I also need to introduce. These properties may not be obvious, so I'm going to prove them, but you are welcome to just trust me.
Definition: Left and Right Cosets.
Let H be a subgroup of a group G, and let a∈G. We then define a∙H = {a∙h : h∈H} and call it the left coset of H in G containing a, and we define H∙a = {h∙a : h∈H} and call it the right coset of H in G containing a. In either case, a is called the coset representative of a∙H or H∙a.
It is noted in the definition that H is a subgroup of G, but even if H is only a subset of G, the notations a∙H and H∙a are still valid - they just are not called cosets. It will be rare - if it happens at all - that we use a∙H or H∙a in this blog when H is not a subgroup. This definition of a coset is very simple - all you do to calculate a coset a∙H is take every element of H and multiply it on the left by a. What we are mostly concerned with is the interaction between different cosets. First, though, I'm going to give an example where I calculate all of the left cosets of a subgroup of D₄.
Example: Cosets of the Dihedral Group
Let G = D₄ = {e, R₉₀, R₁₈₀, R₂₇₀, Fₕ, Fᵣ, Fᵥ, Fₗ} and let H = {e, Fₕ}. It is very simple to verify that H is a subgroup. Below I have calculated each of the left cosets of H in G.
e∙H = {e, Fₕ}
R₉₀∙H = {R₉₀, Fᵣ}
R₁₈₀∙H = {R₁₈₀, Fᵥ}
R₂₇₀∙H = {R₂₇₀, Fₗ}
Fₕ∙H = {Fₕ, e}
Fᵣ∙H = {Fᵣ, R₉₀}
Fᵥ∙H = {Fᵥ, R₁₈₀}
Fₗ∙H = {Fₗ, R₂₇₀}
This is not meant to just demonstrate a routine calculation. You should look at these coset calculations and try to see patterns. There are a lot of things to notice, and those patterns are the properties that I'm going to prove. The first thing that you should notice, which isn't really a theorem, is that most of the cosets are not subgroups - in fact most of them don't even contain the identity element. The only cosets that are subgroups of D₄ are the ones that are equal to H.
Theorem:
Let G be a group and H a subgroup of G. Then a∙H = H if and only if a∈H.
Proof:
First, suppose that a∙H = H. Then a = a∙e∈a∙H = H. Next, assume that a∈H. Since H is closed, we get that a∙H⊆H. To show that H⊆a∙H, let h∈H. Note that since a∈H, a⁻¹∈H, and since h∈H, a⁻¹∙h∈H. Now we get that h = e∙h = (a∙a⁻¹)∙h = a∙(a⁻¹∙h)∈a∙H, so H⊆a∙H and a∙H = H.
This property illuminates something very interesting. We noted before that some cosets of H in G from our example are actually equal to H. What this gives us is that this happens precisely when the coset representative is in H which is a very nice condition.
Theorem:
Let G be a group and H a subgroup of G. Then ∀a,b∈G, a∙H = b∙H if and only if a⁻¹∙b∈H.
Proof:
We observe that a∙H = b∙H if and only if H = (a⁻¹∙b)∙H, and from the previous theorem, H = (a⁻¹∙b)∙H if and only if a⁻¹∙b∈H.
This is a bit of an extension of our last property. An immediate consequence is that two cosets are equal when the representative of one lies in the other (that is a∙H = b∙H when a∈b∙H and b∈a∙H). This is pretty cool because it means that if we have a coset of H, then we can choose any element in that coset to be its representative. Which means that if K is some coset of H in G, then ∀a∈K, K = a∙H. This is very, very useful.
Theorem:
Let G be a group and H a subgroup of G. Then ∀a,b∈G, either a∙H = b∙H or a∙H∩b∙H = ∅.
Proof:
Suppose ∃x∈a∙H∩b∙H. Then a⁻¹∙x∈H so a∙H = x∙H, and similarly b⁻¹∙x∈H so b∙H = x∙H. Finally we have that a∙H = b∙H, and so either a∙H = b∙H or a∙H∩b∙H = ∅.
The statement of this property might be a little bit confusing, but what it essentially means is that given two cosets, either they are the same or they have no elements in common. The last property told us how to know if two cosets are the same and this property tells us that if they are not the same, then they are completely distinct. This means that the set of cosets of a particular subgroup partitions G - or basically that if H is a subgroup of G, then every single element of G is in exactly one coset of H.
Theorem:
Let G be a group and H a subgroup of G. Then ∀a,b∈G, |a∙H| = |b∙H|.
Proof:
Define the function φ:a∙H→b∙H by φ(a∙h) = b∙h ∀a∙h∈a∙H. This is obviously a surjection, and it is an injection because if φ(a∙h₁) = φ(a∙h₂) then b∙h₁ = b∙h₂, and cancellation gives h₁ = h₂, so a∙h₁ = a∙h₂. Since there exists a bijection between a∙H and b∙H, it then follows that |a∙H| = |b∙H|.
Finally, we have that if H is a subgroup of G, then every coset of H is the exact same size. Now we already saw that the cosets of H partition G, but we now see that these cosets partition G into partitions of the exact same size.
Now we can try to put everything we've learned about cosets together. Suppose G is a group with subgroup H, and suppose that K is a coset of H in G. We know that ∀a∈K, K = a∙H, so that any element of K can be chosen as its coset representative. We also know that two different cosets are in fact completely disjoint - that is, they have no elements in common - but are the exact same size. You can (and should) go back up to my example of all the cosets of {e, Fₕ} in D₄ and verify that these four properties hold. One final, very important thing to mention is that even though all of the theorems that I gave were concerned with left cosets, they all have analogous results for right cosets.
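If you'd rather have a computer do that verification, here is a Python sketch of my own that models D₄ as permutations of the square's corners (my own labeling, so the particular rotation-flip pairings may not match the table above), builds the left cosets of an order-2 flip subgroup, and checks the four properties.

# Checking the four coset properties for an order-2 subgroup of D_4, modeled as
# permutations of the square's corners 0,1,2,3 (labeled going around the square).
def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
r90 = (1, 2, 3, 0)       # rotation by 90 degrees
flip = (1, 0, 3, 2)      # one of the flips

D4 = set()
rot = e
for _ in range(4):
    D4.add(rot)
    D4.add(compose(rot, flip))   # a rotation followed by the flip is a reflection
    rot = compose(r90, rot)
assert len(D4) == 8

H = {e, flip}                    # standing in for the subgroup {e, Fh} above
cosets = [frozenset(compose(g, h) for h in H) for g in D4]

assert all(len(C) == len(H) for C in cosets)                       # equal sizes
assert all(C == D or not (C & D) for C in cosets for D in cosets)  # equal or disjoint
assert set().union(*cosets) == D4                                  # they cover D_4
assert all(frozenset(compose(a, h) for h in H) == C                # any element of a
           for C in cosets for a in C)                             # coset represents it
print("all four coset properties hold for this subgroup of D_4")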
We learned a lot about cosets today, but I haven't yet explained how they're useful, and it is definitely not obvious. As we'll see, if G is a group and N is a certain type of subgroup (called a normal subgroup), then the set of all cosets of N in G forms a group in itself, called a factor group, and these are extremely important in the world of group theory. Factor groups have a lot of very interesting, very important properties, and there are a lot of proofs in group theory that establish a property of a group by inspecting its factor groups - many of which could not easily be proved any other way.
References
Previous Related Post: Early Properties of Groups
Text Reference: Gallian, Chapter 7
Wikipedia: Cosets
Wolfram Mathworld: Cosets

Monday, March 1, 2010

Cayley's Theorem

Today we're going to bring the last few posts together into a theorem that combines the ideas of symmetric groups and isomorphisms into something quite useful. I'm going to present a proof because it's a good example of a constructive proof. Cayley's Theorem claims the existence of something (specifically, a permutation group isomorphic to a given group), and while many existence theorems only show a theoretical existence, constructive proofs show that something exists by actually constructing the thing whose existence they claim. The statement and proof of Cayley's Theorem are given below.
Theorem: Cayley's Theorem
Every group is isomorphic to a group of permutations.
Proof:
Let G be any group. Before we can exhibit the isomorphism, we must first build the permutation group. ∀g∈G, define T_g:G→G by T_g(x) = g∙x ∀x∈G. We recall that a permutation is a bijection from a set to itself. G is a set, T_g is a function from G to G ∀g∈G, and it is easy to verify that T_g is also a bijection ∀g∈G, thus making T_g a permutation on G. Now let G* = {T_g : g∈G}. Then G* is a group under the operation of function composition. I will verify this, but only briefly. Choose T_g,T_h∈G*, let x∈G, and observe that (T_g∘T_h)(x) = T_g(T_h(x)) = T_g(h∙x) = g∙(h∙x) = (g∙h)∙x = T_(g∙h)(x). Then T_g∘T_h = T_(g∙h) and T_g∘T_h∈G*, which gives closure. Since T_g∘T_h = T_(g∙h), it is then clear that T_e is the identity of G* and that (T_g)⁻¹ = T_(g⁻¹). Finally, function composition is always associative, so G* is a group.
The isomorphism between G and G* is now obvious - define θ:G→G* by θ(g) = T_g ∀g∈G. Suppose that g,h∈G and θ(g) = θ(h). Then T_g = T_h and T_g(e) = T_h(e), so g∙e = h∙e, giving g = h. Thus θ is injective. Since G* = {T_g : g∈G}, it is clear that ∀T_g∈G*, θ(g) = T_g, so θ is surjective. To see that θ is operation preserving, observe that ∀a,b∈G, θ(a∙b) = T_(a∙b) = T_a∘T_b = θ(a)∘θ(b). Finally, θ is an isomorphism and G ≈ G*, where G* is a group of permutations, as desired.
Cayley's Theorem tells us some neat things. Note that it doesn't say that every group is isomorphic to a symmetric group, but that it is isomorphic to some permutation group. I know I never formally defined a permutation group, but as you might guess, it's a set of permutations that forms a group under function composition, and I did define a permutation. Now suppose G is any group. Cayley's Theorem tells us that G ≈ G* where G* is a permutation group and, in particular, G* is a group of permutations on the set G. When we first discussed permutations and symmetric groups, we thought of permutations as "rearrangements" of the set Ωₙ. Similarly, we can think of the permutations in G* as rearrangements of the elements of G. So now (at least when G is finite) we can assign each element of G an integer from Ω_|G|, and in this manner each element of G* corresponds to an element of S_|G|. This correspondence is certainly injective and operation preserving, but may or may not be surjective. It may require a bit of an imaginative leap of faith, but what this gives us is that G* ≈ H where H is some subgroup of S_|G|. The final result here is that any group is isomorphic to some subgroup of a symmetric group (that is, G ≈ G* ≈ H where H is a subgroup of a symmetric group). This is important because even the most abstract of groups can be represented in a very concrete way. And that is very, very cool.
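To make the construction in the proof completely concrete, here is a tiny Python sketch that carries it out for one specific group of my own choosing, Z₅ under addition mod 5: each element g becomes the permutation T_g, and the map g ↦ T_g is injective and operation preserving.

# Cayley's construction carried out for G = Z_5 under addition mod 5.
G = list(range(5))

def T(g):
    # T_g as a tuple: position x holds T_g(x) = g + x (mod 5).
    return tuple((g + x) % 5 for x in G)

def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[x]] for x in G)

# Each T_g is a bijection of G, i.e., a permutation of the set G.
assert all(sorted(T(g)) == G for g in G)

# g -> T_g is injective, and it is operation preserving: T_g o T_h = T_(g+h).
assert len({T(g) for g in G}) == len(G)
assert all(compose(T(g), T(h)) == T((g + h) % 5) for g in G for h in G)

print("Z_5 is isomorphic to the permutation group {T_g : g in Z_5}")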
References
Previous Related Post: Properties of Isomorphisms
Text Reference: Gallian, Chapter 6
Wikipedia: Cayley's Theorem