Commutators and the Ore Conjecture

In a talk yesterday by Boris Kunyavski at the University of Ottawa, I learned a little about the Ore conjecture, which became a theorem in 2010, proved in:

Liebeck, Martin W.; O'Brien, E. A.; Shalev, Aner; Tiep, Pham Huu. The Ore conjecture. J. Eur. Math. Soc. (JEMS) 12 (2010), no. 4, 939-1008.

It's quite a fascinating result that arises by considering commutators in groups. If $G$ is a group, its commutator subgroup $[G,G]$ is the subgroup of $G$ generated by all the commutators $[g,h] = ghg^{-1}h^{-1}$ of $G$. It's easy to see that the commutator subgroup is normal. A group $G$ is said to be perfect if $G = [G,G]$.

So let's assume $G$ is perfect. This implies that every element of $G$ can be written as a product of commutators. But can every element of $G$ be written as a single commutator? That's really far from obvious. For example, take your favourite perfect group and an element in it: can you prove that this single element is a commutator? Not so easy, right?
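
As a concrete sanity check (my own, not from the talk or the paper), here is a short brute-force computation in plain Python verifying that every element of the alternating group $A_5$, the smallest nonabelian simple group, is a single commutator.

```python
# Brute-force check that every element of A_5 is a commutator [g,h] = g h g^{-1} h^{-1}.
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    out = [0] * len(p)
    for i, x in enumerate(p):
        out[x] = i
    return tuple(out)

def is_even(p):
    """Even permutations have an even number of inversions."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]   # the 60 even permutations of {0,...,4}

commutators = {
    compose(compose(g, h), compose(inverse(g), inverse(h)))
    for g in A5
    for h in A5
}

# The commutators form a subset of A_5 of the same size, so every element is one.
print(len(commutators) == len(A5))   # True
```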

In fact, we can define the commutator length of any $g\in G$ to be the minimum number of commutators needed to write $g$ as a product of commutators. If $g$ can't be written as a product of commutators at all, then its commutator length is infinite.

The commutator width of a group $G$ is defined to be the supremum of the commutator lengths of all the elements of $G$. (Note: I think this should just be called the commutator length of $G$ as well, but that's how the terminology ended up!)
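
Since these definitions are completely concrete for a finite group, here is a rough Python sketch (my own, not from the talk) of how one could compute the commutator width by a breadth-first search over products of commutators. Run on $A_5$ it reports a width of $1$, consistent with the theorem quoted below.

```python
# Breadth-first search for the commutator width of a finite permutation group.
# Elements are tuples p with p[i] = image of i; the example group is A_5.
from itertools import permutations

def mul(p, q):
    """Composition of permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    out = [0] * len(p)
    for i, x in enumerate(p):
        out[x] = i
    return tuple(out)

def is_even(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2 == 0

G = [p for p in permutations(range(5)) if is_even(p)]                  # A_5
comms = {mul(mul(g, h), mul(inv(g), inv(h))) for g in G for h in G}    # all single commutators

identity = tuple(range(5))
lengths = {identity: 0}        # minimal number of commutators needed for each element found so far
frontier = {identity}
width = 0
while len(lengths) < len(G):   # terminates because A_5 is perfect
    width += 1
    frontier = {mul(x, c) for x in frontier for c in comms} - set(lengths)
    lengths.update({x: width for x in frontier})

print("commutator width of A_5:", width)   # prints 1
```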

It turns out that finding a perfect group $G$ with commutator width greater than one is quite tricky. In fact, the theorem proved in loc. cit. is:

Former Ore Conjecture/Now Theorem. If $G$ is a finite nonabelian simple group, then every element of $G$ is a commutator.

That's pretty cool, though the proof is very long. That's not surprising, since it is a theorem about all finite nonabelian simple groups. What's perhaps even more surprising is that there are examples of finitely generated infinite simple groups containing elements that are not commutators. In fact, there exist such $G$ of infinite commutator width, as shown in Alexey Muranov's paper:

Finitely generated infinite simple groups of infinite square width and vanishing stable commutator length. J. Topol. Anal. 2 (2010), no. 3, 341-384.

This result makes use of small cancellation theory, machinery from geometric group theory for studying presented groups whose defining relators have little overlap with one another.

What is a residually finite group?

We say that a group $G$ is residually finite if for each $g\in G$ that is not equal to the identity of $G$, there exists a finite group $F$ and a group homomorphism
$$\varphi:G\to F$$ such that $\varphi(g)$ is not the identity of $F$.

The definition does not change if we require that $\varphi$ be surjective. Therefore, a group $G$ is residually finite if and only if for each $g\in G$ that is not the identity, there exists a finite-index normal subgroup $N$ of $G$ such that $g\not\in N$: just take $N = \ker\varphi$, which has finite index because $G/N\cong F$ when $\varphi$ is surjective.

Hence, if $G$ is residually finite, then the intersection of all finite-index normal subgroups is trivial. The converse holds, too (why?).
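
To keep a concrete example in mind (a standard one, not from the post): the additive group $\mathbb{Z}$ is residually finite. Given a nonzero integer $g$, pick any modulus $m > |g|$ and reduce modulo $m$:
$$\varphi:\mathbb{Z}\to \mathbb{Z}/m\mathbb{Z},\qquad \varphi(g) = g \bmod m \neq 0,$$ since $m$ cannot divide $g$ when $0 < |g| < m$.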
…read the rest of this post!

Links to Atiyah's preprints on the Riemann hypothesis

Sir Michael Atiyah's preprints are now on the internet:

  1. The Riemann Hypothesis
  2. The Fine Structure Constant

The meat of the claimed proof of the Riemann hypothesis is in Atiyah's construction of the Todd map $T:\C\to \C$. It supposedly comes from the composition of two different isomorphisms
$$\C\xrightarrow{t_+} C(A)\xrightarrow{t^{-1}_{-}} \C$$ of the complex field $\C$ with $C(A)$, the center of a hyperfinite von Neumann factor $A$ of type II$_1$. Understanding Atiyah's work boils down to understanding this Todd map, and therefore to understanding what is in the paper "The Fine Structure Constant".

Assuming there is a zero $b$ of the Riemann zeta function $\zeta(s)$ off the critical line, Atiyah defines a function
$$F(s) = T(1 +\zeta(s + b)) - 1.$$ The function $F$ satisfies $F(0) = 0$, since $\zeta(b) = 0$ and one of the properties of the Todd map $T$ is that $T(1) = 1$; $F$ is also supposedly analytic. According to some of the basic properties of the Todd function, which is a polynomial on a closed rectangle containing this zero, this would imply that $F$ vanishes identically and therefore that $\zeta$ is identically zero, which is the contradiction.

Now, I have very little understanding of von Neumann algebras, so I won't comment at all on the Todd map. I have no doubt that the experts will dissect this, because there's so much attention on it. Even assuming all the properties of the Todd function, I find the proof difficult to follow. For example, the assumed zero $b$ off the critical line: I can't find where "$b$ is off the critical line" is even being used. In fact, it's hard to see where any of the basic properties of the zeta function are being used.

For real? Atiyah's proof of the Riemann hypothesis

Well, this is strange indeed: according to this New Scientist article published today, the famous Sir Michael Atiyah is scheduled to speak this Monday at the Heidelberg Laureate Forum. The topic: a proof of the Riemann hypothesis. The Riemann hypothesis states that the Riemann zeta function, defined by the analytic continuation of $\zeta(s) = \sum_{n=1}^\infty n^{-s}$, has nontrivial zeros only on the critical line, the set of complex numbers with real part $1/2$. Check out this MathWorld article for more details.
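
As a purely numerical illustration (and nothing more), here is a small Python snippet using the mpmath library, assuming it is installed, that prints the first few nontrivial zeros; each lies on the critical line:

```python
# Print the first few nontrivial zeros of the Riemann zeta function.
from mpmath import mp, zetazero

mp.dps = 20                    # working precision in decimal digits
for n in range(1, 6):
    print(n, zetazero(n))      # e.g. n = 1 gives approximately 0.5 + 14.1347j
```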

The Riemann hypothesis is considered by many to be the outstanding problem in mathematics. Many people have tried to prove it and failed.

Is this for real?

Some 2018 Springer Math Texts

When I was a student at McGill I loved looking at the latest Springer texts in the now-nonexistent Rosenthall library. So, I thought that I'd list some of the cool-looking titles that have come out in 2018:

  1. Walter Dittrich, Reassessing Riemann's Paper: This book is an analysis of Riemann's paper "On the Number of Primes Less Than a Given Magnitude" and could be a great historical starting point into the subject
  2. Andreas Hinz, Sandi Klavžar, and Ciril Petr, The Tower of Hanoi – Myths and Maths: Looks like a fun recreational math book about the Tower of Hanoi game
  3. Wojciech Chachólski, Tobias Dyckerhoff, John Greenlees, Greg Stevenson, Building Bridges Between Algebra and Topology: It contains notes from four different mini-lectures on Hall algebras, triangulated categories, homotopy invariant commutative algebra, and idempotent symmetries!
  4. Karin Erdmann, Thorsten Holm, Algebras and Representation Theory: Nice-looking text on representation theory. Has classical things like the Artin-Wedderburn theorem and more modern topics like quivers
  5. V. Lakshmibai, Justin Brown, Flag Varieties: An introduction to flag varieties, assuming some background in commutative algebra and algebraic geometry
  6. Nicolas Privault, Understanding Markov Chains
  7. Masao Jinzenji, Classical Mirror Symmetry
  8. Valérie Berthé, Michel Rigo (editors), Sequences, Groups, and Number Theory: Now this looks interesting! It is a curious volume of lectures on the interactions between words (as in formal languages and presented groups), number theory, and dynamical systems
  9. Ibrahim Assem, Sonia Trepode (editors), Homological Methods, Representation Theory, and Cluster Algebras: A series of lectures from CRM minicourses

*N.B. I do not work for Springer and I do not get compensated for posting these links. I just think they look cool.

For (most) PIDs: Trace zero matrices are commutators

Let $R$ be a commutative ring and $M_n(R)$ denote the ring of $n\times n$ matrices with coefficients in $R$. For $X,Y\in M_n(R)$, their commutator $[X,Y]$ is defined by
$$[X,Y] := XY - YX.$$ The trace of any matrix is defined as the sum of its diagonal entries.

If $X$ and $Y$ are any matrices, what is the trace of $[X,Y]$? It's zero! That's because the trace of $XY$ is the same as the trace of $YX$. Therefore:

Any commutator has trace zero.
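
If you want to see this numerically, here is a throwaway check (my own) with random integer matrices, using numpy:

```python
# Quick numerical sanity check: the commutator of two random integer matrices has trace zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(-5, 6, size=(4, 4))
Y = rng.integers(-5, 6, size=(4, 4))
print(np.trace(X @ Y - Y @ X))   # 0, exactly, since all entries are integers
```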

What about the converse? Is any trace zero matrix also a commutator? In other words, given a trace zero matrix $Z\in M_n(R)$, can we find matrices $X$ and $Y$ such that $Z = [X,Y]$? Albert and Muckenhoupt proved that you can, assuming that $R$ is a field.

What happens if you also want $X$ and $Y$ to have trace zero?

Good question. In general, this is not possible. For example, let's consider the simplest field of all: the field with two elements, denoted by $\F_2$. Okay, it's actually debatable whether $\F_2$ really is the simplest field, because so many problems happen in characteristic two. Take the problem we've been considering: over $\F_2$, if $X$ and $Y$ are $2\times 2$ matrices of trace zero, then $[X,Y]$ will have zero off-diagonal entries. So, for example, the matrix
$$\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}\in M_2(\F_2)$$ cannot be written as a commutator of two matrices with trace zero.
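
This is easy to confirm by brute force, since there are only eight trace-zero $2\times 2$ matrices over $\F_2$. Here is a short Python sketch (my own) that tries all $64$ pairs:

```python
# Exhaustive check: [[1,1],[1,1]] over F_2 is not a commutator of two trace-zero 2x2 matrices.
from itertools import product

def mul(A, B):
    """Matrix product over F_2, with matrices stored as 2x2 tuples of tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2))
        for i in range(2)
    )

def sub(A, B):
    """Matrix difference over F_2."""
    return tuple(tuple((A[i][j] - B[i][j]) % 2 for j in range(2)) for i in range(2))

# Over F_2, trace zero means the two diagonal entries are equal: [[a, b], [c, a]].
trace_zero = [((a, b), (c, a)) for a, b, c in product((0, 1), repeat=3)]

target = ((1, 1), (1, 1))
found = any(sub(mul(X, Y), mul(Y, X)) == target for X in trace_zero for Y in trace_zero)
print(found)   # False
```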

It seems that characteristic two is the only obstruction, in the case of $2\times 2$ matrices. In fact, Alexander Stasinski proved in his paper [1] the following:

Theorem. Let $R$ be a principal ideal domain. If $n\geq 3$ then any matrix in $M_n(R)$ of trace zero can be written as the commutator of two matrices in $M_n(R)$, each having trace zero. The same holds for $n=2$ if two is invertible in $R$.

Notice how the characteristic two problem only happens in the $2\times 2$ case.

[1] Stasinski, A. Isr. J. Math. (2018). https://doi.org/10.1007/s11856-018-1762-5

My top nine favourite math texts

Here they are:

Keith Devlin, The Joy of Sets


If you're not a set theorist but want to understand set theory, this book is awesome and one of a kind. I have not read it all, but what I have read I can actually understand!

Frank DeMeyer and Edward Ingraham, Separable Algebras over Commutative Rings


This classic book explains Galois theory but for commutative rings. Even though there are many more technicalities in the general commutative ring case compared to fields, I actually found the approach in this book more natural than the Galois theory for fields that I learned in undergrad algebra. There are some exercises and this book is easy to read.
…read the rest of this post!

On reasonably sure proofs

I happened to come across a 1993 opinion piece, Theorems for a price: Tomorrow's semi-rigorous mathematical culture by Doron Zeilberger. I think it's a rather fascinating document as it questions the future of mathematical proof. Its basic thesis is that some time in the future of mathematics, the expectation of proof will move to a "semi-rigorous" state where mathematical statements will be given probabilities of being true.

It helps to clarify this with an example even simpler than the one in Zeilberger's paper. Take the arithmetic-geometric mean inequality for two variables $a,b\geq 0$. It says that
$$\frac{a + b}{2} \geq \sqrt{ab}.$$ This simple inequality is just a rearrangement of $(a - b)^2 \geq 0$. For simplicity, let's say that $a,b\in [0,1]$. Instead of actually proving this inequality, we could generate uniform random numbers in $[0,1]$ and see whether the inequality actually holds for them. So if I test this inequality 1000 times, of course I will get that it works 1000 times.
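
For instance, a randomized check of this inequality could look like the following minimal Python sketch (my own):

```python
# Monte Carlo "test" of the AM-GM inequality for a, b drawn uniformly from [0, 1].
import math
import random

random.seed(0)                                   # reproducible sampling
trials = 1000
failures = 0
for _ in range(trials):
    a, b = random.random(), random.random()      # uniform on [0, 1]
    if (a + b) / 2 < math.sqrt(a * b):           # would be a counterexample
        failures += 1
print(f"failures in {trials} trials: {failures}")    # prints 0
```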
…read the rest of this post!

Abelian categories: examples and nonexamples

I've been talking a little about abelian categories these days. That's because I've been going over Weibel's An Introduction to Homological Algebra. It's a book I read before, and I still feel pretty confident about the material. This time, though, I think I'm going to explore a few different paths that I haven't really given much thought to before, such as diagram proofs in abelian categories, group cohomology (more in-depth), and Hochschild homology.

Back to abelian categories. An abelian category is a category with the following properties:
…read the rest of this post!