Posted by Jason Polak on 06. May 2017 · 2 comments · Categories: group-theory

When I first saw convolution I happily used it without thinking about why we should define it in the first place. Here’s a post that might tell you why it makes sense from an algebraic viewpoint. Let’s recall the convolution product, as it is for functions $f:\R\to\R$. If $f,g:\R\to\R$ are two such functions, then the convolution $f*g$ is defined as
$$(f*g)(y) := \int_\R f(y-x)g(x)\,{\rm d}x$$
provided the integral exists. Why do we do this? For convolution, the explanation starts with polynomials. Multiplying polynomials is simplicity itself. For example, here is the product of two real-valued polynomials:
$$(1 + 2t + 5t^2)(1 + 3t) = 1 + 5t + 11t^2 + 15t^3$$
But a real-valued polynomial could also be thought of as a function $f:\Z\to \R$, where $f(x)$ is zero for all but finitely many $x$ and nonzero only if $x \geq 0$. In this setup, $f(x)$ represents the coefficient of $t^x$. Addition of polynomials becomes pointwise addition of functions. Now here’s the good part: polynomial multiplication becomes convolution: if $f,g:\Z\to\R$ are two polynomials then
$$(fg)(y) = \sum_{x\in \Z} f(y-x)g(x).$$
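To see this identity in action, here is a small Python sketch (my own, not from the post) that multiplies polynomials given as coefficient lists, using exactly the discrete convolution formula above:

```python
def convolve(f, g):
    """Multiply polynomials given as coefficient lists,
    using the convolution formula (fg)(y) = sum over x of f(y - x) * g(x)."""
    result = [0] * (len(f) + len(g) - 1)
    for y in range(len(result)):
        for x in range(len(g)):
            if 0 <= y - x < len(f):
                result[y] += f[y - x] * g[x]
    return result

# (1 + 2t + 5t^2)(1 + 3t) = 1 + 5t + 11t^2 + 15t^3
print(convolve([1, 2, 5], [1, 3]))  # [1, 5, 11, 15]
```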
For a finite group $G$ then, complex representations of $G$ correspond to representations of the algebra of complex-valued functions on $G$ where the multiplication is the convolution product.
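For a concrete finite example (again my own sketch): functions $\Z/n\to\R$ under convolution form the group algebra $\R[\Z/n]$, which is the same thing as multiplying polynomials modulo $t^n - 1$:

```python
def group_convolve(f, g, n):
    """Convolution of f, g : Z/n -> R, i.e. multiplication in the
    group algebra R[Z/n]: (f*g)(y) = sum over x of f(y - x) * g(x)."""
    return [sum(f[(y - x) % n] * g[x] for x in range(n)) for y in range(n)]

# (1 + 2t)(t + t^2) = t + 3t^2 + 2t^3 = 2 + t + 3t^2 modulo t^3 - 1
print(group_convolve([1, 2, 0], [0, 1, 1], 3))  # [2, 1, 3]
```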

Posted by Jason Polak on 23. August 2016 · Write a comment · Categories: group-theory · Tags: ,


Let $k$ be a commutative ring. Let $\G_a$ be the group functor $\G_a(R) = R$ and $\G_m$ the group functor $\G_m(R) = R^\times$, both over the base ring $k$. What are the homomorphisms $\G_a\to \G_m$? In other words, what are the characters of $\G_a$? This depends on the ring, of course!

The representing Hopf algebra for $\G_a$ is $k[x]$. And, the representing Hopf algebra for $\G_m$ is $k[x,x^{-1}]$. Homomorphisms $\G_a\to \G_m$ correspond to Hopf algebra maps $k[x,x^{-1}]\to k[x]$. Such a map is a $k$-algebra homomorphism that satisfies the additional conditions for being a Hopf algebra homomorphism.
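As a quick computation (my own sketch, not part of the post): such a Hopf algebra map must send the group-like element $x\in k[x,x^{-1}]$ to a unit $u(t)\in k[t]$ satisfying $u(s+t) = u(s)u(t)$. Over a reduced ring the only units of $k[t]$ are constants, so nontrivial characters need nilpotents; for instance over $k = \Z/4$ the polynomial $u(t) = 1 + 2t$ works, as this Python check on $R = \Z/4$ points confirms:

```python
# Over k = Z/4, u(t) = 1 + 2t is a unit (it is its own inverse, since
# (1 + 2t)^2 = 1 + 4t + 4t^2 = 1 mod 4) and satisfies u(s + t) = u(s)u(t),
# so it defines a nontrivial character G_a -> G_m.
def u(t):
    return (1 + 2 * t) % 4

for s in range(4):
    for t in range(4):
        assert u((s + t) % 4) == (u(s) * u(t)) % 4  # additive -> multiplicative
        assert (u(t) * u(t)) % 4 == 1               # u(t) is a unit
print("u(t) = 1 + 2t is a nontrivial character of G_a over Z/4")
```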
More »

Posted by Jason Polak on 19. December 2015 · Write a comment · Categories: group-theory, homological-algebra · Tags:

Let $A$ be an abelian group. We call an element $a\in A$ torsion if there exists a natural number $n$ such that $na = 0$. The set of all torsion elements $T(A)$ of $A$ form a subgroup of $A$, and we can think of $T$ as an endofunctor on the category of abelian groups. Here are some examples:

  • $T(\Z) = 0$
  • $T(\Z\oplus \Z/n) = 0\oplus\Z/n\cong\Z/n$
  • $T(\Q) = 0$
  • $T(\Q/\Z) = \Q/\Z$

Finding the set of torsion points of an abelian group isn't always as easy as in these examples, since abelian groups may not always be written out in such an explicit way. A fascinating and nontrivial result of Barry Mazur is that the torsion subgroup of $E(\Q)$ for an elliptic curve $E$ over $\Q$ is one of fifteen possibilities: $\Z/n$ for $1\leq n\leq 10$ or $n=12$, or $\Z/2\times \Z/2n$ for $1\leq n\leq 4$. Such strange numerology!
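As a toy computation in the easy direction (my own sketch): in $A = \Z\oplus\Z/n$ an element $(a,b)$ is torsion exactly when $a = 0$, since $k(a,b) = (ka, kb)$ can only vanish for $k\geq 1$ if $a = 0$. A brute-force Python check for $n = 6$:

```python
n = 6

def is_torsion(a, b, bound=100):
    """Check whether (a, b) in Z x Z/n is killed by some 1 <= k <= bound."""
    return any(k * a == 0 and (k * b) % n == 0 for k in range(1, bound + 1))

# Search a window of Z x Z/6: the torsion elements found are exactly {0} x Z/6.
torsion = [(a, b) for a in range(-3, 4) for b in range(n) if is_torsion(a, b)]
print(torsion)  # [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
```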
More »

Suppose we have a $2\times 2$ matrix
$$M = \begin{pmatrix}
x_{11} & x_{12}\\
x_{21} & x_{22}
\end{pmatrix}$$
with entries in a field $F$. The characteristic polynomial of this matrix is $p(t) := {\rm det}(tI_2 - M) = t^2 - (x_{11} + x_{22})t + x_{11}x_{22} - x_{21}x_{12}$. One might ask: how can we produce a matrix with a given characteristic polynomial? This can be accomplished using the rational canonical form:
$$t^2 + at + b\mapsto
\begin{pmatrix}
0 & -b\\
1 & -a
\end{pmatrix}$$
We can calculate the characteristic polynomial of this matrix to be $t^2 + at + b$. This map gives a bijection between monic quadratic polynomials in $F[t]$ and matrices of the above form. One way to understand this phenomenon is through algebraic groups. To explain, let's stick with $F$ having characteristic zero, though much of what we do can be done in characteristic $p$ for $p$ sufficiently large as well using very different techniques.
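For a $2\times 2$ companion matrix this is easy to verify by machine (my own Python sketch), using the fact that the characteristic polynomial is $t^2 - \mathrm{tr}(M)t + \det(M)$:

```python
def companion(a, b):
    """Companion matrix of t^2 + a*t + b."""
    return [[0, -b], [1, -a]]

def char_poly_2x2(m):
    """Coefficients (1, c1, c0) of t^2 + c1*t + c0 = t^2 - tr(m)*t + det(m)."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (1, -tr, det)

print(char_poly_2x2(companion(3, 5)))  # (1, 3, 5): we recover t^2 + 3t + 5
```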
More »

Posted by Jason Polak on 23. August 2014 · Write a comment · Categories: algebraic-geometry, group-theory · Tags:

Suppose one day you run into the following algebraic group, defined on $\mathbb{Z}$-algebras $R$ by
$$G(R) = \left\{ \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{12} & a_{11} & -a_{14} & -a_{13} \\
a_{13} & -a_{14} & a_{11} & -a_{12} \\
a_{14} & -a_{13} & -a_{12} & a_{11}
\end{pmatrix} \in\mathrm{GL}_4(R) \right\}$$
Can you figure out what this group is? I actually did run into this group one day, but luckily I discovered its true identity. Today we’ll see one way to do so.

One way to try to identify an unknown algebraic group in practice is to examine its matrix multiplication. If $A = (a_{ij}), B = (b_{ij})$ have the above form then $C = AB$ has first row
$$\begin{aligned}
c_{11} &= a_{11} b_{11} + a_{12} b_{12} + a_{13} b_{13} + a_{14} b_{14} \\
c_{12} &= a_{12} b_{11} + a_{11} b_{12} - a_{14} b_{13} - a_{13} b_{14} \\
c_{13} &= a_{13} b_{11} - a_{14} b_{12} + a_{11} b_{13} - a_{12} b_{14} \\
c_{14} &= a_{14} b_{11} - a_{13} b_{12} - a_{12} b_{13} + a_{11} b_{14}
\end{aligned}$$
(We only need to specify the first row to get the whole matrix in this group). Can you tell what the group is yet? At first, I couldn’t, so let’s try something else. One can check now that this group is actually commutative, and this would be enough to determine what it is given the context of where it came from, but let’s assume we don’t know that. Instead, let’s take the determinant:
$$\mathrm{det}(A) = (a_{11} + a_{12} + a_{13} - a_{14})(a_{11} + a_{12} - a_{13} + a_{14})\\
\times(a_{11} - a_{12} + a_{13} + a_{14})(a_{11} - a_{12} - a_{13} - a_{14})$$
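These identities are easy to confirm numerically. Here is a Python sketch (my own, not from the post) that checks closure under multiplication, commutativity, and the determinant factorization on random integer points:

```python
import random
from itertools import permutations

def mat(a):
    """The matrix in G determined by its first row (a11, a12, a13, a14)."""
    a11, a12, a13, a14 = a
    return [[a11, a12, a13, a14],
            [a12, a11, -a14, -a13],
            [a13, -a14, a11, -a12],
            [a14, -a13, -a12, a11]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det4(M):
    """Determinant via the permutation formula."""
    total = 0
    for s in permutations(range(4)):
        sign = (-1) ** sum(s[i] > s[j] for i in range(4) for j in range(i + 1, 4))
        prod = sign
        for i in range(4):
            prod *= M[i][s[i]]
        total += prod
    return total

random.seed(0)
for _ in range(50):
    a = [random.randint(-5, 5) for _ in range(4)]
    b = [random.randint(-5, 5) for _ in range(4)]
    A, B = mat(a), mat(b)
    C = matmul(A, B)
    assert C == mat(C[0])          # closure: the product has the same shape
    assert C == matmul(B, A)       # the group is commutative
    a11, a12, a13, a14 = a
    assert det4(A) == ((a11 + a12 + a13 - a14) * (a11 + a12 - a13 + a14)
                       * (a11 - a12 + a13 + a14) * (a11 - a12 - a13 - a14))
print("closure, commutativity and the determinant factorization all check out")
```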
More »

Here is an old classic from linear algebra: given an $ n\times n$ matrix $ A = (a_{ij})$, the determinant of $ A$ can be calculated using the permutation formula for the determinant:

$ \det(A) = \sum_{\sigma\in S_n} (-1)^\sigma a_{1\sigma(1)}\cdots a_{n\sigma(n)}$.

Here $ S_n$ denotes the permutation group on $ n$ symbols and $ (-1)^\sigma$ denotes the sign of the permutation $ \sigma$. The computation of the determinant is much more easily done via the ‘minor expansion method’, but this just reduces to the above formula. Now, typically we would never use the above formula to calculate the determinant, but that doesn’t mean it isn’t useful!
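Even so, the formula is straightforward to implement. Here is a direct Python transcription (my own sketch), with the sign of $ \sigma$ computed by counting inversions:

```python
from itertools import permutations

def det_permutation(A):
    """Determinant of a square matrix via the permutation formula."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        # sign of sigma = (-1)^(number of inversions)
        sign = (-1) ** sum(sigma[i] > sigma[j]
                           for i in range(n) for j in range(i + 1, n))
        prod = sign
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

print(det_permutation([[1, 2], [3, 4]]))  # -2
```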

The Lie Algebra of an Algebraic Group

In fact, computing the Lie algebra of $ \mathrm{SL}_n$ over a fixed commutative ring $ k$ happens to be a quick and interesting application of the above determinant formula. First, let us recall the definition of the Lie algebra, or at least one of the many equivalent definitions. Consider the algebra of dual numbers, $ k[\tau]/(\tau^2)$. Each element in this algebra can be written uniquely as $ a + b\tau$: this gives a $ k$-algebra map $ C: k[\tau]/(\tau^2)\to k$ by sending $ a + b\tau$ to $ a$.
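To make this concrete (a Python sketch under my own naming, not from the post): implement the dual numbers and evaluate the permutation formula on $ I + \tau X$. Since $ \tau^2 = 0$, every permutation other than the identity contributes two off-diagonal (pure-$\tau$) factors and vanishes, so the determinant comes out as $ 1 + \tau\,\mathrm{tr}(X)$; hence $ I + \tau X$ lands in $ \mathrm{SL}_n$ exactly when $ X$ has trace zero.

```python
from itertools import permutations

class Dual:
    """Elements a + b*tau of k[tau]/(tau^2)."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b tau)(c + d tau) = ac + (ad + bc) tau, since tau^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def det(M):
    """Permutation formula for the determinant, over the dual numbers."""
    n = len(M)
    total = Dual(0)
    for s in permutations(range(n)):
        sign = (-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
        prod = Dual(sign)
        for i in range(n):
            prod = prod * M[i][s[i]]
        total = total + prod
    return total

X = [[1, 2, 3], [4, 5, 6], [7, 8, -6]]  # trace zero
M = [[Dual(int(i == j), X[i][j]) for j in range(3)] for i in range(3)]
print(det(M).a, det(M).b)  # 1 0: so I + tau*X is in SL_3(k[tau]/(tau^2))
```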
More »

Posted by Jason Polak on 03. August 2013 · Write a comment · Categories: algebraic-geometry, group-theory · Tags: ,

Conventions: $ G$ is an algebraic group over an algebraically closed field $ k$ and we identify $ G$ with $ G(k)$.

Consider the algebraic groups $ \mathbb{G}_a$ and $ \mathbb{G}_m$. They are the only one-dimensional connected groups and they are both solvable. What about two-dimensional connected groups? It turns out that if $ \mathrm{dim} G\leq 2$ then $ G$ is solvable.

For $ \mathrm{dim} G= 3$, this is no longer true, for instance $ \mathrm{SL}_2$ is $ 3$-dimensional but not solvable since it is perfect, i.e. equal to its commutator subgroup. So let’s prove our theorem:

Theorem. Let $ G$ be a connected algebraic group over $ k$. If $ \mathrm{dim} G = 2$ then $ G$ is solvable.

More »

Posted by Jason Polak on 09. June 2013 · Write a comment · Categories: group-theory · Tags: ,


In the post Can You See in Four Dimensions?, we saw some ways of visualising functions plotted in four-dimensional ‘space’. Of course, we used colour and time for two of the dimensions because it is a bit difficult to plot in four actual spatial dimensions!

Here is another example: the orthogonal group $ \mathrm{O}_2(\mathbb{R})$ over the real numbers $ \mathbb{R}$. This is the group of all matrices $ A\in \mathrm{GL}_2(\mathbb{R})$ such that $ AA^t = I_2$, where $ I_2$ is the $ 2\times 2$ identity. Let

$ A = \begin{pmatrix}a & b\\ c & d\end{pmatrix}$

be such a matrix. Then $ a = \pm\sqrt{1 - b^2}$ and $ c = \pm\sqrt{1 - d^2}$. Finally, $ ac + bd = 0$. As long as these conditions are satisfied, $ A$ will indeed be in the orthogonal group.
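These conditions are easy to verify numerically (a Python sketch of my own): every element of $ \mathrm{O}_2(\mathbb{R})$ is a rotation or a reflection, and both families satisfy the three equations above.

```python
import math

def rotation(theta):
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def reflection(theta):
    return [[math.cos(theta),  math.sin(theta)],
            [math.sin(theta), -math.cos(theta)]]

def in_O2(A, eps=1e-12):
    """Check a^2 + b^2 = 1, c^2 + d^2 = 1 and ac + bd = 0, i.e. A*A^t = I."""
    (a, b), (c, d) = A
    return (abs(a * a + b * b - 1) < eps and abs(c * c + d * d - 1) < eps
            and abs(a * c + b * d) < eps)

print(in_O2(rotation(0.7)), in_O2(reflection(1.3)), in_O2([[1, 1], [0, 1]]))
# True True False
```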
More »

Posted by Jason Polak on 07. June 2013 · Write a comment · Categories: algebraic-geometry, group-theory · Tags: , ,

From Highlights 12 and Highlights 13, we have gained quite a bit of information on connected reductive groups $ G$ of semisimple rank 1. Recall, this means that $ G/R(G)$ has rank 1 where $ R(G)$ is the radical of $ G$, which is in turn the connected component of the unique maximal normal solvable subgroup of $ G$.

But wait, why have we been looking at groups of semisimple rank 1 at all? Let’s take a quick look at how we can get a good source of these groups inside a general group $ G$.


Let $ G$ be a connected algebraic group over $ k=\overline{k}$. We are not assuming that $ G$ is reductive or anything else besides this. We start by dividing up the tori of $ G$ into two kinds: the regular tori and the singular tori. Both of these species will be important in our study of algebraic groups.
More »

In Highlights 12, we used some of the equivalent conditions for a connected algebraic group $ G$ over a field $ k=\overline{k}$ to have semisimple rank 1 in the study of reductive groups (these are the groups whose unipotent radical $ R(G)_u$ is trivial).

Precisely, we showed that such a $ G$ must have a semisimple commutator subgroup $ [G,G]$ whose dimension is three, and that we can write

$ G = Z(G)^\circ\cdot [G,G]$

where the $ -\cdot-$ denotes that this is an almost direct product: in other words, the multiplication map $ Z(G)^\circ\times[G,G]\to G$ is surjective with finite kernel.

Let $ T\subseteq G$ be a maximal torus. We will show in this post that $ C_G(T) = T$ for a connected reductive group $ G$ of semisimple rank 1.
More »