Posted by Jason Polak on 27 August 2017 · Categories: math, modules

Let $R$ be an associative ring with identity. The Jacobson radical ${\rm Jac}(R)$ of $R$ is the intersection of all the maximal left ideals of $R$. So, ${\rm Jac}(R)$ is a left ideal of $R$. It turns out that the Jacobson radical of $R$ is also the intersection of all the maximal right ideals of $R$, and so ${\rm Jac}(R)$ is in fact a two-sided ideal!
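As a concrete finite illustration (my addition, not from the post), one can brute-force the Jacobson radical of $\mathbb{Z}/n$ using the standard characterisation that $r \in {\rm Jac}(R)$ if and only if $1 - sr$ is a unit for every $s \in R$:

```python
# Brute-force the Jacobson radical of Z/n via the characterization:
# r lies in Jac(R) iff 1 - s*r is a unit for every s in R.
from math import gcd

def jacobson_radical_zn(n):
    # The units of Z/n are the residues coprime to n.
    units = {x for x in range(n) if gcd(x, n) == 1}
    return sorted(r for r in range(n)
                  if all((1 - s * r) % n in units for s in range(n)))

print(jacobson_radical_zn(12))  # [0, 6] -- the ideal (6) = (2) ∩ (3)
```

For $\mathbb{Z}/12$ this recovers the intersection of the maximal ideals $(2)$ and $(3)$, namely the multiples of $6$.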

The idea behind the Jacobson radical is that one might be able to explore the properties of a ring $R$ by first looking at the less complicated ring $R/{\rm Jac}(R)$. Since the maximal left ideals of $R$ all contain ${\rm Jac}(R)$ and correspond to the maximal left ideals of $R/{\rm Jac}(R)$, the ring $R/{\rm Jac}(R)$ has zero Jacobson radical. Rings $R$ for which ${\rm Jac}(R) = 0$ are often called Jacobson semisimple.

This terminology can be confusing because typically, a ring $R$ is called semisimple if every left $R$-module is projective, or equivalently, if every left $R$-module is injective. How does semisimplicity differ from Jacobson semisimplicity? The Wedderburn-Artin theorem gives a classic characterisation of semisimple rings: they are exactly the finite direct products of full matrix rings over division rings. Since a full matrix ring over a division ring has no nonzero proper two-sided ideals, and the Jacobson radical is always a proper two-sided ideal, such a finite product must have trivial Jacobson radical. Thus:

A semisimple ring is Jacobson semisimple.

The converse is false: there exists a ring that is Jacobson semisimple but not semisimple. For example, let $R$ be an infinite product of fields. Then ${\rm Jac}(R) = 0$. However, $R$ is not semisimple. Why not? If it were, by Wedderburn-Artin it could also be written as a finite product of full matrix rings over division rings, which must be a finite product of fields because $R$ is commutative. But a finite product of fields only has finitely many pairwise orthogonal idempotents, whereas $R$ has infinitely many.
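To make the idempotent count concrete (a finite stand-in of my own, not from the post): in a product of $k$ fields the idempotents are exactly the tuples with entries $0$ or $1$, so there are $2^k$ of them. For instance, $\mathbb{Z}/30 \cong \mathbb{F}_2\times\mathbb{F}_3\times\mathbb{F}_5$ by the Chinese remainder theorem:

```python
# Idempotents of Z/30, a product of three fields by CRT.
# Each idempotent is a 0/1 vector in F_2 x F_3 x F_5, so there are 2^3 = 8.
idempotents = [x for x in range(30) if (x * x) % 30 == x]
print(len(idempotents), idempotents)  # 8 [0, 1, 6, 10, 15, 16, 21, 25]
```

In an infinite product of fields, by contrast, the "indicator" tuples with a single $1$ already give infinitely many pairwise orthogonal idempotents.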

Incidentally, because $R$ is not semisimple, there must exist $R$-modules that are not projective. However, $R$ does have the property that every $R$-module is flat!
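The reason every module is flat: a product of fields is von Neumann regular (for every $a$ there exists $x$ with $axa = a$), and a ring is von Neumann regular if and only if every module over it is flat. A quick sanity check on the finite model $\mathbb{Z}/30$ (my illustration, not from the post):

```python
# Von Neumann regularity check in Z/30, a product of fields by CRT:
# every element a has a quasi-inverse x with a*x*a == a.
n = 30
for a in range(n):
    assert any((a * x * a) % n == a for x in range(n))
print("Z/30 is von Neumann regular")
```

Componentwise the quasi-inverse is just the inverse where the component is nonzero and anything where it is zero, which is why any product of fields, finite or infinite, is von Neumann regular.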

In Highlights 9 of this series, we showed that for an algebraic group $ G$ and a closed subgroup $ H\subseteq G$, we can always choose a representation $ G\to\mathrm{GL}(V)$ with a line $ L\subseteq V$ whose stabiliser is $ H$. In turn, this allows us to identify the quotient $ G/H$ with the orbit of the class $ [L]$ in the projective space $ \mathbf{P}(V)$; this orbit satisfies the universal property for quotients, thereby giving us a sensible variety structure on $ G/H$.

In this post, we specialise to the case of a Borel subgroup $ B\leq G$; that is, $ B$ is maximal amongst the connected solvable subgroups of $ G$. Such a subgroup is necessarily closed!

The fact that will allow us to study Borel subgroups is the fixed point theorem: a connected solvable group that acts on a nonempty complete variety has a fixed point. By choosing a representation $ G\to \mathrm{GL}(V)$ with a line $ L\subseteq V$ whose stabiliser is $ B$, we can identify $ G/B$ with a quasiprojective variety. However, in this case $ G/B$ is actually projective. Here is a short sketch:
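One standard way the sketch can go (following the usual line of argument, as in Humphreys' *Linear Algebraic Groups* §21.3; a reconstruction, not necessarily the argument the post had in mind):

Assume first that $ B$ has maximal dimension among the Borel subgroups of $ G$, and choose $ V$ and $ L$ as above so that $ G/B\cong G\cdot[L]$. The closure $ X$ of the orbit $ G\cdot[L]$ in $ \mathbf{P}(V)$ is a projective $ G$-variety, and by the closed orbit lemma it contains a closed, hence complete, orbit $ Y$. Applying the fixed point theorem to $ B$ acting on $ Y$ produces a point of $ Y$ whose stabiliser contains $ B$; a comparison of stabilisers and dimensions then shows that $ Y$ must be the orbit $ G\cdot[L]$ itself. Hence $ G\cdot[L]$ is closed in $ \mathbf{P}(V)$, so $ G/B$ is projective. The case of an arbitrary Borel subgroup follows from conjugacy of Borel subgroups.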

In the previous post, we saw that if $ G\times X\to X$ is an algebraic group acting on a variety $ X$ and $ F\subseteq k[X]$ is a finite-dimensional subspace, then there exists a finite-dimensional subspace $ E\subseteq k[X]$ with $ E\supseteq F$ such that $ E$ is invariant under translations.

Recall that if $ g\in G$ and $ f\in k[X]$, then the translation $ \tau_g$ is given by $ \tau_g(f)(y) = f(g^{-1}y)$, so that $ E$ being invariant under translations means that $ \tau_g(E) = E$ for all $ g\in G$. Now, let’s use the method outlined in the previous post to actually construct a three-dimensional representation of $ G = \mathrm{SL}_2(k)$.

The Example

In this setting we specialise to the case where $ G$ is an algebraic group acting on itself via multiplication: the multiplication $ m:G\times G\to G$ corresponds to the comultiplication $ \Delta:k[G]\to k[G]\otimes_kk[G]$ of the Hopf algebra $ k[G]$. Of course, in this case we will need to actually choose some finitely generated Hopf $ k$-algebra as our ring of functions of $ \mathrm{SL}_2(k)$. Let’s use $ k[\mathrm{SL}_2] = k[T_1,T_2,T_3,T_4]/(T_1T_4 - T_2T_3 - 1)$. Thus we think of elements of $ \mathrm{SL}_2(k)$ as homomorphisms $ k[\mathrm{SL}_2]\to k$, corresponding to the matrix:

$ \begin{pmatrix}T_1 & T_2 \\ T_3 & T_4\end{pmatrix}$

The comultiplication map is then easily checked to be dual to matrix multiplication on these coordinates.
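Concretely (a standard computation: $ \Delta(f)(g,h) = f(gh)$, and each coordinate of a matrix product is bilinear in the entries), on the generators the comultiplication is:

$ \Delta(T_1) = T_1\otimes T_1 + T_2\otimes T_3, \qquad \Delta(T_2) = T_1\otimes T_2 + T_2\otimes T_4,$

$ \Delta(T_3) = T_3\otimes T_1 + T_4\otimes T_3, \qquad \Delta(T_4) = T_3\otimes T_2 + T_4\otimes T_4.$

For instance, the $(1,1)$-entry of a product $ gh$ is $ g_{11}h_{11} + g_{12}h_{21}$, which is exactly the first formula evaluated at $ (g,h)$.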