# Morita Duality and the Center of Full Matrix Algebras

Posted by Jason Polak on 20. October 2016 · Categories: math

Let $R$ be a commutative ring, and $M_n(R)$ be the ring of $n\times n$ matrices with coefficients in $R$. Did you know that the center of $M_n(R)$ is the set of scalar matrices? How can we prove this? Should we use matrix multiplication? No way! Let’s use Morita theory instead!

Morita theory works even if $R$ is not commutative. I’ll state the gist of Morita theory; more details can be found in the post “The Double Dual and Morita Duality”. Let $M$ be a left $R$-module. Write $M^* = {\rm Hom}(M,R)$ for the dual of $M$. We have an evaluation homomorphism $M^*\otimes_R M\to R$ given on pure tensors by $f\otimes m\mapsto f(m)$. The image of this homomorphism is an ideal of $R$ called the trace ideal of $M$. If this ideal is all of $R$, then $M$ is called a generator. If in addition to being a generator, $M$ is finitely generated and projective, then $M$ is called a progenerator.
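The point of the post is to avoid matrix multiplication, but the claim about the center is at least easy to sanity-check by brute force. Here is a small Python check (my own illustration, not the Morita argument) over the finite ring $\Z/5$, where the whole matrix ring can be enumerated:

```python
from itertools import product

p = 5  # work over Z/5 so that M_2(Z/5) can be enumerated

def mat_mul(X, Y):
    # 2x2 matrix multiplication over Z/p
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p for j in range(2))
        for i in range(2)
    )

# All 5^4 = 625 matrices in M_2(Z/5), as nested tuples
matrices = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)]

# X is central iff it commutes with every matrix in M_2(Z/5)
center = [X for X in matrices
          if all(mat_mul(X, Y) == mat_mul(Y, X) for Y in matrices)]

# The scalar matrices a*I for a in Z/5
scalars = [((a, 0), (0, a)) for a in range(p)]
print(sorted(center) == sorted(scalars))  # True
```

Of course this proves nothing in general; the Morita-theoretic argument is what handles arbitrary commutative $R$.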

# Two Versions of Nakayama’s Lemma

Posted by Jason Polak on 13. October 2016 · Categories: commutative-algebra

Nakayama’s lemma probably comes in as many flavours as ice cream.

In this post we’ll review a couple of its forms and deduce some consequences. Before we continue, recall that in an associative ring $R$ with unity, the Jacobson radical is the intersection of all the maximal left ideals of $R$. It is easy to show that this intersection coincides with the intersection of all the maximal right ideals of $R$; thus, the Jacobson radical is a two-sided ideal.

The form of Nakayama’s lemma I like best is:

Nakayama’s Lemma #1. Let $R$ be a ring, $I\subset R$ an ideal contained in the Jacobson radical of $R$, and $M$ a finitely-generated left $R$-module. If $IM = M$ then $M = 0$.
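The finite-generation hypothesis cannot be dropped. A standard example (my addition, not from the post): let $R = \Z_{(p)}$, the integers localized at $p$, so that the Jacobson radical of $R$ is $I = pR$, and let $M = \Q$. Then
$$IM = p\Q = \Q = M$$
but $M\neq 0$; of course, $\Q$ is not finitely generated as a $\Z_{(p)}$-module.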

# Number of arXiv Papers By Area in the Last Ten Months

Posted by Jason Polak on 11. October 2016 · Categories: math

I subscribe to arXiv RSS feeds through Thunderbird, which gives me some interesting stats about the number of papers posted to each area. Here are the numbers in a handy graph for the last ten months or so:

Of course, I’m ignoring problems like cross-posts, feed items missed in months when I didn’t retrieve them for a while, etc. No doubt more sophisticated data could be presented somehow.

# A Perfectoid Field is Deeply Ramified

Posted by Jason Polak on 08. October 2016 · Categories: math

A normed field is a field $F$ together with a multiplicative norm $|\cdot|:F\to \R_{\geq 0}$; we assume the norm is nonarchimedean, so that $\Ocl_F := \{ x\in F : |x| \leq 1\}$ is a ring, called the ring of integers of $F$. The ring of integers of $F$ is a local ring, with maximal ideal $\{ x\in F : |x| < 1\}$.

A perfectoid field is a complete nonarchimedean normed field $(F,|\cdot|)$ that has residue characteristic $p > 0$, is not discretely valued, and such that ${\rm Fr}:\Ocl_F/p\to\Ocl_F/p$ is surjective, where ${\rm Fr}(x) = x^p$ is the Frobenius. An example of a perfectoid field is $\F_p((t))(t^{1/p^\infty})^\wedge$ — the field obtained from the Laurent series $\F_p((t))$ by appending all $p$-power roots of $t$ and then taking the completion.
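In this example, surjectivity of Frobenius can be seen directly (a quick check I am adding): since $p = 0$ in $\F_p((t))$, we have $\Ocl_F/p = \Ocl_F$, and an element of $\Ocl_F$ is a convergent sum $\sum_i a_i t^{e_i}$ with $a_i\in\F_p$ and exponents $e_i\in\Z[1/p]_{\geq 0}$. A $p$-th root is obtained by dividing each exponent by $p$:
$$\Big(\sum_i a_i t^{e_i/p}\Big)^p = \sum_i a_i^p t^{e_i} = \sum_i a_i t^{e_i},$$
using that the Frobenius is additive in characteristic $p$ and that $a^p = a$ for $a\in\F_p$.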

Given a normed field $F$, define $\Ocl_{F^\flat}$ to be the inverse limit of the system $\cdots\to \Ocl_F/p\to\Ocl_F/p$ where the maps are the Frobenius, and $F^\flat$ to be the fraction field of $\Ocl_{F^\flat}$.

If $F = \Q_p$, then $F^\flat = \F_p$, and these two fields are very different from each other. But if $F$ is perfectoid, then $F^\flat$ is also perfectoid, and it turns out that their absolute Galois groups are isomorphic.

One can look at perfectoid fields through the notion of being deeply ramified: a normed field $F$ is called deeply ramified if for every finite extension $L/F$, the $\Ocl_L$-module of Kähler differentials $\Omega_{\Ocl_L/\Ocl_F}$ is zero. In this post, we will show that a perfectoid field is deeply ramified, following the proof in Kedlaya’s paper “New Methods for $(\Phi,\Gamma)$-Modules”, with expanded details.

# Self Injective Integral Domains are Fields: Two Proofs

Posted by Jason Polak on 05. October 2016 · Categories: commutative-algebra, homological-algebra

For finite commutative rings, integral domains are the same as fields. This isn’t too surprising, because an integral domain $R$ is a ring such that for every nonzero $a\in R$ the $R$-module homomorphism $R\to R$ given by $r\mapsto ra$ is injective. Fields are those rings for which all these maps are surjective. But injective and surjective coincide for endofunctions of finite sets. Therefore, domains are the same thing as fields for finite rings.

But did you know that there is another class of commutative rings for which fields are the same as integral domains? Indeed, for self-injective rings, fields are the same as domains. By definition, a commutative ring $R$ is self-injective if $R$ is injective as an $R$-module. Note: for noncommutative rings, which we don’t consider here, there is a difference between left and right self-injective; that is, an arbitrary ring may be injective as a left module over itself, but not right self-injective, and vice-versa.

In other words, self-injective integral domains are fields. And, the proof is sort of along the lines of the one for finite rings:

Proof. Let $a\in R$ be nonzero. Then the multiplication map $R\xrightarrow{a} R$ is injective, so we may define a map $aR\to R$ by $ar\mapsto r$ (well-defined by injectivity). Since $R$ is injective as an $R$-module, this map extends to a map $h:R\to R$; being a map $R\to R$, it is given by multiplication by some $b\in R$. Therefore $1 = h(a\cdot 1) = ba$. QED.

# Submodules of the Form R/P

Posted by Jason Polak on 05. October 2016 · Categories: commutative-algebra

Let $R$ be a ring and $M$ be a nonzero left $R$-module. If we take a nonzero $m\in M$, then the map $R\to M$ given by $r\mapsto rm$ has some kernel $I$, which is a left ideal of $R$ and thus $M$ admits a left $R$-submodule isomorphic to $R/I$. So, arbitrary modules contain submodules isomorphic to quotients of $R$ by left ideals.

In the commutative world, a remarkable fact is that sometimes you can ensure that an $R$-module contains a submodule isomorphic to $R/P$ for some prime ideal $P$! This happens when $R$ is Noetherian and $M$ is finitely generated. In that case, by the theory of associated primes, the set of zero divisors of $R$ on $M$ (those elements $r\in R$ such that $rm = 0$ for some nonzero $m\in M$) is a union of primes, each of which is the annihilator of a nonzero element of $M$.

Therefore, if $R$ is Noetherian and $M$ is nonzero and finitely generated, you can always find an $m\in M$ such that the map $R\to M$ given by $r\mapsto rm$ has kernel $P$ where $P$ is a prime ideal! Comes in handy on occasion.
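As a toy illustration (my own, not from the post): take $R = \Z$ and $M = \Z/6$. The annihilator of $m\in M$ is the ideal of $\Z$ generated by $6/\gcd(6,m)$, and the prime annihilators that occur are $(2)$ and $(3)$; for instance $m = 3$ has annihilator $(2)$, so the submodule $\{0,3\}\subset M$ is isomorphic to $R/(2)$:

```python
from math import gcd

def ann_generator(m, n=6):
    # The annihilator of m in Z/n, as an ideal of Z, is (n // gcd(n, m))
    return n // gcd(n, m)

def is_prime(d):
    return d > 1 and all(d % q != 0 for q in range(2, d))

# Annihilators of the nonzero elements of M = Z/6
anns = {m: ann_generator(m) for m in range(1, 6)}
print(anns)  # {1: 6, 2: 3, 3: 2, 4: 3, 5: 6}

# The prime annihilators that occur: e.g. the map r -> 3r from Z to Z/6
# has kernel (2), giving a submodule isomorphic to Z/(2)
prime_anns = sorted({d for d in anns.values() if is_prime(d)})
print(prime_anns)  # [2, 3]
```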

Can you find a counterexample: a Noetherian ring $R$ and an infinitely generated module $M$ such that $M$ contains no submodule isomorphic to $R/P$ for any prime ideal $P$?

# Example: Cohen-Macaulay Ring that is Not Regular

Posted by Jason Polak on 21. September 2016 · Categories: commutative-algebra

Suppose $R$ is a Noetherian local ring with unique maximal ideal $m\subset R$. We say that $R$ is regular if the Krull dimension of $R$ is equal to the dimension of $m/m^2$ as an $R/m$-vector space. Regular local rings arise as the local rings of varieties over a field at smooth points, and this gives an abundant supply of them: for a field $k$, the rings $k[x,y]_{(x,y)}$ and $k[x,y]_{(x-a,y-a^2)}/(y - x^2)$ for instance are regular local rings.

# Pop Quiz: Fixed Rings and Fraction Fields

Posted by Jason Polak on 10. September 2016 · Categories: math

Let $R$ be an integral domain and let $f:R\to R$ be an automorphism of $R$; note that $f$ extends uniquely to an automorphism of $\mathrm{Frac}(R)$. Is it always true that $\mathrm{Frac}(R^f) = [\mathrm{Frac}(R)]^f$, where $\mathrm{Frac}$ denotes the fraction field and $(-)^f$ denotes the ring of fixed elements under $f$?

# Homomorphisms from G_a to G_m

Posted by Jason Polak on 23. August 2016 · Categories: group-theory

Let $k$ be a commutative ring. Let $\G_a$ be the group functor $\G_a(R) = R$ and $\G_m$ the group functor $\G_m(R) = R^\times$, both over the base ring $k$. What are the homomorphisms $\G_a\to \G_m$? In other words, what are the characters of $\G_a$? This depends on the ring $k$, of course!

The representing Hopf algebra for $\G_a$ is $k[x]$. And, the representing Hopf algebra for $\G_m$ is $k[x,x^{-1}]$. Homomorphisms $\G_a\to \G_m$ correspond to Hopf algebra maps $k[x,x^{-1}]\to k[x]$. Such a map is a $k$-algebra homomorphism that satisfies the additional conditions for being a Hopf algebra homomorphism.
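Concretely (spelling out those conditions in my own words): such a map sends $x$ to some polynomial $u = u(x)$, which must be a unit of $k[x]$, and compatibility with the comultiplications $\Delta(x) = x\otimes x$ on $k[x,x^{-1}]$ and $\Delta(x) = x\otimes 1 + 1\otimes x$ on $k[x]$, together with the counit, amounts to
$$u(x+y) = u(x)\,u(y) \text{ in } k[x,y], \qquad u(0) = 1, \qquad u\in k[x]^\times.$$
If $k$ is reduced, the units of $k[x]$ are just the units of $k$, so the only character is trivial; nilpotents give more, e.g. over $k = k_0[\epsilon]/(\epsilon^2)$ the polynomial $u(x) = 1 + \epsilon x$ is a nontrivial character.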

# Trace of an Endomorphism on the Symmetric Algebra

Posted by Jason Polak on 14. August 2016 · Categories: math

Let $V$ be a vector space over a field $k$ and $S = \oplus S_n$ the symmetric algebra on $V$. If $f$ is a $k$-endomorphism of $V$, then $f$ extends to a linear operator $f_n$ on $S_n$ for each $n$. What is the trace of $f_n$ on $S_n$? There’s a surprisingly elegant way to compute this, which encapsulates the combinatorics of computing $f_n$ directly into a formal power series, and which I learnt from Bourbaki’s “Groupes et algèbres de Lie”:
$$\sum_{n=0}^\infty {\rm Tr}(f_n)T^n = \det(1 - fT)^{-1}$$
The proof of this identity is not difficult. First, we may as well assume $k$ is algebraically closed, since the trace is unaffected by extending scalars. In this case we can choose a basis $v_1,\dots, v_k$ of $V$ such that $f$ is in lower-triangular form, with diagonal entries $\lambda_1,\dots,\lambda_k$.

Choose as a basis of $S_n$ the monomials $v_1^{i(1)}\cdots v_k^{i(k)}$ with $\sum_j i(j) = n$, ordered lexicographically. Then $f_n$ is again lower triangular, and its diagonal entries are the products $\lambda_1^{i(1)}\cdots\lambda_k^{i(k)}$ (this part requires a little thought), so
$${\rm Tr}(f_n) = \sum_{i(1) + \cdots + i(k) = n} \lambda_1^{i(1)}\cdots\lambda_k^{i(k)}$$
Hence,
$$\sum_{n=0}^\infty {\rm Tr}(f_n)T^n = \sum_{n=0}^\infty\sum_{i(1) + \cdots + i(k)=n}\lambda_1^{i(1)}\cdots\lambda_k^{i(k)}T^n\\ =\Big(\sum_{n=0}^\infty \lambda_1^nT^n\Big) \cdots \Big(\sum_{n=0}^\infty \lambda_k^nT^n\Big)\\ =(1-\lambda_1T)^{-1}\cdots (1-\lambda_kT)^{-1} =\det(1 - fT)^{-1}$$
The key observation is that collecting the terms $\lambda_1^{i(1)}\cdots\lambda_k^{i(k)}T^n$ with $i(1) + \cdots + i(k) = n$ is exactly how power series are multiplied. To give an example, consider the matrix
$$\begin{pmatrix}1 & 2\\-1 & 4\end{pmatrix}$$
on a $2$-dimensional vector space. Then $\det(1 - fT) = (1- 2T)(1 - 3T)$, since the eigenvalues of $f$ are $2$ and $3$. To compute its inverse we must find the following product of power series:
$$(1 + 2T + 4T^2 + 8T^3 + \cdots)(1 + 3T + 9T^2 + 27T^3 + \cdots)$$
For example, ${\rm Tr}(f_3) = 8 + 4\cdot 3 + 2\cdot 9 + 27 = 65$. In general,
$${\rm Tr}(f_n) = \sum_{j=0}^n 2^{n-j}3^j.$$
Getting power series to keep track of the combinatorics really makes the computation straightforward; for example,
$${\rm Tr}(f_{100}) = 1546132562196033990574082188840405015112916155251$$
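These numbers can be checked mechanically. Here is a short script (a sketch of my own) that computes ${\rm Tr}(f_n)$ for the example above by brute-force enumeration of exponents, and compares it against the geometric-series closed form $3^{n+1} - 2^{n+1}$, which follows by summing the formula above:

```python
from itertools import product

def trace_sym(n, eigs=(2, 3)):
    # Tr(f_n) is the complete homogeneous symmetric polynomial h_n in the
    # eigenvalues: the coefficient of T^n in prod_i (1 - eig_i * T)^{-1}.
    total = 0
    for exps in product(range(n + 1), repeat=len(eigs)):
        if sum(exps) == n:
            term = 1
            for lam, e in zip(eigs, exps):
                term *= lam ** e
            total += term
    return total

# Matches the worked example Tr(f_3) = 65 ...
print(trace_sym(3))  # 65

# ... and the closed form sum_{j=0}^{n} 2^(n-j) 3^j = 3^(n+1) - 2^(n+1)
print(trace_sym(100) == 3**101 - 2**101)  # True
```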