# On a characterisation of Krull dimension zero rings

Posted by Jason Polak on 21. September 2017 · Categories: modules

Here is one characterisation of commutative rings of Krull dimension zero:

Theorem. A commutative ring $R$ has Krull dimension zero if and only if every element of the Jacobson radical ${\rm Jac}(R)$ of $R$ is nilpotent and the quotient ring $R/{\rm Jac}(R)$ is von Neumann regular.

Recall that a ring $R$ is von Neumann regular if for every $x\in R$ there exists a $y\in R$ such that $xyx = x$. This odd property is equivalent to saying that every $R$-module is flat.
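
The defining identity $xyx = x$ can be checked by brute force in small rings. Here is a minimal Python sketch (the helper name `is_von_neumann_regular` is mine, not standard terminology in any library) verifying the property for $\Z/6\Z$, which is a product of the fields $\Z/2\Z$ and $\Z/3\Z$, and watching it fail for $\Z/4\Z$:

```python
# Brute-force check of the von Neumann regularity condition in Z/nZ:
# for every x there must exist some y with x*y*x == x (mod n).

def is_von_neumann_regular(n):
    """Check the defining property xyx = x for every x in Z/nZ."""
    for x in range(n):
        if not any((x * y * x) % n == x for y in range(n)):
            return False
    return True

print(is_von_neumann_regular(6))  # True: 6 is squarefree
print(is_von_neumann_regular(4))  # False: 2*y*2 = 4y is 0 mod 4, never 2
```

More generally, $\Z/n\Z$ is von Neumann regular exactly when $n$ is squarefree, i.e. when it is a finite product of fields.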

Here are two examples of what happens when we drop various assumptions in the “if” direction of the theorem:

1. The ring $\Z_{(p)}$ of integers localised at the prime $(p)$ is an example of a ring such that $R/{\rm Jac}(R)$ is von Neumann regular (it is the field $\Z/p\Z$) but ${\rm Jac}(R) = (p)$ contains no nonzero nilpotent elements. The ring $\Z_{(p)}$ has Krull dimension one.
2. Another type of example is given by $\Z[[t]]/(t^n)$ for $n\ge 2$, where $\Z[[t]]$ denotes the power series ring with integer coefficients. Unlike our first example, the Jacobson radical of this ring is the ideal $(t)$, which is also the nilradical (the set of nilpotent elements), but $R/{\rm Jac}(R) = \Z$, which is not von Neumann regular and has Krull dimension one.

Note that we were forced to look for counterexamples to dropped assumptions in the class of infinite rings. That’s because every finite commutative ring has Krull dimension zero.

# Dimension zero rings for three types of dimension

Posted by Jason Polak on 20. September 2017 · Categories: homological-algebra, model-theory

There are all sorts of notions of dimension that can be applied to rings. Whatever notion you use though, the ones with dimension zero are usually fairly simple compared with the rings of higher dimension. Here we’ll look at three types of dimension and state what the rings of zero dimension look like with respect to each type. Of course, several examples are included.

All rings are associative with identity but not necessarily commutative. Some basic homological algebra is necessary to understand all the definitions.

## Global Dimension

The left global dimension of a ring $R$ is the supremum over the projective dimensions of all left $R$-modules. The right global dimension is the same with “left” replaced by “right”. And yes, there are rings where the left and right global dimensions differ.

However, $R$ has left global dimension zero if and only if it has right global dimension zero. So, it makes sense to say that such rings have global dimension zero. Here is their characterisation:

A ring $R$ has global dimension zero if and only if it is semisimple; that is, if and only if it is a finite direct product of full matrix rings over division rings.

Examples of such rings are easy to generate by this characterisation:

1. Fields and finite products of fields
2. $M_2(k)$, the ring of $2\times 2$ matrices over a division ring $k$
3. Finite products of these, such as $M_2(k)\times M_3(k)$

# How to choose a PhD program

Posted by Jason Polak on 19. September 2017 · Categories: advice

Choosing where to get your PhD is an important decision. If you continue on to academia, your PhD might be the longest time you spend at any one institution until you get a permanent position. The most obvious choice is to apply to the high-ranking schools. However, you should consider far more than that. Here, we’ll look at some of the important factors to consider, with the context of mathematics in mind. That said, most of what I say applies to other fields as well.

## Represented research areas

Unlike choosing an undergraduate program, where the curriculum doesn’t differ much around the world (though it certainly can vary greatly in strength or intensity), a PhD will be on a very specialised topic. So, if you go to a school where analysis and statistics are the main topics represented and you like algebra, you probably won’t like it. This can be worse for those places where you don’t have to choose an advisor until the second year. So I suggest you look at the represented research areas on departmental websites and see what catches your interest. Unfortunately, some math department websites look like they were coded on a Super Nintendo, if that were even possible. So:

Make sure someone is actually doing something you’re interested in at prospective schools!

If you’re at the undergraduate level and not sure of your interests yet, it could be a good idea to consider a master’s program first before starting a PhD. I enjoyed doing a master’s degree first, even though in the long run it is more expensive.

## Total school atmosphere

If you’re lucky enough to be near some schools you’re interested in, you should visit them, meet some professors, and even sit in on some classes and departmental seminars. Just walk around and see what it’s like. Some schools have a much nicer atmosphere than others. You should also get a sense of the surrounding city. This is true especially if you are a very independent worker: having an enjoyable city will in fact make working much easier. Conversely, living in a place you dislike for several years is quite draining.

Sadly, living temporarily in cities you don’t like is very probable in at least one stage of climbing the academic ladder.

# Is it a projective module?

Posted by Jason Polak on 19. September 2017 · Categories: homological-algebra, modules

Consider a field $k$. Define an action of $k[x,y]$ on $k[x]$ by $f*g = f(x,x)g(x)$ for all $f\in k[x,y]$ and $g\in k[x]$. In other words, the action is: multiply $f$ and $g$ and then replace every occurrence of $y$ by $x$.

Is $k[x]$ a projective $k[x,y]$-module? Consider first the map $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$. It’s easy to check that this map is in fact a $k[x,y]$-module homomorphism. It would be tempting to try and split this map with the inclusion map $k[x]\to k[x,y]$. But this doesn’t work: this inclusion is not a $k[x,y]$-module homomorphism.

In fact, the $k[x,y]$-module homomorphism $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$ cannot split simply because there are no nonzero $k[x,y]$-module homomorphisms $k[x]\to k[x,y]$. Therefore, $k[x]$ is not projective as a $k[x,y]$-module, using the module structure we gave it.

Here are two more ways to see this:

1. Through the notion of separability: by definition, $k[x]$ being a projective $k[x,y]\cong k[x]\otimes_k k[x]$-module under the structure that we have defined means that $k[x]$ is a separable $k$-algebra. However, all separable $k$-algebras are finite-dimensional as vector spaces over $k$, whereas $k[x]$ is infinite-dimensional.
2. Through Seshadri’s theorem: this theorem says that every finitely-generated projective module over $k[x,y]$ is actually free. Therefore, we just have to show that $k[x]$ is not free, because $k[x]$ is certainly finitely generated as a $k[x,y]$-module. But the element $x^2y - xy^2$ acts as multiplication by $x^2\cdot x - x\cdot x^2 = 0$, so it annihilates all of $k[x]$, which cannot happen in a nonzero free module.
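
The annihilation claim in the second item is easy to check by machine as well as by hand. Here is a small Python sketch (representing two-variable polynomials as coefficient dictionaries; the helper name `specialise` is made up for this illustration) confirming that $x^2y - xy^2$ becomes the zero polynomial under the substitution $y \mapsto x$, so it acts as zero on all of $k[x]$:

```python
# The k[x,y]-action on k[x] sends f(x, y) to multiplication by f(x, x).
# A polynomial in x and y is stored as a dict {(i, j): coeff} meaning
# sum of coeff * x^i * y^j.

def specialise(f):
    """Return f(x, x) as a dict {degree: coeff} in the single variable x."""
    out = {}
    for (i, j), c in f.items():
        out[i + j] = out.get(i + j, 0) + c
    return {d: c for d, c in out.items() if c != 0}

f = {(2, 1): 1, (1, 2): -1}   # the element x^2*y - x*y^2
print(specialise(f))          # {} : it acts as multiplication by zero
```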

# Strong Nilpotence and the Jacobson Radical

Posted by Jason Polak on 01. September 2017 · Categories: ring-theory

In the previous post we saw the following definition for a ring $R$: An element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero. Why introduce this notion?

Well, did you know that every finite integral domain is a field? If $R$ is an integral domain and $a\in R$ is nonzero, then the multiplication map $R\to R$ given by $x\mapsto ax$ is injective. If $R$ is finite, then it must also be surjective so $a$ is invertible!
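
The injective-implies-surjective argument can be made concrete in $\Z/7\Z$. A Python sketch (the helper name `inverse_in_Zp` is hypothetical, chosen for this illustration):

```python
# In a finite integral domain such as Z/7Z, for nonzero a the map
# x -> a*x is injective, hence surjective, so 1 has a preimage: an
# inverse of a.

def inverse_in_Zp(a, p):
    """Find a^{-1} in Z/pZ by using surjectivity of x -> a*x."""
    image = {(a * x) % p for x in range(p)}
    assert len(image) == p   # injective on a finite set => surjective
    return next(x for x in range(p) if (a * x) % p == 1)

for a in range(1, 7):
    print(a, inverse_in_Zp(a, 7))
```

Every nonzero element of $\Z/7\Z$ turns out to be invertible, so $\Z/7\Z$ is a field, as the argument predicts.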

Another way of stating this neat fact is that if $R$ is any ring and $P$ is a prime ideal of $R$ such that $R/P$ is finite, then $P$ is also a maximal ideal. A variation of this idea is that every prime ideal in a finite commutative ring is actually maximal. Yet another is that finite commutative rings have Krull dimension zero.

# Nilpotent and Strongly Nilpotent

Posted by Jason Polak on 31. August 2017 · Categories: ring-theory

Let $R$ be an associative ring. An element $r\in R$ is called nilpotent if $r^n = 0$ for some $n$. There is a stronger notion: an element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero.

How are these two related? It is always the case that a strongly nilpotent element is nilpotent, because if $r$ is strongly nilpotent then the sequence $r,r^2,r^4,r^8,\dots$ vanishes. However, the element
$$\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$$
in any $2\times 2$ matrix ring is nilpotent but not strongly nilpotent. Notice how we had to use a noncommutative ring here—that’s because for commutative rings, a nilpotent element is strongly nilpotent!
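
Both claims about this matrix can be verified directly. Here is a Python sketch using pure-Python $2\times 2$ matrix multiplication; the sequence choice $r_{n+1} = r_n\, e_{21}\, r_n$, which lies in $r_n R r_n$, is one witness among many:

```python
def matmul(A, B):
    """Multiply 2x2 matrices given as tuples of row tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

E12 = ((0, 1), (0, 0))   # the matrix from the post
E21 = ((0, 0), (1, 0))
ZERO = ((0, 0), (0, 0))

print(matmul(E12, E12) == ZERO)   # True: E12 squares to zero, so nilpotent

# But E12 is not strongly nilpotent: with r_{n+1} = r_n * E21 * r_n,
# the sequence never reaches zero -- it is constant.
r = E12
for _ in range(10):
    r = matmul(matmul(r, E21), r)
print(r == E12)                   # True: the sequence stays at E12
```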

# Comparing Methods for Finding the Vertex of a Parabola

Posted by guest on 27. August 2017 · Categories: math

A guest post by Paul Pierce and Ashley Ross

With the advances in calculator technology, some developmental and college-level math courses are restricting the use of any type of graphing or programmable calculators. This is to help students avoid becoming dependent on their calculators for both simple arithmetic and graphing. So, some teachers are going “old school” and forbidding the use of calculators in the classroom. Therefore, it is imperative that students learn efficient methods for finding important values, as well as graphing functions, without the help of their calculator. One type of function that appears in many courses is the quadratic function, and one of the most critical points on the graph of a quadratic function is the vertex.

## Fundamental Concepts of the Graph of a Quadratic Function

For the function $f(x)=ax^2+bx+c$ with $a\not=0$, the graph is a smooth, continuous curve called a parabola. This parabola opens upward if $a > 0$ or opens downward if $a < 0$. The vertex $(h,k)$ of the graph is the only turning point on the parabola, which makes it a critical point. The $y$-coordinate $k$ of the vertex represents the minimum value of the function if $a>0$, or the maximum value of the function if $a<0$.

The point $(h,k)$ may be found using the formulas $h=\frac{-b}{2a}$ and $k=\frac{bh}{2}+c$, which begin to show the importance of the vertex. We give two examples:

Example 1. For $y=x^2+6x+3$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-6}{2(1)}=-3$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(6)(-3)}{2}+3=-9+3=-6$.

So, the coordinates of the vertex of the parabola are $(-3, -6)$. Observe from the graph that this vertex is the lowest point on the parabola, which means that $k = -6$ is the minimum value of the function.

Example 2. For $y=-2x^2+8x-5$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-8}{2(-2)}=2$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(8)(2)}{2}-5=8-5=3$.

So, the coordinates of the vertex of the parabola are $(2, 3)$. Note that this vertex is the highest point on the graph, which illustrates that $k = 3$ is the maximum value of this function.
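
The two worked examples can be double-checked numerically. A quick Python sketch of the formulas $h=\frac{-b}{2a}$ and $k=\frac{bh}{2}+c$ (the function name `vertex` is mine, for illustration):

```python
def vertex(a, b, c):
    """Vertex (h, k) of y = a x^2 + b x + c, via h = -b/(2a), k = b h/2 + c."""
    h = -b / (2 * a)
    k = b * h / 2 + c
    return h, k

print(vertex(1, 6, 3))     # (-3.0, -6.0): Example 1
print(vertex(-2, 8, -5))   # (2.0, 3.0): Example 2
```

Note that $k = \frac{bh}{2} + c$ agrees with the more familiar $k = c - \frac{b^2}{4a}$, since $\frac{bh}{2} = \frac{b}{2}\cdot\frac{-b}{2a} = -\frac{b^2}{4a}$.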

# Semisimple and Jacobson Semisimple

Posted by Jason Polak on 27. August 2017 · Categories: math, modules

Let $R$ be an associative ring with identity. The Jacobson radical ${\rm Jac}(R)$ of $R$ is the intersection of all the left maximal ideals of $R$. So, ${\rm Jac}(R)$ is a left ideal of $R$. It turns out that the Jacobson radical of $R$ is also the intersection of all the right maximal ideals of $R$, and so ${\rm Jac}(R)$ is also an ideal!

The idea behind the Jacobson radical is that one might be able to explore the properties of a ring $R$ by first looking at the less complicated ring $R/{\rm Jac}(R)$. Since the ideals of $R$ containing ${\rm Jac}(R)$ correspond to the ideals of $R/{\rm Jac}(R)$, the ring $R/{\rm Jac}(R)$ has zero Jacobson radical. Often the rings $R$ for which ${\rm Jac}(R) = 0$ are called Jacobson semisimple.
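
For finite commutative examples the definition can be computed directly. Here is a Python sketch (the helper name `jacobson_radical_Zn` is made up for this illustration) computing ${\rm Jac}(\Z/n\Z)$ as the intersection of the maximal ideals $(p)$ for the primes $p$ dividing $n$:

```python
# In Z/nZ the maximal ideals are (p) for the primes p dividing n,
# so the Jacobson radical is the set of elements divisible by all of them.

def jacobson_radical_Zn(n):
    """Compute Jac(Z/nZ) as a set of residues."""
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    return {r for r in range(n) if all(r % p == 0 for p in primes)}

print(jacobson_radical_Zn(12))   # {0, 6}: the primes 2 and 3 divide 12
print(jacobson_radical_Zn(30))   # {0}: 30 is squarefree, Jacobson semisimple
```

So $\Z/12\Z$ has nonzero Jacobson radical, while $\Z/30\Z$, a product of fields, is Jacobson semisimple.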

This terminology might be a tad bit confusing because typically, a ring $R$ is called semisimple if every left $R$-module is projective, or equivalently, if every left $R$-module is injective. How does the notion of semisimple differ from Jacobson semisimple? The Wedderburn-Artin theorem gives a classic characterisation of semisimple rings: they are exactly the rings that are finite direct products of full matrix rings over division rings. Since a full matrix ring over a division ring has no nontrivial ideals, the product of such rings must have trivial Jacobson radical. Thus:

A semisimple ring is Jacobson semisimple.

The converse is false: there exists a ring that is Jacobson semisimple but not semisimple. For example, let $R$ be an infinite product of fields. Then ${\rm Jac}(R) = 0$. However, $R$ is not semisimple. Why not? If it were, by Wedderburn-Artin it could also be written as a finite product of full matrix rings over division rings, which must be a finite product of fields because $R$ is commutative. But a finite product of fields only has finitely many pairwise orthogonal idempotents, whereas $R$ has infinitely many.

Incidentally, because $R$ is not semisimple, there must exist $R$-modules that are not projective. However, $R$ does have the property that every $R$-module is flat!

# Are we running out of problems?

Posted by Jason Polak on 06. August 2017 · Categories: math, opinion

A senior mathematician who will remain nameless recently said in a talk, “there is nothing left to prove”. In context, he was referring to the possibility that we are running out of math problems. People who heard laughed, and first-year calculus students might disagree. Was it said as a joke?

Because of the infinite nature of mathematics, there will always be new problems. On the other hand, there are only finitely many theorems we’ll ever know; only finitely many that we’ll ever be interested in. Are we close to knowing all the interesting theorems? Is the increasing specialisation of the literature a sign of a future with a thousand subfields each with only one or two devotees?

Truthfully, I don’t think math is running out of problems at all. I think it’s more like good, nonspecialist exposition isn’t really keeping up with the rapid development of mathematics and so we know less and less about what our colleagues are doing. So we should attempt to prevent the future where every person is their own research field. Here are some ways we could do that:

1. Make part of your introduction in your paper understandable to a much wider range of mathematicians. This will encourage more collaboration and cross-disciplinary understanding. For example, once I was actually told by a journal to cut out a couple of pages from a paper because it was well-known to (probably ten) experts, even though that material was literally not written down anywhere else! Journals should actually encourage good exposition and not a wall of definition-theorem-proof.
2. Have the first twenty minutes of your talk understandable by undergraduates. Because frankly, this is the only way mathematicians (especially young ones) in other fields will actually understand the motivation of your work. How are we supposed to ask good questions when we can’t figure out where our research fits in with the research of others?
3. Use new avenues of mathematical exposition like blogs and nontechnical articles. Other fields like physics and biology appear in magazines like Scientific American and have an army of people working to make specialised work understandable to the nonspecialist.
4. Encourage new, simplified proofs or explanations of existing results. And by ‘encourage’, I mean count high-quality, expository papers on the level of original results in determining things like tenure and jobs! There are already journals that publish these types of papers. Chances are, any expository paper will actually help at least as many people as an original result, perhaps more. And there are still hundreds of important papers that are very difficult if not impossible to read (even by many experts), with no superior alternative exposition available.

I think it’s been a long-lived fashion in mathematics to hide the easy stuff in favour of appearing slick ever since one dude tried to hide how he solved the cubic from another dude, and it’s probably something we can give up now.

# Check out this preliminary text on cluster algebras

Posted by Jason Polak on 25. July 2017 · Categories: math

Fomin, Williams, and Zelevinsky (posth.) are preparing a new introductory text on cluster algebras. The first three chapters look elementary enough, and it’s worth a look for those interested in learning this topic.