Posted by Jason Polak on 21. September 2017 · Categories: modules

Here is one characterisation of commutative rings of Krull dimension zero:

Theorem. A commutative ring $R$ has Krull dimension zero if and only if every element of the Jacobson radical ${\rm Jac}(R)$ of $R$ is nilpotent and the quotient ring $R/{\rm Jac}(R)$ is von Neumann regular.

Recall that a ring $R$ is von Neumann regular if for every $x\in R$ there exists a $y\in R$ such that $xyx = x$. This odd property is equivalent to saying that every $R$-module is flat.
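
To get a feel for the definition, here is a quick brute-force check, sketched in Python (the helper function is mine, just for illustration), that $\Z/6$ is von Neumann regular while $\Z/4$ is not:

```python
# Brute-force von Neumann regularity for Z/n:
# every x must have some y with x*y*x == x (mod n).
def is_von_neumann_regular(n):
    return all(
        any((x * y * x) % n == x for y in range(n))
        for x in range(n)
    )

print(is_von_neumann_regular(6))  # True:  Z/6 is a product of the fields F_2 and F_3
print(is_von_neumann_regular(4))  # False: for x = 2, x*y*x = 4*y = 0 (mod 4), never 2
```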

Here are two examples of what happens when we drop various assumptions in the “if” direction of the theorem:

  1. The ring $\Z_{(p)}$ of integers localised at the prime $(p)$ is an example of a ring such that $R/{\rm Jac}(R)$ is von Neumann regular but ${\rm Jac}(R) = p\Z_{(p)}$ contains no nonzero nilpotent elements. The ring $\Z_{(p)}$ has Krull dimension one.
  2. Another type of example is given by $\Z[[t]]/(t^n)$ for $n\ge 2$, where $\Z[[t]]$ denotes the power series ring with integer coefficients. Unlike in our first example, the Jacobson radical of this ring is the ideal $(t)$, which is also the nilradical (the set of nilpotent elements); but $R/{\rm Jac}(R) \cong \Z$, which is not von Neumann regular and has Krull dimension one.

Note that we were forced to look for counterexamples to the dropped assumptions in the class of infinite rings. That’s because every finite commutative ring has Krull dimension zero.

There are all sorts of notions of dimension that can be applied to rings. Whatever notion you use though, the ones with dimension zero are usually fairly simple compared with the rings of higher dimension. Here we’ll look at three types of dimension and state what the rings of zero dimension look like with respect to each type. Of course, several examples are included.

All rings are associative with identity but not necessarily commutative. Some basic homological algebra is necessary to understand all the definitions.

Global Dimension

The left global dimension of a ring $R$ is the supremum of the projective dimensions of all left $R$-modules. The right global dimension is the same with “left” replaced by “right”. And yes, there are rings whose left and right global dimensions differ.

However, $R$ has left global dimension zero if and only if it has right global dimension zero. So, it makes sense to say that such rings have global dimension zero. Here is their characterisation:

A ring $R$ has global dimension zero if and only if it is semisimple; that is, if and only if it is a finite direct product of full matrix rings over division rings.

Examples of such rings are easy to generate by this characterisation:

  1. Fields and finite products of fields
  2. $M_2(k)$, the ring of $2\times 2$ matrices over a division ring $k$
  3. etc.


Posted by Jason Polak on 19. September 2017 · Categories: homological-algebra, modules

Consider a field $k$. Define an action of $k[x,y]$ on $k[x]$ by $f*g = f(x,x)g(x)$ for all $f\in k[x,y]$ and $g\in k[x]$. In other words, the action is: multiply $f$ and $g$ and then replace every occurrence of $y$ by $x$.

Is $k[x]$ a projective $k[x,y]$-module? Consider first the map $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$. It’s easy to check that this map is in fact a $k[x,y]$-module homomorphism. It would be tempting to try and split this map with the inclusion map $k[x]\to k[x,y]$. But this doesn’t work: this inclusion is not a $k[x,y]$-module homomorphism.

In fact, the $k[x,y]$-module homomorphism $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$ cannot split, simply because there are no nonzero $k[x,y]$-module homomorphisms $k[x]\to k[x,y]$ at all: the element $x - y$ acts as zero on $k[x]$, so it must annihilate the image of any such homomorphism, and $k[x,y]$ is an integral domain. Therefore, $k[x]$ is not projective as a $k[x,y]$-module, using the module structure we gave it.

Here are two more ways to see this:

  1. Through the notion of separability: by definition, $k[x]$ being a projective $k[x,y]\cong k[x]\otimes_k k[x]$-module under the structure that we have defined means that $k[x]$ is a separable $k$-algebra. However, all separable $k$-algebras are finite-dimensional as vector spaces over $k$, whereas $k[x]$ is infinite-dimensional.
  2. Through Seshadri’s theorem: this theorem says that every finitely-generated projective module over $k[x,y]$ is actually free. Therefore, we just have to show that $k[x]$ is not free, because $k[x]$ is certainly finitely generated as a $k[x,y]$-module. But $x^2y - xy^2$ annihilates every element of $k[x]$ (see the sketch after this list), which cannot happen in a nonzero free module.
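
Here is a small sympy sketch verifying that claim about $x^2y - xy^2$ (just a sanity check, not part of the proof):

```python
# Sanity check: x^2*y - x*y^2 acts as zero on k[x],
# where the action is f * g = f(x, x) * g(x).
from sympy import symbols, expand

x, y = symbols('x y')

def act(f, g):
    # multiply, then replace every occurrence of y by x
    return expand((f * g).subs(y, x))

f = x**2 * y - x * y**2
for g in [x**0, x, x**2 + 3*x + 1]:
    assert act(f, g) == 0  # because f(x, x) = x^3 - x^3 = 0
print("x^2*y - x*y^2 annihilates k[x]")
```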
Posted by guest on 27. August 2017 · Categories: math

A guest post by Paul Pierce and Ashley Ross

With the advances in calculator technology, some developmental and college-level math courses are restricting the use of any type of graphing or programmable calculators. This is to help students avoid becoming dependent on their calculators for both simple arithmetic and graphing. So, some teachers are going “old school” and forbidding the use of calculators in the classroom. Therefore, it is imperative that students learn efficient methods for finding important values, as well as graphing functions, without the help of their calculator. One type of function that appears in many courses is the quadratic function, and one of the most critical points on the graph of a quadratic function is the vertex.

Fundamental Concepts of the Graph of a Quadratic Function

For the function $f(x)=ax^2+bx+c$ with $a\not=0$, the graph is a smooth, continuous curve called a parabola. This parabola opens upward if $a > 0$ or opens downward if $a < 0$. The vertex $(h,k)$ of the graph is the only turning point on the parabola, which makes it a critical point. The $y$-coordinate $k$ of the vertex represents the minimum value of the function if $a>0$, or the maximum value of the function if $a<0$.

The point $(h,k)$ may be found using the formulas $h=\frac{-b}{2a}$ and $k=\frac{bh}{2}+c$.
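
The second formula is just the first substituted back into the function: writing $k=f(h)$ with $h=\frac{-b}{2a}$, we get
$$k = ah^2 + bh + c = \frac{b^2}{4a} - \frac{b^2}{2a} + c = -\frac{b^2}{4a} + c = \frac{bh}{2} + c.$$
We give two examples: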

Example 1. For $y=x^2+6x+3$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-6}{2(1)}=-3$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(6)(-3)}{2}+3=-9+3=-6$.

So, the coordinates of the vertex of the parabola are $(-3, -6)$. This vertex is the lowest point on the parabola, which means that $k = -6$ is the minimum value of the function.

Example 2. For $y=-2x^2+8x-5$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-8}{2(-2)}=2$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(8)(2)}{2}-5=8-5=3$.

So, the coordinates of the vertex of the parabola are $(2, 3)$. Note that this vertex is the highest point on the parabola, which illustrates that $k = 3$ is the maximum value of this function.
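
The whole point, of course, is to find the vertex without a calculator, but for checking answers afterwards here is a minimal Python sketch of the two formulas (the helper vertex is just for illustration):

```python
# Sketch: the vertex (h, k) of f(x) = a*x^2 + b*x + c,
# using h = -b/(2a) and k = b*h/2 + c.
def vertex(a, b, c):
    h = -b / (2 * a)
    k = b * h / 2 + c
    return h, k

print(vertex(1, 6, 3))    # (-3.0, -6.0), as in Example 1
print(vertex(-2, 8, -5))  # (2.0, 3.0), as in Example 2
```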


Posted by Jason Polak on 27. August 2017 · Categories: math, modules

Let $R$ be an associative ring with identity. The Jacobson radical ${\rm Jac}(R)$ of $R$ is the intersection of all the maximal left ideals of $R$. So, ${\rm Jac}(R)$ is a left ideal of $R$. It turns out that ${\rm Jac}(R)$ is also the intersection of all the maximal right ideals of $R$, and so ${\rm Jac}(R)$ is in fact a two-sided ideal!

The idea behind the Jacobson radical is that one might be able to explore the properties of a ring $R$ by first looking at the less complicated ring $R/{\rm Jac}(R)$. Since the maximal left ideals of $R/{\rm Jac}(R)$ correspond to the maximal left ideals of $R$ (all of which contain ${\rm Jac}(R)$), the ring $R/{\rm Jac}(R)$ has zero Jacobson radical. Rings $R$ for which ${\rm Jac}(R) = 0$ are often called Jacobson semisimple.

This terminology might be a tad bit confusing because typically, a ring $R$ is called semisimple if every left $R$-module is projective, or equivalently, if every left $R$-module is injective. How does the notion of semisimple differ from Jacobson semisimple? The Wedderburn-Artin theorem gives a classic characterisation of semisimple rings: they are exactly the rings that are finite direct products of full matrix rings over division rings. Since a full matrix ring over a division ring has no nontrivial ideals, the product of such rings must have trivial Jacobson radical. Thus:

A semisimple ring is Jacobson semisimple.

The converse is false: there exist rings that are Jacobson semisimple but not semisimple. For example, let $R$ be an infinite product of fields. Then ${\rm Jac}(R) = 0$. However, $R$ is not semisimple. Why not? If it were, by Wedderburn-Artin it could be written as a finite direct product of full matrix rings over division rings, which would have to be a finite product of fields because $R$ is commutative. But a finite product of fields has only finitely many pairwise orthogonal idempotents, whereas $R$ has infinitely many: take the elements with $1$ in a single coordinate and $0$ everywhere else.

Incidentally, because $R$ is not semisimple, there must exist $R$-modules that are not projective. However, $R$ does have the property that every $R$-module is flat, since any product of fields is von Neumann regular!

Posted by Jason Polak on 06. August 2017 · Categories: math, opinion

A senior mathematician who will remain nameless recently said in a talk, “there is nothing left to prove”. In context, he was referring to the possibility that we are running out of math problems. The people who heard it laughed, and first-year calculus students might disagree. Was it said as a joke?

Because of the infinite nature of mathematics, there will always be new problems. On the other hand, there are only finitely many theorems we’ll ever know; only finitely many that we’ll ever be interested in. Are we close to knowing all the interesting theorems? Is the increasing specialisation of the literature a sign of a future with a thousand subfields each with only one or two devotees?

Truthfully, I don’t think math is running out of problems at all. I think it’s more like good, nonspecialist exposition isn’t really keeping up with the rapid development of mathematics and so we know less and less about what our colleagues are doing. So we should attempt to prevent the future where every person is their own research field. Here are some ways we could do that:

  1. Make part of your introduction in your paper understandable to a much wider range of mathematicians. This will encourage more collaboration and cross-disciplinary understanding. For example, once I was actually told by a journal to cut out a couple of pages from a paper because it was well-known to (probably ten) experts, even though that material was literally not written down anywhere else! Journals should actually encourage good exposition and not a wall of definition-theorem-proof.
  2. Make the first twenty minutes of your talk understandable to undergraduates. Frankly, this is the only way mathematicians (especially young ones) in other fields will actually understand the motivation of your work. How are we supposed to ask good questions when we can’t figure out where our research fits in with the research of others?
  3. Use new avenues of mathematical exposition like blogs and nontechnical articles. Other fields like physics and biology appear in magazines like Scientific American and have an army of people working to make specialised work understandable to the nonspecialist.
  4. Encourage new, simplified proofs or explanations of existing results. And by ‘encourage’, I mean count high-quality, expository papers on the level of original results in determining things like tenure and jobs! There are already journals that publish these types of papers. Chances are, any expository paper will actually help at least as many people as an original result, perhaps more. And there are still hundreds of important papers that are very difficult if not impossible to read (even by many experts), with no superior alternative exposition available.

I think it’s been a long-lived fashion in mathematics to hide the easy stuff in favour of appearing slick ever since one dude tried to hide how he solved the cubic from another dude, and it’s probably something we can give up now.

Posted by Jason Polak on 25. July 2017 · Categories: math

Fomin, Williams, and Zelevinsky (posth.) are preparing a new introductory text on cluster algebras. The first three chapters look elementary enough, and it’s worth a look for those interested in learning this topic.

Posted by Jason Polak on 19. July 2017 · Categories: commutative-algebra

Here’s a classic definition: let $R\subseteq S$ be commutative rings. An element $s\in S$ is called integral over $R$ if $f(s)=0$ for some monic polynomial $f\in R[x]$. It’s classic because adjoining roots of polynomials to base rings goes way back to the ancient pastime of solving polynomial equations.

For example, consider $\Z\subseteq \Z[\sqrt{2}]$. Every element of $\Z[\sqrt{2}]$ is integral over $\Z$, which essentially comes down to the fact that $\sqrt{2}$ satisfies $x^2 – 2$. On the other hand, the only elements of $\Q$ integral over $\Z$ are the integers themselves.

The situation is much different for finite commutative rings. If $R\subseteq S$ are finite rings, then every element of $S$ is integral over $R$. Proof: suppose $s\in S$ and set $T = \{ f(s): f\in R[x]\}$. For each $t\in T$ fix a polynomial $f$ such that $f(s) = t$. The set of all such polynomials is finite so we can define $m$ as the maximum degree of all these polynomials. Then $s^{m+1}\in T$ and so there is an $f$ of degree at most $m$ such that $s^{m+1} – f(s) = 0$. Thus $s$ satisfies the monic polynomial $x^{m+1} – f(x)$. QED.
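
For a concrete instance, take $R = \mathbb{F}_2 \subseteq S = \mathbb{F}_2[t]/(t^2)$ and $s = 1+t$. Then
$$s^2 = (1+t)^2 = 1 + 2t + t^2 = 1,$$
so $s$ satisfies the monic polynomial $x^2 - 1$ (which is $x^2 + 1$ in characteristic two).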

Cool, right? However, this is just a special case of the following theorem: let $R\subseteq S$ be commutative rings. Then $S$ is finitely generated as an $R$-module if and only if $S$ is finitely generated as an $R$-algebra and every element of $S$ is integral over $R$.

Posted by Jason Polak on 28. June 2017 · Categories: math

The term earworm refers to the phenomenon of having music stuck in your head. I don’t know about you, but I often get a mathworm: an idea or question that simply won’t go away. Sometimes a mathworm takes the form of a specific problem that needs solving. Other times it is just a definition or idea that is particularly attractive. What does this have to do with research?

First, let me ask you a question: what’s the best way to find new problems and develop a research plan? I’ve received lots of advice on this, but one strategy has helped me more than any of it: listen to the mathworm!

If there’s something in the back of your mind that won’t go away, dig it up and satisfy your curiosity about it so you can put it to rest. This strategy has the following two consequences:

  1. Anything that is a continual presence in your mind is probably the right kind of problem for your brain. This means you’ll actually be working on math you like.
  2. A nagging problem or phenomenon in the back of your mind is a distraction from doing other things, so getting rid of it will clear up space for something new. It’s a bit like how actually listening to the song stuck in your head can make an earworm go away.

So go ahead, listen to the mathworm!

Posted by Jason Polak on 28. June 2017 · Categories: elementary

The $n$th harmonic number $h(n)$ is defined as
$$h(n) = \sum_{i=1}^n 1/i$$
The harmonic series is the associated series $\sum_{i=1}^\infty 1/i$ and it diverges. There are probably quite a few interesting ways to see this. My favourite is a simple comparison test:
$$1/1 + 1/2 + 1/3 + 1/4 + 1/5 + \cdots\\ \geq 1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + \cdots
\\= 1 + 1/2 + 1/2 + 1/2 + \cdots$$
(here each $1/n$ has been replaced by $1/2^k$, where $2^k$ is the smallest power of two with $n\le 2^k$), and the series $1 + 1/2 + 1/2 + \cdots$ is divergent. But while the harmonic series diverges, it does so rather slowly. It does so slowly enough that if you were to numerically compute the harmonic numbers (the partial sums of the harmonic series), you might be unconvinced that it actually does diverge:

  • $h(10) = 2.92896825396825\dots$
  • $h(100) = 5.18737751763962\dots$
  • $h(1000) = 7.48547086055035\dots$
  • $h(10000) = 9.78760603604438\dots$
  • $h(100000) = 12.0901461298634\dots$
  • $h(1000000) = 14.3927267228657\dots$
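
These values can be reproduced in a few lines of Python, sketched here with exact rational arithmetic from the fractions module:

```python
# Sketch: the harmonic number h(n) as an exact fraction,
# printed as a decimal approximation.
from fractions import Fraction

def h(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

print(float(h(10)))    # 2.92896825396825..., matching the list above
print(float(h(1000)))  # 7.48547086055035..., matching the list above
```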

Summing $1 + 1/2 + \cdots + 1/n$ exactly like this and then writing out a decimal approximation to the resulting fraction takes a while, though. How can we at least approximate the harmonic numbers more cheaply? The first thought surely must be to compare the series to the integral
$$\int_1^n 1/x dx = \log(n)$$
where $\log$ denotes the natural logarithm. A moment’s consideration with Riemann sums shows that we have the inequality
$$\int_1^n 1/x dx \le h(n) \le \int_1^n 1/x dx + 1$$
So we’ve come up with a pretty good approximation to the harmonic numbers, which only gets better in relative terms as $n$ gets bigger:
$$\log(n)\le h(n) \le \log(n) + 1$$
This, incidentally, is another explanation of why the harmonic series diverges, since $\log(n)\to\infty$. And it’s much faster to compute on a simple scientific calculator. Here is an example computation: we have already said that $h(1000000) = 14.3927267228657\dots$, while $\log(1000000) = 13.8155105579643\dots$ Pretty good, right?
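
As a quick numerical check of these bounds, here is a short Python sketch:

```python
# Sketch: check log(n) <= h(n) <= log(n) + 1 numerically.
from math import log

def h(n):
    # sum the smallest terms first to reduce floating-point error
    return sum(1.0 / i for i in range(n, 0, -1))

for n in (10, 10000, 1000000):
    print(log(n), h(n), log(n) + 1)
```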