Posted by Jason Polak on 07. January 2018 · 1 comment · Categories: advice, math

From browsing my publications, you might notice that my research area changed after my PhD. My thesis was on orbital integrals (Langlands-related), but now I’m working on more classical topics in associative ring theory like separable algebras and Grothendieck groups. In this short article I will explain why I switched areas, in the hope that it will help other young researchers make good decisions.

Let’s go way back to my PhD, in which I solved a problem in the Langlands program. For readers who may not know about Langlands, let’s just vaguely say that it studies generalisations of modular forms through representations of matrix groups, and that it is motivated by number theory and reciprocity laws. As a graduate student this sounded great because I like number theory and algebra. Writing a thesis was not overly difficult either and I think I did a good job with it.

So, I was excited to get a postdoc in Australia. Besides Australia being an awesome country with cool parrots, it would be a great place to further my research. I even came up with new problems connected to my thesis on which to work. But despite being well set up, I didn’t make much progress. Although I had one paper from my thesis published, when I tried to publish the second part of my thesis, it was rejected multiple times on the grounds that it was not significant enough. From a career perspective, I wasn’t worried because I already had a few other papers (in different areas) either sent out or in progress. But as a young researcher trying to interact with a forbiddingly technical area, it was undeniably discouraging.

Nevertheless, I continued working on more problems in Langlands. In the end, however, my interest in the subject began to wane quickly. It mostly wasn’t because of the paper rejection; I’ve actually had another paper rejected in associative rings, and I still happily work on associative rings. It was the fact that although the Langlands field is rooted in number theory going back to Gauss, to me it felt completely disconnected from those origins when I was actually ‘doing’ the math. It probably also didn’t help that tremendous amounts of ‘advanced’ algebraic geometry would be necessary for some of the problems I was thinking about, and after giving it a good try, I found that perverse sheaves, stacks, ind-schemes and gerbes were really not to my liking. Hey, I’m happy that some people can enjoy them for what they are, but it turns out it’s not my style.

So I moved on to something else. Actually, one of my many current projects is dimly related to my thesis, but it is far more computational (in fact, it involves writing an actual algorithm in Python) and it is not at all in the style of the traditional Langlands literature. After this project is done however, I don’t plan on continuing in this field. With my new research areas, I am much more satisfied with my work. The only downside is that I now have to find a new community and in particular, new people who are willing to write me letters of recommendation, which has turned out to be much harder than I thought. Still, the switch was absolutely worth it, just because I believe in the math I do again.

There is a lesson in this story, and it is this: as a young researcher, you should not take a few chosen problems as representative of the general flavour of a research area. Such problems may be interesting on their own, but they are inevitably woven into a highly specialised research microcosm. And the whole research microcosm is something you should consider as well, which includes the general direction of the field and the community and attitude surrounding it. This applies especially to the highly abstract fields that seem to be in vogue these days such as geometric representation theory, Langlands, and higher category theory. In these fields, while a senior researcher can select and distill certain easier problems that would be suitable for many students, only a small fraction of those students will actually have the interest and personality to succeed in progressing to the serious problems of that field on their own. In this regard I’d like to emphasise that it absolutely does take more than just pure brains to succeed. Personality and style are at least as important, and these are things that you may not be fully aware of as a grad student.

So my advice to the young researcher is: know the field you are getting into. Look at some papers in the field and ask yourself if you want to write similar ones. Don’t just be captivated by the ultimate, overarching motivation and instead look at the actual nitty-gritty details of the math and culture. For it is the details you will be spending time with, not the motivation.

Posted by Jason Polak on 05. January 2018 · Write a comment · Categories: commutative-algebra

Let $R$ be a commutative ring and $(p)$ be a principal prime ideal. What can be said about the intersection $\cap_{k=1}^\infty (p)^k$? Let’s abbreviate this $\cap (p)^k$ (I like to use the convention that when limits are not specified, then the operation like intersection is taken over all possible indices).

Let’s try an example. For the integers, every principal prime is of the form $(p)$ where $p$ is a prime number or zero. And $(p)^k = (p^k)$, so $\cap (p)^k = (0)$. In fact, if $R$ is any Noetherian integral domain then $\cap (p)^k = 0$, a consequence of the Krull intersection theorem.

If $R$ is not an integral domain then $\cap (p)^k$ is not necessarily zero. For example, let $S$ be an integral domain and let $R = S\times S$. In $S\times S$, the prime ideal generated by the single element $p = (1,0)$ is its own $k$-th power for all $k$. So $\cap (p)^k = (p)\not= 0$.

Of course, it is impossible in an integral domain to have $(p) = (p)^2$ for a principal prime $(p)$ unless $p = 0$: if $p = up^2$ then $p(1 - up) = 0$, and since a nonzero prime element cannot be a unit, $p$ must be zero. However, it is possible in an integral domain to have $P = P^2$ for a nonzero prime ideal $P$, which is then necessarily not principal. Just take a “polynomial” ring over a field where the exponents are allowed to be arbitrary nonnegative rationals; that is, a monoid ring of the form $k[\Q^+]$ where $\Q^+$ is the monoid of all nonnegative rational numbers under addition. In the case of $k[\Q^+]$, a prime such that $P^2 = P$ would be the prime $P$ generated by all elements of the form $x^q$ where $q \gt 0$ is a rational number.

I will leave the reader with the following question:

Does there exist an integral domain, necessarily non-Noetherian, that contains a principal prime $(p)$ with $\cap (p)^k\not= 0$?
Posted by Jason Polak on 04. January 2018 · Write a comment · Categories: commutative-algebra

For a commutative ring, what does the partially ordered set (=poset) of primes look like? I already talked a little about totally ordered sets of primes, but what about in general?

For a general partially ordered set $S$ there are two immediate questions that come to mind:

  1. Does there exist a commutative ring whose poset of primes is $S$?
  2. Does there exist a commutative ring whose poset of primes contains an embedded copy of $S$?

For example, consider this partially ordered set:

I draw the partially ordered sets so that “higher” is larger. This partially ordered set can be embedded into the poset of prime ideals of the integers.

What about the totally ordered set $\Z$ itself? It cannot be the poset of primes of any commutative ring, because it has no minimal or maximal element, whereas both the intersection and the union of a chain of primes are again prime ideals.

Can the closed interval $[0,1]$ be embedded in a poset of primes? Alas, no. Even though $[0,1]$ now has a lower and an upper bound, it is a dense ordered set, and a poset of primes cannot contain a “dense part”. More precisely, suppose that $P\subset Q$ are two distinct prime ideals and let $\{P_i\}$ be a maximal chain of prime ideals between $P$ and $Q$ (one exists by Zorn’s lemma). Let $x\in Q\setminus P$ and let
$$P' = \cup \{ P_i : x\not\in P_i\}\\
Q' = \cap \{ P_i : x\in P_i\}$$
Then $P'$ and $Q'$ are two distinct prime ideals such that $P'\subset Q'$ and such that there is no prime strictly between $P'$ and $Q'$ (any such prime could be added to the chain, contradicting maximality). So, $[0,1]$ indeed cannot appear in any poset of prime ideals of a commutative ring.

Posted by Jason Polak on 02. January 2018 · Write a comment · Categories: commutative-algebra

A finitely-generated module over a principal ideal domain $R$ is always isomorphic to $R^n\oplus R/a_1\oplus\cdots\oplus R/a_m$ where $n$ is a nonnegative integer and $a_i\in R$ for $i=1,\dots,m$. This is called the structure theorem for modules over a principal ideal domain. Examples of principal ideal domains include fields, $\Z$, $\Z[\sqrt{2}]$, and the polynomial ring $k[x]$ when $k$ is a field.

If $a\in R$ is a nonzero non-unit, then $R/a$ is not projective: $a$ annihilates every element of $R/a$, whereas a free module over an integral domain is torsion-free, so $R/a$ cannot be a direct summand of any free module. Therefore, we can conclude from the structure theorem that any finitely-generated projective module over a principal ideal domain is free. Don’t get your hopes up though: over more general rings there are many examples of non-free projective modules.

But let’s stick with principal ideal domains. It is actually true that every projective module over a principal ideal domain is free. Kaplansky in [1] proved the following even stronger theorem:

Theorem. If $R$ is an integral domain in which every finitely generated ideal is principal, then every projective $R$-module is free.


Posted by Jason Polak on 30. December 2017 · Write a comment · Categories: commutative-algebra

Imposing structure on the poset of prime ideals of a ring $R$ is one way to get a handle on the ring itself. The poset of prime ideals of $R$ is simply a fancy term for the set of prime ideals of $R$, partially ordered by inclusion. Usually this set is not totally ordered: in the ring of integers $\Z$, for instance, the prime ideals $(2)$ and $(3)$ cannot be compared by inclusion. It seems to me that requiring the poset of primes to be totally ordered is a strong condition indeed.

Here is one type of domain in which the prime ideals are totally ordered: the valuation domain.

Posted by Jason Polak on 27. December 2017 · 2 comments · Categories: commutative-algebra

Let $R$ be a commutative ring. Two idempotents $e$ and $f$ are called orthogonal if $ef = 0$. The archetypal example is $(0,1)$ and $(1,0)$ in a product ring $R\times S$.

Let $e$ and $f$ be orthogonal idempotents. Then the ideal $(e,f)$ is equal to the ideal $(e + f)$. To see this, first note that $(e + f)\subseteq (e,f)$. On the other hand, using $e^2 = e$ and $ef = 0$:
$$(1-e)(e + f) = e + f - e - ef = f$$
Therefore $f \in (e + f)$. Switching $e$ and $f$ in this calculation shows that $e\in (e + f)$. Using the fact that $e + f$ is also an idempotent, we see by induction that if $e_1,\dots,e_n$ are pairwise orthogonal idempotents, then the ideal $(e_1,\dots,e_n)$ is generated by the single element $e_1 + \cdots + e_n$.

Now suppose $e$ and $f$ are idempotents that are not necessarily orthogonal. Then $(e,f)$ is still a principal ideal. To see this, consider the element $e - ef$. The calculation
$$(e - ef)^2 = e - 2ef + ef = e - ef$$
shows that $e - ef$ is an idempotent. Furthermore, $(e,f) = (e - ef,f)$, and $e - ef$ and $f$ are orthogonal idempotents since $(e - ef)f = ef - ef^2 = 0$. By what we discussed in the previous paragraph, $(e,f) = (e - ef,f)$ is generated by $e - ef + f$.
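This identity is easy to sanity-check by brute force in a small commutative ring such as $\Z/60$. The following sketch (the helper `ideal` and all names are mine, not from the post) verifies $(e,f) = (e - ef + f)$ for every pair of idempotents:

```python
from itertools import product

def ideal(gens, n):
    """The ideal of Z/nZ generated by gens: all Z/n-linear combinations."""
    return {sum(c * g for c, g in zip(coeffs, gens)) % n
            for coeffs in product(range(n), repeat=len(gens))}

n = 60
idempotents = [e for e in range(n) if e * e % n == e]
for e in idempotents:
    for f in idempotents:
        g = (e - e * f + f) % n       # claimed single generator of (e, f)
        assert g * g % n == g         # g is itself an idempotent
        assert ideal([e, f], n) == ideal([g], n)
print("checked", len(idempotents) ** 2, "pairs in Z/60")
```

Since $60 = 4\cdot 3\cdot 5$, the Chinese remainder theorem gives $2^3 = 8$ idempotents here, so the loop checks 64 pairs.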

Everything we did assumed $R$ was commutative. But what if we foray into the land of noncommutative rings? Is it still true that a left-ideal generated by finitely many idempotents is also generated by a single idempotent? Any ideas?

Posted by Jason Polak on 24. December 2017 · Write a comment · Categories: analysis

Series hold endless fascination. To converge or not to converge? That is the question.

Let’s take the series $1 + 1/2 + 1/3 + \cdots$. It’s called the harmonic series, and it diverges. That’s because it is greater than the series
$$1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + \cdots = 1 + 1/2 + 1/2 + 1/2 + \cdots,$$
which clearly diverges.

The harmonic series diverges rather slowly, however. In fact, by comparing with the integral of $1/x$, we see that $1 + 1/2 + \cdots + 1/N$ can never be more than $\log(N) + 1$. For example, the sum of the first two hundred million terms is about 19.691044.
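The logarithmic bound is easy to watch numerically. Here is a quick sketch (the function name `harmonic` is mine, not from the post):

```python
import math

def harmonic(N):
    """Partial sum 1 + 1/2 + ... + 1/N of the harmonic series."""
    return sum(1.0 / k for k in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    # The integral comparison gives harmonic(N) <= log(N) + 1.
    print(N, round(harmonic(N), 6), round(math.log(N) + 1, 6))
```

In fact the difference $H_N - \log N$ converges to the Euler–Mascheroni constant $\gamma \approx 0.5772$, which is why two hundred million terms only reach about 19.69.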

On the other hand, the sum of the reciprocals of the squares $1 + 1/4 + 1/9 + 1/16 + \cdots$ converges, which can be seen by comparing it to the integral of $1/x^2$ from one to infinity. In fact, $1 + 1/4 + 1/9 + \cdots = \pi^2/6$ as proved by Leonhard Euler. Here’s a question for you: does $\sum_{n=1}^\infty 1/n^s$ converge or diverge for $1 < s < 2$?

Even though the sum of reciprocals of squares converges, the sum of reciprocals of primes $1/2 + 1/3 + 1/5 + 1/7 + 1/11 + \cdots$ diverges. One could say by the convergence-divergence metric that the primes are more numerous than the squares. Also, in a similar vein to the harmonic series, the sum $1/2 + 1/3 + 1/5 + \cdots + 1/p - \log\log p$ stays bounded as $p$ grows, a theorem of Mertens.

Let’s go back to that harmonic series: $1 + 1/2 + 1/3 + 1/4 + \cdots$. Take this series, and delete every term whose denominator has the digit “9” somewhere in its decimal expansion. The resulting series converges! Can you prove it?
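No spoilers for the proof, but the claim is fun to probe numerically. This sketch (the function name is mine) drops every denominator containing a 9; the resulting series is known as the Kempner series and converges to roughly 22.92, though the partial sums crawl there extremely slowly:

```python
def no_nine_partial(N):
    """Partial sum of 1/n over n = 1..N, skipping any n whose decimal
    expansion contains the digit 9."""
    return sum(1.0 / n for n in range(1, N + 1) if '9' not in str(n))

for N in (10**3, 10**5, 10**6):
    # Partial sums increase but stay well under the limit ~22.92.
    print(N, round(no_nine_partial(N), 6))
```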

Posted by Jason Polak on 23. December 2017 · Write a comment · Categories: analysis

Did you know that the closed interval $[0,1]$ cannot be partitioned into two sets $A$ and $B$ such that $B = A + t$ for some real number $t$? Of course, the half-open interval $[0,1)$ can so be partitioned: $A = [0,1/2)$ and $t = 1/2$. Why is this? I will leave the full details to the reader but I am sure they can be reconstructed without much difficulty using the following sketch:

Assume such a partition can be so made, and assume without loss of generality that $t \gt 0$. Then we must have $[0,t)\subseteq A$ and $(1-t,1]\subseteq B$. This shows that $t \lt 1/2$. Now, by assumption, $B = A + t$. Therefore, $[t,2t)\subseteq B$ since $[0,t)\subseteq A$ and similarly $(1-2t,1-t]\subseteq A$. Because $A$ and $B$ are disjoint, this implies that $t \lt 1/4$. We can continue to play this game, which shows that $t$ is strictly less than $1/(2n)$ for any $n$ and hence $t = 0$. But then $B = A$, contradicting disjointness. This shows that such a partition cannot be made.

Pretty good. But did you know that the closed interval $[0,1]$ cannot be partitioned into two nonempty, disjoint open sets? Neither can any interval, whether open, closed, or half-open. In the language of topology, intervals of real numbers are connected. Proof?

Posted by Jason Polak on 20. December 2017 · Write a comment · Categories: commutative-algebra

Over a finite field, there are of course only finitely many irreducible monic polynomials. But how do you count them? Let $q = p^n$ be a power of a prime and let $N_q(d)$ denote the number of monic irreducible polynomials of degree $d$ over $\F_q$. The key to finding $N_q(d)$ is the following fact: the product of all the monic, irreducible polynomials of degree $d$ with $d \mid n$ in the finite field $\mathbb{F}_q$ is the polynomial
$$x^{q^n} – x.$$
So let’s say $f_1,f_2,\dots, f_k$ are all the irreducible monic polynomials of degree $d$ with $d\mid n$. By taking degrees on both sides of the equation $x^{q^n} -x = f_1f_2\cdots f_k$, we get the formula
$$q^n = \sum_{d\mid n} dN_q(d).$$
Hey this is pretty good! For example, if $q = 2$ and $n = 3$ then the formula reads
$$8 = N_2(1) + 3N_2(3)$$
Now, $N_q(1)$ is always easy to figure out. All monic linear polynomials are irreducible so $N_q(1) = q$. Therefore, $N_2(3) = 2$. In fact, these two polynomials are: $x^3 + x + 1$ and $x^3 + x^2 + 1$. Okay, what about if $q = 3$ and $n = 6$? Then our formula tells us that
$$3^6 = N_3(1) + 2N_3(2) + 3N_3(3) + 6N_3(6).$$
So we now have to recurse: $N_3(1) = 3$, and the formula for $n = 2$ gives $2N_3(2) = 3^2 - 3 = 6$. Similarly, for $n = 3$ we get $3N_3(3) = 3^3 - 3 = 24$. Therefore $6N_3(6) = 729 - 3 - 6 - 24 = 696$, and so $N_3(6) = 116$. It would not be too hard to write such a recursive algorithm and I encourage the reader to try it.
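One way such a recursion might look in Python (a sketch; the function name is mine). It peels the contributions of the proper divisors of $n$ off $q^n$:

```python
def num_irreducible(q, n):
    """N_q(n): the number of monic irreducible polynomials of degree n
    over the field with q elements, via q^n = sum over d|n of d*N_q(d)."""
    total = q ** n
    for d in range(1, n):              # proper divisors of n
        if n % d == 0:
            total -= d * num_irreducible(q, d)
    return total // n                  # the division is always exact

print(num_irreducible(2, 3))  # 2, matching the example above
print(num_irreducible(3, 6))  # 116
```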

Posted by Jason Polak on 29. November 2017 · Write a comment · Categories: number-theory

Lately I’ve been thinking about primes, and I’ve plotted a few graphs to illustrate some beautiful ideas involving primes. Even though you might not always associate your work with primes, they are always quietly haunting the background.

Abundance of primes in an arithmetic progression

Let’s start out with the oddest prime of all: 2. Get it? But after that, all the odd primes are either of the form $4k + 1$ or $4k + 3$. For fixed $x$, are there more primes less than $x$ of the form $4k + 1$ or of the form $4k + 3$? Let’s write $\pi(4k + r,x)$ for the number of primes less than or equal to $x$ of the form $4k + r$. Here is a graph of the difference $\pi(4k+3,x) - \pi(4k+1,x)$:

Pretty neat, right? It looks like this difference is wildly erratic, reaching zero after a short while with a bit of a fight, and then for a really good long while the primes of the form $4k + 3$ win out. So you might be tempted to think that primes of the form $4k + 3$ become more and more abundant as $x$ increases. That would be wrong. In fact, John E. Littlewood proved that $\pi(4k + 3,x) - \pi(4k + 1,x)$ switches sign infinitely often!
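The data behind such a graph is straightforward to generate with a sieve. A possible sketch (the function names are mine, not from the post):

```python
def primes_upto(x):
    """Sieve of Eratosthenes: all primes <= x."""
    flags = bytearray([1]) * (x + 1)
    flags[0:2] = b'\x00\x00'
    for i in range(2, int(x ** 0.5) + 1):
        if flags[i]:
            flags[i * i::i] = bytearray(len(flags[i * i::i]))
    return [i for i in range(2, x + 1) if flags[i]]

def race(x):
    """pi(4k+3, x) - pi(4k+1, x): positive when the 4k+3 primes are ahead."""
    diff = 0
    for p in primes_upto(x):
        if p % 4 == 3:
            diff += 1
        elif p % 4 == 1:
            diff -= 1            # the prime 2 falls in neither class
    return diff

print(race(100))  # 2: primes of the form 4k+3 lead at x = 100
```

Plotting `race(x)` against $x$ reproduces the erratic picture described above.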

Of course, that must mean there are infinitely many primes of both types, and that’s true and a special case of Dirichlet’s theorem: there are infinitely many primes in any arithmetic progression $ax + b$ whenever $a$ and $b$ are relatively prime.
More »