Posted by Jason Polak on 19. September 2017 · Categories: advice

Choosing where to get your PhD is an important decision. If you continue on to academia, your PhD might be the longest time you spend at any one institution until you get a permanent position. The most obvious choice is to apply to the highest-ranking schools, but you should consider far more than that. Here, we’ll look at some of the important factors to consider, with the context of mathematics in mind, though most of what I say applies to other fields as well.

Represented research areas

Unlike choosing an undergraduate program, where the curriculum doesn’t differ much around the world (though it certainly can vary greatly in strength or intensity), a PhD will be on a very specialised topic. So, if you go to a school where analysis and statistics are the main areas represented and you like algebra, you probably won’t enjoy it there. This is even worse at places where you don’t have to choose an advisor until the second year. So I suggest you look at the represented research areas on departmental websites and see what catches your interest. Unfortunately, some math department websites look like they were coded on a Super Nintendo, if that were even possible. So:

Make sure someone is actually doing something you’re interested in at prospective schools!

If you’re at the undergraduate level and not sure of your interests yet, it could be a good idea to consider a master’s program first before starting a PhD. I enjoyed doing a master’s degree first, even though in the long run it is more expensive.

Total school atmosphere

If you’re lucky enough to live near some schools you’re interested in, you should visit them, meet some professors, and even sit in on some classes and departmental seminars. Just walk around and see what it’s like. Some schools have a much nicer atmosphere than others. You should also get a sense of the surrounding city. This is especially true if you are a very independent worker: living in an enjoyable city will in fact make working much easier. Conversely, living in a place you dislike for several years is quite draining.

Sadly, living temporarily in cities you don’t like is very probable in at least one stage of climbing the academic ladder.

Posted by Jason Polak on 19. September 2017 · Categories: homological-algebra, modules

Consider a field $k$. Define an action of $k[x,y]$ on $k[x]$ by $f*g = f(x,x)g(x)$ for all $f\in k[x,y]$ and $g\in k[x]$. In other words, the action is: multiply $f$ and $g$ and then replace every occurrence of $y$ by $x$.

Is $k[x]$ a projective $k[x,y]$-module? Consider first the map $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$. It’s easy to check that this map is in fact a $k[x,y]$-module homomorphism. It would be tempting to try and split this map with the inclusion map $k[x]\to k[x,y]$. But this doesn’t work: this inclusion is not a $k[x,y]$-module homomorphism.

In fact, the $k[x,y]$-module homomorphism $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$ cannot split, simply because there are no nonzero $k[x,y]$-module homomorphisms $k[x]\to k[x,y]$ at all: the element $x^2y - xy^2$ annihilates every element of $k[x]$ under our action, so it must annihilate the image of any such homomorphism, and since $k[x,y]$ is an integral domain this forces the image to be zero. Therefore, $k[x]$ is not projective as a $k[x,y]$-module, using the module structure we gave it.

Here are two more ways to see this:

  1. Through the notion of separability: by definition, $k[x]$ being a projective $k[x,y]\cong k[x]\otimes_k k[x]$-module under the structure that we have defined means that $k[x]$ is a separable $k$-algebra. However, all separable $k$-algebras are finite-dimensional as vector spaces over $k$, whereas $k[x]$ is infinite-dimensional.
  2. Through Seshadri’s theorem: this theorem says that every finitely-generated projective module over $k[x,y]$ is actually free. Since $k[x]$ is certainly finitely-generated as a $k[x,y]$-module, we just have to show that $k[x]$ is not free. But $x^2y - xy^2$ annihilates all elements of $k[x]$, which cannot happen in a nonzero free module (a quick machine check of this annihilation appears below).
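Since the annihilation claim is completely concrete, here is a quick sanity check with SymPy (a sketch; the helper `act` is just our action $f*g = f(x,x)g(x)$ written out):

```python
from sympy import symbols, expand

x, y = symbols('x y')

def act(f, g):
    # the k[x,y]-action on k[x]: multiply, then substitute y -> x
    return expand((f * g).subs(y, x))

f = x**2*y - x*y**2
# f(x,x) = x^3 - x^3 = 0, so f annihilates every g in k[x]
for g in [1, x, x**2 + 1, x**5 - 3*x]:
    assert act(f, g) == 0
print("x^2*y - x*y^2 annihilates k[x] under the action")
```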
Posted by Jason Polak on 01. September 2017 · Categories: ring-theory

In the previous post we saw the following definition for a ring $R$: An element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero. Why introduce this notion?

Well, did you know that every finite integral domain is a field? If $R$ is an integral domain and $a\in R$ is nonzero, then the multiplication map $R\to R$ given by $x\mapsto ax$ is injective. If $R$ is finite, then it must also be surjective so $a$ is invertible!

Another way of stating this neat fact is that if $R$ is any ring and $P$ is a prime ideal of $R$ such that $R/P$ is finite, then $P$ is also a maximal ideal. A variation of this idea is that every prime ideal in a finite commutative ring is actually maximal. Yet another is that finite commutative rings have Krull dimension zero.
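As a brute-force illustration of the finite-domain argument (a sketch of our own, not part of the original post), we can check among the rings $\Z/n$ that having no zero divisors forces every nonzero element to be invertible:

```python
def is_domain(n):
    # Z/n is an integral domain iff no two nonzero elements multiply to 0
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

def is_field(n):
    # every nonzero element of Z/n has a multiplicative inverse
    return all(any((a * b) % n == 1 for b in range(1, n)) for a in range(1, n))

for n in range(2, 30):
    if is_domain(n):
        assert is_field(n)  # finite domain => field
        print("Z/%d is a domain, hence a field" % n)
```

Of course, the $n$ that get printed are exactly the primes.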

Posted by Jason Polak on 31. August 2017 · Categories: ring-theory

Let $R$ be an associative ring. An element $r\in R$ is called nilpotent if $r^n = 0$ for some $n$. There is a stronger notion: an element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero.

How are these two related? A strongly nilpotent element is always nilpotent: if $r$ is strongly nilpotent, then the sequence $r,r^2,r^4,r^8,\dots$ satisfies $r_{n+1} = r_n\cdot 1\cdot r_n\in r_nRr_n$, so it is eventually zero, giving $r^{2^n} = 0$ for some $n$. However, the element
$$\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$$
in any $2\times 2$ matrix ring over a nonzero ring is nilpotent but not strongly nilpotent: writing $r = E_{12}$ for this matrix and $E_{21}$ for its transpose, the sequence defined by $r_{n+1} = r_nE_{21}r_n$ is constant, since $E_{12}E_{21}E_{12} = E_{12}$, so it never vanishes. Notice how we had to use a noncommutative ring here; that’s because for commutative rings, a nilpotent element is strongly nilpotent!
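Here is that computation as a quick machine check (a sketch using integer matrices):

```python
import numpy as np

E12 = np.array([[0, 1], [0, 0]])
E21 = np.array([[0, 0], [1, 0]])

# E12 is nilpotent: its square is the zero matrix...
assert not (E12 @ E12).any()

# ...but not strongly nilpotent: r_{n+1} = r_n E21 r_n reproduces r_n forever
r = E12
for _ in range(10):
    r = r @ E21 @ r
    assert (r == E12).all()
print("the sequence r, r E21 r, ... is constant and never reaches zero")
```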

Posted by guest on 27. August 2017 · Categories: math
A guest post by Paul Pierce and Ashley Ross

With the advances in calculator technology, some developmental and college-level math courses are restricting the use of any type of graphing or programmable calculators. This is to help students avoid becoming dependent on their calculators for both simple arithmetic and graphing. So, some teachers are going “old school” and forbidding the use of calculators in the classroom. Therefore, it is imperative that students learn efficient methods for finding important values, as well as graphing functions, without the help of their calculator. One type of function that appears in many courses is the quadratic function, and one of the most critical points on the graph of a quadratic function is the vertex.

Fundamental Concepts of the Graph of a Quadratic Function

For the function $f(x)=ax^2+bx+c$ with $a\not=0$, the graph is a smooth, continuous curve called a parabola. This parabola opens upward if $a > 0$ or opens downward if $a < 0$. The vertex $(h,k)$ of the graph is the only turning point on the parabola, which makes it a critical point. The $y$-coordinate $k$ of the vertex represents the minimum value of the function if $a>0$, or the maximum value of the function if $a<0$.

The point $(h,k)$ may be found using the formulas $h=\frac{-b}{2a}$ and $k=\frac{bh}{2}+c$, which begin to show the importance of the vertex. (The second formula comes from substituting $h$ into the function: when $h=\frac{-b}{2a}$ we have $ah^2 = -\frac{bh}{2}$, so $k = f(h) = ah^2+bh+c = \frac{bh}{2}+c$.) We give two examples:

Example 1. For $y=x^2+6x+3$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-6}{2(1)}=-3$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(6)(-3)}{2}+3=-9+3=-6$.

So, the coordinates of the vertex of the parabola are $(-3, -6)$. Since $a = 1 > 0$, this vertex is the lowest point on the parabola, which means that $k = -6$ is the minimum value of the function.

Example 2. For $y=-2x^2+8x-5$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-8}{2(-2)}=2$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(8)(2)}{2}-5=8-5=3$.

So, the coordinates of the vertex of the parabola are $(2, 3)$. Since $a = -2 < 0$, this vertex is the highest point on the parabola, which illustrates that $k = 3$ is the maximum value of this function.
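Although the point of the article is to work without a calculator, here is a small Python sketch of these formulas for checking answers afterwards (the function name `vertex` is ours, not from the article):

```python
from fractions import Fraction

def vertex(a, b, c):
    # h = -b/(2a) and k = f(h) = b*h/2 + c, computed exactly with fractions
    h = Fraction(-b, 2 * a)
    k = Fraction(b) * h / 2 + c
    return h, k

print(vertex(1, 6, 3))    # h = -3, k = -6, as in Example 1
print(vertex(-2, 8, -5))  # h = 2, k = 3, as in Example 2
```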


Posted by Jason Polak on 27. August 2017 · Categories: math, modules

Let $R$ be an associative ring with identity. The Jacobson radical ${\rm Jac}(R)$ of $R$ is the intersection of all the maximal left ideals of $R$. So, ${\rm Jac}(R)$ is a left ideal of $R$. It turns out that the Jacobson radical of $R$ is also the intersection of all the maximal right ideals of $R$, and so ${\rm Jac}(R)$ is actually a two-sided ideal!

The idea behind the Jacobson radical is that one might be able to explore the properties of a ring $R$ by first looking at the less complicated ring $R/{\rm Jac}(R)$. Since the maximal left ideals of $R$ all contain ${\rm Jac}(R)$ and correspond to the maximal left ideals of $R/{\rm Jac}(R)$, the ring $R/{\rm Jac}(R)$ has zero Jacobson radical. Often the rings $R$ for which ${\rm Jac}(R) = 0$ are called Jacobson semisimple.

This terminology might be a tad bit confusing because typically, a ring $R$ is called semisimple if every left $R$-module is projective, or equivalently, if every left $R$-module is injective. How does the notion of semisimple differ from Jacobson semisimple? The Wedderburn-Artin theorem gives a classic characterisation of semisimple rings: they are exactly the rings that are finite direct products of full matrix rings over division rings. Since a full matrix ring over a division ring has no nontrivial ideals, the product of such rings must have trivial Jacobson radical. Thus:

A semisimple ring is Jacobson semisimple.

The converse is false: there exists a ring that is Jacobson semisimple but not semisimple. For example, let $R$ be an infinite product of fields. Then ${\rm Jac}(R) = 0$. However, $R$ is not semisimple. Why not? If it were, by Wedderburn-Artin it could also be written as a finite product of full matrix rings over division rings, which must be a finite product of fields because $R$ is commutative. But a finite product of fields only has finitely many pairwise orthogonal idempotents, whereas $R$ has infinitely many.
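To make the idempotent count concrete (a small illustration of our own, using $\Z/30\cong \Z/2\times\Z/3\times\Z/5$): the idempotents in a finite product of $n$ fields are exactly the tuples of zeros and ones, so there are $2^n$ of them.

```python
n = 30  # Z/30 is a product of the three fields Z/2, Z/3, Z/5
idempotents = [e for e in range(n) if (e * e) % n == e]
print(idempotents)       # [0, 1, 6, 10, 15, 16, 21, 25]
print(len(idempotents))  # 8 = 2^3, one 0/1 choice per field factor
```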

Incidentally, because $R$ is not semisimple, there must exist $R$-modules that are not projective. However, $R$ does have the property that every $R$-module is flat!

Posted by Jason Polak on 06. August 2017 · Categories: math, opinion

A senior mathematician who will remain nameless recently said in a talk, “there is nothing left to prove”. In context, he was referring to the possibility that we are running out of math problems. The people who heard it laughed, and first-year calculus students might disagree. Was it said as a joke?

Because of the infinite nature of mathematics, there will always be new problems. On the other hand, there are only finitely many theorems we’ll ever know; only finitely many that we’ll ever be interested in. Are we close to knowing all the interesting theorems? Is the increasing specialisation of the literature a sign of a future with a thousand subfields each with only one or two devotees?

Truthfully, I don’t think math is running out of problems at all. I think it’s more that good, nonspecialist exposition isn’t keeping up with the rapid development of mathematics, and so we know less and less about what our colleagues are doing. So we should attempt to prevent the future where every person is their own research field. Here are some ways we could do that:

  1. Make part of the introduction of your paper understandable to a much wider range of mathematicians. This will encourage more collaboration and cross-disciplinary understanding. For example, I was once told by a journal to cut a couple of pages from a paper because the material was well-known to (probably ten) experts, even though it was literally not written down anywhere else! Journals should encourage good exposition, not a wall of definition-theorem-proof.
  2. Have the first twenty minutes of your talk understandable by undergraduates. Because frankly, this is the only way mathematicians (especially young ones) in other fields will actually understand the motivation of your work. How are we supposed to ask good questions when we can’t figure out where our research fits in with the research of others?
  3. Use new avenues of mathematical exposition like blogs and nontechnical articles. Other fields like physics and biology appear in magazines like Scientific American and have an army of people working to make specialised work understandable to the nonspecialist.
  4. Encourage new, simplified proofs or explanations of existing results. And by ‘encourage’, I mean count high-quality, expository papers on the level of original results in determining things like tenure and jobs! There are already journals that publish these types of papers. Chances are, any expository paper will actually help at least as many people as an original result, perhaps more. And there are still hundreds of important papers that are very difficult if not impossible to read (even by many experts), with no superior alternative exposition available.

I think it’s been a long-lived fashion in mathematics to hide the easy stuff in favour of appearing slick ever since one dude tried to hide how he solved the cubic from another dude, and it’s probably something we can give up now.

Posted by Jason Polak on 25. July 2017 · Categories: math

Fomin, Williams, and Zelevinsky (posth.) are preparing a new introductory text on cluster algebras. The first three chapters look elementary enough, and it’s worth a look for those interested in learning this topic.

Posted by Jason Polak on 19. July 2017 · Categories: commutative-algebra

Here’s a classic definition: let $R\subseteq S$ be commutative rings. An element $s\in S$ is called integral over $R$ if $f(s)=0$ for some monic polynomial $f\in R[x]$. It’s classic because adjoining roots of polynomials to base rings goes way back to the ancient pastime of finding solutions to polynomial equations.

For example, consider $\Z\subseteq \Z[\sqrt{2}]$. Every element of $\Z[\sqrt{2}]$ is integral over $\Z$, which essentially comes down to the fact that $\sqrt{2}$ satisfies $x^2 - 2$. On the other hand, the only elements of $\Q$ integral over $\Z$ are the integers themselves.

The situation is much different for finite commutative rings. If $R\subseteq S$ are finite rings, then every element of $S$ is integral over $R$. Proof: suppose $s\in S$ and set $T = \{ f(s): f\in R[x]\}$. Since $S$ is finite, so is $T$, and for each $t\in T$ we can fix a polynomial $f_t$ such that $f_t(s) = t$. The set of chosen polynomials is finite, so we can define $m$ as the maximum of their degrees. Then $s^{m+1}\in T$, and so there is a polynomial $f$ of degree at most $m$ such that $s^{m+1} - f(s) = 0$. Thus $s$ satisfies the monic polynomial $x^{m+1} - f(x)$. QED.
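As a toy illustration (a sketch of our own; the integer encoding of $\mathbb{F}_4$ below is not from the post), take $R = \mathbb{F}_2\subseteq S = \mathbb{F}_4$ and search by brute force for a monic polynomial over $R$ vanishing at each element of $S$:

```python
from itertools import product

# F4 = GF(2)[t]/(t^2 + t + 1), encoding the element a*t + b as the integer 2*a + b
def add(u, v):
    return u ^ v  # addition is coefficientwise XOR

def mul(u, v):
    a1, b1, a2, b2 = u >> 1, u & 1, v >> 1, v & 1
    # (a1*t + b1)(a2*t + b2), reduced using t^2 = t + 1
    a = (a1 & b2) ^ (a2 & b1) ^ (a1 & a2)
    b = (b1 & b2) ^ (a1 & a2)
    return (a << 1) | b

def evaluate(coeffs, s):
    # Horner's rule in F4; coeffs[i] is the coefficient of x^i
    acc = 0
    for c in reversed(coeffs):
        acc = add(mul(acc, s), c)
    return acc

names = {0: "0", 1: "1", 2: "t", 3: "t+1"}
for s in range(4):
    for deg in range(1, 3):  # monic polynomials over F2 of degree 1 or 2 suffice here
        hit = next((list(cs) + [1] for cs in product([0, 1], repeat=deg)
                    if evaluate(list(cs) + [1], s) == 0), None)
        if hit is not None:
            print(names[s], "is a root of the monic polynomial with coefficients", hit)
            break
```

Every element of $\mathbb{F}_4$ turns out to satisfy a monic polynomial of degree at most $2$ over $\mathbb{F}_2$, in line with the theorem.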

Cool, right? However, this is just a special case of the following more general theorem: let $R\subseteq S$ be commutative rings. Then $S$ is finitely generated as an $R$-module if and only if $S$ is finitely generated as an $R$-algebra and every element of $S$ is integral over $R$.

Posted by Jason Polak on 02. July 2017 · Categories: paper

I’ve submitted a paper! The results stem from a pretty simple question that can be understood with a first course in abstract algebra. This post will explain the question and give a teaser of some of the results.

Let $R$ be a ring. A polynomial $f\in R[x]$ induces a function $R\to R$ given by $a\mapsto f(a)$. It turns out that this function is sometimes bijective. When this happens, we say that $f$ is a permutation polynomial. There are some easy examples: $f(x) = x + a$ for $a\in R$ is always bijective, with inverse function $x\mapsto x - a$. But there are less trivial examples as well. For instance, the polynomial $f(x) = x^6 + x^4 + x^2 + x$ permutes $\Z/27$.
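That last claim is easy to verify by machine (a quick sketch):

```python
n = 27
image = {(a**6 + a**4 + a**2 + a) % n for a in range(n)}
print(len(image) == n)  # True: the polynomial hits all 27 residues, so it permutes Z/27
```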

Permutation polynomials are perhaps most well-known when $R$ is a finite field. In this case, every function $R\to R$ can be represented by a polynomial. In particular, every permutation can so be represented. This result is not particularly deep. More interesting for finite fields is to determine which polynomials are permutation polynomials, and to find certain classes of them.

More interesting things happen when $R$ is a finite ring that is not a field. Then it is not necessarily true that all functions $R\to R$ can be represented by polynomials. Can all permutations be represented by polynomials? The answer is in fact no! So, it makes perfect sense to define a group ${\rm Pgr}(R)$ as the subgroup of the symmetric group on $R$ generated by all permutations represented by polynomials. Let’s call it the polypermutation group of $R$.

Under this notation, ${\rm Pgr}(R)$ is the symmetric group on $R$ when $R$ is a finite field. What about other rings? This is what brings us to the topic of my latest paper: The Polypermutation Group of an Associative Ring. This paper started out by asking the simple question:

What is ${\rm Pgr}(R)$ for some common finite rings?

In my paper I’ve concentrated on $\Z/p^k$ where $p$ is a prime. The general case of $\Z/n$ for an integer $n$ reduces to this case via the Chinese Remainder Theorem.

In my initial investigations I found that ${\rm Pgr}(\Z/p^k)$ is actually a little complicated. It turns out to be easier when $p \geq k$. In this case I wrote down an explicit formula for the cardinality of ${\rm Pgr}(\Z/p^k)$. I already mentioned that when $k = 1$ the result is classical: the answer is $p!$ because $\Z/p$ is a finite field. One of my results is:

Theorem (P-). Let $p$ be a prime and $k\geq 2$ be an integer with $p\geq k$. Then:
$$|{\rm Pgr}(\Z/p^k)|= p![(p-1)p^{(k^2 + k-4)/2}]^p.$$

Whoa, that’s complicated. But it’s not hard to see that this quantity is less than $(p^k)!$, showing that there are indeed some permutations that cannot be represented by polynomials in this case. In fact, one can be more precise when $k=2$: there one can compute the group ${\rm Pgr}(\Z/p^2)$ itself, though I’ll leave you to read the paper to find out what it is!
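As a sanity check on the smallest case covered by the theorem, $p = k = 2$, we can enumerate the permutations of $\Z/4$ induced by polynomials (a brute-force sketch; the degree bound of $5$ is our own safe over-estimate, since polynomial functions mod $4$ already have representatives of degree at most $3$):

```python
from itertools import product

n = 4  # Z/4: the case p = 2, k = 2 of the theorem
perms = set()
for deg in range(6):  # degree <= 5 more than exhausts the polynomial functions mod 4
    for coeffs in product(range(n), repeat=deg + 1):
        vals = tuple(sum(c * a**i for i, c in enumerate(coeffs)) % n
                     for a in range(n))
        if len(set(vals)) == n:  # the induced function is a bijection
            perms.add(vals)

# The theorem predicts p!*[(p-1)*p^((k^2+k-4)/2)]^p = 2*(1*2)^2 = 8
print(len(perms))  # 8, out of 4! = 24 permutations of Z/4
```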