Posted by Jason Polak on 19. July 2017 · Categories: commutative-algebra

Here’s a classic definition: let $R\subseteq S$ be commutative rings. An element $s\in S$ is called integral over $R$ if $f(s)=0$ for some monic polynomial $f\in R[x]$. It’s classic because adjoining solutions of polynomial equations to base rings goes way back to the ancient pastime of finding solutions to polynomial equations.

For example, consider $\Z\subseteq \Z[\sqrt{2}]$. Every element of $\Z[\sqrt{2}]$ is integral over $\Z$, which essentially comes down to the fact that $\sqrt{2}$ satisfies $x^2 - 2$. On the other hand, the only elements of $\Q$ integral over $\Z$ are the integers themselves.

The situation is much different for finite commutative rings. If $R\subseteq S$ are finite rings, then every element of $S$ is integral over $R$. Proof: suppose $s\in S$ and set $T = \{ f(s): f\in R[x]\}$. Since $T\subseteq S$ and $S$ is finite, $T$ is finite. For each $t\in T$ fix a polynomial $f$ such that $f(s) = t$. The set of all such polynomials is finite, so we can define $m$ as the maximum degree of all these polynomials. Then $s^{m+1}\in T$, and so there is one of these polynomials $f$, of degree at most $m$, such that $s^{m+1} - f(s) = 0$. Thus $s$ satisfies the monic polynomial $x^{m+1} - f(x)$. QED.
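To see this concretely, here is a small Python sketch (my own illustration, not part of the original post) for the example $\mathbb{F}_2\subseteq\mathbb{F}_4$: it brute-forces a monic polynomial with coefficients in the subring that is satisfied by a chosen element of the bigger ring.

from itertools import product

# Elements of F_4 are encoded as integers 0..3, whose two bits are the
# coordinates in the basis {1, w}, where w^2 = w + 1.

def add(a, b):
    return a ^ b                     # addition in F_4 is bitwise XOR

def mul(a, b):                       # multiplication in F_4
    a0, a1 = a & 1, a >> 1
    b0, b1 = b & 1, b >> 1
    c0 = (a0 * b0 + a1 * b1) % 2             # coefficient of 1
    c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2   # coefficient of w
    return c0 | (c1 << 1)

def evaluate(coeffs, s):
    """Evaluate the polynomial with coefficients coeffs (constant term
    first) at s, using Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = add(mul(result, s), c)
    return result

R = [0, 1]     # the prime subfield F_2 inside F_4
s = 2          # the element w, which lies outside R

def find_monic_relation(s, max_degree=4):
    """Search monic polynomials of increasing degree with coefficients in R."""
    for degree in range(1, max_degree + 1):
        for lower in product(R, repeat=degree):
            coeffs = list(lower) + [1]       # monic of this degree
            if evaluate(coeffs, s) == 0:
                return coeffs
    return None

print(find_monic_relation(s))   # should find [1, 1, 1], i.e. x^2 + x + 1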

Cool, right? However, this is really just a special case of the following theorem: let $R\subseteq S$ be commutative rings. Then $S$ is finitely generated as an $R$-module if and only if $S$ is finitely generated as an $R$-algebra and every element of $S$ is integral over $R$.

Posted by Jason Polak on 02. July 2017 · Categories: paper

I’ve submitted a paper! The results stem from a pretty simple question that can be understood with a first course in abstract algebra. This post will explain the question and give a teaser of some of the results.

Let $R$ be a ring. A polynomial $f\in R[x]$ induces a function $R\to R$ given by $a\mapsto f(a)$. It turns out that this function is sometimes bijective. When this happens, we say that $f$ is a permutation polynomial. There are some easy examples: $f(x) = x + a$ for $a\in R$ is always bijective, its inverse being $x\mapsto x - a$. But there are less trivial examples as well. For instance, the polynomial $f(x) = x^6 + x^4 + x^2 + x$ permutes $\Z/27$.
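If you want to check that claim, or experiment with other polynomials, here is a quick Python sketch (my own, not code from the paper) that tests whether a polynomial induces a bijection on $\Z/n$:

def is_permutation_polynomial(coeffs, n):
    """coeffs lists the coefficients c_0, c_1, ... of c_0 + c_1*x + ...;
    returns True if the induced function on Z/n is a bijection."""
    values = {sum(c * pow(a, i, n) for i, c in enumerate(coeffs)) % n
              for a in range(n)}
    return len(values) == n

# f(x) = x^6 + x^4 + x^2 + x on Z/27
print(is_permutation_polynomial([0, 1, 1, 0, 1, 0, 1], 27))   # the post says this should be True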

Permutation polynomials are perhaps most well-known when $R$ is a finite field. In this case, every function $R\to R$ can be represented by a polynomial. In particular, every permutation can so be represented. This result is not particularly deep. More interesting for finite fields is to determine which polynomials are permutation polynomials, and to find certain classes of them.

More interesting things happen when $R$ is a finite ring that is not a field. Then it is not necessarily true that all functions $R\to R$ can be represented by polynomials. Can all permutations be represented by polynomials? The answer is in fact no! So, it makes perfect sense to define a group ${\rm Pgr}(R)$ as the subgroup of the symmetric group on $R$ generated by all permutations represented by polynomials. Let’s call it the polypermutation group of $R$.

Under this notation, ${\rm Pgr}(R)$ is the symmetric group on $R$ when $R$ is a finite field. What about other rings? This is what brings us to the topic of my latest paper: The Polypermutation Group of an Associative Ring. This paper started out by asking the simple question:

What is ${\rm Pgr}(R)$ for some common finite rings?

In my paper I’ve concentrated on $\Z/p^k$ where $p$ is a prime. The general case of $\Z/n$ for an integer $n$ reduces to this case via the Chinese Remainder Theorem.

In my initial investigations I found that ${\rm Pgr}(\Z/p^k)$ is actually a little complicated. It turns out to be easier when $p \geq k$. In this case I wrote down an explicit formula for the cardinality of ${\rm Pgr}(\Z/p^k)$. I already mentioned that when $k = 1$ the answer is classical: it is $p!$, because then $\Z/p$ is a finite field. One of my results is:

Theorem (P-). Let $p$ be a prime and $k\geq 2$ be an integer with $p\geq k$. Then:
$$|{\rm Pgr}(\Z/p^k)|= p![(p-1)p^{(k^2 + k-4)/2}]^p.$$

Whoa, that’s complicated. But it’s not hard to see that this is going to be less than $(p^k)!$, showing that there are indeed some permutations that cannot be represented by polynomials in this case. In fact, one can be more precise in the case of $k=2$. In this case, one can compute the group ${\rm Pgr}(\Z/p^2)$ itself, though I’ll leave you to read the paper to find out what it is!
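If you'd like a sanity check without opening the paper, here is a brute-force Python sketch (my own, and only feasible for very small rings) for the smallest case the theorem covers, $p = k = 2$: it generates ${\rm Pgr}(\Z/4)$ by closing the set of polynomial permutations under composition and compares its size with the formula. It uses the fact that every polynomial function mod $n$ is induced by a polynomial of degree less than $n$, since $x(x-1)\cdots(x-n+1)$ induces the zero function mod $n$.

from itertools import product
from math import factorial

def polypermutation_group_order(n):
    """Close the polynomial permutations of Z/n under composition and
    return the size of the resulting group (brute force)."""
    elements = range(n)
    generators = set()
    # Polynomials of degree < n suffice to realise every polynomial function mod n.
    for coeffs in product(range(n), repeat=n):
        perm = tuple(sum(c * pow(a, i, n) for i, c in enumerate(coeffs)) % n
                     for a in elements)
        if len(set(perm)) == n:           # the polynomial induces a bijection
            generators.add(perm)
    # Close under composition; the set already contains the identity (f(x) = x).
    group = set(generators)
    frontier = set(generators)
    while frontier:
        new = set()
        for g in frontier:
            for h in generators:
                composite = tuple(g[h[a]] for a in elements)   # g after h
                if composite not in group:
                    new.add(composite)
        group |= new
        frontier = new
    return len(group)

p, k = 2, 2
formula = factorial(p) * ((p - 1) * p**((k**2 + k - 4) // 2))**p
print(polypermutation_group_order(p**k), formula)   # these should agree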

Posted by Jason Polak on 28. June 2017 · Categories: math

The term earworm refers to the phenomenon of having music stuck in your head. I don’t know about you, but often I get a mathworm: an idea or question that simply won’t go away. Sometimes a mathworm can take the form of a specific problem that needs solving. Other times it could just be a definition or idea that is particularly attractive. What does this have to do with research?

First, let me ask you a question: what’s the best way to find new problems and develop a research plan? I’ve received lots of advice on this, but one strategy has helped me more than any of it: listen to the mathworm!

If there’s something in the back of your mind that won’t go away, dig it up and satisfy your curiosity about it so you can put it to rest. This strategy has the following two consequences:

  1. Anything that is a continual presence in your mind is probably the right kind of problem for your brain. This means you’ll actually be working on math you like.
  2. A nagging problem or phenomenon in the back of your mind is a distraction from doing other things, so getting rid of it will clear up space for something new. It’s sort of like how actually listening to the song stuck in your head makes an earworm go away.

So go ahead, listen to the mathworm!

Posted by Jason Polak on 28. June 2017 · Categories: elementary

The $n$th harmonic number $h(n)$ is defined as
$$h(n) = \sum_{i=1}^n 1/i$$
The harmonic series is the associated series $\sum_{i=1}^\infty 1/i$ and it diverges. There are probably quite a few interesting ways to see this. My favourite is a simple comparison test:
$$1/1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + \cdots\\ \geq 1/2 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + \cdots
\\= 1/2 + 1/2 + 1/2 + 1/2 + \cdots$$
and the series $1/2 + 1/2 + 1/2 + \cdots$ is divergent. But while the harmonic series diverges, it does so rather slowly. It does so slowly enough that if you were to numerically compute the harmonic numbers (the partial sums of the harmonic series), you might be unconvinced that it actually does diverge:

  • $h(10) = 2.92896825396825\dots$
  • $h(100) = 5.18737751763962\dots$
  • $h(1000) = 7.48547086055035\dots$
  • $h(10000) = 9.78760603604438\dots$
  • $h(100000) = 12.0901461298634\dots$
  • $h(1000000) = 14.3927267228657\dots$

These numbers were computed by actually summing $1 + 1/2 + \cdots + 1/n$ and then writing out a decimal approximation to that fraction, but that takes a while. How can we at least give an approximation to this series? The first thought surely must be to compare it to the integral
$$\int_1^n 1/x dx = \log(n)$$
where $\log(-)$ denotes the natural logarithm. A moment’s consideration with Riemann sums shows that we have the inequality
$$\int_1^n 1/x dx \le h(n) \le \int_1^n 1/x dx + 1$$
So we’ve come up with a pretty good approximation to our harmonic series, which only gets better as $n$ gets bigger:
$$\log(n)\le h(n) \le \log(n) + 1$$
This, incidentally, is another explanation of why the harmonic series diverges. The approximation is also much faster to compute on a simple scientific calculator. Here is an example computation: we have already said that $h(1000000) = 14.3927267228657\dots$, while $\log(1000000) = 13.8155105579643\dots$ Pretty good, right?
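If you’d like to reproduce the table and the comparison, here is a short Python sketch (my own illustration, using floating point rather than exact fractions):

from math import log

def harmonic(n):
    """The partial sum 1 + 1/2 + ... + 1/n, in floating point."""
    return sum(1.0 / i for i in range(1, n + 1))

for exponent in range(1, 7):
    n = 10**exponent
    h = harmonic(n)
    print(f"h({n}) = {h:.10f},   log({n}) = {log(n):.10f},   difference = {h - log(n):.6f}")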

Posted by Jason Polak on 18. May 2017 · Categories: math

Here are some common LaTeX typesetting errors that you can avoid to make your document look good:

The wrong inner product

Ever seen inner products written like this?

This horrible inner product typesetting was made using signs that should only be used as relation symbols, such as the less-than and greater-than signs. If you want to typeset a proper inner product using angle brackets, use \langle and \rangle.
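In source form, the fix looks something like this (a minimal snippet):

% proper angle brackets for an inner product
\[ \langle u, v \rangle \]
% and not relation symbols:
% \[ < u, v > \]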

Using multiple fonts

Once the ability to use other fonts is discovered, it tends to get used. Unfortunately, fonts sometimes get mixed when they shouldn’t:

This fragment shows a sans-serif font being used to emphasise a definition word in a sentence that is typeset with a serif font. Truly shocking, though it’s not as bad as using Times New Roman for the body text and Computer Modern Roman for the math font. Now that’s really bad. Take my advice: stick to one font.

Not using the right-sized parentheses

This is one of the most common mistakes. Ever see something like this:

Yeah. This was acceptable when we were using typewriters. Not any more. For correctly sized parentheses and other delimiters, use \left before the left delimiter and \right before the right delimiter.
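In source form, that looks something like this (a minimal snippet):

% \left and \right grow the parentheses to fit their contents
\[ \left( \sum_{i=1}^{n} a_i \right)^2 \]
% compare with ( \sum_{i=1}^{n} a_i )^2, which keeps the small parentheses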

Keep weird fraktur letters to a minimum

Ever see a document with hundreds of fraktur letters?

Seriously? What’s wrong with $I$ and $J$?

Use numbering that makes sense

What about those definitions, theorems, lemmas, and corollaries that all use separate numbering?

This makes finding stuff in a document frustrating, and impossible in a long one. Use one numbering system to help everyone keep their sanity.

Posted by Jason Polak on 06. May 2017 · Categories: group-theory

When I first saw convolution I happily used it without thinking about why we should define it in the first place. Here’s a post that might tell you why it makes sense from an algebraic viewpoint. Let’s recall the convolution product, as it is for functions $f:\R\to\R$. If $f,g:\R\to\R$ are two such functions, then the convolution $f*g$ is defined as
$$
(f*g)(y) := \int_\R f(y-x)g(x){\rm d}x$$
provided the integral exists. Why do we do this? For convolution, the explanation starts with polynomials. Multiplying polynomials is simplicity itself. For example, here is the product of two real-valued polynomials:
$$(1 + 2t + 5t^2)(1 + 3t) = 1 + 5t + 11t^2 + 15t^3$$
But a real-valued polynomial could also be thought of as a function $f:\Z\to \R$, where $f(x)$ is zero for all but finitely many $x$ and nonzero only if $x \geq 0$. In this setup, $f(x)$ represents the coefficient of $t^x$. Addition of polynomials becomes pointwise addition of functions. Now here’s the good part: polynomial multiplication becomes convolution: if $f,g:\Z\to\R$ are two polynomials then
$$(fg)(y) = \sum_{x\in \Z} f(y-x)g(x).$$
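Here is a small Python sketch (my own illustration) of this discrete convolution, with a polynomial stored as a dictionary from exponents to coefficients; it reproduces the product computed above:

def convolve(f, g):
    """Convolution of two finitely supported functions Z -> R, each given
    as a dict {exponent: coefficient}."""
    result = {}
    for y in range(max(f) + max(g) + 1):
        # (f * g)(y) = sum over x of f(y - x) g(x)
        value = sum(f.get(y - x, 0) * g.get(x, 0) for x in g)
        if value:
            result[y] = value
    return result

f = {0: 1, 1: 2, 2: 5}    # 1 + 2t + 5t^2
g = {0: 1, 1: 3}          # 1 + 3t
print(convolve(f, g))     # {0: 1, 1: 5, 2: 11, 3: 15}, i.e. 1 + 5t + 11t^2 + 15t^3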
In the same way, for a finite group $G$, the complex-valued functions on $G$ form an algebra under a convolution product (this is the group algebra of $G$), and complex representations of $G$ correspond to representations of this convolution algebra.

Posted by Jason Polak on 15. April 2017 · Categories: math

I’ve made a small organizational change to this blog. I realize it might not have been easy to find series of posts on one topic, or the “post series”. They used to be on a page called “features” but I think that was too obscure. Now there’s the page “post series”, which is found above on the header. Along with the current series I have going on Non-unique Factorisation, it also lists five posts on Waldhausen categories.

Perhaps the most exciting in my mind is the six posts of “Wild Spectral Sequences”, which give little toy problems that can be solved using spectral sequences. I came up with all the proofs myself, and some of them might be published nowhere else. Though I admit some of them are quite basic, I think they are quite instructive!

What are you waiting for? Visit the Post Series page today for these topics!

Posted by Jason Polak on 15. April 2017 · Categories: commutative-algebra

We are continuing the series on non-unique factorisation. For a handy table of contents, visit the Post Series directory.

In Part 1 of this series, we introduced three types of relations on a commutative ring $R$:

  1. Associates: $a\sim b$ means that $(a) = (b)$
  2. Strong associates: $a\approx b$ means that $a = ub$ for $u\in U(R)$
  3. Very strong associates: $a\cong b$ means that $a\sim b$ and either $a = b = 0$, or $a = rb$ implies that $r\in U(R)$

Here, $U(R)$ denotes the group of units of $R$. We have already seen that in a ring with nontrivial idempotents like $\Z\times \Z$, a nontrivial idempotent $e$ satisfies $e\sim e$ and $e\approx e$, but $e\not\cong e$, because $e = ee$ and yet $e$ is neither zero nor a unit.

Therefore, $\cong$ is not an equivalence relation for all commutative rings. But it is symmetric:

Proof. Suppose $a\cong b$. Then $a\sim b$ and so $b\sim a$. If $a = b = 0$ there is nothing to prove, so suppose $a$ and $b$ are not both zero and write $a = sb$, which we may do since $a\sim b$. If $b = ra$ then $a = sb = sra = s^2rb$. Since $a\cong b$, this implies that $s^2r$ is a unit and so $r$ is a unit. Hence $b\cong a$. QED.

Guess what? The relation $\cong$ is also transitive. Since the proof is similarly short, I’ll leave it to the reader. So the only thing keeping $\cong$ from being an equivalence relation on every ring is reflexivity. If $\cong$ is an equivalence relation for a ring $R$, then we say that $R$ is presimplifiable. We introduced this type of ring last time.

Posted by Jason Polak on 11. April 2017 · Categories: commutative-algebra

If $F$ is a field then the polynomial ring $F[x]$ is a unique factorisation domain: that is, every nonzero nonunit can be written as a product of irreducible elements, uniquely up to reordering and unit multiples. So in $\Q[x]$ for example, you can be sure that the polynomial $x^2 - 4 = (x-2)(x+2)$ can’t be factored in any other way, and thus the only zeros of $x^2 - 4$ really are $\pm 2$.

If $F$ is not a field, then a polynomial might have a bunch of different factorisations. For example, in the ring $\Z/4[x]$ we can write $x^2 = (x)(x) = (x+2)(x+2)$. How can we make sense of factorisations in rings that are not unique factorisation domains? In order to do so, we first should make sure we understand what irreducible means in this context.
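Here is a tiny Python sketch (my own check) that searches for all ways of writing $x^2$ as a product of two monic linear polynomials in $\Z/4[x]$:

# (x + a)(x + b) = x^2 + (a + b)x + ab, so we need a + b = 0 and ab = 0 mod 4.
n = 4
for a in range(n):
    for b in range(a, n):            # a <= b, to list each factorisation once
        if (a + b) % n == 0 and (a * b) % n == 0:
            print(f"x^2 = (x + {a})(x + {b})")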

In the next several posts we’ll look at non-unique factorisation more closely, following a paper of Anderson and Valdes-Leon [1], but keeping the posts self-contained. We’ll start by looking at the concept of associates. One can in fact look at several different variations of associates. Here are three:

Posted by Jason Polak on 21. March 2017 · Categories: elementary, exposition

Characteristic functions have magical properties. For example, consider a double summation:
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k}.$$
How do you switch the order of summation here? A geometric way to think of this is to arrange the terms in a triangular array and “see” that this sum must be equal to
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
I find this unsatisfactory because the whole point of good notation is that you shouldn’t have to think about what it actually means. I do think it’s very important to understand what the notation means, but in doing formal manipulations it’s nice not to have to do that all the time.

A superior proof that these two sums are equal would be to write
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k} = \sum_{k=1}^M\sum_{r=1}^M a_{r,k}\delta(r\leq k)$$
where $\delta(r\leq k)$ is the function of the variables $r$ and $k$ that equals one exactly when $r\leq k$ and zero otherwise. Then we can happily switch the order of summation to get
$$\sum_{r=1}^M\sum_{k=1}^M a_{r,k}\delta(r\leq k).$$
Now, it’s trivial to get rid of the $\delta(r\leq k)$ function by writing
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
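If you want to reassure yourself that nothing went wrong in these manipulations, here is a tiny Python sketch (my own check) comparing the two orders of summation on random data:

import random

M = 7
a = {(r, k): random.randint(-10, 10)
     for r in range(1, M + 1) for k in range(1, M + 1)}

# Original order: sum over k = 1..M, then r = 1..k.
first = sum(a[(r, k)] for k in range(1, M + 1) for r in range(1, k + 1))

# Swapped order: sum over r = 1..M, then k = r..M.
second = sum(a[(r, k)] for r in range(1, M + 1) for k in range(r, M + 1))

print(first == second)   # True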