Posted by Jason Polak on 19. July 2017 · Categories: commutative-algebra

Here's a classic definition: let $R\subseteq S$ be commutative rings. An element $s\in S$ is called integral over $R$ if $f(s)=0$ for some monic polynomial $f\in R[x]$. It's classic because adjoining roots of polynomials to base rings goes way back to the ancient pastime of solving polynomial equations.

For example, consider $\Z\subseteq \Z[\sqrt{2}]$. Every element of $\Z[\sqrt{2}]$ is integral over $\Z$, which essentially comes down to the fact that $\sqrt{2}$ satisfies $x^2 - 2$. On the other hand, the only elements of $\Q$ integral over $\Z$ are the integers themselves.

The situation is much different for finite commutative rings. If $R\subseteq S$ are finite rings, then every element of $S$ is integral over $R$. Proof: suppose $s\in S$ and set $T = \{ f(s): f\in R[x]\}$. For each $t\in T$ fix a polynomial $f$ such that $f(s) = t$. The set of all such polynomials is finite, so we can define $m$ to be the maximum of their degrees. Since $x^{m+1}\in R[x]$, we have $s^{m+1}\in T$, and so there is an $f$ of degree at most $m$ such that $s^{m+1} - f(s) = 0$. Thus $s$ satisfies the monic polynomial $x^{m+1} - f(x)$. QED.

Cool right? However, this is just a more general case of the following theorem: let $R\subseteq S$ be commutative rings. Then $S$ is finitely generated as an $R$-module if and only if $S$ is finitely generated as an $R$-algebra and every element of $S$ is integral over $R$.
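
To see the finite-ring proof in action, here is a minimal Python sketch. The setup is one convenient choice of mine, not anything from the post: $S = \mathrm{GF}(4)$ modelled by hand as pairs $(a,b) = a + bt$ with $t^2 = t + 1$, and $R$ the field with two elements sitting inside it. The function finds the least $m$ with $s^{m+1}$ equal to a lower-degree expression in $s$, exactly as in the proof:

```python
from itertools import product

# GF(4) modelled as pairs (a, b) = a + b*t with t^2 = t + 1, coefficients mod 2.
# The subring R is F_2 = {(0, 0), (1, 0)}.

def mul(p, q):
    """Multiply a + b*t and c + d*t using t^2 = t + 1."""
    a, b = p
    c, d = q
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def power(s, k):
    result = (1, 0)  # the identity element
    for _ in range(k):
        result = mul(result, s)
    return result

def monic_relation(s):
    """Return the least m+1 and coefficients c_0, ..., c_m in F_2 with
    s^(m+1) = c_0 + c_1 s + ... + c_m s^m, i.e. a monic polynomial
    x^(m+1) - (c_m x^m + ... + c_0) vanishing at s."""
    m = 0
    while True:
        target = power(s, m + 1)
        for coeffs in product((0, 1), repeat=m + 1):
            value = (0, 0)
            for i, c in enumerate(coeffs):
                if c:
                    p = power(s, i)
                    value = ((value[0] + p[0]) % 2, (value[1] + p[1]) % 2)
            if value == target:
                return m + 1, coeffs
        m += 1

# The element t of GF(4) satisfies x^2 = x + 1, i.e. x^2 + x + 1 = 0 over F_2:
print(monic_relation((0, 1)))  # (2, (1, 1))
```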

Posted by Jason Polak on 28. June 2017 · Categories: math

The term earworm refers to the phenomenon of having music stuck in your head. I don't know about you, but I often get a mathworm: an idea or question that simply won't go away. Sometimes a mathworm takes the form of a specific problem that needs solving. Other times it's just a definition or idea that is particularly attractive. What does this have to do with research?

First, let me ask you a question: what's the best way to find new problems and develop a research plan? I've received lots of advice on this, but one strategy has helped me more than any of it: listen to the mathworm!

If there’s something in the back of your mind that won’t go away, dig it up and satisfy your curiosity about it so you can put it to rest. This strategy has the following two consequences:

  1. Anything that is a continual presence in your mind is probably the right kind of problem for your brain. This means you'll actually be working on math you like.
  2. A nagging problem or phenomenon in the back of your mind is a distraction from doing other things, so getting rid of it will clear up space for something new. It's sort of like how actually listening to the song stuck in your head makes an earworm go away.

So go ahead, listen to the mathworm!

Posted by Jason Polak on 28. June 2017 · Categories: elementary

The $n$th harmonic number $h(n)$ is defined as
$$h(n) = \sum_{i=1}^n 1/i$$
The harmonic series is the associated series $\sum_{i=1}^\infty 1/i$ and it diverges. There are probably quite a few interesting ways to see this. My favourite is a simple comparison test:
$$1/1 + 1/2 + 1/3 + 1/4 + 1/5 + \cdots\\ \geq 1/2 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + \cdots
\\= 1/2 + 1/2 + 1/2 + \cdots$$
and the series $1/2 + 1/2 + \cdots$ is divergent. But while the harmonic series diverges, it does so rather slowly: slowly enough that if you were to numerically compute the harmonic numbers (the partial sums of the harmonic series), you might be unconvinced that it actually does diverge:

  • $h(10) = 2.92896825396825\dots$
  • $h(100) = 5.18737751763962\dots$
  • $h(1000) = 7.48547086055035\dots$
  • $h(10000) = 9.78760603604438\dots$
  • $h(100000) = 12.0901461298634\dots$
  • $h(1000000) = 14.3927267228657\dots$

These numbers were computed by actually summing $1 + 1/2 + \cdots + 1/n$ and then writing out a decimal approximation to that fraction, but that takes a while. How can we at least approximate the harmonic numbers more quickly? The first thought surely must be to compare the sum to the integral
$$\int_1^n 1/x \,{\rm d}x = \log(n)$$
where $\log(-)$ denotes the natural logarithm. A moment's consideration with Riemann sums shows that we have the inequality
$$\int_1^n 1/x \,{\rm d}x \le h(n) \le \int_1^n 1/x \,{\rm d}x + 1$$
So we’ve come up with a pretty good approximation to our harmonic series, which only gets better as $n$ gets bigger:
$$\log(n)\le h(n) \le \log(n) + 1$$
This, incidentally, is another explanation of why the harmonic series diverges, and these bounds are much faster to compute on a simple scientific calculator. Here is an example computation: we have already said that $h(1000000) = 14.3927267228657\dots$, while $\log(1000000) = 13.8155105579643\dots$ Pretty good, right?
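
If you want to check these numbers yourself, here is a short Python sketch along the lines described above: it sums the series as an exact fraction and compares with the logarithmic bounds.

```python
from fractions import Fraction
from math import log

def harmonic(n):
    """The n-th harmonic number as an exact fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    h = float(harmonic(n))             # decimal approximation of the exact sum
    print(n, h, log(n))
    assert log(n) <= h <= log(n) + 1   # the bounds derived above
```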

Posted by Jason Polak on 18. May 2017 · Categories: math

Here are some common LaTeX typesetting errors that you can avoid to make your document look good:

The wrong inner product

Ever seen inner products written like $<x, y>$? This horrible typesetting is made with signs that should only be used as relation symbols, such as less than or greater than. If you want to typeset a proper inner product using angle brackets, use \langle and \rangle.
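
In source form, the two versions look like this:

```latex
% Wrong: < and > are relation symbols, so the spacing comes out wrong
$<u, v>$
% Right: proper angle brackets
$\langle u, v \rangle$
```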

Using multiple fonts

When the ability to use other fonts is discovered, it leads to using them. Unfortunately, sometimes they get mixed when they shouldn't: for instance, a sans-serif font used to emphasise a defined word in a sentence that is otherwise typeset with a serif font. Truly shocking, though it's not as bad as using Times New Roman for the body text and Computer Modern Roman for the math font. Now that's really bad. Take my advice: stick to one font.

Not using the right-sized parentheses

This is one of the most common mistakes: a tall expression such as a fraction squeezed between parentheses of ordinary text size. That was acceptable when we were using typewriters. Not any more. For correctly sized parentheses and other delimiters, use \left before the left delimiter and \right before the right delimiter.
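
The difference is one pair of commands:

```latex
% Wrong: text-sized parentheses around a tall fraction
$(\frac{x^2 + 1}{2})^3$
% Right: \left and \right size the delimiters to their contents
$\left( \frac{x^2 + 1}{2} \right)^3$
```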

Keep weird fraktur letters to a minimum

Ever see a document with hundreds of fraktur letters, where every ideal is an $\mathfrak{I}$ or a $\mathfrak{J}$?

Seriously? What's wrong with $I$ and $J$?

Use numbering that makes sense

What about those definitions, theorems, lemmas, and corollaries that all use separate numbering?

This makes finding anything in a document frustrating, and in a long one, impossible. Use one numbering system to help everyone keep their sanity.
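
With amsthm, for instance, a single shared counter is one optional argument away (a minimal sketch of one way to set it up):

```latex
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}[section]
% The [theorem] option makes these environments share the theorem counter,
% so everything is numbered in one sequence: Definition 2.1, Theorem 2.2, ...
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
```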

Posted by Jason Polak on 06. May 2017 · Categories: group-theory

When I first saw convolution, I happily used it without thinking about why we should define it in the first place. Here's a post that might tell you why it makes sense from an algebraic viewpoint. Let's recall the convolution product for functions $\R\to\R$. If $f,g:\R\to\R$ are two such functions, then the convolution $f*g$ is defined as
$$
(f*g)(y) := \int_\R f(y-x)g(x){\rm d}x$$
provided the integral exists. Why do we do this? For convolution, the explanation starts with polynomials. Multiplying polynomials is simplicity itself. For example, here is the product of two real-valued polynomials:
$$(1 + 2t + 5t^2)(1 + 3t) = 1 + 5t + 11t^2 + 15t^3$$
But a real-valued polynomial could also be thought of as a function $f:\Z\to \R$, where $f(x)$ is zero for all but finitely many $x$ and nonzero only if $x \geq 0$. In this setup, $f(x)$ represents the coefficient of $t^x$. Addition of polynomials becomes pointwise addition of functions. Now here’s the good part: polynomial multiplication becomes convolution: if $f,g:\Z\to\R$ are two polynomials then
$$(fg)(y) = \sum_{x\in \Z} f(y-x)g(x).$$
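
As a quick sanity check, here is a small Python sketch that represents a polynomial as a dictionary from exponents to coefficients and multiplies by convolution; it reproduces the product computed above:

```python
def convolve(f, g):
    """(f*g)(y) = sum over x of f(y - x) g(x), for finitely supported f, g."""
    h = {}
    for i, a in f.items():      # i plays the role of y - x
        for x, b in g.items():
            h[i + x] = h.get(i + x, 0) + a * b
    return h

f = {0: 1, 1: 2, 2: 5}          # 1 + 2t + 5t^2
g = {0: 1, 1: 3}                # 1 + 3t
print(convolve(f, g))           # {0: 1, 1: 5, 2: 11, 3: 15} = 1 + 5t + 11t^2 + 15t^3
```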
The same idea makes sense in much greater generality. For a finite group $G$, complex representations of $G$ correspond to representations of the algebra of complex-valued functions on $G$, where the multiplication is the convolution product.

Posted by Jason Polak on 15. April 2017 · Categories: math

I've made a small organizational change to this blog. I realize it might not have been easy to find the series of posts on a single topic, the "post series". They used to be on a page called "features", but I think that was too obscure. Now there's a page called "post series", which is found above in the header. Along with the current series I have going on Non-unique Factorisation, it also lists five posts on Waldhausen categories.

Perhaps the most exciting, in my mind, are the six posts of "Wild Spectral Sequences", which give little toy problems that can be solved using spectral sequences. I came up with all the proofs myself, and some of them may not be published anywhere else. Though I admit some of them are quite basic, I think they are quite instructive!

What are you waiting for? Visit the Post Series page today for these topics!

Posted by Jason Polak on 15. April 2017 · Categories: commutative-algebra

We are continuing the series on non-unique factorisation. For a handy table of contents, visit the Post Series directory.

In Part 1 of this series, we introduced three types of relations on a commutative ring $R$:

  1. Associates: $a\sim b$ means that $(a) = (b)$
  2. Strong associates: $a\approx b$ means that $a = ub$ for some $u\in U(R)$
  3. Very strong associates: $a\cong b$ means that $a\sim b$ and either $a = b = 0$, or $a = rb$ implies that $r\in U(R)$

Here, $U(R)$ denotes the group of units of $R$. We have already seen that in a ring with nontrivial idempotents, such as $\Z\times \Z$, a nontrivial idempotent $e$ satisfies $e\sim e$ and $e\approx e$, but $e\not\cong e$: indeed, $e = ee$, yet $e$ is nonzero and not a unit.
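
These relations are easy to explore by brute force. Here is a minimal Python sketch; note that it swaps $\Z\times\Z$ for $\Z/6$ (my choice, so that every check is a finite loop), which also has nontrivial idempotents, namely $3$ and $4$:

```python
n = 6                         # work in Z/6
R = range(n)
units = [u for u in R if any(u * v % n == 1 for v in R)]   # U(Z/6) = {1, 5}

def ideal(a):
    return {a * r % n for r in R}

def associates(a, b):         # a ~ b : (a) = (b)
    return ideal(a) == ideal(b)

def strong(a, b):             # a ≈ b : a = ub for some unit u
    return any(a == u * b % n for u in units)

def very_strong(a, b):        # a ≅ b : a ~ b and (a = b = 0, or a = rb forces r to be a unit)
    if not associates(a, b):
        return False
    if a == 0 and b == 0:
        return True
    return all(r in units for r in R if a == r * b % n)

e = 3                         # a nontrivial idempotent: 3 * 3 = 9 = 3 in Z/6
print(associates(e, e), strong(e, e), very_strong(e, e))   # True True False
```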

Therefore, $\cong$ is not an equivalence relation for all commutative rings. But it is symmetric:

Proof. Suppose $a\cong b$. Then $a\sim b$ and so $b\sim a$. If $a = b = 0$ then $b\cong a$ trivially, so suppose $a$ and $b$ are not both zero and write $a = sb$. If $b = ra$, then $a = sra = s^2rb$. Since $a\cong b$, this implies that $s^2r$ is a unit, and so $r$ is a unit. Hence $b\cong a$.

Guess what? The relation $\cong$ is also transitive. Since the proof is similarly short, I'll leave it to the reader. So $\cong$ is only missing reflexivity to be an equivalence relation on every ring. If $\cong$ is an equivalence relation on a ring $R$, then we say that $R$ is presimplifiable. We introduced this type of ring last time.

Posted by Jason Polak on 11. April 2017 · Categories: commutative-algebra

If $F$ is a field then the polynomial ring $F[x]$ is a unique factorisation domain: that is, every nonzero nonunit can be written as a product of irreducible elements, uniquely up to order and unit multiples. So in $\Q[x]$ for example, you can be sure that the polynomial $x^2 - 4 = (x-2)(x+2)$ can't be factored in any other way, and thus the only zeros of $x^2 - 4$ really are $\pm 2$.

If $F$ is not a field, then a polynomial might have a bunch of different factorisations. For example, in the ring $\Z/4[x]$ we can write $x^2 = (x)(x) = (x+2)(x+2)$. How can we make sense of factorisations in rings that are not unique factorisation domains? In order to do so, we first should make sure we understand what irreducible means in this context.
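
(Here is a throwaway Python sketch to verify the $\Z/4$ computation, multiplying polynomials as coefficient lists mod $4$:)

```python
def polymul(f, g, n):
    """Multiply polynomials given as coefficient lists, reducing mod n."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

print(polymul([0, 1], [0, 1], 4))   # x * x       -> [0, 0, 1] = x^2
print(polymul([2, 1], [2, 1], 4))   # (x+2)(x+2)  -> [0, 0, 1] = x^2
```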

In the next several posts we'll look at non-unique factorisation more closely, following a paper of Anderson and Valdes-Leon [1], but keeping the posts self-contained. We'll start by looking at the concept of associates. One can in fact define several different variations of associates. Here are three: associates, strong associates, and very strong associates.

Posted by Jason Polak on 21. March 2017 · Categories: elementary, exposition

Characteristic functions have magical properties. For example, consider a double summation:
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k}.$$
How do you switch the order of summation here? A geometric way is to arrange the terms in a triangular array and "see" that this sum must be equal to
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
I find this unsatisfactory, because the whole point of good notation is that you shouldn't have to think about what it actually means. Of course it's very important to understand what the notation means, but when doing formal manipulations it's nice not to have to do that every time.

A superior proof that these two sums are equal would be to write
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k} = \sum_{k=1}^M\sum_{r=1}^M a_{r,k}\delta(r\leq k)$$
where $\delta(r\leq k)$ is the function of the variables $r$ and $k$ that equals one exactly when $r\leq k$ and zero otherwise. Then we can happily switch the order of summation to get
$$\sum_{r=1}^M\sum_{k=1}^M a_{r,k}\delta(r\leq k).$$
Now, it’s trivial to get rid of the $\delta(r\leq k)$ function by writing
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
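
A quick numerical check never hurts. Here is a small Python sketch confirming that all three expressions agree on some arbitrary test data:

```python
M = 5
a = {(r, k): 10 * r + k for r in range(1, M + 1) for k in range(1, M + 1)}

original  = sum(a[r, k] for k in range(1, M + 1) for r in range(1, k + 1))
swapped   = sum(a[r, k] for r in range(1, M + 1) for k in range(r, M + 1))
indicator = sum(a[r, k] * (r <= k) for r in range(1, M + 1) for k in range(1, M + 1))

print(original, swapped, indicator)   # all three are equal
```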

Posted by Jason Polak on 21. March 2017 · Categories: commutative-algebra

This post is a continuation of this previous one, though I repeat the main definitions for convenience.

Let $R$ be a commutative ring and $A$ an $R$-module. We say that $x_1,\dots,x_n\in R$ is a regular sequence on $A$ if $(x_1,\dots,x_n)A\not = A$ and $x_i$ is not a zero divisor on $A/(x_1,\dots,x_{i-1})A$ for all $i$. Last time, we looked at the following theorem:

Theorem. Let $A$ and $B$ be $R$-modules and $x_1,\dots,x_n$ a regular sequence on $A$. If $(x_1,\dots,x_n)B = 0$ then
$$
{\rm Ext}_R^n(B,A) \cong {\rm Hom}_R(B,A/(x_1,\dots,x_n)A)$$

When $R$ is a Noetherian ring, $I$ a proper ideal of $R$, and $A$ a finitely-generated $R$-module, this theorem for $B = R/I$ says that the least integer $n$ such that ${\rm Ext}_R^n(R/I,A)\not= 0$ is exactly the length of a maximal regular sequence in $I$ on $A$.
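
For a concrete illustration (a standard example, not worked out in the original post): take $R = k[x,y]$ for a field $k$, $I = (x,y)$, and $A = R$. Then $x, y$ is a maximal regular sequence in $I$ on $A$, and a Koszul complex computation gives
$${\rm Ext}_R^0(R/I,R) = 0,\qquad {\rm Ext}_R^1(R/I,R) = 0,\qquad {\rm Ext}_R^2(R/I,R)\cong k,$$
so the least $n$ with ${\rm Ext}_R^n(R/I,A)\not=0$ is indeed $2$, the length of the regular sequence.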

The Noetherian and finitely generated hypotheses are crucial. Why is this? It’s because you need to have control over zero divisors. In fact you can see this by looking at the case $n = 0$:

Theorem. Let $R$ be a Noetherian ring, $I$ a proper ideal of $R$, and $A$ a finitely-generated $R$-module. Then every element of $I$ is a zero divisor on $A$ if and only if ${\rm Hom}_R(R/I,A)\not= 0$.
Proof. Since $A$ is a finitely generated module over the Noetherian ring $R$, every element of $I$ being a zero divisor on $A$ is equivalent, by the 'Cool Theorem' below together with prime avoidance, to $I$ being contained in the annihilator of a single nonzero element $a\in A$. This is in turn equivalent to every element of $I$ being sent to zero under the homomorphism
$$R\to A\\ 1\mapsto a.$$
Such homomorphisms are the same as nonzero homomorphisms $R/I\to A$. QED.
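
For instance, in the simplest possible case: take $R = \Z$, $I = (2)$, and $A = \Z/2$. Every element of $I$ kills the nonzero element $\bar{1}\in A$, and correspondingly
$${\rm Hom}_\Z(\Z/2, \Z/2)\cong \Z/2 \not= 0.$$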

Here we are using this crucial fact:

Cool Theorem. For a finitely generated module $A$ over a Noetherian ring $R$, the set $Z(A)$ of zero divisors on $A$ in $R$ is a union of finitely many prime ideals of $R$, each of which is maximal with respect to being contained in $Z(A)$. Furthermore, each such prime is the annihilator of a single nonzero element of $A$.

In general, primes that are the annihilator of a single element of a module $M$ are called the associated primes of $M$. The theory of associated primes and primary decomposition is of course far vaster than this simple 'Cool Theorem', as is evident from Eisenbud's thirty-page treatment of them in his book Commutative Algebra. In practice, however, I only ever seem to need this simple version.