Posted by Jason Polak on 03. October 2017 · Write a comment · Categories: elementary

Let $F$ be a finite field. Did you know that given any function $\varphi:F\to F$, there exists a polynomial $p\in F[x]$ such that $\varphi(a) = p(a)$ for all $a\in F$? It’s not hard to produce the required polynomial:
$$ p(x) = \sum_{a\in F} \varphi(a)\prod_{b\not= a}(x - b)\prod_{b\not=a}(a-b)^{-1}$$
This works because every nonzero element of $F$ is invertible, so each inverse $(a-b)^{-1}$ exists; the product multiplying $\varphi(a)$ then evaluates to $1$ at $x=a$ and to $0$ at every other point of $F$.
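For concreteness, here is a minimal Python sketch of this formula in the simplest case $F = \Z/p$ with $p$ prime (a general finite field would need genuine field arithmetic rather than integers mod $p$); the helper names mul_linear and interpolate are just for this illustration.

```python
def mul_linear(poly, b, p):
    """Multiply a polynomial (coefficient list mod p, lowest degree first) by (x - b)."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - b * c) % p
        out[i + 1] = (out[i + 1] + c) % p
    return out

def interpolate(phi, p):
    """Coefficients (mod p, lowest degree first) of a polynomial agreeing with phi on Z/p."""
    coeffs = [0] * p
    for a in range(p):
        term, scale = [1], phi(a) % p
        for b in range(p):
            if b != a:
                term = mul_linear(term, b, p)          # multiply by (x - b)
                scale = scale * pow(a - b, -1, p) % p  # multiply by (a - b)^{-1}
        coeffs = [(c + scale * t) % p for c, t in zip(coeffs, term)]
    return coeffs

# Sanity check with p = 5 and an arbitrary function phi
p, phi = 5, lambda a: (a * a + 3) % 5
coef = interpolate(phi, p)
assert all(sum(c * a**k for k, c in enumerate(coef)) % p == phi(a) for a in range(p))
```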

The same cannot be said of infinite fields. If $F$ is infinite, then there are functions $\varphi:F\to F$ that cannot be represented as polynomials. That’s because the cardinality of $F[x]$ is the same as that of $F$ when $F$ is infinite. However, the number of functions $F\to F$ is greater than the cardinality of $F$. Therefore, there simply aren’t enough polynomials.

But one does not have to go to infinite fields. For any prime $q$, there are functions $\Z/q^2\to \Z/q^2$ that cannot be represented as a polynomial. This is true because if $\varphi$ is a polynomial function, then $\varphi(x + q)\equiv \varphi(x)$ modulo $q$ (expand each monomial of $\varphi(x+q)$ with the binomial theorem). Therefore, none of the $(q^2)^{q^2-2}$ functions $\varphi:\Z/q^2\to\Z/q^2$ satisfying $\varphi(0) = 0$ and $\varphi(q) = 1$ can be represented by a polynomial.
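As a quick numerical sanity check of the congruence $\varphi(x+q)\equiv\varphi(x) \pmod{q}$, here is a small Python sketch with $q = 5$ and an arbitrary polynomial (the coefficients below are just an example, not from the post):

```python
q = 5
coeffs = [3, 1, 4, 1, 5]  # an arbitrary example polynomial, lowest degree first
p = lambda x: sum(c * x**k for k, c in enumerate(coeffs))

# p(x + q) and p(x) agree mod q at every point of Z/q^2
assert all((p(x + q) - p(x)) % q == 0 for x in range(q * q))
```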

Posted by Jason Polak on 28. September 2017 · 2 comments · Categories: commutative-algebra, homological-algebra, modules

Here is an interesting question involving free, projective, and flat modules that I will leave to the readers of this blog for now.

First, consider free modules. If $R$ is a ring, then every $R$-module is free if and only if $R$ is a division ring. The property of $R$ being a division ring can be expressed in terms of first-order logic in the language of rings: $\forall x[x\not=0 \rightarrow \exists y(xy = 1)]$.

The meat of this first-order statement is the equation $xy = 1$. Multiplying by $x$ on the right gives the equation $xyx = x$, which we can put in a first-order sentence: $\forall x\exists y[xyx = x]$. Notice that we dropped the condition $x\not=0$ from this one; that’s because $x=0$ satisfies $xyx = x$ for any $y$ in every ring. Rings that model $\forall x\exists y[xyx = x]$ are called von Neumann regular. More importantly, these are exactly the rings for which every $R$-module is flat.
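As a toy illustration (not part of the argument above), one can model-check the sentence $\forall x\exists y[xyx = x]$ by brute force in the finite rings $\Z/n$; the function name below is just for this sketch.

```python
def is_von_neumann_regular_mod(n):
    """Check the sentence  forall x exists y (x*y*x = x)  in the ring Z/n by brute force."""
    return all(any((x * y * x) % n == x for y in range(n)) for x in range(n))

print(is_von_neumann_regular_mod(6))   # True:  Z/6 is isomorphic to the product of fields Z/2 x Z/3
print(is_von_neumann_regular_mod(4))   # False: x = 2 admits no y with 2*y*2 = 2 in Z/4
```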

By weakening the statement that $R$ is a division ring, we got a statement equivalent to the statement that every $R$-module is flat. One might wonder: where did the projective modules go? Is there a first-order sentence (or perhaps a set of sentences) in the language of rings whose models are exactly those rings $R$ for which every $R$-module is projective? Diagrammatically:

$$\begin{aligned}
\text{division ring} &\longleftrightarrow \text{every module is free}\\
? &\longleftrightarrow \text{every module is projective}\\
\text{von Neumann regular} &\longleftrightarrow \text{every module is flat}
\end{aligned}$$

Can we replace the question mark with a first-order sentence, or a set of them?

My initial thoughts are no because of ultraproducts, but I have not yet come up with a rigorous argument.

Posted by Jason Polak on 23. September 2017 · Write a comment · Categories: opinion

I’ve decided to add more interactivity to this blog. As a first step, I’d like to know what kinds of posts readers would like to see. So click an option and vote!


What kinds of posts would you like to see in the future on this blog?


Posted by Jason Polak on 21. September 2017 · Write a comment · Categories: modules

Here is one characterisation of commutative rings of Krull dimension zero:

Theorem. A commutative ring $R$ has Krull dimension zero if and only if every element of the Jacobson radical ${\rm Jac}(R)$ of $R$ is nilpotent and the quotient ring $R/{\rm Jac}(R)$ is von Neumann regular.

Recall that a ring $R$ is von Neumann regular if for every $x\in R$ there exists a $y\in R$ such that $xyx = x$. This odd property is equivalent to saying that every $R$-module is flat.

Here are two examples of what happens when we drop various assumptions in the “if” direction of the theorem:

  1. The ring $\Z_{(p)}$ of integers localised at the prime $(p)$ is an example of a ring such that $R/{\rm Jac}(R)$ is von Neumann regular but ${\rm Jac}(R) = (p)$ contains no nonzero nilpotent elements. The ring $\Z_{(p)}$ has Krull dimension one.
  2. Another type of example is given by $R = \Z[[t]]/(t^n)$, where $\Z[[t]]$ denotes the power series ring with integer coefficients. Unlike in our first example, the Jacobson radical of this ring is the ideal $(t)$, which is also the nilradical (= the set of nilpotent elements), but $R/{\rm Jac}(R) = \Z$, which is not von Neumann regular and has Krull dimension one.

Note that we were forced to look for counterexamples to the dropped assumptions in the class of infinite rings. That’s because every finite commutative ring has Krull dimension zero.

Posted by Jason Polak on 20. September 2017 · Write a comment · Categories: homological-algebra, model-theory

There are all sorts of notions of dimension that can be applied to rings. Whatever notion you use though, the ones with dimension zero are usually fairly simple compared with the rings of higher dimension. Here we’ll look at three types of dimension and state what the rings of zero dimension look like with respect to each type. Of course, several examples are included.

All rings are associative with identity but not necessarily commutative. Some basic homological algebra is necessary to understand all the definitions.

Global Dimension

The left global dimension of a ring $R$ is the supremum over the projective dimensions of all left $R$-modules. The right global dimension is the same with “left” replaced by “right”. And yes, there are rings where the left and right global dimensions differ.

However, $R$ has left global dimension zero if and only if it has right global dimension zero. So, it makes sense to say that such rings have global dimension zero. Here is their characterisation:

A ring $R$ has global dimension zero if and only if it is semisimple; that is, if and only if it is a finite direct product of full matrix rings over division rings.

Examples of such rings are easy to generate by this characterisation:

  1. Fields and finite products of fields
  2. $M_2(k)$, the ring of $2\times 2$ matrices over a division ring $k$
  3. etc.


Posted by Jason Polak on 19. September 2017 · Write a comment · Categories: advice

Choosing where to get your PhD is an important decision. If you continue on to academia, your PhD might be the longest time you spend at any one institution until you get a permanent position. The most obvious choice is to apply to the highest-ranking schools. However, you should consider far more than that. Here, we’ll look at some of the important factors to consider, with the context of mathematics in mind. However, most of what I say applies to other fields as well.

Represented research areas

Unlike an undergraduate program, where the curriculum doesn’t differ much around the world (though it certainly can vary greatly in strength or intensity), a PhD will be on a very specialised topic. So, if you like algebra but go to a school where analysis and statistics are the main areas represented, you probably won’t enjoy it. This can be even worse at places where you don’t have to choose an advisor until the second year. So I suggest you look at the represented research areas on departmental websites and see what catches your interest. Unfortunately, some math department websites look like they were coded on a Super Nintendo, if that were even possible. So:

Make sure someone is actually doing something you’re interested in at prospective schools!

If you’re at the undergraduate level and not sure of your interests yet, it could be a good idea to do a master’s program before starting a PhD. I enjoyed doing a master’s degree first, even though in the long run it is more expensive.

Total school atmosphere

If you’re lucky enough to live near some schools you’re interested in, you should visit them, meet some professors, and even sit in on some classes and departmental seminars. Just walk around and see what it’s like. Some schools have a much nicer atmosphere than others. You should also get a sense of the surrounding city. This is especially true if you are a very independent worker: having an enjoyable city will in fact make working much easier. Conversely, living in a place you dislike for several years is quite draining.

Sadly, living temporarily in a city you don’t like is very probable at some stage of climbing the academic ladder.

Posted by Jason Polak on 19. September 2017 · Write a comment · Categories: homological-algebra, modules

Consider a field $k$. Define an action of $k[x,y]$ on $k[x]$ by $f*g = f(x,x)g(x)$ for all $f\in k[x,y]$ and $g\in k[x]$. In other words, the action is: multiply $f$ and $g$ and then replace every occurrence of $y$ by $x$.
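Here is a small SymPy sketch of this action (purely illustrative, with the rationals standing in for the field $k$); the helper act is just a name for this example.

```python
from sympy import symbols, expand

x, y = symbols('x y')

def act(f, g):
    """The action f*g: multiply f and g, then replace every occurrence of y by x."""
    return expand((f * g).subs(y, x))

print(act(y**2 + 1, x))                       # x**3 + x
print(act(x**2*y - x*y**2, x**5 + 3*x + 2))   # 0: this element of k[x,y] kills all of k[x]
```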

Is $k[x]$ a projective $k[x,y]$-module? Consider first the map $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$. It’s easy to check that this map is in fact a $k[x,y]$-module homomorphism. It would be tempting to try and split this map with the inclusion map $k[x]\to k[x,y]$. But this doesn’t work: this inclusion is not a $k[x,y]$-module homomorphism.

In fact, the $k[x,y]$-module homomorphism $k[x,y]\to k[x]$ given by $f\mapsto f(x,x)$ cannot split, simply because there are no nonzero $k[x,y]$-module homomorphisms $k[x]\to k[x,y]$ at all: if $g$ is such a homomorphism, then $(y-x)g(m) = g((y-x)*m) = g(0) = 0$ for every $m\in k[x]$, and since $k[x,y]$ is a domain this forces $g(m) = 0$. Therefore, $k[x]$ is not projective as a $k[x,y]$-module, using the module structure we gave it.

Here are two more ways to see this:

  1. Through the notion of separability: by definition, $k[x]$ being a projective $k[x,y]\cong k[x]\otimes_k k[x]$-module under the structure that we have defined means that $k[x]$ is a separable $k$-algebra. However, all separable $k$-algebras are finite-dimensional as vector spaces over $k$, whereas $k[x]$ is infinite-dimensional.
  2. Through Seshadri’s theorem: this theorem says that every finitely generated projective module over $k[x,y]$ is actually free. Therefore, we just have to show that $k[x]$ is not free, because $k[x]$ is certainly finitely generated as a $k[x,y]$-module. But $x^2y - xy^2$ annihilates every element of $k[x]$, which cannot happen in a nonzero free module.

Posted by Jason Polak on 01. September 2017 · Write a comment · Categories: ring-theory

In the previous post we saw the following definition for a ring $R$: An element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero. Why introduce this notion?

Well, did you know that every finite integral domain is a field? If $R$ is an integral domain and $a\in R$ is nonzero, then the multiplication map $R\to R$ given by $x\mapsto ax$ is injective. If $R$ is finite, then this map must also be surjective, so some $x$ satisfies $ax = 1$ and $a$ is invertible!
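Here is a quick Python illustration of this injectivity-implies-surjectivity argument in the finite field $\Z/7$ (a toy check only):

```python
p = 7
for a in range(1, p):
    image = {(a * x) % p for x in range(p)}  # image of the multiplication map x -> a*x
    assert image == set(range(p))            # injective on a finite set, hence surjective
    assert 1 in image                        # in particular, a is invertible
```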

Another way of stating this neat fact is that if $R$ is any ring and $P$ is a prime ideal of $R$ such that $R/P$ is finite, then $P$ is also a maximal ideal. A variation of this idea is that every prime ideal in a finite commutative ring is actually maximal. Yet another is that finite commutative rings have Krull dimension zero.

Posted by Jason Polak on 31. August 2017 · Write a comment · Categories: ring-theory

Let $R$ be an associative ring. An element $r\in R$ is called nilpotent if $r^n = 0$ for some $n$. There is a stronger notion: an element $r\in R$ is called strongly nilpotent if every sequence $r = r_0,r_1,r_2,\dots$ such that $r_{n+1}\in r_nRr_n$ is eventually zero.

How are these two related? It is always the case that a strongly nilpotent element is nilpotent, because if $r$ is strongly nilpotent then the sequence $r,r^2,r^4,r^8,\dots$ vanishes. However, the element
$$\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$$
in any $2\times 2$ matrix ring is nilpotent but not strongly nilpotent. Notice how we had to use a noncommutative ring here—that’s because for commutative rings, a nilpotent element is strongly nilpotent!
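A small NumPy sketch of this example (illustration only): the matrix $r = E_{12}$ squares to zero, yet choosing $s = E_{21}$ gives $rsr = r$, so the constant sequence $r, r, r, \dots$ witnesses that $r$ is not strongly nilpotent.

```python
import numpy as np

r = np.array([[0, 1],
              [0, 0]])
s = np.array([[0, 0],
              [1, 0]])

print(r @ r)       # the zero matrix, so r is nilpotent
print(r @ s @ r)   # equals r again, so the sequence r, rsr, rsrsr, ... never vanishes
```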

Posted by guest on 27. August 2017 · Write a comment · Categories: math

A guest post by Paul Pierce and Ashley Ross

With the advances in calculator technology, some developmental and college-level math courses are restricting the use of any type of graphing or programmable calculators. This is to help students avoid becoming dependent on their calculators for both simple arithmetic and graphing. So, some teachers are going “old school” and forbidding the use of calculators in the classroom. Therefore, it is imperative that students learn efficient methods for finding important values, as well as graphing functions, without the help of their calculator. One type of function that appears in many courses is the quadratic function, and one of the most critical points on the graph of a quadratic function is the vertex.

Fundamental Concepts of the Graph of a Quadratic Function

For the function $f(x)=ax^2+bx+c$ with $a\not=0$, the graph is a smooth, continuous curve called a parabola. This parabola opens upward if $a > 0$ or opens downward if $a < 0$. The vertex $(h,k)$ of the graph is the only turning point on the parabola, which makes it a critical point. The $y$-coordinate $k$ of the vertex represents the minimum value of the function if $a>0$, or the maximum value of the function if $a<0$.

The point $(h,k)$ may be found using the formulas $h=\frac{-b}{2a}$ and $k=\frac{bh}{2}+c$, which begin to show the importance of the vertex. We give two examples:

Example 1. For $y=x^2+6x+3$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-6}{2(1)}=-3$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(6)(-3)}{2}+3=-9+3=-6$.

So, the coordinates of the vertex of the parabola are $(-3, -6)$. Since $a = 1 > 0$, this vertex is the lowest point on the parabola, which means that $k = -6$ is the minimum value of the function.

Example 2. For $y=-2x^2+8x-5$, find the vertex $(h,k)$.

First find $h$ using $h=\frac{-b}{2a}=\frac{-8}{2(-2)}=2$.

Next find $k$ using $k=\frac{bh}{2}+c=\frac{(8)(2)}{2}-5=8-5=3$.

So, the coordinates of the vertex of the parabola are $(2, 3)$. Since $a = -2 < 0$, this vertex is the highest point on the parabola, which illustrates that $k = 3$ is the maximum value of this function.
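For readers who do want to double-check their hand computations afterwards, here is a minimal Python sketch of the two vertex formulas, verifying both examples above (the function name is ours, not from the article):

```python
def vertex(a, b, c):
    """Vertex (h, k) of y = a*x**2 + b*x + c, using h = -b/(2a) and k = b*h/2 + c."""
    h = -b / (2 * a)
    k = b * h / 2 + c
    return h, k

print(vertex(1, 6, 3))     # (-3.0, -6.0), matching Example 1
print(vertex(-2, 8, -5))   # (2.0, 3.0), matching Example 2
```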
