# A partition identity

Posted by Jason Polak on 07. April 2018 · Categories: number-theory

There is a cool way to express 1 as a sum of unit fractions using the partitions of a fixed positive integer. What do we mean by partition? If $n$ is such an integer, then a partition of $n$ is a way of writing $n$ as a sum of positive integers; grouping equal parts, it takes the form $e_1d_1 + \cdots + e_kd_k = n$, where the $d_i$ are the distinct parts and $e_i$ counts how many times $d_i$ appears. For example,

7 = 5 + 1 + 1 = 5 + 2(1)

The order of the parts is not very interesting, so we identify partitions up to order. Thus, here are all 15 partitions of 7:

7
6+1
5+2
5+1+1
4+3
4+2+1
4+1+1+1
3+3+1
3+2+2
3+2+1+1
3+1+1+1+1
2+2+2+1
2+2+1+1+1
2+1+1+1+1+1
1+1+1+1+1+1+1

Group equal parts together so that each partition is written as $n = \sum e_id_i$, where the part $d_i$ appears $e_i$ times. Then it is a theorem that:
$$1 = \sum (e_1!\cdots e_k!\cdot d_1^{e_1}\cdots d_k^{e_k})^{-1}.$$
This partition identity has a bunch of proofs. A neat one appears in the paper “Using Factorizations to Prove a Partition Identity” by David Dobbs and Timothy Kilbourn. In their proof, they use an asymptotic expression for the number of irreducible polynomials of a given degree $n$ over a finite field (the same $n$ that appears in the partition).

Here are some examples of this identity. For n=5, we have:

1 = 1/5 + 1/4 + 1/6 + 1/6 + 1/8 + 1/12 + 1/120

For n=7:

1 = 1/7 + 1/6 + 1/10 + 1/10 + 1/12 + 1/8 + 1/24 + 1/18
+ 1/24 + 1/12 + 1/72 + 1/48 + 1/48 + 1/240 + 1/5040

And for n=11:

1 = 1/11 + 1/10 + 1/18 + 1/18 + 1/24 + 1/16 + 1/48 + 1/28 + 1/21
+ 1/56 + 1/28 + 1/168 + 1/30 + 1/24 + 1/36
+ 1/36 + 1/48 + 1/72 + 1/720 + 1/50 + 1/40 + 1/40
+ 1/90 + 1/30 + 1/90 + 1/240 + 1/80 + 1/240
+ 1/3600 + 1/96 + 1/64 + 1/192 + 1/72 + 1/96 + 1/48
+ 1/288 + 1/192 + 1/192 + 1/960 + 1/20160 + 1/324
+ 1/324 + 1/144 + 1/216 + 1/2160 + 1/1152 + 1/288 + 1/576
+ 1/4320 + 1/120960 + 1/3840 + 1/2304
+ 1/5760 + 1/40320 + 1/725760 + 1/39916800
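These sums can be checked by computer. Here is a short Python sketch (mine, not from the post) that enumerates the partitions of $n$ and sums the unit fractions with exact arithmetic:

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as lists of parts in non-increasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

def identity_sum(n):
    """Sum 1/(e_1! ... e_k! * d_1^e_1 ... d_k^e_k) over all partitions of n."""
    total = Fraction(0)
    for p in partitions(n):
        term = Fraction(1)
        for d in set(p):
            e = p.count(d)  # multiplicity of the part d
            term /= factorial(e) * d ** e
        total += term
    return total

print(identity_sum(7))  # 1
```

Running `identity_sum` for n = 5, 7, and 11 reproduces the sums above, each equal to 1.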

# Stably free and the Eilenberg swindle

Posted by Jason Polak on 29. March 2018 · Categories: modules

I already mentioned the idea of stably isomorphic for a ring $R$: two $R$-modules $A$ and $B$ are stably isomorphic if there exists a natural number $n$ such that $A\oplus R^n\cong B\oplus R^n$.

Let’s examine a specific case: if $A$ is stably isomorphic to a free module, then let’s call it stably free.

So, to reiterate: a module $A$ is called stably free if there exists a natural number $n$ such that $A\oplus R^n$ is free. We already saw an example of a stably free module, where $R = \mathbb{H}[x,y]$, the two variable polynomial ring over the quaternions.

One might wonder: why don’t we allow infinite $n$ in this definition? It’s because of the Eilenberg swindle, named after mathematician Samuel Eilenberg.

The Eilenberg swindle goes like this: suppose $P$ is a projective $R$-module. Then, there exists a module $Q$ such that $P\oplus Q \cong F$ where $F$ is a free module. Now, let $E = \oplus_{i=1}^\infty F$.

Then:
$$P\oplus E \cong P\oplus (P\oplus Q)\oplus (P\oplus Q)\oplus\cdots\\ \cong P\oplus (Q\oplus P)\oplus (Q\oplus P)\oplus\cdots\\ \cong F\oplus F\oplus F\oplus\cdots = E.$$
Therefore, $P\oplus E$ is free. Hence, if we allowed infinite $n$ in the definition of stably free, every projective module would be stably free and there wouldn’t be much point to the definition ‘stably free’.

Here is an exercise for you:

Show that if $A$ is a stably free $R$-module that is not finitely generated, then $A$ is free.

# Markov processes and random sentences

Posted by Jason Polak on 29. March 2018 · Categories: probability

It is said that Markov originally invented Markov processes to understand how some letters follow other letters in poetry.

Recall that a Markov process is a random process that models moving from one state to another, where the possible states form some set. For each pair of states, there is a fixed probability of moving from the first state to the second.

Writing sentences can be modeled with a Markov process. To do so, we take in some text and compute the empirical frequency with which one word follows another. Then, using a random number generator, we can output new sentences generated by this Markov process.
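As a toy illustration of this idea (my sketch; a real generator like dadadodo is surely more sophisticated), here is how one might build such a word-level chain in Python:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8):
    """Walk the chain from `start`, picking each next word at random."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word was ever observed after this one
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
random.seed(0)
print(generate(chain, "the"))
```

Because repeated followers appear multiple times in each list, `random.choice` automatically samples with the observed transition frequencies.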

This doesn’t use any deep theory, but it’s nevertheless fun to try. It wouldn’t be hard to write such a program, and it would be interesting to do so, but for demonstration purposes I will use a premade one: dadadodo, which is available for Linux. Basically, you run it like this:
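The command itself did not survive the formatting here; if I recall dadadodo’s interface correctly (an assumption worth checking against its man page), the invocation looks something like:

```shell
# Generate N sentences using transition probabilities computed from
# inputfile.txt; the -c flag sets the number of sentences to output.
dadadodo -c N inputfile.txt
```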

This command generates N sentences from a Markov process whose transition probabilities are computed from the file inputfile.txt. According to dadadodo’s documentation,

Sometimes these sentences are nonsense, but sometimes they cut right through to the heart of the matter and reveal hidden meanings.

Here is one amusing one generated from my PhD thesis:

I am grateful to do not actually integral.

Sounds an awful lot like “I am grateful to not do the actual integral,” which is funny because I computed a rather tricky p-adic integral in my thesis. Here’s one feeding in a cover letter I wrote in my current job application process:

I believe I have the next generation of my a number theory.

Sounds a lot like “I am the next generation of number theory.” Here is a pretty funny one I got from feeding in a job posting:

Traditionally, enterprises have spent countless hours over the latest technology.

Now that sounds about right! The University of Melbourne sends out an email with news of interest to staff. Here is a random sentence that appeared from dadadodo using this news text:

The Research Fellowships on experiences of the bike racks.

I should apply for one of those.

Most of the output sentences are nonsense, but many are not too far from meaningful, if you think about it.

# Stable Isomorphisms, Grothendieck Groups: Example

Posted by Jason Polak on 22. March 2018 · Categories: modules

If $a$ and $b$ are two real numbers and $ax = bx$, then we can’t conclude that $a = b$, because $x$ may be zero. The same is true for tensor products of modules: if $A$ and $B$ are two left $R$-modules and $X$ is a right $R$-module, then an isomorphism $X\otimes_R A\cong X\otimes_R B$ does not necessarily mean that $A\cong B$. Of course, this can happen even when $X$ is nonzero.

Addition for real numbers is a little different. If $a$ and $b$ are two real numbers, then $x + a = x + b$ is equivalent to $a = b$. What about direct sums? If $A$ and $B$ are two $R$-modules and $X$ is a third $R$-module with $X\oplus A\cong X\oplus B$, is it true that $A\cong B$?

The answer is no. Perhaps this is surprising, given the way direct sums work. After all, in a direct sum $X\oplus A$, it “feels like” what happens in $X$ is independent of what happens in $A$. And for finite-dimensional vector spaces this is true: for $k$-modules where $k$ is a field, if $X\oplus A\cong X\oplus B$ with everything finite-dimensional, then certainly $A\cong B$, because $A$ and $B$ have the same dimension.

# The Prisoner’s Dilemma

Posted by Jason Polak on 08. March 2018 · Categories: math

Suppose the police suspect you and your friend of robbing one billion dollars in bitcoin. The police can only charge you and your friend with the small crime of possession of a sawed-off shotgun, however, and want both of you to confess to the robbery. So, you and your friend are put in separate rooms and given two decisions:

1. Stay silent, in which case you’ll only be charged with weapons possession
2. Confess

If both of you stay silent, then both of you get a minor prison term. If you confess and your partner stays silent, then the police will let you go in exchange for prosecuting your partner, who then gets the maximum sentence; and vice versa. If both of you confess, both of you get serious prison time, though a little less than the maximum because you both cooperated.

Since each of you is in a separate room, you have to make your decision without knowing what your partner decides. What is the best strategy?

Game theorists like to put all the possibilities in a ‘payoff matrix’, which describes the possible outcome for each combination of decisions. Here is the payoff matrix in this case:

| You \ Partner | Confess | Stay Silent |
|---|---|---|
| **Confess** | (-3, -3) | (0, -4) |
| **Stay Silent** | (-4, 0) | (-1, -1) |
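Taking the entries as (your payoff, partner’s payoff) in years of prison, with the Stay Silent/Confess entry read as $(-4, 0)$ (the mirror image of $(0, -4)$), a few lines of Python confirm the classical conclusion that confessing is a strictly dominant strategy:

```python
# Payoff matrix: payoffs[(you, partner)] = (your payoff, partner's payoff),
# where 0 = confess and 1 = stay silent.  Payoffs are negatives of prison years.
payoffs = {
    (0, 0): (-3, -3),
    (0, 1): (0, -4),
    (1, 0): (-4, 0),
    (1, 1): (-1, -1),
}

# Whatever the partner does, your payoff is strictly higher when you confess.
for partner in (0, 1):
    assert payoffs[(0, partner)][0] > payoffs[(1, partner)][0]
print("confessing is a strictly dominant strategy")
```

Of course, if both players follow this dominant strategy, they end up at $(-3, -3)$, which is worse for both than the $(-1, -1)$ they would get by staying silent; that tension is the whole point of the dilemma.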

# Cereal box prizes and transition matrices

Posted by Jason Polak on 04. March 2018 · Categories: probability

If you don’t know what a transition matrix is, you might want to read the transition matrix post before reading this one.

Transition matrices can be used to solve some classic probability problems. For example, consider the following problem:

Suppose in each cereal box you buy there is one number in the set $\{1,2,3,4,5\}$. You get a prize if you collect all five numbers. What is the expected number of boxes you have to buy (or steal) before you get all five numbers?

I found this problem in Frederick Mosteller’s book ‘Fifty Challenging Problems in Probability with Solutions’. I had to think about this problem for a few minutes, and you should too before going on.
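If you want to check your answer afterwards, here is a short Python computation (a sketch of mine, not Mosteller’s solution): with $i$ distinct numbers already collected, the next box shows a new number with probability $(5-i)/5$, so the wait for it has mean $5/(5-i)$ boxes:

```python
from fractions import Fraction

# Sum the expected waits for the 1st, 2nd, ..., 5th new number.
expected = sum(Fraction(5, 5 - i) for i in range(5))
print(expected, "≈", float(expected))
```

This is the classic coupon-collector argument, done with exact rational arithmetic.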

# Transition matrices

Posted by Jason Polak on 01. March 2018 · Categories: probability

Imagine a discrete-time stochastic system with $n$ states. For each pair of states $i$ and $j$, there is a probability $p_{ij}$ of moving to state $j$ in the next time step, given that the system is in state $i$. These probabilities can be put in a matrix, known as the transition matrix.

Let’s take an example. Such a discrete time system can be easily represented by a graph (in the sense of graph theory). Here is one:

(Incidentally, this is not quite how I would draw this graph by hand, but letting Graphviz draw it for me is worth not getting it exactly the way I want.) In this system, there are two states: from each state, there is a 1/2 probability of leaving it and a 1/2 probability of remaining in it. It is a system with a lot of symmetry, and its transition matrix is
$$T_1 = \begin{pmatrix}1/2 & 1/2\\1/2 & 1/2\end{pmatrix}$$
The cool thing about a transition matrix is that you can raise it to some power. What does this mean? If $T$ is a transition matrix, its $i,j$ entry is by definition the probability of moving to state $j$ in one step, given that the system is in state $i$. It then follows from the definition of matrix multiplication and induction that the $i,j$ entry of $T^2$ is the probability of moving to state $j$ in two steps from state $i$. In general, the $i,j$ entry of $T^n$ is the probability of moving to state $j$ in $n$ steps starting from state $i$.
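To see this concretely, here is a small Python sketch (plain Python, no libraries; not from the post) that raises $T_1$ to a power by repeated multiplication. For this particular symmetric chain, every power equals $T_1$ itself:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[0.5, 0.5],
     [0.5, 0.5]]

# The i,j entry of T^n is the probability of being in state j after n steps,
# starting from state i.
Tn = T
for _ in range(4):
    Tn = matmul(Tn, T)
print(Tn)  # [[0.5, 0.5], [0.5, 0.5]]
```

That the powers are all equal reflects the fact that, from either state, the next state is a fair coin flip regardless of the past.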

# Book Review: Levy’s ‘Crypto’

Posted by Jason Polak on 01. March 2018 · Categories: book

Today, public-key cryptography is everywhere, offering some measure of security for virtually all internet commerce transactions and secure shell connections. It’s hard to imagine life without it, even though most people aren’t aware of it.

Steven Levy’s Crypto is a great book explaining what public-key cryptography is and how it all came about. In fact, I first read it almost twenty years ago when I was in high school, and I just read it again the other day.

How did encryption in the digital age start out? How did public-key cryptography get invented? And how did public-key crypto get into the hands of pretty much everyone with a web browser all over the world despite the attempts of the U.S. government to control it? These questions are answered in Levy’s book.

# Expected iterations for a finite random walk

Posted by Jason Polak on 25. February 2018 · Categories: probability

Consider three cells in a row, like so:

A player (the blue disc) starts out in the left-most cell, and discrete time starts. At each step in time, the player has a 1/2 probability of moving left and a 1/2 probability of moving right. If the player chooses to move left but cannot because it is in the left-most cell, then it does nothing, though that still counts as a move. The game ends when the player reaches the right-most cell.

What is the expected number of moves in this game?
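Before reading on, you can estimate the answer with a quick Monte Carlo simulation (a sketch of mine, not from the post):

```python
import random

def walk():
    """Play one game: start in the left cell (0), stop upon reaching cell 2."""
    pos, moves = 0, 0
    while pos != 2:
        moves += 1
        if random.random() < 0.5:
            pos = max(pos - 1, 0)  # a blocked left move still counts as a move
        else:
            pos += 1
    return moves

random.seed(1)
trials = 100_000
estimate = sum(walk() for _ in range(trials)) / trials
print(round(estimate, 2))
```

Averaging over many trials, the printed estimate settles near the exact expected value.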

# Python’s “map” method and permutations of lists

Posted by Jason Polak on 18. February 2018 · Categories: computer-science

Let’s look at Python’s `map` function. What does this function do? Here is the syntax: `map(function, iterable)`.

It takes a function called `function` and applies it to each element of `iterable`, returning the result as an iterator. For example:
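The original example did not survive the formatting here; a stand-in consistent with the post’s title (hypothetical, not the author’s code) maps `list` over the permutations of `[1, 2, 3]`:

```python
from itertools import permutations

# Use map to turn each permutation of [1, 2, 3] -- a tuple -- into a list.
result = list(map(list, permutations([1, 2, 3])))
for p in result:
    print(p)
```

Its output is the six permutations of `[1, 2, 3]`, one per line, from `[1, 2, 3]` to `[3, 2, 1]`.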
