# Symmetric+RSA vs. RSA and Davida’s Attack

Posted by Jason Polak on 06. May 2018 · Categories: number-theory

Alice wants her friends to send her stuff only she can read. RSA public-key encryption allows her to do that: she chooses huge primes $p$ and $q$ and releases $N = pq$ along with an encryption exponent $e$ such that ${\rm gcd}(e,(p-1)(q-1)) = 1$. If Bob wants to send Alice a message $m$, he sends $c = m^e$ modulo $N$ to Alice.

Computing $m$ from $m^e$ is hard if you only know $N$, but becomes easy with the prime factors $p$ and $q$. And, only Alice knows $p$ and $q$. The word "hard" is relative, and maybe someone will find an easy way to do it in the future, perhaps with quantum computers.

However, if you know $p$ and $q$, then you can compute $d$ such that $de = 1$ modulo $(p-1)(q-1)$. That is, you're computing the inverse of $e$ in the multiplicative group $(\mathbb{Z}/(p-1)(q-1))^\times$. You can then use Fermat's little theorem, which says $x^{p-1} = 1$ for any $x\in (\mathbb{Z}/p)^\times$, to check that $c^d = m^{ed} = m$ modulo $N$. Therefore, $c^d$ in $\mathbb{Z}/N$ is the decrypted message.
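To make the bookkeeping concrete, here is a toy run of the whole exchange in Python; the tiny primes, exponent, and message are standard textbook illustrations, not values from the post:

```python
# Toy RSA with tiny textbook primes; real keys use primes of 1024+ bits.
p, q = 61, 53              # Alice's secret primes
N = p * q                  # public modulus, N = 3233
phi = (p - 1) * (q - 1)    # (p-1)(q-1) = 3120
e = 17                     # public exponent with gcd(e, phi) = 1
d = pow(e, -1, phi)        # private exponent: d*e = 1 mod phi (Python 3.8+)

m = 65                     # Bob's message, 0 <= m < N
c = pow(m, e, N)           # Bob encrypts: c = m^e mod N
assert pow(c, d, N) == m   # Alice decrypts: c^d = m^{ed} = m mod N
```

Fast modular exponentiation (three-argument `pow`) is what keeps both directions cheap even at realistic key sizes.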

# The Discrete Log, Part 2: Shanks' Algorithm

Posted by Jason Polak on 03. May 2018 · Categories: number-theory

In a group $G$, the discrete logarithm problem is to solve for $x$ in the equation $g^x =h$ where $g,h\in G$. In Part 1, we saw that solving the discrete log problem for finite fields would imply that we could solve the Diffie-Hellman problem and crack the ElGamal encryption scheme.

Obviously, the main question at this point is: how hard is the discrete logarithm problem (DLP)? In this post, we will look at various ways to approach the problem. We'll focus on solving the DLP in the cyclic group $\F_p^\times$, though some of the ideas generalise to more general groups as well.

## The Brute Force Approach

So, we've got this equation: $g^x = h$ in $\F_p^\times$. Of course, we're assuming that there exists an $x$ solving this equation; in practice $g$ is chosen so that a solution exists, for instance by taking $g$ to be a generator of $\F_p^\times$.
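A brute-force solver simply computes $g^0, g^1, g^2, \dots$ until it hits $h$. Here is a minimal sketch in Python (the example numbers are my own):

```python
def brute_force_dlog(g, h, p):
    """Solve g^x = h in F_p^* by trying x = 0, 1, 2, ... in turn."""
    power = 1
    for x in range(p - 1):
        if power == h % p:
            return x
        power = (power * g) % p
    return None  # h is not in the subgroup generated by g

# Example: solve 2^x = 13 in F_19^*
x = brute_force_dlog(2, 13, 19)
assert x is not None and pow(2, x, 19) == 13
```

This takes $O(p)$ multiplications in the worst case, which is hopeless for cryptographic sizes of $p$.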

# The Discrete Logarithm, Part 1

Posted by Jason Polak on 30. April 2018 · Categories: number-theory

Given a group $G$ and an element $g\in G$ (the "base"), the discrete logarithm $\log_g(h)$ of an $h\in G$ is an integer $x$ such that $g^x = h$, if such an integer exists. The name "discrete logarithm" essentially means that we are only allowed to use integer powers in the group, rather than extending the definition of exponentiation as with the usual logarithm for the real numbers.

Because of cryptographic applications, usually the discrete logarithm is seen in the world of finite fields or elliptic curves. Here, given a finite field $\F_q$, the discrete logarithm is considered in the multiplicative group $\F_q^\times$. It so happens that this group is cyclic, and hence it must be isomorphic to $\Z/(q-1)$. So, given a $g\in \F_q$ that generates $\F_q^\times$ as a multiplicative group, the discrete logarithm always exists and thus defines a function
$$\log_g: \F_q^\times\longrightarrow \Z/(q-1).$$
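For a small prime this function can be written down in full. Here is a sketch with $p = 13$ and generator $g = 2$ (my own small example):

```python
p, g = 13, 2  # 2 generates F_13^* (a small example of my choosing)

log = {}
power = 1
for x in range(p - 1):
    log[power] = x          # log_g(power) = x in Z/(p-1)
    power = (power * g) % p

# log is a bijection from F_p^* = {1, ..., p-1} onto {0, ..., p-2}
assert sorted(log) == list(range(1, p))
assert all(pow(g, x, p) == h for h, x in log.items())
```

The table shows why the name "logarithm" fits: $\log_g$ turns multiplication in $\F_p^\times$ into addition in $\Z/(p-1)$.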

# A partition identity

Posted by Jason Polak on 07. April 2018 · Categories: number-theory

There is a cool way to express 1 as a sum of unit fractions using the partitions of a fixed positive integer. What do we mean by partition? If $n$ is such an integer, then a partition of $n$ is just a sum $e_1d_1 + \cdots + e_kd_k = n$, where the $d_i$ are distinct positive integers and the $e_i$ are positive integers giving the number of times each $d_i$ appears. For example,

7 = 5 + 1 + 1 = 5 + 2(1)

The order of the partition is not very interesting and so we identify partitions up to order. Thus, here are all the 15 partitions of 7:

7
6+1
5+2
5+1+1
4+3
4+2+1
4+1+1+1
3+3+1
3+2+2
3+2+1+1
3+1+1+1+1
2+2+2+1
2+2+1+1+1
2+1+1+1+1+1
1+1+1+1+1+1+1

Group equal parts together so that each partition is written as $n = \sum_i e_id_i$, where the $d_i$ are the distinct parts and $e_i$ counts the appearances of $d_i$. Then it's a theorem that:
$$1 = \sum (e_1!\cdots e_k!\cdot d_1^{e_1}\cdots d_k^{e_k})^{-1}.$$
This partition identity has a bunch of proofs. A neat one appears in the paper "Using Factorizations to Prove a Partition Identity" by David Dobbs and Timothy Kilbourn. In their proof, they used an asymptotic expression for the number of irreducible polynomials of a given degree over a finite field (the degree being the same $n$ that appears in the partition).

Here are some examples of this identity. For n=5, we have:

1 = 1/5 + 1/4 + 1/6 + 1/6 + 1/8 + 1/12 + 1/120

For n=7:

1 = 1/7 + 1/6 + 1/10 + 1/10 + 1/12 + 1/8 + 1/24 + 1/18
+ 1/24 + 1/12 + 1/72 + 1/48 + 1/48 + 1/240 + 1/5040

And for n=11

1 = 1/11 + 1/10 + 1/18 + 1/18 + 1/24 + 1/16 + 1/48 + 1/28 + 1/21
+ 1/56 + 1/28 + 1/168 + 1/30 + 1/24 + 1/36
+ 1/36 + 1/48 + 1/72 + 1/720 + 1/50 + 1/40 + 1/40
+ 1/90 + 1/30 + 1/90 + 1/240 + 1/80 + 1/240
+ 1/3600 + 1/96 + 1/64 + 1/192 + 1/72 + 1/96 + 1/48
+ 1/288 + 1/192 + 1/192 + 1/960 + 1/20160 + 1/324
+ 1/324 + 1/144 + 1/216 + 1/2160 + 1/1152 + 1/288 + 1/576
+ 1/4320 + 1/120960 + 1/3840 + 1/2304
+ 1/5760 + 1/40320 + 1/725760 + 1/39916800
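The identity is also easy to verify numerically. Here is a short sketch (the function names are mine, not from the paper) that enumerates the partitions of $n$ and sums the reciprocals with exact rational arithmetic:

```python
from collections import Counter
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for d in range(min(n, max_part), 0, -1):
        for rest in partitions(n - d, d):
            yield (d,) + rest

def identity_sum(n):
    """Sum of 1/(e_1! ... e_k! * d_1^{e_1} ... d_k^{e_k}) over partitions of n."""
    total = Fraction(0)
    for part in partitions(n):
        term = Fraction(1)
        for d, e in Counter(part).items():  # the part d appears e times
            term /= factorial(e) * d**e
        total += term
    return total
```

For each $n$ tried, `identity_sum(n)` comes out to exactly 1, and `partitions(7)` yields the 15 partitions listed above.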

# Stably free and the Eilenberg swindle

Posted by Jason Polak on 29. March 2018 · 2 comments · Categories: modules

I already mentioned the idea of stably isomorphic for a ring $R$: two $R$-modules $A$ and $B$ are stably isomorphic if there exists a natural number $n$ such that $A\oplus R^n\cong B\oplus R^n$.

Let's examine a specific case: if $A$ is stably isomorphic to a free module, then let's call it stably free.

So, to reiterate: a module $A$ is called stably free if there exists a natural number $n$ such that $A\oplus R^n$ is free. We already saw an example of a stably free module, where $R = \mathbb{H}[x,y]$, the two variable polynomial ring over the quaternions.

One might wonder: why don't we allow infinite $n$ in this definition? It's because of the Eilenberg swindle, named after mathematician Samuel Eilenberg.

The Eilenberg swindle goes like this: suppose $P$ is a projective $R$-module. Then, there exists a module $Q$ such that $P\oplus Q \cong F$ where $F$ is a free module. Now, let $E = \oplus_{i=1}^\infty F$.

Then:
$$P\oplus E \cong P\oplus (P\oplus Q)\oplus (P\oplus Q)\oplus\cdots\\ \cong P\oplus (Q\oplus P)\oplus (Q\oplus P)\oplus\cdots\\ \cong F\oplus F\oplus F\oplus\cdots = E$$
Therefore, $P\oplus E\cong E$ is free. Hence, if we allowed infinite $n$ in the definition of stably free, every projective module would be stably free and there wouldn't be much point in the definition 'stably free'.

Here is an exercise for you:

Show that if $A$ is a stably free $R$-module that is not finitely generated, then $A$ is free.

Posted by Jason Polak on 29. March 2018 · Categories: probability

It is said that Markov originally invented Markov processes to understand how some letters follow other letters in poetry.

Recall that a Markov process is a random process that models moving from one state to another, where the states come from some fixed set and there is a fixed probability of moving from each state to each other state.

Writing sentences can be modeled with a Markov process. To do so, we take in some text and compute the empirical frequency with which each word follows another. Then, using a random number generator, we can output new sentences generated by this Markov process.
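Here is a minimal word-level sketch of this idea in Python; it is my illustration of the technique, not dadadodo's actual implementation:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)   # repeats encode the observed frequencies
    return chain

def generate(chain, start, length, seed=None):
    """Random-walk the chain to produce up to `length` words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break            # dead end: no word was ever seen after this one
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feeding it a thesis or a cover letter and asking for a few dozen words produces the same kind of almost-sentences described below.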

This doesn't use any deep theory but is nevertheless fun to try. It's not hard to write a program like this, and it would be interesting to do so, but for demonstration purposes I will use a premade one: dadadodo, which is available for Linux. You run it on an input file, telling it how many sentences to generate, and it produces them from a Markov process whose transition probabilities are computed from that file. According to dadadodo's documentation,

Sometimes these sentences are nonsense, but sometimes they cut right through to the heart of the matter and reveal hidden meanings.

Here is one amusing one generated from my PhD thesis:

I am grateful to do not actually integral.

Sounds an awful lot like "I am grateful to not do the actual integral," which is funny because I computed a rather tricky p-adic integral in my thesis. Here's one feeding in a cover letter I wrote in my current job application process:

I believe I have the next generation of my a number theory.

Sounds a lot like "I am the next generation of number theory." Here is a pretty funny one I got from feeding in a job posting:

Traditionally, enterprises have spent countless hours over the latest technology.

Now that sounds about right! The University of Melbourne sends out an email with news of interest to staff. Here is a random sentence that appeared from dadadodo using this news text:

The Research Fellowships on experiences of the bike racks.

I should apply for one of those.

Most of the sentences it outputs are nonsense, but they're not too far from meaningful, if you think about it.

# Stable Isomorphisms, Grothendieck Groups: Example

Posted by Jason Polak on 22. March 2018 · Categories: modules

If $a$ and $b$ are two real numbers and $ax = bx$, then we can't conclude that $a = b$, because $x$ may be zero. The same is true for tensor products of modules: if $A$ and $B$ are two left $R$-modules and $X$ is a right $R$-module, then an isomorphism $X\otimes_R A\cong X\otimes_R B$ does not necessarily mean that $A\cong B$. And $X$ need not even be zero for this to happen.

Addition for real numbers is a little different. If $a$ and $b$ are two real numbers, then $x + a = x + b$ is equivalent to $a = b$. What about for direct sums? If $A$ and $B$ are two $R$-modules and $X$ is a third $R$-module, what if $X\oplus A\cong X\oplus B$? Is it true that $A\cong B$?

The answer is no. Perhaps this is surprising, given the way direct sums work. After all, in a direct sum $X\oplus A$, it "feels like" what happens in $X$ is independent from what happens in $A$. And for vector spaces, this is true: for $k$-modules where $k$ is a field, if $X$ is finite dimensional and $X\oplus A\cong X\oplus B$, then certainly $A\cong B$, because $A$ and $B$ have the same dimension.

# The Prisoner's Dilemma

Posted by Jason Polak on 08. March 2018 · 2 comments · Categories: math

Suppose the police suspect you and your friend of robbing one billion dollars in bitcoin. The police can only charge you and your friend with the small crime of possession of a sawed-off shotgun, however, and want both of you to confess to the robbery. So, you and your friend are put in separate rooms and given two decisions:

1. Stay silent, in which case you'll only be charged with weapons possession
2. Confess

If both of you stay silent, then both of you will get a minor prison term. If you confess and your partner stays silent, then the police will let you go in exchange for prosecuting your partner, who will get the maximum sentence; and vice versa. If both of you confess, both of you get serious prison time, but a little less than the maximum because you cooperated.

Since each of you is in separate rooms, you have to make the decision without knowing what your partner decides. What is the best strategy?

Game theorists like to put all the possibilities in a 'payoff matrix', which describes the possible outcome for each combination of decisions. Here is the payoff matrix in this case:

|                 | Confess  | Stay Silent |
|-----------------|----------|-------------|
| **Confess**     | (-3, -3) | (0, -4)     |
| **Stay Silent** | (-4, 0)  | (-1, -1)    |

Rows are your decision, columns are your partner's, and each entry is (your payoff, your partner's payoff).
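One standard observation from this matrix is that confessing strictly dominates: whatever your partner does, your own payoff is higher if you confess. Encoding the payoffs makes this a one-line check (I'm writing the silent-vs-confess entry as $(-4, 0)$, symmetric to the $(0, -4)$ entry):

```python
# Payoffs as (your sentence, partner's sentence); 'C' = confess, 'S' = stay silent.
payoff = {
    ('C', 'C'): (-3, -3),
    ('C', 'S'): (0, -4),
    ('S', 'C'): (-4, 0),   # assumed symmetric to the ('C', 'S') entry
    ('S', 'S'): (-1, -1),
}

# Whatever the partner plays, confessing gives you a strictly higher payoff.
for partner in ('C', 'S'):
    assert payoff[('C', partner)][0] > payoff[('S', partner)][0]
```

Of course, both players reasoning this way land on (Confess, Confess), which is worse for each of them than (Stay Silent, Stay Silent); that tension is the dilemma.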

# Cereal box prizes and transition matrices

Posted by Jason Polak on 04. March 2018 · Categories: probability

If you don't know what a transition matrix is, you might want to read the transition matrix post before reading this one.

Transition matrices can be used to solve some classic probability problems. For example, consider the following problem:

Suppose in each cereal box you buy there is one number in the set $\{1,2,3,4,5\}$. You get a prize if you collect all five numbers. What is the expected number of boxes you have to buy (or steal) before you get all five numbers?

I found this problem in Frederick Mosteller's book 'Fifty Challenging Problems in Probability with Solutions'. I had to think about this problem for a few minutes, and you should too before going on.
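If you want to check your eventual answer empirically, a quick Monte Carlo simulation is easy to write. This sketch is my own, not Mosteller's solution:

```python
import random

def boxes_until_complete(n_prizes, rng):
    """Buy boxes until all n_prizes distinct numbers have been seen."""
    seen, count = set(), 0
    while len(seen) < n_prizes:
        seen.add(rng.randrange(n_prizes))  # each box holds a uniform random prize
        count += 1
    return count

rng = random.Random(0)
trials = 20000
estimate = sum(boxes_until_complete(5, rng) for _ in range(trials)) / trials
```

Averaging over many trials gives a good estimate of the expected number of boxes.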

# Transition matrices

Posted by Jason Polak on 01. March 2018 · Categories: probability

Imagine $n$ states of a system in a discrete-time stochastic system. For each pair of states $i$ and $j$, there is a probability $p_{ij}$ of moving to state $j$ in the next time step, given that the system is in state $i$. Each of these probabilities can be put in a matrix, known as the transition matrix.

Let's take an example. Such a discrete time system can be easily represented by a graph (in the sense of graph theory). Here is one:

(Incidentally, this is not how I would draw this graph by hand, but not having to draw it by hand, thanks to Graphviz, is worth the less-than-ideal layout.) In this system, there are two states: in each state, there is a 1/2 probability of leaving the state and a 1/2 probability of remaining in it. It is a system with a lot of symmetry, and its transition matrix is
$$T_1 = \begin{pmatrix}1/2 & 1/2\\1/2 & 1/2\end{pmatrix}$$
The cool thing about a transition matrix is that you can raise it to a power. What does this mean? If $T$ is a transition matrix, then by definition its $i,j$ entry is the probability of moving to state $j$ in one step, given that the system is in state $i$. It follows pretty much from the definition of matrix multiplication that the $i,j$ entry of $T^2$ is the probability of moving to state $j$ in two steps from state $i$. By induction, the $i,j$ entry of $T^n$ is the probability of moving to state $j$ in $n$ steps, starting from state $i$.
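Here is a quick check of this in plain Python, using the matrix $T_1$ from above (the second matrix is an extra example of my own):

```python
def mat_mult(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(T, n):
    """T raised to the n-th power, for n >= 1."""
    result = T
    for _ in range(n - 1):
        result = mat_mult(result, T)
    return result

T1 = [[0.5, 0.5], [0.5, 0.5]]
assert mat_power(T1, 5) == T1   # every power of T_1 is T_1 again

T2 = [[0.9, 0.1], [0.4, 0.6]]   # a less symmetric transition matrix
# the rows of any power of a transition matrix still sum to 1
assert all(abs(sum(row) - 1) < 1e-12 for row in mat_power(T2, 10))
```

That every power of $T_1$ equals $T_1$ reflects the symmetry noted above: no matter how many steps you take, the two states remain equally likely.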