# Fibonacci sequence modulo m

The Fibonacci sequence is an infinite sequence of integers $f_0,f_1,f_2,\dots$ defined by the initial values $f_0 = f_1 = 1$ and the rule
$$f_{n+1} = f_n + f_{n-1}$$
In other words, to get the next term you take the sum of the two previous terms. For example, it starts off with:
$$1,1,2,3,5,8,13,21,34,55,\dots$$
You can define all sorts of recursive sequences this way and get a whole slew of interesting patterns. The Fibonacci sequence is one of the most well-known, as it is one of the simplest and has connections to the golden ratio:
$$\lim_{n\to \infty} f_n/f_{n-1} = \frac{1 + \sqrt{5}}{2}.$$
In fact, without too much effort you can derive an exact closed formula for $f_n$ (i.e. a formula that does not involve recursion, or calculating previous terms to get the next one) in which the golden ratio appears.
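One standard such formula is Binet's formula; adjusted for the indexing $f_0 = f_1 = 1$ used here, it reads $f_n = (\varphi^{n+1} - \psi^{n+1})/\sqrt{5}$, where $\varphi = (1+\sqrt5)/2$ and $\psi = (1-\sqrt5)/2$. Here is a quick numerical sanity check of that claim (a sketch of my own, not part of the post):

```python
# Numerical check of Binet's closed formula, adjusted to this post's
# indexing f_0 = f_1 = 1 (so f_n here is the usual F_{n+1}).
from math import sqrt

PHI = (1 + sqrt(5)) / 2   # golden ratio
PSI = (1 - sqrt(5)) / 2   # the other root of t^2 = t + 1

def fib(n):
    """f_0 = f_1 = 1, f_{n+1} = f_n + f_{n-1} (the recursive definition)."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    """Closed formula: f_n = (PHI**(n+1) - PSI**(n+1)) / sqrt(5)."""
    return round((PHI ** (n + 1) - PSI ** (n + 1)) / sqrt(5))

assert all(fib(n) == binet(n) for n in range(40))
print([fib(n) for n in range(10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```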

A theme that’s been running through my research lately is looking at things modulo $m$ for some integer $m$. Reduction from $\Z$ to $\Z/m$ often happens in mathematics because there is some dictionary that tells you something about an object over $\Z$ if you know something mod $m$. So let’s look at the Fibonacci sequence in $\Z/5$. It goes like this:

1, 1, 2, 3, 0, 3, 3, 1, 4, 0, 4, 4, 3, 2, 0, 2, 2, 4, 1, 0, 1, 1,…

Very good looking sequence if you ask me. Notice how it starts to repeat, going right back to the beginning. Of course, it has to repeat eventually, because there are only $m^2$ elements in $\Z/m\times\Z/m$. What’s a little more interesting is that it goes right back to the beginning $1,1,\dots$, rather than repeating with some new pair that appears a little later.

Why is this? It is because of the recursive definition $f_{n+1} = f_n + f_{n-1}$, which we can also write as $f_{n-1} = f_{n+1} - f_n$. Therefore, if two pairs are equal, say $(f_s,f_{s+1}) = (f_t,f_{t+1})$ with $s \gt t$, then $f_{s-1} = f_{t-1}$. Continuing backwards, we see that $f_{s-t} = f_0$ and $f_{s-t+1} = f_1$. This fact was first proved by D.D. Wall of IBM.
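The length of this repeating cycle is called the Pisano period of $m$. By the backwards argument just given, the sequence of pairs $(f_n, f_{n+1})$ mod $m$ returns exactly to the starting pair $(1,1)$, so a few lines of Python (my sketch) compute the period by walking forward until that happens:

```python
def pisano(m):
    """Length of the period of the Fibonacci sequence mod m (for m >= 2).

    By the argument above, the sequence of pairs (f_n, f_{n+1}) mod m
    must return exactly to the starting pair (1, 1), so we just walk
    forward until it does.
    """
    a, b = 1, 1
    n = 0
    while True:
        a, b = b, (a + b) % m
        n += 1
        if (a, b) == (1, 1):
            return n

print(pisano(5))   # 20, matching the sequence listed above
print([pisano(m) for m in range(2, 8)])
```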

# Where does convolution come from?

Posted by Jason Polak on 06. May 2017 · Categories: group-theory

When I first saw convolution I happily used it without thinking about why we should define it in the first place. Here's a post that might tell you why it makes sense from an algebraic viewpoint. Let's recall the convolution product for functions $\R\to\R$. If $f,g:\R\to\R$ are two such functions, then the convolution $f*g$ is defined as
$$(f*g)(y) := \int_\R f(y-x)g(x){\rm d}x$$
provided the integral exists. Why do we do this? For convolution, the explanation starts with polynomials. Multiplying polynomials is simplicity itself. For example, here is the product of two real-valued polynomials:
$$(1 + 2t + 5t^2)(1 + 3t) = 1 + 5t + 11t^2 + 15t^3$$
But a real-valued polynomial could also be thought of as a function $f:\Z\to \R$, where $f(x)$ is zero for all but finitely many $x$ and nonzero only if $x \geq 0$. In this setup, $f(x)$ represents the coefficient of $t^x$. Addition of polynomials becomes pointwise addition of functions. Now here’s the good part: polynomial multiplication becomes convolution: if $f,g:\Z\to\R$ are two polynomials then
$$(fg)(y) = \sum_{x\in \Z} f(y-x)g(x).$$
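To see this in action, here is a small Python sketch (my addition, not from the post) multiplying the example polynomials above as finitely supported functions $\Z\to\R$, using exactly the convolution sum:

```python
# Polynomials as finitely supported functions Z -> R, with product given
# by the convolution sum above.  Coefficients are stored in dicts keyed
# by exponent, so f = {0: 1, 1: 2} means 1 + 2t.
def convolve(f, g):
    """(f*g)(y) = sum over x of f(y - x) g(x), over the finite supports."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return h

f = {0: 1, 1: 2, 2: 5}   # 1 + 2t + 5t^2
g = {0: 1, 1: 3}         # 1 + 3t
print(convolve(f, g))    # {0: 1, 1: 5, 2: 11, 3: 15}, i.e. 1 + 5t + 11t^2 + 15t^3
```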
For a finite group $G$ then, complex representations of $G$ correspond to representations of the algebra of complex-valued functions on $G$ where the multiplication is the convolution product.

# Homomorphisms from G_a to G_m

Posted by Jason Polak on 23. August 2016 · Categories: group-theory

Let $k$ be a commutative ring. Let $\G_a$ be the group functor $\G_a(R) = R$ and $\G_m$ the group functor $\G_m(R) = R^\times$, both over the base ring $k$. What are the homomorphisms $\G_a\to \G_m$? In other words, what are the characters of $\G_a$? This depends on the ring $k$, of course!

The representing Hopf algebra for $\G_a$ is $k[x]$. And, the representing Hopf algebra for $\G_m$ is $k[x,x^{-1}]$. Homomorphisms $\G_a\to \G_m$ correspond to Hopf algebra maps $k[x,x^{-1}]\to k[x]$. Such a map is a $k$-algebra homomorphism that satisfies the additional conditions for being a Hopf algebra homomorphism.
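For a concrete example of a nontrivial character (my illustration, not the post's): if $k$ contains an element $t$ with $t^2 = 0$, then $x\mapsto 1 + tx$ is a homomorphism $\G_a\to\G_m$, since $(1+tx)(1+ty) = 1 + t(x+y) + t^2xy = 1 + t(x+y)$. A quick check with dual-number arithmetic in Python:

```python
# Illustration: over k[t]/(t^2), the map x -> 1 + t*x is a homomorphism
# G_a -> G_m, since (1 + tx)(1 + ty) = 1 + t(x + y) once t^2 = 0.
# We model elements a + b*t of the dual numbers as pairs (a, b).
def mul(u, v):
    """(a + bt)(c + dt) = ac + (ad + bc)t, using t^2 = 0."""
    a, b = u
    c, d = v
    return (a * c, a * d + b * c)

def chi(x):
    """The character x -> 1 + t*x, i.e. the dual number (1, x)."""
    return (1, x)

for x in range(-5, 6):
    for y in range(-5, 6):
        assert mul(chi(x), chi(y)) == chi(x + y)
print("chi(x + y) == chi(x) * chi(y) for all tested x, y")
```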

# The Torsion Subgroup of an Abelian Group

Posted by Jason Polak on 19. December 2015 · Categories: group-theory, homological-algebra

Let $A$ be an abelian group. We call an element $a\in A$ torsion if there exists a natural number $n\geq 1$ such that $na = 0$. The set $T(A)$ of all torsion elements of $A$ forms a subgroup of $A$, and we can think of $T$ as an endofunctor on the category of abelian groups. Here are some examples:

• $T(\Z) = 0$
• $T(\Z\oplus \Z/n) = 0\oplus\Z/n\cong\Z/n$
• $T(\Q) = 0$
• $T(\Q/\Z) = \Q/\Z$

Finding the torsion subgroup of an abelian group isn't always as easy as in these examples, since abelian groups cannot always be written out in such an explicit way. A fascinating and nontrivial result of Barry Mazur is that the torsion subgroup of $E(\Q)$ for an elliptic curve $E$ over $\Q$ is one of fifteen possibilities: $\Z/n$ for $1\leq n\leq 10$ or $n=12$, or $\Z/2\times \Z/2n$ for $1\leq n\leq 4$. Such strange numerology!
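For explicitly presented groups like the first two examples, torsion can at least be checked mechanically. Here is a toy Python sketch (my own illustration) for $\Z\oplus\Z/n$: an element $(a,b)$ is torsion exactly when $a = 0$, since $\Z$ is torsion-free and every element of $\Z/n$ is killed by $n$; in particular it suffices to test multiples up to $k = n$, because $n\cdot(a,b) = (na, 0)$.

```python
# Toy illustration: torsion elements of Z (+) Z/n.  An element (a, b)
# is torsion iff a = 0.  Checking multiples k = 1..n suffices: if some
# k*(a, b) vanishes then a = 0, and conversely n*(0, b) = (0, 0).
def is_torsion(a, b, n):
    """Is (a, b) in Z (+) Z/n a torsion element?"""
    return any((k * a, (k * b) % n) == (0, 0) for k in range(1, n + 1))

n = 6
torsion = [(a, b) for a in range(-3, 4) for b in range(n)
           if is_torsion(a, b, n)]
print(torsion)  # exactly the elements (0, b): a copy of Z/6
```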

# From Rational Canonical Form to The Kostant Section

Suppose we have a $2\times 2$ matrix
$$M = \begin{pmatrix} x_{11} & x_{12}\\ x_{21} & x_{22} \end{pmatrix}$$
with entries in a field $F$. The characteristic polynomial of this matrix is $p(t) := {\rm det}(tI_2 - M) = t^2 - (x_{11} + x_{22})t + x_{11}x_{22} - x_{21}x_{12}$. One might ask: how can we produce a matrix with a given characteristic polynomial? This can be accomplished using the rational canonical form:
$$t^2 + at + b\mapsto \begin{pmatrix} 0 & -b\\ 1 & -a \end{pmatrix}.$$
We can calculate the characteristic polynomial of this matrix to be $t^2 + at + b$. This map gives a bijection between quadratic monic polynomials in $F[t]$ and matrices of the above form. One way to understand this phenomenon is through algebraic groups. To explain, let's stick with $F$ having characteristic zero, though much of what we do can be done in characteristic $p$ for $p$ sufficiently large as well using very different techniques.
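Here is a small Python sketch (my addition) of the displayed map, together with a check that the companion matrix really has characteristic polynomial $t^2 + at + b$, using $p(t) = t^2 - \mathrm{tr}(M)\,t + \det(M)$ for $2\times 2$ matrices:

```python
# The rational canonical form map for quadratics, plus a verification
# that the companion matrix of t^2 + a*t + b has that characteristic
# polynomial, via char poly = t^2 - tr(M) t + det(M) in the 2x2 case.
def companion(a, b):
    """Companion matrix of t^2 + a*t + b, as in the displayed map."""
    return [[0, -b],
            [1, -a]]

def char_poly_2x2(M):
    """Coefficients (1, c1, c0) of t^2 + c1*t + c0 = t^2 - tr(M) t + det(M)."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (1, -tr, det)

for a in range(-3, 4):
    for b in range(-3, 4):
        assert char_poly_2x2(companion(a, b)) == (1, a, b)
print("companion(a, b) has characteristic polynomial t^2 + a t + b")
```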

# Guess The Algebraic Group

Posted by Jason Polak on 23. August 2014 · Categories: algebraic-geometry, group-theory

Suppose one day you run into the following algebraic group, defined on $\mathbb{Z}$-algebras $R$ by
$$G(R) = \left\{ \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{12} & a_{11} & -a_{14} & -a_{13} \\ a_{13} & -a_{14} & a_{11} & -a_{12} \\ a_{14} & -a_{13} & -a_{12} & a_{11} \end{pmatrix} \in\mathrm{GL}_4(R) \right\}$$
Can you figure out what this group is? I actually did run into this group one day, but luckily I discovered its true identity. Today we’ll see one way to do so.

One thing to try to identify unknown algebraic groups in practice is to check out their matrix multiplication. If $A = (a_{ij}), B = (b_{ij})$ have the above form then $C = AB$ has first row
$$c_{11} = a_{11} b_{11} + a_{12} b_{12} + a_{13} b_{13} + a_{14} b_{14} \\c_{12}= a_{12} b_{11} + a_{11} b_{12} - a_{14} b_{13} - a_{13} b_{14} \\c_{13}= a_{13} b_{11} - a_{14} b_{12} + a_{11} b_{13} - a_{12} b_{14} \\c_{14}= a_{14} b_{11} - a_{13} b_{12} - a_{12} b_{13} + a_{11} b_{14}$$
(We only need to specify the first row to get the whole matrix in this group). Can you tell what the group is yet? At first, I couldn’t, so let’s try something else. One can check now that this group is actually commutative, and this would be enough to determine what it is given the context of where it came from, but let’s assume we don’t know that. Instead, let’s take the determinant:
$$\mathrm{det}(A) = (a_{11} + a_{12} + a_{13} - a_{14})(a_{11} + a_{12} - a_{13} + a_{14})\\\times(a_{11} - a_{12} + a_{13} + a_{14})(a_{11} - a_{12} - a_{13} - a_{14})$$
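We can at least verify these claims numerically. Here is a Python sketch (my addition, not from the post) checking commutativity, closure under multiplication (so a product is again determined by its first row), and the determinant factorization:

```python
# Sanity checks on the displayed family of matrices: they commute, their
# product has the same shape, and the determinant factors as the
# displayed product of four linear forms.
def shape(a11, a12, a13, a14):
    """The matrix of the displayed form with first row (a11, a12, a13, a14)."""
    return [[a11,  a12,  a13,  a14],
            [a12,  a11, -a14, -a13],
            [a13, -a14,  a11, -a12],
            [a14, -a13, -a12,  a11]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det(M):
    """Determinant by minor (Laplace) expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = shape(2, 3, 5, 7)
B = shape(1, -4, 2, 3)
C = matmul(A, B)
assert C == matmul(B, A)      # the group is commutative
assert C == shape(*C[0])      # closure: AB is again determined by its first row
a, b, c, d = 2, 3, 5, 7
assert det(A) == ((a + b + c - d) * (a + b - c + d)
                  * (a - b + c + d) * (a - b - c - d))
print("commutativity, closure, and the det factorization all check out")
```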

# Determinants, Permutations and the Lie Algebra of SL(n)

Here is an old classic from linear algebra: the determinant of an $n\times n$ matrix $A = (a_{ij})$ can be calculated using the permutation formula:

$$\det(A) = \sum_{\sigma\in S_n} (-1)^\sigma a_{1\sigma(1)}\cdots a_{n\sigma(n)}.$$

Here $S_n$ denotes the permutation group on $n$ symbols and $(-1)^\sigma$ denotes the sign of the permutation $\sigma$. The computation of the determinant is much more easily done via the ‘minor expansion method’, but this just reduces to the above formula. Now, typically we would never use the above formula to calculate the determinant, but that doesn’t mean it isn’t useful!
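The formula transcribes directly into a few lines of Python (my sketch, not from the post), with $(-1)^\sigma$ computed from the parity of the inversion count:

```python
from itertools import permutations

def sign(p):
    """(-1)^sigma via the parity of the inversion count of the permutation p."""
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """det(A) = sum over sigma in S_n of (-1)^sigma a_{1 sigma(1)} ... a_{n sigma(n)}."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
print(det([[1, 2], [3, 4]]))  # -2
```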

### The Lie Algebra of an Algebraic Group

In fact, computing the Lie algebra of $\mathrm{SL}_n$ over a fixed commutative ring $k$ happens to be a quick and interesting application of the above determinant formula. First, let us recall the definition of the Lie algebra, or at least one of its many equivalent definitions. Consider the algebra of dual numbers, $k[\tau]/(\tau^2)$. Each element in this algebra can be written uniquely as $a + b\tau$; this gives a $k$-algebra map $C: k[\tau]/(\tau^2)\to k$ sending $a + b\tau$ to $a$.
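To see where this is going (a sketch of my own, under the post's setup): the Lie algebra of $\mathrm{SL}_n$ consists of those matrices $X$ with $\det(I + \tau X) = 1$ in $k[\tau]/(\tau^2)$, and the permutation formula shows $\det(I + \tau X) = 1 + \tau\,\mathrm{tr}(X)$, since any non-identity permutation picks up at least two off-diagonal factors of $\tau$. A computational check:

```python
from itertools import permutations

# Elements a + b*tau of k[tau]/(tau^2) modeled as pairs (a, b), so that
# tau^2 = 0 in `mul` below.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    """(a + b tau)(c + d tau) = ac + (ad + bc) tau, using tau^2 = 0."""
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])

def sign(p):
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_dual(M):
    """The permutation formula for det, evaluated over the dual numbers."""
    n = len(M)
    total = (0, 0)
    for p in permutations(range(n)):
        term = (sign(p), 0)
        for i in range(n):
            term = mul(term, M[i][p[i]])
        total = add(total, term)
    return total

def I_plus_tau(X):
    """The matrix I + tau*X, with entries as dual numbers."""
    n = len(X)
    return [[(1 if i == j else 0, X[i][j]) for j in range(n)] for i in range(n)]

X = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, -6]]                           # trace zero
assert det_dual(I_plus_tau(X)) == (1, 0)   # so I + tau*X lies in SL_3
Y = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]      # trace 3
assert det_dual(I_plus_tau(Y)) == (1, 3)   # det(I + tau*Y) = 1 + 3*tau
print("det(I + tau*X) = 1 + tau*tr(X)")
```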

# 2 Dimensional Connected Algebraic Groups are Solvable

Posted by Jason Polak on 03. August 2013 · Categories: algebraic-geometry, group-theory

Conventions: $G$ is an algebraic group over an algebraically closed field $k$ and we identify $G$ with $G(k)$.

Consider the algebraic groups $\mathbb{G}_a$ and $\mathbb{G}_m$. Up to isomorphism, they are the only one-dimensional connected linear algebraic groups, and both are solvable. What about two-dimensional connected groups? It turns out that if $\mathrm{dim}\, G\leq 2$ then $G$ is solvable.

For $\mathrm{dim}\, G = 3$ this is no longer true: for instance, $\mathrm{SL}_2$ is $3$-dimensional but not solvable, since it is perfect, i.e. equal to its own commutator subgroup. So let's prove our theorem:

**Theorem.** Let $G$ be a connected algebraic group over $k$. If $\mathrm{dim}\, G = 2$ then $G$ is solvable.

# Visualising the Real Orthogonal Group

Posted by Jason Polak on 09. June 2013 · Categories: group-theory

In the post Can You See in Four Dimensions?, we saw some ways of visualising functions plotted in four-dimensional ‘space’ in various ways. Of course, we used colour and time for two dimensions because it is a bit difficult to plot in four actual spatial dimensions!

Here is another example: the orthogonal group $\mathrm{O}_2(\mathbb{R})$ over the real numbers $\mathbb{R}$. This is the group of all matrices $A\in \mathrm{GL}_2(\mathbb{R})$ such that $AA^t = I_2$, where $I_2$ is the $2\times 2$ identity. Let

$$A = \begin{pmatrix}a & b\\ c & d\end{pmatrix}$$

be such a matrix. Then $a = \pm\sqrt{1 - b^2}$ and $c = \pm\sqrt{1 - d^2}$, and finally $ac + bd = 0$. As long as these conditions are satisfied, $A$ will indeed be in the orthogonal group.
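As a quick check (my code, not from the post), the familiar rotation and reflection matrices satisfy exactly these conditions:

```python
from math import cos, sin, isclose, pi

def is_orthogonal(A, tol=1e-12):
    """Check A A^t = I_2 for A = [[a, b], [c, d]]: rows of unit length,
    rows orthogonal -- i.e. a^2 + b^2 = 1, c^2 + d^2 = 1, ac + bd = 0."""
    (a, b), (c, d) = A
    return (isclose(a * a + b * b, 1, abs_tol=tol)
            and isclose(c * c + d * d, 1, abs_tol=tol)
            and isclose(a * c + b * d, 0, abs_tol=tol))

for t in [0, 0.3, pi / 4, 1.0, 2.5]:
    R = [[cos(t), -sin(t)], [sin(t), cos(t)]]   # rotation by angle t
    S = [[cos(t), sin(t)], [sin(t), -cos(t)]]   # reflection
    assert is_orthogonal(R) and is_orthogonal(S)
print("rotations and reflections all lie in O_2(R)")
```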

# Highlights in Linear Algebraic Groups 14: Singular Tori

Posted by Jason Polak on 07. June 2013 · Categories: algebraic-geometry, group-theory

From Highlights 12 and Highlights 13, we have gained quite a bit of information on connected reductive groups $G$ of semisimple rank 1. Recall, this means that $G/R(G)$ has rank 1 where $R(G)$ is the radical of $G$, which is in turn the connected component of the unique maximal normal solvable subgroup of $G$.

But wait, why have we been looking at groups of semisimple rank 1 at all? Let's take a quick look at how we can get a good source of these groups inside a general group $G$.

### Tori

Let $G$ be a connected algebraic group over $k=\overline{k}$. We are not assuming that $G$ is reductive or anything else besides this. We start by dividing up the tori of $G$ into two kinds: the regular tori and the singular tori. Both of these species will be important in our study of algebraic groups.