The determinant of a matrix is calculated recursively as follows. The determinant of a $1\times 1$ matrix is just the number itself. For example, $\det([3]) = 3$ and $\det([-2.5]) = -2.5$. Here is what you do if you have a larger matrix: write $+$ or $-$ beside each entry, starting with $+$ for the $1,1$-entry of the matrix and alternating from there. For example, a $2\times 2$ matrix will get the following arrangement of signs:

$$\begin{pmatrix} + & -\\ -& +\end{pmatrix}.$$ Now, select any row or column. It doesn’t matter which row or column you select as you will get the same answer in the end. For example, let’s take:

$$A = \begin{pmatrix} 1 & 2\\ 3& 4\end{pmatrix}.$$ Now, I select the first column. We form a sum with one term for each entry of that column. Start with the $1$, and cross out the row and column that contain it. Doing so gives you a smaller matrix. Multiply the entry $1$ by the determinant of that smaller matrix (this is the recursive part), taking into account the sign assigned to the entry in the first step. Repeat for each entry of the column; the resulting sum is the determinant. So,

$$\det(A) = 1\det(4)-3\det(2) = 1\cdot 4-3\cdot 2 = -2.$$ Now, this is quite easy for a $2\times 2$ matrix and in fact gives you the general formula

$$\det\left(\begin{pmatrix}a & b\\c & d\end{pmatrix}\right) = ad-bc.$$ Since this formula is so simple it really pays to memorize it. But basically your brain can’t help but memorize it after you do a few calculations anyway.
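The recursive procedure above, expanding along the first column, can be sketched in Python. This is a minimal illustration, not an efficient algorithm:

```python
def det(matrix):
    """Determinant by cofactor expansion along the first column."""
    n = len(matrix)
    if n == 1:                      # base case: 1x1 matrix
        return matrix[0][0]
    total = 0
    for i in range(n):
        # minor: the smaller matrix with row i and column 0 crossed out
        minor = [row[1:] for j, row in enumerate(matrix) if j != i]
        sign = (-1) ** i            # alternating signs, starting with +
        total += sign * matrix[i][0] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```

Running it on the matrix $A$ above reproduces $\det(A) = -2$.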

…read the rest of this post!

## Shortcuts to diagonalize a 2×2 matrix

Matrix diagonalization is one of my favourite topics in linear algebra. I still remember learning it for the first time and being quite awed by it. What is diagonalization? If $A$ is a square matrix, diagonalizing $A$ means finding an invertible matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix. It’s important to note that diagonalization depends on the field or ring we are working over. In this post we will work over fields; diagonalization of course makes sense over rings, but the procedure is more complicated there.

Not all matrices can be diagonalized. For example, the upper triangular matrix

$$\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$$ cannot be diagonalized over any field. Though I guess it is already a diagonal matrix over the zero ring, which is quite perverse if you ask me. Over a field, diagonalizing a square $n\times n$ matrix $A$ involves the following procedure:
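Before the procedure itself, here is a quick numerical illustration of the definition $P^{-1}AP = D$, sketched with NumPy (assuming it is installed). `np.linalg.eig` returns the eigenvalues and a matrix whose columns are eigenvectors:

```python
import numpy as np

# A diagonalizable 2x2 matrix (illustration only)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P        # P^{-1} A P should be diagonal

print(np.allclose(D, np.diag(eigenvalues)))  # True
```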

…read the rest of this post!

## My 20 favorite math jokes

I hope my readers don’t mind, but I’ve been writing serious math posts here for ten years, so I think it’s high time I injected some humour into this blog. Rest assured, serious math posts won’t go away.

Without further ado, here are my 20 favourite math jokes. Most of them I heard elsewhere, and a few I invented myself.

Why did the chicken cross the Möbius strip?

To get to the other…uh…

Two constants were found to be having an argument:

**i**: Be rational.

**$\pi$**: Get real!

Old Macdonald had a form, $e_i\wedge e_i = 0$.

You might be a mathematician if you think fog is a composition.

Professor: Give an example of a vector space.

Student: V.

From the obituary of Samuel Eilenberg:

When someone once asked Professor Eilenberg if he could eat Chinese food with three chopsticks, he answered, “Of course,” according to Professor Morgan. The questioner asked, “How are you going to do it?” and Professor Eilenberg replied, “I’ll take the three chopsticks, I’ll put one of them aside on the table, and I’ll use the other two.”

## A quick guide to Python’s random number generator

If you are doing applied math and want to run a computer simulation, then chances are you will need random numbers. I write most of my simulations in Python, and luckily Python has a great random number library. Let’s see how to use it. The first step is to import the library:

```python
import random
random.seed()
```

Notice that I called the `random.seed()` function here. Python generates random numbers by starting with a seed value $x$; the random numbers from the generator are $f(x), f(f(x)), \ldots$, where $f$ is given by the Mersenne Twister algorithm. You can also specify the seed yourself using `random.seed(a)`, where `a` is an integer. This is useful for testing, so you can get the same random numbers each time the program is run.
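For example, here is a minimal sketch showing how a fixed seed makes the output reproducible:

```python
import random

random.seed(42)          # fix the seed
first_run = [random.randint(1, 6) for _ in range(5)]

random.seed(42)          # reseeding with the same value restarts the sequence
second_run = [random.randint(1, 6) for _ in range(5)]

print(first_run == second_run)  # True: the two runs are identical
```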

…read the rest of this post!

## 5 popular math books that inspired me to study math

When I was a teenager, I used to read a lot of popular science and math books. Of course, by “popular”, I just mean books meant for a nonspecialist audience (like a kid without a degree). A lot of those books inspired me to actually go on to study mathematics at university. Here are five of my favourites. Some of them are old now but because of the timeless nature of mathematics, they are not out of date.

## 1. Ian Stewart’s “Does God Play Dice”

Chaos, or sensitive dependence on initial conditions, is the profound idea that one cannot necessarily predict the outcome of a complex system; it goes against classic Newtonian reductionism. Ian Stewart explains chaos in his book “Does God Play Dice”. The ideas in this book are a constant influence in my life to this day.

Buy Does God Play Dice on Amazon

## 2. William Dunham, “The Mathematical Universe”

I read this book in my final year of high school (and I still remember the day a math teacher commented that he liked the book too). This book takes an A-to-Z tour through various mathematical topics. It is more classical than some of the other books here, covering topics such as angle trisection and the foundations of probability. I like that each chapter is short and readable, isn’t shy about including mathematical details and equations, and yet contains a good amount of historical storytelling as well.

Buy The Mathematical Universe on Amazon

…read the rest of this post!

## Math book sale, North American shipping

Over time, I have accumulated many mathematics books. Inevitably, there were a few I acquired but were not really my style of book. So, I collected these up and am selling them here in the hope that other people can use them. To set up a purchase, email:

Payment can be made via Interac e-Transfer in Canada or Paypal in North America (if using Paypal add 3%). North American flat shipping 15 dollars, 10% discount if you buy more than one. Free shipping over 80 dollars. All in excellent condition, some near mint, with the exception of a few which are noted below.

All prices are in Canadian dollars. *10% of profits will be donated to Birdlife, an organization that works towards the ecological preservation of birds*. If the item is listed here, it is still available.

- Berlinghoff, William P.; Gouvea, Fernando Q., “Math Throughout the Ages”. MAA 2015 Hardcover. $45
- Jacob Lurie, “Higher Topos Theory”. Softcover. $70
- Steven H. Weintraub, “Linear Algebra for the Young Mathematician”. Hardcover. $50
- Edwin H. Spanier, “Algebraic Topology”. 1966 Hardcover. $40
- Joseph H. Silverman, “The Arithmetic of Elliptic Curves” 2nd Edition, Springer Hardcover. $60.
- Jonathan D. Rogawski, “Automorphic Representations of Unitary Groups in Three Variables”. Softcover. Slight crease in cover. $15.
- Engelbert J. Dockner, Steffen Jorgensen, Ngo Van Long, “Differential Games in Economics and Management Science”. Paperback. Cover is a little scratched and edges of pages have some markings. Inside clean. $40
- Bailey and Knapp, “Representation theory and automorphic forms”. Hardcover. $90.
- Goro Shimura, “Automorphic functions and number theory”. Ex-library hardcover, some library markings. $15.
- Armand Borel, “Linear Algebraic Groups” WA Benjamin 1969 Hardcover. $15
- Magnus, Karrass, and Solitar, “Combinatorial Group Theory”. Revised Second Dover Edition Softcover. $40.
- Kenneth S. Brown, “Cohomology of Groups”. GTM Hardcover. $60.
- Fulton and MacPherson, “Categorical Framework for the study of singular spaces.”. $60.
- Albert, “Structure of Algebras”. Hardcover 1939. $15
- Dirk van Dalen, “Logic and Structure”. 2004 Softcover printing. $60
- Nathan Jacobson, “Lie Algebras”. Wiley 1962 ex-library copy. Some wear on cover and library markings. $10

## Urbanization and the Pileated Woodpecker

As cities grow and take over forests, forest cover diminishes, reducing the capacity to sustain many organisms. Some creatures, like the House Sparrow (*Passer domesticus*) and the European Starling (*Sturnus vulgaris*), actually benefit from human habitation. To make matters worse, these two birds are invasive species in many places like North America, so our proliferation is their proliferation, fuelling competition between them and native species. (Some native species, such as the Black-capped Chickadee, do benefit from human habitation as well.)

Many species suffer due to our expansion. Species such as the Ivory-billed Woodpecker have likely gone extinct due to our destruction of forests.

Although we can quickly gather rough population estimates in some cases and see the decline of populations, the exact nature of the response to urbanization is unknown in many cases. Knowing the details of how species respond to growing cities may help us conserve them by keeping the right kinds of green spaces. Does a certain distribution of trees promote their use as homes for creatures like bats or insects?

I came across an interesting paper by Jorge A. Tomasevic and John M. Marzluff [1] on the Pileated Woodpecker (*Dryocopus pileatus*). This woodpecker is one of the world’s largest woodpeckers, and unlike smaller woodpeckers such as the Downy Woodpecker or the Hairy Woodpecker, the Pileated Woodpecker is not so easily found in highly urbanized areas with few trees. The authors radio-tracked 13 Pileated Woodpeckers, each for about one year, in the Washington area.

They found that Pileated Woodpeckers prefer habitat with larger tree diameters, but that they can still use lightly urbanized areas. Heavily urbanized areas with less than 20% of tree cover were rarely used. They also found that tree diameter was positively associated with land use, though the confidence interval $[0.00,0.056]$ for their model coefficient $\beta= 0.28$ for tree diameter is rather large. The interested reader should read the paper for more details.

I found it interesting how the authors give evidence for the birding intuition that Pileated Woodpeckers don’t like heavily urbanized areas. Intuition alone is not a good guide, since sometimes animals do use certain types of land but are just difficult to find. Studies like this are important because they contribute to our fine-grained knowledge of how the organisms around us respond to changes in land characteristics. For example, if a land plan for a suburban area kept sufficient tree cover, that would help maintain areas usable by Pileated Woodpeckers.

Earlier, I mentioned the Ivory-billed Woodpecker. That woodpecker is most likely extinct due to intense forest-clearing. Let us hope the same fate will not befall the Pileated Woodpecker.

If you’d like to learn a little more about the Pileated Woodpecker and see what it looks like, I made a short video on it:

References

1. Tomasevic, Jorge A., and John M. Marzluff. “Use of suburban landscapes by the Pileated Woodpecker (Dryocopus pileatus).” The Condor: Ornithological Applications 120.4 (2018): 727-738.

## Recorded my productivity for 1000 days, here’s what I found

In 2011, I started my PhD and also started recording every minute I spent working line-by-line through graduate texts and research papers in mathematics. At that time, the majority of my work was spent understanding various branches of mathematics necessary to do research.

I recorded this data in a spreadsheet. It’s important to realize what kind of time I recorded. I recorded **only** time spent going through books and papers line-by-line. I did not record any time searching the literature, skimming through papers I needed to read, or going to class. In other words, I only recorded highly focused time where I was actively engaging with the material.

I wanted to understand the limits of my mind in terms of how quickly I can absorb new mathematics. Other work is important too, but in my experience tasks like searching the literature or typesetting my thesis require very little effort compared to reading new mathematics line-by-line.

In my measurement, I tried to be precise. During the time intervals I measured, I worked constantly. I did not take any breaks to check email or do anything else. In fact, I did not even have the internet at home during my entire PhD. I always worked at home with no distractions. If any break was required, I stopped the timer, but usually I worked in short intervals of 45-90 minutes where no breaks were required.

Here are the results, recorded for about a thousand days: a graph of the hours spent over each trailing fourteen-day period:

Here is what I learned in this exercise of fairly precise measurement:

- Periods of high mental productivity come in cycles. There would always come a period of **2-3 weeks** where I just wasn’t very productive. I tried to recognize when that happened and just do other things instead of being upset that I could not work.
- There is a hard limit to how much work your mind can do. In the long run (that is, averaged over a period of years, including days where I did nothing), I averaged **one hour a day**.
- On any given day, I could do a very hard maximum of 3 hours of intense reading, usually less.

The bottom line is that work comes in phases, and the mind has an upper limit to what it can do. It is like lifting weights: after a certain number of reps at the bench press, you just won’t be able to do any more. Moreover, after an intense weight workout, you need a break to let your muscles recover. It does not make sense to expect uniform output day after day. When your mind is at its peak power, it will do a lot more than its daily average. *Intense mental work like understanding math is much more like anaerobic exercise than aerobic exercise like running.*

Everyone has a different level of intensity they can handle. For me, it seemed to come out to a long-term average of an hour a day but other people are different and could average two hours a day. (Remember, this average is over all days of the year.) Also, one hour for some person might mean getting farther in the book with even more understanding than another person can do in two hours.

The mind is a complex machine, and if you are going to use it for very intense intellectual work, you need to understand the subtlety of its power and how to use it well. Given strong effort without any harshness, it can do amazing things.

*Want to support this blog? Consider watching my Youtube video on the Snow Bunting and subscribing to my channel:*

## Paper on polypermutations published!

I am pleased to announce that my latest paper, *The Polypermutation Group of an Associative Ring* (PDF, Journal Page), has been published in the peer-reviewed journal Contemporary Mathematics.

Briefly, this paper is a study of the group of permutations of a finite commutative ring $R$ generated by the permutations induced by polynomials in $R[x]$. I call it the *polypermutation group of $R$*. For a finite field $F$, it is easy to prove that *every function* $F\to F$ is of the form $a\mapsto f(a)$ for some $f\in F[x]$. That is not so for all finite rings, such as $\mathbb{Z}/4$.
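This contrast between fields and rings like $\mathbb{Z}/4$ can be checked by brute force. Here is a small sketch (the helper function is mine, just for illustration) that counts how many functions $\mathbb{Z}/n \to \mathbb{Z}/n$ are induced by polynomials:

```python
from itertools import product

def induced_functions(n, max_deg):
    """Set of functions Z/n -> Z/n induced by polynomials of degree <= max_deg."""
    funcs = set()
    for coeffs in product(range(n), repeat=max_deg + 1):
        # evaluate the polynomial with these coefficients at every point of Z/n
        f = tuple(sum(c * pow(a, i, n) for i, c in enumerate(coeffs)) % n
                  for a in range(n))
        funcs.add(f)
    return funcs

# Over the field Z/5, every one of the 5**5 = 3125 functions is polynomial.
print(len(induced_functions(5, 4)))   # 3125

# Over Z/4, polynomials induce strictly fewer than the 4**4 = 256 functions.
print(len(induced_functions(4, 3)))
```

Degree bounds of $4$ and $3$ suffice here, since higher-degree terms induce no new functions on these rings.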

In my paper, I give a presentation of this group for $\mathbb{Z}/p^2$ with $p$ a prime, as a semidirect product. I also derive a formula for the number of elements in this group for $\mathbb{Z}/p^k$ with $p\geq k\geq 2$. Although this formula is already known, I gave a new proof.

What about the journal *Contemporary Mathematics*? Although this journal only started in 2019 and I didn’t know what to expect, the peer review process went well and the communication with the journal was excellent. As a bonus, the publisher is currently waiving the open access publishing fee for this journal, so my paper is open access!

## Where does the factorial symbol come from?

The factorial of a natural number $n$ is the number of permutations of $n$ elements, or equivalently the number of bijections from an $n$-element set to itself. It is denoted $n!$. For example, $7! = 5040$. But where does the notation come from?
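The count can be checked with Python’s standard library, computing $7!$ directly and by enumerating permutations:

```python
import math
from itertools import permutations

print(math.factorial(7))                    # 5040
print(len(list(permutations(range(7)))))    # also 5040: one per permutation
```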

In 1827, Thomas Jarrett tried to introduce the notation

for the factorial of $n$. This notation has some problems. First, it is inconvenient to draw a line under the character. Also, if you are writing it by hand, then you have to move your pencil backwards, assuming you start at the top of the vertical line, which is how most people would probably draw it.

Luckily, it was Christian Kramp who introduced a better notation: the exclamation point $n!$ that we use today. He introduced it in his book Élémens d’arithmétique universelle in 1808, a little earlier than Jarrett. He wrote:

Je me sers de la notation trés simple $n!$ pour désigner le produit de nombres décroissans depuis n jusqu’à l’unité, savoir $n(n-1)(n-2) … 3.2.1$. L’emploi continuel de l’analyse combinatoire que je fais dans la plupart de mes démonstrations, a rendu cette notation indispensable.

Here is my translation:

I am using the very simple notation $n!$ to designate the product of numbers decreasing from $n$ to $1$, namely $n(n-1)(n-2) … 3.2.1$. The constant use of combinatorial analysis in most of my demonstrations makes this notation indispensable.

This notation still has some oddities. If a factorial appears at the end of a sentence, it can look unusual, like 5!. And if you want to end an exclamation with a factorial, as in “I love 5!!”, there could be some ambiguity between the regular factorial and the *double factorial*, which is another function altogether. Good thing mathematicians don’t make much use of exclamation marks in writing.

In any case, the factorial notation is here to stay. Personally, I think it is pretty good.