Posted by Jason Polak on 28. June 2017 · Categories: elementary

The $n$th harmonic number $h(n)$ is defined as
$$h(n) = \sum_{i=1}^n 1/i$$
The harmonic series is the associated series $\sum_{i=1}^\infty 1/i$ and it diverges. There are probably quite a few interesting ways to see this. My favourite is a simple comparison test:
$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots \geq 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \cdots = 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots$$
Here each term $1/n$ with $2^{k-1} < n \leq 2^k$ has been replaced by the smaller term $1/2^k$, and the series $1 + \frac{1}{2} + \frac{1}{2} + \cdots$ is divergent. But while the harmonic series diverges, it does so slowly enough that if you were to numerically compute the harmonic numbers (the partial sums of the harmonic series), you might be unconvinced that it actually diverges:

  • $h(10) = 2.92896825396825\dots$
  • $h(100) = 5.18737751763962\dots$
  • $h(1000) = 7.48547086055035\dots$
  • $h(10000) = 9.78760603604438\dots$
  • $h(100000) = 12.0901461298634\dots$
  • $h(1000000) = 14.3927267228657\dots$

These numbers were computed by actually summing $1 + 1/2 + \cdots + 1/n$ as an exact fraction and then writing out a decimal approximation, but that takes a while. How can we approximate the harmonic numbers more quickly? The first thought surely must be to compare the sum to the integral
$$\int_1^n \frac{1}{x}\,dx = \log(n),$$
where $\log$ denotes the natural logarithm. A moment’s consideration with Riemann sums shows that we have the inequality
$$\int_1^n \frac{1}{x}\,dx \le h(n) \le \int_1^n \frac{1}{x}\,dx + 1$$
So we’ve come up with a pretty good approximation to the harmonic numbers, which only gets better as $n$ gets bigger:
$$\log(n)\le h(n) \le \log(n) + 1$$
This, incidentally, is another explanation of why the harmonic series diverges, since $\log(n)\to\infty$ as $n\to\infty$. And the logarithm is much faster to compute on a simple scientific calculator. Here is an example computation: we have already said that $h(1000000) = 14.3927267228657\dots$, while $\log(1000000) = 13.8155105579643\dots$ Pretty good, right?
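To see the bounds in action beyond a scientific calculator, here is a short Python sketch (floating-point summation rather than the exact fractions mentioned above):

```python
import math

def harmonic(n):
    """h(n) = 1 + 1/2 + ... + 1/n, summed in floating point."""
    return sum(1.0 / i for i in range(1, n + 1))

# The integral comparison gives log(n) <= h(n) <= log(n) + 1.
for n in (10, 100, 1000, 10**6):
    h, approx = harmonic(n), math.log(n)
    assert approx <= h <= approx + 1
    print(f"h({n}) = {h:.10f},  log({n}) = {approx:.10f}")
```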

Posted by Jason Polak on 21. March 2017 · Categories: elementary, exposition

Characteristic functions have magical properties. For example, consider a double summation:
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k}.$$
How do you switch the order of summation here? A geometric way is to lay the terms out in a triangular array and “see” that this sum must be equal to
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
I find this unsatisfactory because the whole point of good notation is that you shouldn’t have to think about what it actually means. I do think it’s very important to understand what the notation means, but in doing formal manipulations it’s nice not to have to do that all the time.

A superior proof that these two sums are equal would be to write
$$\sum_{k=1}^M\sum_{r=1}^k a_{r,k} = \sum_{k=1}^M\sum_{r=1}^M a_{r,k}\delta(r\leq k)$$
where $\delta(r\leq k)$ is the function of the variables $r$ and $k$ that equals one exactly when $r\leq k$ and zero otherwise. Then we can happily switch the order of summation to get
$$\sum_{r=1}^M\sum_{k=1}^M a_{r,k}\delta(r\leq k).$$
Now, it’s trivial to get rid of the $\delta(r\leq k)$ function by writing
$$\sum_{r=1}^M\sum_{k=r}^M a_{r,k}.$$
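The interchange is easy to check numerically; here is a small Python sketch using arbitrary made-up values for $a_{r,k}$:

```python
# Arbitrary test values a_{r,k} on a square grid of side M.
M = 6
a = {(r, k): 10 * r + k for r in range(1, M + 1) for k in range(1, M + 1)}

# Original double sum: k outer from 1 to M, r inner from 1 to k.
lhs = sum(a[r, k] for k in range(1, M + 1) for r in range(1, k + 1))

# Same sum over the full square, weighted by the characteristic
# function delta(r <= k), which lets us swap the order freely.
mid = sum(a[r, k] * (1 if r <= k else 0)
          for r in range(1, M + 1) for k in range(1, M + 1))

# Swapped sum: r outer from 1 to M, k inner from r to M.
rhs = sum(a[r, k] for r in range(1, M + 1) for k in range(r, M + 1))

assert lhs == mid == rhs
```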

Posted by Jason Polak on 19. December 2015 · Categories: elementary

Let $G$ be a group. Define $\varphi:G\to G$ by $\varphi(x) = x^2$. When is $\varphi$ a homomorphism? Clearly, $\varphi$ is a homomorphism whenever $G$ is abelian. Conversely, if $\varphi$ is a homomorphism then for any $x,y\in G$, we get $xyxy = \varphi(xy) = \varphi(x)\varphi(y) = x^2y^2$.

So, $xyxy = xxyy$. Canceling the $x$ from the left and the $y$ from the right gives $yx = xy$. Hence, $G$ is abelian!

One might wonder about higher power maps. What about $\varphi(x) = x^3$? Again, $\varphi$ is a homomorphism if $G$ is abelian. What about the converse? We claim that the converse holds if $\varphi$ is injective.
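Here is a quick computational check of the squaring result on a nonabelian group ($S_3$) and an abelian one ($\mathbb{Z}/4$); a Python sketch with helper names of my own choosing:

```python
from itertools import permutations

def is_abelian(elems, mul):
    """Check xy = yx for all pairs of elements."""
    return all(mul(x, y) == mul(y, x) for x in elems for y in elems)

def squaring_is_hom(elems, mul):
    """x -> x^2 is a homomorphism iff (xy)^2 = x^2 y^2 for all x, y."""
    sq = lambda x: mul(x, x)
    return all(sq(mul(x, y)) == mul(sq(x), sq(y))
               for x in elems for y in elems)

# S3 as permutations of {0, 1, 2} under composition (apply q, then p).
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
assert not is_abelian(S3, compose)
assert not squaring_is_hom(S3, compose)  # nonabelian, so not a hom

# Z/4 under addition: abelian, so squaring (i.e. doubling) is a hom.
Z4 = list(range(4))
add = lambda x, y: (x + y) % 4
assert is_abelian(Z4, add) and squaring_is_hom(Z4, add)
```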
More »

Consider the good old Pascal’s triangle, whose rows list the binomial coefficients: the row for $n$ consists of $\binom{n}{0}, \binom{n}{1}, \dots, \binom{n}{n}$.
Except for the first row, take the alternating sum of the entries. So for the second row we have $1 - 1 = 0$. For the third row we have $1 - 2 + 1 = 0$. For the fourth row we have $1 - 3 + 3 - 1 = 0$, etc. So it seems that we have the following for $n > 0$:

$$\sum_{i=0}^n (-1)^i\binom{n}{i} = 0$$

One can use the binomial theorem to prove this. In the process of reviewing some material on regular sequences, I came up with a slightly different proof that could be the most pretentious and ridiculous proof of this fact. (However, even more ridiculous proofs using heavy machinery would be welcome in the comments.) The reader may wish to consult the previous post describing the Koszul resolution before reading onwards.
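(The binomial-theorem proof is one line: expand $0 = (1-1)^n$.) For the skeptical, here is a quick numerical check in Python:

```python
from math import comb

# Alternating row sums of Pascal's triangle vanish for n > 0.
for n in range(1, 12):
    assert sum((-1)**i * comb(n, i) for i in range(n + 1)) == 0

# The first row (n = 0) is the exception: its "alternating sum" is 1.
assert sum((-1)**i * comb(0, i) for i in range(1)) == 1
```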
More »

Posted by Jason Polak on 13. June 2013 · Categories: analysis, elementary

A class of fractals known as Mandelbrot sets, named after Benoit Mandelbrot, has pervaded popular culture and is now controlling us. Well, perhaps not quite, but have you ever wondered how they are drawn? Here is an approximation of one:


From now on, Mandelbrot set will refer to the following set: for any complex number $ c$, consider the function $ f_c:\mathbb{C}\to\mathbb{C}$ defined by $ f_c(z) = z^2 + c$. We define the Mandelbrot set to be the set of complex numbers $ c\in\mathbb{C}$ such that the sequence of numbers $ f_c(0), f_c(f_c(0)),f_c(f_c(f_c(0))),\dots$ is bounded.
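Membership can only be tested approximately on a computer, but a standard escape-time sketch looks like the following Python snippet. It uses the fact that once $|z| > 2$ the orbit is guaranteed to escape to infinity; the iteration cutoff `max_iter` is an arbitrary choice of mine:

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: iterate z -> z^2 + c starting from 0.
    If |z| never exceeds 2 within max_iter steps, treat c as
    (approximately) belonging to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

assert in_mandelbrot(0)       # orbit is constantly 0
assert in_mandelbrot(-1)      # orbit cycles 0, -1, 0, -1, ...
assert not in_mandelbrot(1)   # orbit 0, 1, 2, 5, 26, ... escapes
```

Pictures like the one above are drawn by running this test (often with an escape-time colouring) over a grid of values of $c$.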
More »

Posted by Jason Polak on 02. April 2013 · Categories: elementary

I am sometimes a teaching assistant for MATH 133 at McGill, an introductory linear algebra course that covers linear systems, diagonalisation, geometry in two- and three-dimensional Euclidean space, and the vector space $ \mathbb{R}^n$. I’ve collected here a few theoretical questions that I like to use, in the hope that they may be useful to people studying this kind of material. I made up all of these questions, although many of them are naturally similar in form to questions found elsewhere. Some of the questions are a bit unusual and curious, and none of them need special tricks to solve, just an understanding of the concepts in the course. They focus mostly on understanding the theory; there are very few straight computational questions here.

Note. None of these questions are official in the sense that I do not write the final exams. The exact syllabus of the course should always be taken to be the class material and official course material on the course website. These are more for extra practice, but do cover the material of the course fairly closely.
More »

Posted by Jason Polak on 29. January 2013 · Categories: elementary, math

A few nights ago, as I was drifting off to sleep, I thought of the following puzzle: suppose you go out for ice cream and there are three flavours to choose from: passionfruit, coconut, and squid ink. You like all three equally, but can only choose one, so you decide to make the choice randomly, with equal probability for each flavour.

However, the only device you have to generate random numbers is a fair coin. So, how do you use your fair coin to choose between the three ice cream options?

Of course, you can only use coin flips to make your choice. For instance, cutting the coin into three equal pieces and putting them in a bag to create a new stochastic process does not count.
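One well-known approach (not necessarily the one discussed in the rest of the post) is rejection sampling: flip the coin twice to get one of four equally likely outcomes, assign three of them to the flavours, and reflip on the fourth. A Python sketch, with helper names of my own invention:

```python
import random

def pick_flavour(flip=lambda: random.randint(0, 1)):
    """Choose among three flavours uniformly using only fair coin
    flips: two flips give one of four equally likely outcomes;
    three are assigned to flavours and the fourth is rejected."""
    flavours = ["passionfruit", "coconut", "squid ink"]
    while True:
        outcome = 2 * flip() + flip()  # uniform on {0, 1, 2, 3}
        if outcome < 3:
            return flavours[outcome]
```

Each accepted outcome is equally likely, so conditioned on acceptance the choice is uniform; the expected number of flips is $2 \cdot \frac{4}{3} = \frac{8}{3}$.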

More »

Posted by Jason Polak on 11. December 2011 · Categories: elementary

For any $ n\times n$ matrix $ A$ with real entries, is it possible to make the sum of each row and each column nonnegative just by multiplying rows and columns by $ -1$? In other words, you are allowed to multiply any row or column by $ -1$, and to repeat this a finite number of times.
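To make the allowed moves concrete without giving anything away, here is a small Python sketch (the helper names are mine) of the two operations and the condition we are after, played out on one tiny example:

```python
def flip_row(A, i):
    """Multiply row i of the matrix A (a list of lists) by -1."""
    A[i] = [-x for x in A[i]]

def flip_col(A, j):
    """Multiply column j of A by -1."""
    for row in A:
        row[j] = -row[j]

def all_sums_nonnegative(A):
    """True when every row sum and every column sum is >= 0."""
    n = len(A)
    row_sums = [sum(A[i]) for i in range(n)]
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return all(s >= 0 for s in row_sums + col_sums)

A = [[1, -2], [-3, 4]]
assert not all_sums_nonnegative(A)  # row sums -1, 1; column sums -2, 2
flip_row(A, 0)  # row sums become 1, 1 but column 0 now sums to -4
flip_col(A, 0)  # now every row and column sum is nonnegative
assert all_sums_nonnegative(A)
```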

My officemate Kirill, who also has a math blog, gave me this problem a few weeks ago, and I thought about it for a few minutes here and there. The solution is in the fourth paragraph, so if you’d like to think about it yourself, stop reading before you get close.
More »