## Is math art? Part 1

Is mathematics an art? In this series of posts I will attempt to explore and answer this question. I encourage readers to also contribute their ideas in the comments.

The term art itself has undergone a metamorphosis over the centuries, so we should start by examining what humanity has meant by art and see if mathematics might fall under any of the definitions used over time. But beyond these definitions, we should also ask whether we feel that mathematics is an art. Whether it is or not, what definition should art have in order to be consistent with the existence of mathematics? Only by examining these two angles will we gain insight into both art and mathematics, and into how we as humans fit into both.

One of the earliest descriptions of art is the Platonic one, which says that art is imitation [1]. This view is echoed by Leon Battista Alberti (1404-1472), who thought that a painting should be as faithful a reproduction as possible of the real scene being depicted. Obviously this is a very limited definition by today's standards, but it is nonetheless worth looking at, as the germ of creating a reproduction of a scene using any medium is still alive today, at least as a motivating factor to create art.
…read the rest of this post!

## Summing the first $n$ powers and generating functions

One of the classic and most used sums in all of mathematics is the sum of the first $n$ natural numbers
$$1 + 2 + \cdots + n = \frac{n(n+1)}{2}.$$ Gauss's classic derivation of this formula involves observing that if we duplicate this sum, write it backwards under the first sum, and add termwise, we get $(n+1) + (n+1) + \cdots + (n+1)$, whence the original sum is half of $n(n+1)$.
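As a quick aside, the pairing argument is easy to carry out literally by computer; here is a small Python sketch (the function name `gauss_sum` is my own):

```python
def gauss_sum(n):
    """Pair the sum with its reversal: each of the n columns of
    (1 + 2 + ... + n) + (n + ... + 2 + 1) adds up to n + 1."""
    forward = list(range(1, n + 1))
    columns = [a + b for a, b in zip(forward, reversed(forward))]
    assert all(c == n + 1 for c in columns)  # every column is n + 1
    return sum(columns) // 2                 # the original sum is half

# Agrees with the closed form n(n+1)/2 for many values of n:
assert all(gauss_sum(n) == n * (n + 1) // 2 for n in range(1, 100))
```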

Similar formulas work for higher powers. For example,
$$1^2 + 2^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}.$$ Notice the pattern: the sum of the first $n$ natural numbers is a quadratic polynomial in $n$, and the sum of the first $n$ squares is a cubic polynomial in $n$. But is it a priori clear that the sum of the $a$th powers of the first $n$ natural numbers is a polynomial in $n$ of degree $a+1$?

It is actually true that for any natural number $a$,
$$\sum_{k=0}^n k^a$$ can be written as a polynomial function of $n$ of degree $a+1$. Of course, one way to see this is to derive in a brute force manner a formula that actually works for all powers $a$. That train of thought was actually carried out by Johann Faulhaber (1580-1635) and completed by Jacobi. The resulting formula is now known as Faulhaber's formula.
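Before any derivation, the polynomiality claim can at least be tested numerically: interpolate a polynomial of degree $a+1$ through $a+2$ values of the sum, and check that it reproduces the sum far beyond the interpolation nodes. A Python sketch (the helper names `power_sum` and `interpolate` are my own; exact rationals avoid any floating-point doubt):

```python
from fractions import Fraction

def power_sum(n, a):
    """Directly compute 0^a + 1^a + ... + n^a."""
    return sum(k**a for k in range(n + 1))

def interpolate(points):
    """Return a function evaluating the unique polynomial through the
    given (x, y) points, via Lagrange interpolation in exact rationals."""
    def p(x):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

a = 3
# A polynomial of degree a + 1 is determined by a + 2 values...
pts = [(n, power_sum(n, a)) for n in range(a + 2)]
p = interpolate(pts)
# ...so if the power sum really is such a polynomial, the interpolant
# must agree with it at every n, not just at the interpolation nodes.
assert all(p(n) == power_sum(n, a) for n in range(50))
```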
…read the rest of this post!

## The binomial's variance through generating functions

In the post Binomial distribution: mean and variance, I proved that if $X$ is a binomial random variable with parameters $n$ (trials) and $p$ (probability of success) then the variance of $X$ is $np(1-p)$. If you'll notice, my proof was by induction. You might ask, why would I do that? It's certainly one of the most roundabout and lengthy proofs of this fairly simple fact. Well, I think it's an interesting proof. Today, however, let's look at some shorter ones.

One shorter proof is to recall that I originally defined $X$ as
$$X = X_1 + \cdots + X_n$$ where $X_1,\dots, X_n$ are independent and identically distributed Bernoulli random variables; that is, each takes the value $1$ with probability $p$ and the value $0$ with probability $1-p$. I used this fact to calculate the mean. Well, if you square this expression and calculate the expectation, it already provides a much shorter calculation of $E(X^2)$, which is the main ingredient in showing that the variance of $X$ is $np(1-p)$.
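To see the destination of either calculation concretely, here is a quick check of $E(X)$, $E(X^2)$, and the resulting variance straight from the binomial pmf, in exact rational arithmetic (a Python sketch; `binomial_moments` is my own helper name):

```python
from fractions import Fraction
from math import comb

def binomial_moments(n, p):
    """E(X) and E(X^2) for X ~ B(n, p), computed directly from the pmf."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    second = sum(k * k * pk for k, pk in enumerate(pmf))
    return mean, second

n, p = 10, Fraction(1, 3)
mean, second = binomial_moments(n, p)
assert mean == n * p                          # E(X) = np
assert second - mean**2 == n * p * (1 - p)    # Var(X) = E(X^2) - E(X)^2
```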

But there's another proof that I like even better, and it uses generating functions. Besides liking generating functions for their own sake, I am also introducing them because I will need them for more advanced material on stochastic processes that I will talk about in the near future.
…read the rest of this post!

## Inductive formula for binomial coefficients

In the last post on the mean and variance of a binomial random variable, we used the following formula:
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$$ Let's take a moment to prove this formula. Of course, how we prove it depends on which definition of the binomial coefficients we use. We have to start somewhere, after all. So, we'll start with the definition that $\binom{n}{k}$ is the number of ways of choosing $k$ objects from $n$ objects, where the order of the objects does not matter. If $k > n$ this is of course impossible, so $\binom{n}{k}$ is zero, and similarly for $k < 0$. Basic combinatorial reasoning allows us to write down the formula $$\binom{n}{k} = \frac{n!}{(n-k)!k!}$$ for $0\leq k\leq n$. For example, $$\binom{329}{4} = 479318126.$$ In actually computing this formula, of course, you don't compute the factorials individually. It's much more efficient to compute it as $$\binom{n}{k} = n(n-1)\cdots (n-k+1)/k!$$ or to first replace $k$ with $n-k$, using the symmetry $\binom{n}{k} = \binom{n}{n-k}$, so that the product is as short as possible. …read the rest of this post!
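As an aside, here is what that computation looks like in practice: a Python sketch (the function `binom` is my own) that multiplies the falling factorial while dividing as it goes, so intermediate values never get larger than necessary.

```python
def binom(n, k):
    """Compute C(n, k) without forming large factorials.

    Multiplies the falling factorial n(n-1)...(n-k+1) while dividing
    by 1, 2, ..., k; each division is exact because every partial
    product is itself a binomial coefficient, hence an integer."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)          # use the symmetry C(n, k) = C(n, n-k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

assert binom(329, 4) == 479318126
```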

## Binomial distribution: mean and variance

A Bernoulli random variable with parameter $p$ is a random variable that takes on the values $0$ and $1$, where $1$ occurs with probability $p$ and $0$ with probability $1-p$. If $X_1,\dots,X_n$ are $n$ independent Bernoulli random variables, each with parameter $p$, we define
$$X = X_1 + \cdots + X_n.$$ The random variable $X$ is said to have a binomial distribution with parameters $n$ and $p$, or a $B(n,p)$ distribution. The probability mass function of a $B(n,p)$ random variable $X$ is
$$f(k) = P(X = k) = \binom{n}{k}p^k(1-p)^{n-k},\qquad k = 0,1,\dots,n.$$

What is the expected value of the $B(n,p)$ variable $X$? Expectation is linear, so we can use the definition of $X$ as a sum of $n$ Bernoulli random variables:
$$E(X) = E(X_1) + \cdots + E(X_n).$$ Each term is $E(X_i) = 0\cdot(1-p) + 1\cdot p = p$. Therefore:

Theorem. The expected value of a binomial $B(n,p)$ random variable is $np$.
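For small $n$, the theorem can also be checked by brute force: enumerate all $2^n$ outcomes of the Bernoulli sequence, weight each by its probability, and add up the successes. A Python sketch (the helper `mean_via_outcomes` is my own name; exact rationals avoid rounding doubts):

```python
from fractions import Fraction
from itertools import product

def mean_via_outcomes(n, p):
    """E(X) by brute force: enumerate all 2^n Bernoulli outcomes,
    weight each by its probability, and sum the number of successes."""
    total = Fraction(0)
    for outcome in product([0, 1], repeat=n):
        prob = Fraction(1)
        for b in outcome:
            prob *= p if b == 1 else 1 - p
        total += sum(outcome) * prob
    return total

n, p = 8, Fraction(2, 5)
assert mean_via_outcomes(n, p) == n * p   # matches E(X) = np exactly
```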

…read the rest of this post!

## Sum of the first factorials a perfect power?

Consider the following sum:
$$F(n) = 1! + 2! + 3! + \cdots + n!$$ Can this sum ever be a perfect power? A perfect power here is defined as $x^m$ for natural numbers $x$ and $m\geq 2$. Some observations immediately come to mind: $F(1) = 1$, and that's trivially a perfect power. But $F(2) = 3$, which is prime and of course not a perfect power. Then there's $F(3) = 1 + 2 + 6 = 9$, which is a perfect square.

Are there others? Let's check a few more. $F(4) = 33 = 3\times 11$ and $F(5) = 153 = 3^2\times 17$, so those are not perfect powers. In fact, $F(n)$ is not a perfect power for any natural number $n > 3$. Why is this?

First, if $F(n)$ is ever a perfect power for some $n\geq 4$, it must be a perfect square. Why is that? If $n\geq 6$ then we have
\begin{align*}F(n) &= 1! + 2! + \cdots + 5! + 6! + \cdots + n!\\ &= 153 + 6! + \cdots + n!\\ &= 9\left(17 + (6!/9 + \cdots + n!/9)\right).\end{align*} Here, we have used that $k!/9$ is an integer for $k\geq 6$. For $n = 6$, $n = 8$, and every $n\geq 9$, the factor in parentheses is not divisible by $3$: one checks directly that $17 + 6!/9 = 97$ and $17 + 6!/9 + 7!/9 + 8!/9 = 5137$ are not multiples of $3$, while $k!/9$ is a multiple of $3$ for every $k\geq 9$. So for these $n$, the integer $F(n)$ has exactly two factors of $3$. Now if $F(n) = x^m$ with $m\geq 3$, then $3\mid x$ and hence $27\mid F(n)$, which is impossible; the exponent can therefore be at most $2$. The remaining cases are handled directly: $F(4) = 3\times 11$ and $F(5) = 3^2\times 17$ are not perfect powers, and neither is the exceptional $F(7) = 5913 = 3^4\times 73$, since $73$ divides it exactly once.

Another thing we see is that $F(n)$ for $n\geq 4$ always ends in $3$. That is because $F(4) = 33$ and $n!$ ends in zero whenever $n\geq 5$. Now, a perfect square can never end in $3$: the only possible final digits of a square are $0,1,4,5,6,9$. Therefore, $F(n)$ can never be a perfect power whenever $n\geq 4$. So $F(1)$ and $F(3)$ are the only perfect powers in the sequence $F(1), F(2), F(3),\dots$
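As a sanity check on all of this, a short computer search confirms that $F(1)$ and $F(3)$ are the only perfect powers in sight (a Python sketch; `F` and `is_perfect_power` are my own helper names, and the search range is kept modest so that floating-point roots stay reliable):

```python
def F(n):
    """F(n) = 1! + 2! + ... + n!"""
    total, fact = 0, 1
    for k in range(1, n + 1):
        fact *= k
        total += fact
    return total

def is_perfect_power(x):
    """True if x = b**m for some natural numbers b and m >= 2."""
    if x == 1:
        return True          # 1 = 1**2
    m = 2
    while 2**m <= x:
        b = round(x ** (1.0 / m))
        # a float root can be off by one, so check the neighbours too
        if any(c >= 1 and c**m == x for c in (b - 1, b, b + 1)):
            return True
        m += 1
    return False

hits = [n for n in range(1, 26) if is_perfect_power(F(n))]
assert hits == [1, 3]        # F(1) = 1 and F(3) = 9, and nothing else
```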

Note: I found this problem in the interesting book 104 Number Theory Problems by Andreescu, Andrica, and Feng. I did not look at their solution until after solving the problem myself; upon looking at it, their solution is different and handles even and odd powers separately.

## Projectivity and the double dual: Part 2

Let $R$ be an associative ring. If $M$ is a left $R$-module, then the dual module $M^*$ is the right $R$-module ${\rm Hom}(M,R)$. The action of $R$ on this module is given by $(fr)(m) = f(m)r$. We give it a right $R$-module structure since $f$ is a homomorphism of left $R$-modules. Trying to give it a structure of a left $R$-module will interfere with this action (try it!), so it doesn't work in general unless $R$ is commutative.

If $M$ instead were a right $R$-module, then we could form the dual $M^*$ as a left $R$-module of course. In particular, if $M$ is a left $R$-module, then $M^{**}$ is again a left $R$-module, and we have a canonical map
$$\sigma:M\longrightarrow M^{**}$$ defined for $x\in M$ by $\sigma(x)(f) = f(x)$ for $f\in{\rm Hom}(M,R)$.

In any introductory linear algebra class, we learn that if $M$ is a finite-dimensional vector space then $\sigma$ is an isomorphism. Of course, this can't be true for modules in general, and it already fails for $M = \mathbb{Z}/n$ as a $\mathbb{Z}$-module, since ${\rm Hom}(\mathbb{Z}/n,\mathbb{Z}) = 0$. Four years ago I wrote a post about this, called Projectivity and the Double Dual. There, we saw that $\sigma:M\to M^{**}$ already fails to be an isomorphism when $M$ is an infinite-dimensional vector space, simply because ${\rm Hom}(\oplus_I k,k)\cong\prod_I k$ for any set $I$. Hence, if $I$ is infinite, then by the Erdős–Kaplansky theorem $\prod_I k$ has dimension $|k|^{|I|}$ over $k$, which is strictly larger than the dimension $|I|$ of $\oplus_I k$, so the two cannot be isomorphic.
…read the rest of this post!

## Birds: how does egg mass vary with body mass?

One can imagine that the larger the bird, the larger the egg. This is not always true. Consider the dataset from [1]. Examining it, we see that the Wild Turkey female has an average mass of 4222g and an egg mass of 78.8g, whereas the Malleefowl female has an average mass of 1830g and an egg mass of 175g, a much larger egg relative to body mass! The relative sizes of the eggs can be visualized in this diagram:

The reason bigger birds do not always lay bigger eggs lies in the variety of strategies birds use to produce successful adults, and in how those strategies interact with bird physiology. A major point here is precocity. You may have seen birds that feed their young in the nest, such as this Barn Swallow family:

## Pace's derivation of Euler's sum of reciprocals of squares

One of my favourite identities in mathematics is the sum of the reciprocal of the squares
$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}.$$ The problem of evaluating this sum, first solved by Euler, is known as the Basel problem. It is perhaps the most natural sum to consider after the harmonic sum $1 + 1/2 + 1/3 + \cdots$, which diverges. Another way to write the sum of the reciprocals of the squares is with the Greek letter $\zeta$, as in
$$\zeta(2) = \sum_{k=1}^\infty\frac{1}{k^2}$$ because this sum is in fact the value of the famous Riemann zeta function at $2$.

The sum of the reciprocals of the squares does not diverge, since we can bound the partial sums as
$$\sum_{k=1}^n \frac{1}{k^2} \leq \int_1^n \frac{1}{x^2}{\rm d}x + 1.$$
This integral inequality shows that
$$1\leq \zeta(2)\leq 2,$$ since the integral evaluates to $1 - 1/n\leq 1$. This is still a pretty crude bound, as $\pi^2/6 = 1.6449\ldots$. So how do we go about showing that the sum is exactly $\pi^2/6$? One interesting proof was given by Luigi Pace in [1], published in the American Mathematical Monthly, which we now explain.
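Before the proof, a quick numerical sanity check of both the integral bound and the limit (a Python sketch; `zeta2_partial` is my own name):

```python
from math import pi

def zeta2_partial(n):
    """The partial sum 1 + 1/4 + 1/9 + ... + 1/n^2."""
    return sum(1 / k**2 for k in range(1, n + 1))

# The integral comparison bounds every partial sum by 1 + (1 - 1/n) < 2:
assert all(zeta2_partial(n) <= 2 - 1 / n + 1e-12 for n in range(1, 500))

# And the partial sums do creep up to pi^2/6 = 1.6449...; the tail
# after n terms is roughly 1/n, so a million terms give ~6 digits:
assert abs(zeta2_partial(10**6) - pi**2 / 6) < 1e-5
```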
…read the rest of this post!

## Book Review: Behavioral Ecology of Tropical Birds

I am currently working on a paper on some quantitative relationships involving birds. As some readers might know, this is part of my foray into some new areas of applied mathematics. In order to get to know the subject a little better, I recently read "Behavioral Ecology of Tropical Birds" by Bridget J.M. Stutchbury and Eugene S. Morton.

The questions asked in this book fall under the umbrella question, "why are tropical birds so different from temperate zone birds?" There are two motivating reasons to ask this question. First, tropical birds are quite different in their behavior from temperate zone species, so it is natural to try to figure out how this more 'typical' group fits into what we know about birds. Second, many 'general' conclusions about birds have been drawn from temperate zone studies, and because of this temperate zone bias, some of those generalizations may give an incorrect view of birds as a whole.
…read the rest of this post!