## Preprint: group theory and chords

A while ago I wrote a paper called A Group-theoretical Classification of Three-tone and Four-tone Harmonic Chords (arXiv link). The main contribution is an analysis of four-tone harmonic chords in music theory. These are: major-major, minor-major, augmented-major, major-minor, diminished-minor, minor-minor, and diminished-diminished. X-Y in this scheme means that X is the basic triad (major, minor, augmented, or diminished) and Y is the quality of the seventh factor. These four-tone chords can be represented as a quadruple $(0,a,b,c)$, where the numbers $a,b,c$ are the distances from the root $0$ in semitones. I define three fundamental operations on these chords: inversion (the usual inversion on a keyboard instrument), denoted $i$; major-minor duality, denoted $d$; and augmented-diminished duality, denoted $a$. These three operators are elements of the symmetric group on four letters.
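To make the quadruple representation concrete, here is a minimal Python sketch (my own illustration, not from the paper; the function name `invert` and the chord variable are hypothetical). It implements only keyboard inversion $i$: the root moves up an octave and the chord is renormalized to start at $0$. The paper's duality operators $d$ and $a$ depend on definitions given there, so they are not sketched.

```python
# A minimal sketch (not the paper's code) of the quadruple representation.
# Chords are tuples (0, a, b, c) of semitone distances from the root.

def invert(chord):
    """First inversion of a four-tone chord given as (0, a, b, c):
    the root moves up an octave, and we renormalize so the new lowest
    tone becomes the root 0."""
    _, a, b, c = chord
    # New pitches relative to the old root are a, b, c, 12; shift by -a.
    return (0, b - a, c - a, 12 - a)

# C major-major seventh: C E G B = (0, 4, 7, 11).
MM7 = (0, 4, 7, 11)
print(invert(MM7))  # first inversion E G B C -> (0, 3, 7, 8)

# Applying inversion four times returns the original chord.
chord = MM7
for _ in range(4):
    chord = invert(chord)
print(chord)        # -> (0, 4, 7, 11)
```

A design note: renormalizing to a root of $0$ keeps every chord in the same $(0,a,b,c)$ form, so the operators can be compared as permutations of the four tones.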

I show how all the harmonic chords are related under these operators, and each of these relations reveals the harmonic relations between these chords as we hear them in music. The highlight of the paper is this diagram, which shows all these relations (M=major, m=minor, A=augmented, d=diminished):

Unfortunately, I recently found out that it was rejected :( It was suggested that I could resubmit on the condition that I significantly expand its relation to the rest of the literature and make the notation more consistent with other parts of the literature. I will have to figure out what to do about that, because I think too much notational change might actually make the paper more confusing and harder to read. So, I am sharing it with you so you can see this cool diagram. I am not sure I can really satisfy the requirements of the field of mathematics of music, so I might have to leave this one as a preprint, especially since I would like to keep working on ecology.

## Some misuses of science

Ever since I was a child, I have been fascinated by science. Back then, science was reading about astronomy, physics, chemistry, and biology and the wonderful facts and theories proposed to explain those facts. The method scientists use in this journey is the scientific method. Although its exact nature has been examined by philosophers for hundreds of years, its core of falsifiable hypotheses and experimentation holds a great beauty of thought.

However, science is carried out by individuals and societies and in turn that knowledge affects those individuals and societies. Thus, the actual discoveries and processes of science in the short term can be far more tumultuous than the overarching successes presented in history. As intelligent beings, we need to be aware of the dangers in science, so that we can more quickly discover the truth. We need to be wary of the misuse of science for means other than discovering the truth via genuine curiosity about the world.
…read the rest of this post!

## Is math art? Part 1

Is mathematics an art? In this series of posts I will attempt to explore and answer this question. I encourage readers to also contribute their ideas in the comments.

The term art itself has undergone a metamorphosis over the centuries, so we should start by examining what humanity has meant by art and see if mathematics might fall under any of the definitions used over time. But beyond these definitions, we should examine whether we feel that mathematics is an art. Whether it is or not, what definition would art need in order to be consistent with the existence of mathematics? Only by examining these two angles will we gain insight into both art and mathematics, and into how we as humans fit into both.

One of the earliest descriptions of art is the Platonic one, which says that art is imitation [1]. This view is echoed by Leon Battista Alberti (1404-1472), who thought that a painting should be as faithful a reproduction of the real scene as possible. Obviously this is a very limited definition by today's standards, but it is nonetheless worth looking at, as the impulse to reproduce a scene in some medium is still alive today, at least as a motivating factor in creating art.
…read the rest of this post!

## Summing the first $n$ powers and generating functions

One of the classic and most used sums in all of mathematics is the sum of the first $n$ natural numbers
$$1 + 2 + \cdots + n = \frac{n(n+1)}{2}.$$ Gauss's classic derivation of this formula involves observing that if we duplicate this sum, write it backwards under the first sum, and add termwise, we get $(n+1) + (n+1) + \cdots + (n+1)$, whence the original sum is half of $n(n+1)$.

Similar formulas work for higher powers. For example,
$$1^2 + 2^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}.$$ So the sum of the first $n$ natural numbers is a quadratic polynomial in $n$, and the sum of the first $n$ squares is a cubic polynomial in $n$. But is it a priori clear that the sum of the $a$-th powers of the first $n$ natural numbers is a polynomial in $n$ of degree $a+1$?

It is actually true that for any natural number $a$,
$$\sum_{k=0}^n k^a$$ can be written as a polynomial function of $n$ of degree $a+1$. Of course, one way to see this is to derive in a brute force manner a formula that actually works for all powers $a$. That train of thought was actually carried out by Johann Faulhaber (1580-1635) and completed by Jacobi. The resulting formula is now known as Faulhaber's formula.
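As a quick sanity check of the degree claim (my own illustration, not from the post), one can use finite differences: a sequence given by a polynomial of degree $d$ has constant, nonzero $d$-th differences and vanishing $(d+1)$-st differences.

```python
# Check numerically that S_a(n) = 0^a + 1^a + ... + n^a behaves like a
# polynomial in n of degree a + 1: its (a+1)-st finite differences are a
# nonzero constant, and its (a+2)-nd finite differences vanish.

def power_sum(a, n):
    return sum(k**a for k in range(n + 1))

def finite_difference(values):
    return [y - x for x, y in zip(values, values[1:])]

a = 3
values = [power_sum(a, n) for n in range(12)]

diffs = values
for _ in range(a + 1):                 # take a + 1 = 4 differences
    diffs = finite_difference(diffs)
print(diffs)                           # constant and nonzero: degree is exactly a + 1

diffs = finite_difference(diffs)       # one more difference
print(diffs)                           # all zeros
```

The same routine also confirms the closed forms above, e.g. `power_sum(2, 10)` equals $10\cdot 11\cdot 21/6 = 385$.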
…read the rest of this post!

## The binomial's variance through generating functions

In the post Binomial distribution: mean and variance, I proved that if $X$ is a binomial random variable with parameters $n$ (trials) and $p$ (probability of success) then the variance of $X$ is $np(1-p)$. If you'll notice, my proof was by induction. You might ask, why would I do that? It's certainly one of the most roundabout and lengthy proofs of this fairly simple fact. Well, I think it's an interesting proof. Today, however, let's look at some shorter ones.

One shorter proof is to recall that I originally defined $X$ as
$$X = X_1 + \cdots + X_n$$ where $X_1,\dots, X_n$ are independent and identically distributed Bernoulli random variables; that is, each $X_i$ takes the value 1 with probability $p$ and the value 0 with probability $1-p$. I used this fact to calculate the mean. Well, if you square this expression and take expectations, you already get a much shorter calculation of $E(X^2)$, which is the main ingredient in showing that the variance of $X$ is $np(1-p)$.
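That shorter calculation gives $E(X^2) = np + n(n-1)p^2$, since squaring the sum produces $n$ terms $E(X_i^2) = p$ and $n(n-1)$ cross terms $E(X_iX_j) = p^2$. Here is a short sketch (my own, not the post's code) verifying this exactly with rational arithmetic:

```python
# Verify E(X^2) = np + n(n-1)p^2, and hence Var(X) = np(1-p), for a
# binomial B(n, p), computing moments directly from the mass function
# with exact fractions.
from fractions import Fraction
from math import comb

def binomial_moment(n, p, r):
    """E(X^r) computed directly from the B(n, p) mass function."""
    return sum(Fraction(k)**r * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

n, p = 10, Fraction(1, 3)
second_moment = binomial_moment(n, p, 2)
assert second_moment == n * p + n * (n - 1) * p**2

variance = second_moment - binomial_moment(n, p, 1)**2
assert variance == n * p * (1 - p)
print(variance)   # 20/9, i.e. np(1-p) for n=10, p=1/3
```

Using `Fraction` keeps every probability exact, so the identities hold with `==` rather than floating-point tolerance.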

But there's another proof that I like even better: it uses generating functions. Besides liking generating functions for their own sake, I am also introducing them because I will need them for more advanced material on stochastic processes that I will talk about in the near future.
…read the rest of this post!

## Inductive formula for binomial coefficients

In the last post on the mean and variance of a binomial random variable, we used the following formula:
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$$ Let's just take a moment to prove this formula. Of course, how we prove it depends on what definition we use for the binomial coefficients. We have to start somewhere, after all. So, we'll start with the definition that $\binom{n}{k}$ is the number of ways of choosing $k$ objects from $n$ objects, where the order of the objects does not matter. If $k > n$, this is of course impossible, so $\binom{n}{k}$ is zero, and similarly for $k < 0$. Basic combinatorial reasoning allows us to write down the formula $$\binom{n}{k} = \frac{n!}{(n-k)!k!}$$ for $0\leq k\leq n$. For example, $$\binom{329}{4} = 479318126.$$ In actually computing with this formula, you don't compute the factorials individually. It's much more efficient to compute $$\binom{n}{k} = n(n-1)\cdots (n-k+1)/k!$$ or, when $n-k$ is smaller than $k$, to first use the symmetry $\binom{n}{k} = \binom{n}{n-k}$. …read the rest of this post!
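The efficient computation described above can be sketched as follows (a hypothetical helper of my own, not code from the post); it also checks the inductive formula directly.

```python
# Sketch of the efficient computation: multiply the k falling-factorial
# terms and divide as we go, using the symmetry binom(n, k) = binom(n, n-k)
# to keep k small.

def binomial(n, k):
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)                          # symmetry
    result = 1
    for i in range(k):
        # result holds binom(n, i) here, so this division is exact:
        # binom(n, i) * (n - i) = binom(n, i + 1) * (i + 1).
        result = result * (n - i) // (i + 1)
    return result

assert binomial(329, 4) == 479318126           # the example above
# Pascal's rule, the identity this post proves:
assert all(binomial(9, k) == binomial(8, k - 1) + binomial(8, k)
           for k in range(10))
```

Dividing at each step (rather than forming the full product and one factorial) keeps the intermediate numbers small, and each division is exact because the running value is itself a binomial coefficient times an integer.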

## Binomial distribution: mean and variance

A Bernoulli random variable with parameter $p$ is a random variable that takes on the values $0$ and $1$, where $1$ happens with probability $p$ and $0$ with a probability of $1-p$. If $X_1,\dots,X_n$ are $n$ independent Bernoulli random variables, we define
$$X = X_1 + \cdots + X_n.$$ The random variable $X$ is said to have a binomial distribution with parameters $n$ and $p$, or a $B(n,p)$ distribution. The probability mass function of a $B(n,p)$ random variable $X$ is
$$f(k) = P(X = k) = \binom{n}{k}p^k(1-p)^{n-k},\qquad k = 0,1,\dots,n.$$

What is the expected value of the $B(n,p)$ variable $X$? Expectation is linear so we can use the definition of $X$ in terms of a sum of $n$ Bernoulli random variables
$$E(X) = E(X_1) + \cdots + E(X_n).$$ The expectation $E(X_i) = 0(1-p) + 1(p) = p$. Therefore:

Theorem. The expected value of a binomial $B(n,p)$ random variable is $np$.
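A quick numeric check of the theorem (my own sketch, not from the post): the mass function above sums to $1$, and summing $k\,P(X=k)$ recovers $np$.

```python
# Check that the B(n, p) mass function is a probability distribution and
# that its mean is np, matching the linearity argument above.
from math import comb, isclose

def pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 12, 0.3
assert isclose(sum(pmf(n, p, k) for k in range(n + 1)), 1.0)

mean = sum(k * pmf(n, p, k) for k in range(n + 1))
assert isclose(mean, n * p)    # E(X) = np
print(mean)
```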

…read the rest of this post!

## Sum of the first factorials a perfect power?

Consider the following sum:
$$F(n) = 1! + 2! + 3! + \cdots + n!$$ Can this sum ever be a perfect power? A perfect power here is defined as $x^k$ for some natural number $x$ and some integer $k\geq 2$. Some observations immediately come to mind: $F(1) = 1$, and that's trivially a perfect power. But $F(2) = 3$, which is a prime number and of course not a perfect power. Then there's $F(3) = 1 + 2 + 6 = 9$, which is a perfect square.

Are there others? Let's check a few more. $F(4) = 33 = 3\times 11$ and $F(5) = 153 = 3^2\times 17$, so those are not perfect powers. In fact, $F(n)$ is not a perfect power for any natural number $n > 3$. Why is this?

First, if $F(n)$ is a perfect power for some $n\geq 4$, it must be a perfect square. To see why, count the factors of $3$ in $F(n)$. The small cases factor as $F(4) = 3\cdot 11$, $F(5) = 3^2\cdot 17$, $F(6) = 3^2\cdot 97$, and $F(7) = 3^4\cdot 73$; in each case the exponents in the prime factorization have greatest common divisor $1$, so none of these is a perfect power at all. For $n\geq 8$, every $k!$ with $k\geq 9$ is divisible by $27$, while
$$1! + 2! + \cdots + 8! = 46233 = 27\cdot 1712 + 9,$$
so $F(n)\equiv 9\pmod{27}$. Thus $F(n)$ is divisible by $9$ but not by $27$: it has exactly two factors of $3$. If $F(n) = x^k$ with $k\geq 2$, then $k$ must divide this exponent $2$, forcing $k = 2$.

Another thing we see is that $F(n)$ for $n\geq 4$ always ends in $3$. That is because $F(4) = 33$ and $n!$ ends in zero whenever $n\geq 5$. Now, a square integer can never end in $3$. The only possible endings are $0,1,4,5,6,9$. Therefore, $F(n)$ can never be a perfect power whenever $n\geq 4$. So $F(1)$ and $F(3)$ are the only perfect powers in the sequence $F(1), F(2), F(3),\dots$
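These observations are easy to confirm numerically over a modest range (my own sketch, not from the book's solution): the last digit $3$, the fact that $F(n)$ is divisible by $9$ but not $27$ for larger $n$, and the absence of squares beyond $F(3) = 9$.

```python
# Sanity checks for the argument above, over n up to 40.
from math import factorial, isqrt

def F(n):
    return sum(factorial(k) for k in range(1, n + 1))

assert F(3) == 9 and isqrt(F(3))**2 == F(3)   # 9 is a perfect square

for n in range(4, 40):
    assert F(n) % 10 == 3                     # F(n) ends in 3 for n >= 4
    assert isqrt(F(n))**2 != F(n)             # hence F(n) is not a square

for n in range(8, 40):
    assert F(n) % 27 == 9                     # divisible by 9, not by 27

print("all checks pass")
```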

Note: I found this problem in the interesting book 104 Number Theory Problems by Andreescu, Andrica, and Feng. I did not look at their solution until solving this problem, but upon looking at it, their solution is different and handles even and odd powers separately.

## Projectivity and the double dual: Part 2

Let $R$ be an associative ring. If $M$ is a left $R$-module, then the dual module $M^*$ is the right $R$-module ${\rm Hom}(M,R)$. The action of $R$ on this module is given by $(fr)(m) = f(m)r$. We give it a right $R$-module structure since $f$ is a homomorphism of left $R$-modules. Trying to give it a structure of a left $R$-module will interfere with this action (try it!), so it doesn't work in general unless $R$ is commutative.

If $M$ instead were a right $R$-module, then we could form the dual $M^*$ as a left $R$-module of course. In particular, if $M$ is a left $R$-module, then $M^{**}$ is again a left $R$-module, and we have a canonical map
$$\sigma:M\longrightarrow M^{**}$$ defined for $x\in M$ by $\sigma(x)(f) = f(x)$ for $f\in{\rm Hom}(M,R)$.

In any introductory linear algebra class, we learn that if $M$ is a finite-dimensional vector space then $\sigma$ is an isomorphism. Of course, this can't be true for modules in general, and it already fails for $M = \mathbb{Z}/n$ as a $\mathbb{Z}$-module, since ${\rm Hom}(\mathbb{Z}/n,\mathbb{Z}) = 0$: any homomorphism $f$ satisfies $nf(1) = f(n\cdot 1) = f(0) = 0$, and $\mathbb{Z}$ is torsion-free, so $f = 0$. Four years ago I wrote a post about this, called Projectivity and the Double Dual. There, we saw that $\sigma:M\to M^{**}$ also fails to be an isomorphism when $M$ is an infinite-dimensional vector space over a field $k$, simply because ${\rm Hom}(\oplus_I k,k)\cong\prod_I k$ for any set $I$. If $I$ is infinite, then $\prod_I k$ has strictly larger dimension than $\oplus_I k$ (by the Erdős–Kaplansky theorem, $\dim_k\prod_I k = |k|^{|I|} > |I|$), and since $\dim M^{**}\geq\dim M^*$, the double dual cannot be isomorphic to $M$.
…read the rest of this post!

## Birds: how does egg mass vary with body mass?

One can imagine that the larger the bird, the larger the egg. This is not always true. Consider the dataset from [1]. Examining it, we see that the female Wild Turkey has an average mass of 4222g and an egg mass of 78.8g, whereas the female Malleefowl has an average mass of 1830g and an egg mass of 175g: a much larger egg relative to body mass! The relative sizes of the eggs can be visualized by this diagram:

The reason bigger birds do not always produce bigger eggs lies in the variety of strategies birds use to produce successful adults, and in how those strategies interact with physiology. A major point here is precociality. You may have seen birds that feed their young in the nest, such as this Barn Swallow family: