
**Time dependent gravity and cosmology!**

In our universe we have a gravitational constant – i.e. gravity is not dependent on time. If gravity changed with respect to time then the gravitational force exerted by the Sun on the Earth would lessen (or increase) over time, with all other factors remaining the same.

Interestingly, time-dependent gravity was first explored by Dirac, and some physicists have tried to incorporate time-dependent gravity into cosmological models. As yet we have no proof that gravity is not constant, but let’s imagine a universe where it is dependent on time.

**Inversely time dependent gravity**

The standard models for cosmology use G, where G is the gravitational constant. This fixes the gravitational force as a constant. However, if gravity is inversely proportional to time we could have a relationship such as:

G(t) = a/t

where a is a constant. Let’s look at a very simple model, where we have a piecewise function as below:

This would create the graph at the top of the page. This is one (very simplistic) way of explaining the Big Bang. In the first few moments after t = 0, gravity would be negative and thus repulsive [and close to infinitely strong], which could explain the initial incredible universal expansion before “regular” attractive gravity kicked in (after t = 1). The gravitational constant G has only been measured to 4 significant figures:

G = 6.674 x 10^{-11} m^{3} kg^{-1} s^{-2}.

Therefore if there is a very small variation over time it is *possible* that we simply don’t yet have the measurement accuracy to detect it.

**Universal acceleration with a time dependent gravitational force**

Warning: This section is going to touch on some seriously complicated maths – not for the faint-hearted! We’re going to explore whether a gravitational force which decreases over time still allows us to have an accelerating expansion of the universe.

We can start with the following equation:

To work through an example:

This would show that when t = 1 the universe had an expansion scale factor of 2. Now, based on current data measured by astronomers, we have evidence that the universe is both expanding and accelerating in its expansion. If the universal scale factor is accelerating in its expansion, that requires the second differential of r to be positive:

**Modelling our universe**

We’re going to need 4 equations to model what happens when gravity is time dependent rather than just a constant.

**Equation 1**

This equation models a relationship between pressure and density in our model universe. We assume that our universe is homogeneous (i.e. the same throughout).

**Equation 2**

This is one of the Friedmann equations governing the expansion of space. We will take c = 1 [i.e. we choose units in which the speed of light is 1].

**Equation 3**

This is another one of the Friedmann equations governing the expansion of space. The original equation has P/c^{2} – but we simplify again by taking c = 1.

**Equation 4**

This is our time dependent version of gravity.

**Finding alpha**

We can separate variables to solve equation (3).

**Substitution**

We can use this result, along with equations (1) and (4), to substitute into equation (2).

**Our result**

Now, remember that if the second differential of r is positive then the universal expansion rate is accelerating. If Lambda is negative then the second differential of r will be positive. However, all of G_0, a, B, t, r are greater than 0. Therefore in order for Lambda to be negative we need:

What this shows is that even in a universe where gravity is time-dependent (and decreasing), we would still be able to have an accelerating universe like we see today. The only factor that determines whether the universal expansion is accelerating is the value of gamma, not our gravity function.

This means that a time-dependent gravity function can still give us a result consistent with our experimental measurements of the universe.

**A specific case**

Solving the equation for the second differential of r is extremely difficult, so let’s look at a very simple case where we choose some constants to make life as easy as possible:

Substituting these into our equation (2) gives us:

We can then solve this to give:

So, finally we have arrived at our final equation. This would give us the universal expansion scale factor at time t, for a universe in which gravity follows the equation G(t) = 1/t.

For this universe we can then see that when t = 5 for example, we would have a universal expansion scale factor of 28.5.

So, there we go – very complicated maths, way beyond IB level, so don’t worry if you didn’t follow that. And that’s just a simplified introduction to some of the maths in cosmology! You can read more about time-dependent gravity here (also not for the faint-hearted!)

**Envelope of projectile motion**

For any given launch angle and for a fixed initial velocity we will get projectile motion. In the graph above I have changed the launch angle to generate different quadratics. The black dotted line is then called the envelope of all these curves, and is the boundary formed when I plot quadratics for every possible angle between 0 and pi.

**Finding the equation of an envelope for projectile motion**

Let’s start with the equations for projectile motion, usually given in parametric form:

Here v is the initial velocity which we will keep constant, theta is the angle of launch which we will vary, and g is the acceleration due to gravity, which we will take as 9.81.

First let’s rearrange these equations to eliminate the parameter t.

Next, we use the fact that the envelope of a family of curves is given by the points which satisfy the following 2 equations: F(x,y,theta) = 0 and the partial derivative of F with respect to theta equal to 0.

F(x,y,theta) = 0 simply means we have rearranged an equation so that all 3 variables are on one side, set equal to 0. The second of these equations is the partial derivative of F with respect to theta. This means that we differentiate as usual with respect to theta, but treat x and y as constants.

Therefore we can rearrange our equation for y to give:

and in order to help find the partial differential of F we can write:

We can then rearrange this to get x in terms of theta:

We can then substitute this into the equation for F(x,y,theta)=0 to eliminate theta:

We then have the difficulty of simplifying the second denominator, but luckily we have a trig identity to help: 1 + tan^{2}(theta) = sec^{2}(theta).

Therefore we can simplify as follows:

and so:

And we have our equation for the envelope of projectile motion! As we can see it is itself a quadratic equation. Let’s look at some of the envelopes it will create. For example, if I launch a projectile with a velocity of 1, and taking g = 9.81, I get the following equation:

This is the envelope of projectile motion when I take the following projectiles in parametric form and vary theta from 0 to pi:

This gives the following graph:

If I was to take an initial velocity of 2 then I would have the following:

And an initial velocity of 4 would generate the following graph:

So, there we have it, we can now create the equation of the envelope of curves created by projectile motion for any given initial velocity!
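A quick numerical check of the result (a sketch, assuming the standard “parabola of safety” form y = v^{2}/(2g) - gx^{2}/(2v^{2}) for the envelope): for each x, the envelope height should match the maximum height attainable at that x over all launch angles.

```python
import math

G = 9.81  # acceleration due to gravity

def envelope(x, v):
    # the "parabola of safety": y = v^2/(2g) - g*x^2/(2v^2)
    return v**2 / (2 * G) - G * x**2 / (2 * v**2)

def max_height_over_angles(x, v, steps=20000):
    # brute-force maximum of y = x*tan(t) - g*x^2/(2*v^2*cos^2(t)) over launch angles
    best = -math.inf
    for i in range(1, steps):
        t = (math.pi / 2) * i / steps  # angles between 0 and pi/2 cover x > 0
        y = x * math.tan(t) - G * x**2 / (2 * (v * math.cos(t)) ** 2)
        best = max(best, y)
    return best

for x in (0.5, 1.0, 1.5):
    print(x, envelope(x, 4), max_height_over_angles(x, 4))
```

The two values agree closely at each x, which is a good sanity check on the algebra above.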

**Other ideas for projectile motion**

There are lots of other things we can investigate with projectile motion. One example provided by fellow IB teacher Ferenc Beleznay is to fix the velocity and then vary the angle, then to plot the maximum points of the parabolas. He has created a Geogebra app to show this:

You can then find that the maximum points of the parabolas lie on an ellipse (as shown below).

See if you can find the equation of this ellipse!

**Using Maths to model the spread of Coronavirus (COVID-19)**

This coronavirus is the latest virus to warrant global fears over a disease pandemic. Throughout history we have seen pandemic diseases such as the Black Death in Middle Ages Europe and the Spanish Flu at the beginning of the 20th century. More recently we have seen HIV responsible for millions of deaths. In the last few years there have been scares over bird flu and SARS – yet neither fully developed into a major global health problem. So, how contagious is COVID-19, and how can we use mathematics to predict its spread?

Modelling disease outbreaks with real accuracy is an incredibly important job for mathematicians, and all countries employ medical statisticians for this job. Understanding how diseases spread and how fast they can spread through populations is essential to developing effective medical strategies to minimise deaths. If you want to save lives, maybe you should become a mathematician rather than a doctor!

Currently scientists know relatively little about the new virus – but they do know that it’s the same coronavirus family as SARS and MERS which can both cause serious respiratory problems. Scientists are particularly interested in trying to discover how infectious the virus is, how long a person remains contagious, and whether people can be contagious before they show any symptoms.

**In the case of COVID-19 we have the following early estimated values** [from a paper published by medical statisticians in the UK on January 24]:

**R_{0} between 3.6 and 4.** This is defined as the number of people an infectious person will pass their infection on to in a totally susceptible population. The higher the R_{0} value, the more quickly an infection will spread. By comparison, seasonal flu has an R_{0} value of around 2.8.

**Total number infected** by January 21: prediction interval 9,217–14,245. Of these an estimated 3,050–4,017 currently with the virus and the others recovered (or died). This is based on an estimation that only around 5% of cases have been diagnosed. By February 4th they predict 132,751–273,649 will be infected.

**Transmission rate β** estimated at 1.07. β represents the transmission rate per day – so on average an infected person will infect another 1.07 people a day.

**Infectious period** estimated at 3.6 days. We can therefore calculate μ (the per capita recovery rate) by μ = 1/3.6. This tells us how quickly people are removed from the population (either recovered and now immune, or died).

**SIR Model**

The basic model is based on the SIR model. The SIR model looks at how much of the population is susceptible to infection (S), how many of these go on to become infectious (I), and how many of these are removed (R) from the population being considered (i.e. they either recover and thus won’t catch the virus again, or die).

The Guardian datablog have an excellent graphic to show the contagiousness relative to deadliness of different diseases [click to enlarge, or follow the link]. We can see that seasonal flu has an R_{0} value of around 2.8 and a fatality rate of around 0.1%, whereas measles has an R_{0} value of around 15 and a fatality rate of around 0.3%. This means that measles is much more contagious than seasonal flu.

You can notice that we have nothing in the top right hand corner (very deadly and very contagious). This is just as well, as such a disease could seriously dent the human population. Most diseases we worry about fall into 2 categories – contagious but not very deadly, or deadly but not very contagious.

The equations above represent a SIR (susceptible, infectious, removed) model which can be used to model the spread of diseases like flu.

dS/dt represents the rate of change of those who are susceptible to the illness with respect to time. dI/dt represents the rate of change of those who are infected with respect to time. dR/dt represents the rate of change of those who have been removed with respect to time (either recovered or died).

For example, if dI/dt is high then the number of people infected is rapidly increasing. When dI/dt is zero there is no change in the number of people infected (the number of infections remains steady). When dI/dt is negative the number of people infected is decreasing.

**Modelling for COVID-19**

N is the total population. Let’s take the population of Wuhan as 11 million.

μ is the per capita recovery rate (calculated by μ = 1/(duration of illness)). We have μ = 1/3.6 = 5/18.

β is the transmission rate, approximately 1.07.

Therefore our 3 equations for rates of change become:

dS/dt = -1.07 S I /11,000,000

dI/dt = 1.07 S I /11,000,000 – 5/18 I

dR/dt = 5/18 I

Unfortunately these equations are very difficult to solve – but luckily we can use a computer program or spreadsheet to plot what happens. We need to assign starting values for S, I and R – the numbers of people susceptible, infectious and removed. With the following values for January 21: S = 11,000,000, I = 3500, R = 8200, β = 1.07, μ = 5/18, I designed the following Excel spreadsheet (instructions on what formula to use here):

This gives a prediction that around 3.9 million people will be infected within 2 weeks! We can see that the SIR model we have used is quite simplistic (and significantly different from the expert prediction of around 200,000 infected).
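The same day-by-day iteration can be sketched in a few lines of Python (my Euler-step version of the calculation, not the original spreadsheet), using the starting values above:

```python
# Euler simulation of the SIR model, one-day steps, January 21 starting values
N = 11_000_000            # population of Wuhan
beta = 1.07               # transmission rate per day
mu = 5 / 18               # per capita recovery rate (1/3.6)
S, I, R = 11_000_000, 3500, 8200

for day in range(14):
    new_infections = beta * S * I / N
    recoveries = mu * I
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries

print(f"after 14 days: S = {S:,.0f}, I = {I:,.0f}, R = {R:,.0f}")
```

Changing beta here (for example to 0.6, as discussed below) reproduces the dramatic sensitivity of the model to the transmission rate.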

So, we can try and make things more realistic by adding some real-life considerations. The current value of β (the transmission rate) is 1.07, i.e. an infected person will infect another 1.07 people each day. We can significantly reduce this if we expect that infected people are quarantined effectively so that they do not interact with other members of the public, and that people who are not sick avoid going outside. So, if we take β as (say) 0.6 instead we get the following table:

Here we can see that this change to β has had a dramatic effect on our model. Now we are predicting around 129,000 infected after 14 days – which is much more in line with the estimate in the paper above.

As we are seeing exponential growth in the spread, small changes to the parameters will have very large effects. There are more sophisticated SIR models which can then be used to better understand the spread of a disease. Nevertheless we can see clearly from the spreadsheet the interplay between susceptible, infected and recovered which is the foundation for understanding the spread of viruses like COVID-19.

[Edited in March to use the newly designated name COVID-19]

**Waging war with maths: Hollow squares**

The picture above [US National Archives, Wikipedia] shows an example of the hollow square infantry formation which was used in wars over several hundred years. The idea was to have an outer square of men, with an inner empty square. This then allowed the men in the formation to be tightly packed, facing the enemy in all 4 directions, whilst the hollow centre allowed the men flexibility to rotate (and also was a place to hold supplies). It was one of the infantry formations of choice against charging cavalry.

So, the question is, what groupings of men can be arranged into a hollow square? This is a current Nrich investigation, so I thought I’d do a mini-investigation on this.

We can rethink this question as asking which numbers can be written as the difference between 2 squares. For example, in the following diagram (from the Nrich task Hollow Squares):

We can see that the hollow square formation contains a larger square of 20 by 20 and a smaller hollow square of 8 by 8. Therefore the number of men in this formation is:

20^{2}-8^{2} = 336.

The first question we might ask therefore is how many numbers from 1-50 can be written as the difference between 2 squares? These will all be potential formations for our army.

I wrote some quick Python code to find all these combinations. I included 0 as a square number (though this no longer creates a hollow square, rather just a square!). You can copy this and run it in a Python editor like Repl.it.

```
for k in range(1, 51):
    for a in range(0, 100):
        for b in range(0, 100):
            if a**2 - b**2 == k:
                print(k, a, b)
```

This returned the following results:

1 1 0

3 2 1

4 2 0

5 3 2

7 4 3

8 3 1

9 3 0

9 5 4

11 6 5

12 4 2

13 7 6

15 4 1

15 8 7

16 4 0

16 5 3

17 9 8

19 10 9

20 6 4

21 5 2

21 11 10

23 12 11

24 5 1

24 7 5

25 5 0

25 13 12

27 6 3

27 14 13

28 8 6

29 15 14

31 16 15

32 6 2

32 9 7

33 7 4

33 17 16

35 6 1

35 18 17

36 6 0

36 10 8

37 19 18

39 8 5

39 20 19

40 7 3

40 11 9

41 21 20

43 22 21

44 12 10

45 7 2

45 9 6

45 23 22

47 24 23

48 7 1

48 8 4

48 13 11

49 7 0

49 25 24

Therefore we can see that the numbers with no solutions found are:

2,6,10,14,18,22,26,30,34,38,42,46,50

which are all clearly in the sequence 4n-2.

Thinking about this, we can see that 4n-2 can be written as 2(2n-1), which is the product of an even number and an odd number. Since such a number contains only a single factor of 2, every one of its factor pairs consists of one even and one odd number:

e.g. 50 can be written as 10 (even) x 5 (odd) or 2 (even) x 25 (odd), etc.

But with a^{2}-b^{2} = (a+b)(a-b), the factors (a+b) and (a-b) always have the same parity (their sum is 2a, which is even), so they are either both even or both odd. This means we can’t create a number whose factor pair has one odd and one even number. Therefore numbers in the sequence 4n-2 can’t be formed as the difference of 2 squares. There are some nicer (more formal) proofs of this here.
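We can sanity-check this parity argument with a short brute-force search (a sketch): no number of the form 4n-2 up to 200 should be a difference of two squares.

```python
# brute-force check: no k of the form 4n - 2 is a difference of two squares
# (any representation of k <= 200 must have a <= (k + 1)/2 <= 100)
for k in range(2, 201, 4):
    found = any(a*a - b*b == k for a in range(0, 102) for b in range(0, a + 1))
    assert not found, k
print("no counterexamples up to 200")
```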

**A battalion with 960 soldiers**

Next we are asked to find how many different ways there are of arranging 960 soldiers in a hollow square. So let’s modify the code:

```
for a in range(0, 1000):
    for b in range(0, 1000):
        if a**2 - b**2 == 960:
            print(a, b)
```

Which gives us the following solutions:

31 1

32 8

34 14

38 22

46 34

53 43

64 56

83 77

122 118

241 239

**General patterns**

We can notice that when the number of soldiers is 1,3,5,7,9,11 (2n-1) we can always find a solution with the pair n and n-1. For example, 21 can be written as 2n-1 with n = 11. Therefore we have 10 and 11 as our pair of squares. This works because 11^{2}-10^{2} = (11+10)(11-10) returns the factor pair 21 and 1. In general it always returns the factor pair, 2n-1 and 1.

We can also notice that when the number of soldiers is 4,8,12,16,20 (4n) we can always find a solution with the pair n+1 and n-1. For example, 20 can be written as 4n with n = 5. Therefore we have 6 and 4 as our pair of squares. This works because 6^{2}-4^{2} = (6+4)(6-4) returns the factor pair 10 and 2. In general it always returns the factor pair, 2n and 2.

And we have already shown that numbers 2,6,10,14,18,22 (4n-2) will have no solution. These 3 sequences account for all the natural numbers (as 2n-1 incorporates the 2 sequences 4n-3 and 4n-1).

So, we have found a method of always finding a hollow square formation (if one exists) as well as being able to use some computer code to find other possible solutions. There are lots of other avenues to explore here – could you find a method for finding all possible combinations for a given number of men? What happens when the hollow squares become rectangles?
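As a sketch of that first question, a short helper function (my hypothetical addition, not from the original code) collects every formation for a given number of men in one go:

```python
def hollow_squares(k):
    # all pairs (a, b) with a^2 - b^2 = k and a > b >= 0;
    # any representation has a <= (k + 1)/2, so this range is enough
    return [(a, b) for a in range(1, k // 2 + 2) for b in range(a) if a * a - b * b == k]

print(hollow_squares(960))  # the ten formations listed above, from (31, 1) to (241, 239)
```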

**Normal Numbers – and random number generators**

Numberphile have a nice new video where Matt Parker discusses all different types of numbers – including “normal numbers”. Normal numbers are defined as irrational numbers for which the probability of choosing any given 1 digit number is the same, the probability of choosing any given 2 digit number is the same, etc. For example in the normal number 0.12345678910111213141516… , if I choose any digit in the entire number at random, P(0) = P(1) = P(2) = … = P(9) = 1/10. Equally if I choose any 2 digit string at random, P(10) = P(11) = P(12) = … = P(99) = 1/100.

It is incredibly hard to find normal numbers, but there is a formula to find some of them (these are known as Stoneham numbers):

1/(c x 10^{c}) + 1/(c^{2} x 10^{c^{2}}) + 1/(c^{3} x 10^{c^{3}}) + …

In base 10, we are restricted to choosing a value of c such that 10 and c are relatively prime (i.e. share no common factors apart from 1). So if we choose c = 3 this gives:

1/(3 x 10^{3}) + 1/(3^{2} x 10^{9}) + 1/(3^{3} x 10^{27}) + …

We can now put this into Wolfram Alpha and see what number this gives us:

So we can put the first few digits into an online calculator to find the distribution of digits:

*0.0003333334444444444444481481481481481481481481481481481481481481493827160493827160493827160493827160493827160493827160493827160493827160493827160493827160493827160493827160493827160493827160498271604938271604938271604794238633127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238683127572016460905349794238686831275720164609053497942*

4: 61

1: 41

8: 40

3: 38

0: 36

2: 33

7: 33

9: 33

6: 32

5: 10

We can see that we are already seeing a reasonably similar distribution of single digits, though with 4 and 5 as outliers. As the number progresses we would expect these distributions to even out (otherwise it would not be a normal number).
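Rather than relying on an online calculator, we can generate the digits exactly with Python’s Fraction type and count them (a sketch, assuming the number is the c = 3 sum 1/(3 x 10^{3}) + 1/(3^{2} x 10^{9}) + …):

```python
from fractions import Fraction
from collections import Counter

# partial sum of 1/(3^n * 10^(3^n)) for n = 1..5; the n = 5 term is
# too small to change the first 243 decimal places we keep below
x = sum(Fraction(1, 3**n * 10**(3**n)) for n in range(1, 6))
digits = str(x.numerator * 10**243 // x.denominator).zfill(243)
print(Counter(digits).most_common(3))
```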

One of the potential uses of normal numbers is in random number generators – if you can use a normal number and specify a digit (or number of digits) at random then this should give an equal chance of returning each number.

To finish off, let’s prove that the infinite series

1/(3 x 10^{3}) + 1/(3^{2} x 10^{9}) + 1/(3^{3} x 10^{27}) + …

does indeed converge to a number (if it diverged then it could not be used to represent a real number). To do that we can use the ratio test (only worry about this bit if you have already studied the Calculus Option for HL!):

Looking at the ratio of successive terms, a_{n+1}/a_{n} = (3^{n} x 10^{3^{n}})/(3^{n+1} x 10^{3^{n+1}}) = 1/(3 x 10^{2 x 3^{n}}). As n increases this ratio approaches 0, which is less than 1. Therefore by the ratio test the series converges to a real number.

**Is pi normal?**

Interestingly we don’t know if numbers like e, pi and ln(2) are normal or not. We can analyse large numbers of digits of pi – and it looks like it will be normal, but as yet there is no proof. Here is the distribution of the first 100,000 digits of pi:

1: 10137

6: 10028

3: 10026

5: 10026

7: 10025

0: 9999

8: 9978

4: 9971

2: 9908

9: 9902

Which we can see are all very close to the expected value of 10,000 (+/- around 1%).

So, next I copied the first 1 million digits of pi into a character frequency counter which gives the following:

5: 100359

3: 100230

4: 100230

9: 100106

2: 100026

8: 99985

0: 99959

7: 99800

1: 99758

6: 99548

This is even closer to the expected value of 100,000, with most digits within +/- 0.25%.

Proving that pi is normal would be an important result in number theory – perhaps you could be the one to do it!

**IB Revision**

If you’re already thinking about your coursework then it’s probably also time to start planning some revision, either for the end of Year 12 school exams or Year 13 final exams. There’s a really great website that I would strongly recommend students use – you choose your subject (HL/SL/Studies if your exam is in 2020 or Applications/Analysis if your exam is in 2021), and then have the following resources:

The Questionbank takes you to a breakdown of each main subject area (e.g. Algebra, Calculus etc) and each area then has a number of graded questions. What I like about this is that you are given a difficulty rating, as well as a mark scheme and also a worked video tutorial. Really useful!

The Practice Exams section takes you to ready made exams on each topic – again with worked solutions. This also has some harder exams for those students aiming for 6s and 7s and the Past IB Exams section takes you to full video worked solutions to every question on every past paper – and you can also get a prediction exam for the upcoming year.

I would really recommend everyone making use of this – there is a mixture of a lot of free content as well as premium content so have a look and see what you think.

**Volume optimization of a cuboid**

This is an extension of the Nrich task which is currently live – where students have to find the maximum volume of a cuboid formed by cutting squares of size x from each corner of a 20 x 20 piece of paper. I’m going to use an n x 10 rectangle and see what the optimum x value is when n tends to infinity.

First we can find the volume of the cuboid:

Next we want to find when the volume is a maximum, so differentiate and set this equal to 0.

Next we use the quadratic formula to find the roots of the quadratic, and then see what happens as n tends to infinity (i.e. we want to see what the optimum x values are for our cuboid as n approaches infinity). We only take the negative branch of the ± in the quadratic formula, because this is the only root that fits the initial problem.

Next we try and simplify the square root by taking out a factor of 16, and then we complete the square for the term inside the square root (this will be useful next!)

Next we make a u substitution. Note that this means that as n approaches infinity, u approaches 0.

Substituting this into the expression gives us:

We then manipulate the surd further to get it in the following form:

Now, the reason for all that manipulation becomes apparent – we can use the binomial expansion for the square root of 1 + u^{2} to get the following:

Therefore we have shown that as the value of n approaches infinity, the value of x that gives the optimum volume approaches 2.5cm.

So, even though we start with a pretty simple optimization task, it quickly develops into some quite complicated mathematics. We could obviously have plotted the term in n to see its behaviour as n approaches infinity, but it’s nicer to prove it. So, let’s check our result graphically.

As we can see from the graph, with n plotted on the x axis and x plotted on the y axis we approach x = 2.5 as n approaches infinity – as required.

**An m by n rectangle.**

So, we can then extend this by considering an n by m rectangle, where m is fixed and then n tends to infinity. As before the question is what is the value of x which gives the maximum volume as n tends to infinity?

We do the same method. First we write the equation for the volume and put it into the quadratic formula.

Next we complete the square, and make the u substitution:

Next we simplify the surd, and then use the expansion for the square root of 1 + u^{2}

This then gives the following answer:

So, we can see that for an n by m rectangle, as m is fixed and n tends to infinity, the value of x which gives the optimum volume tends to m/4. For example when we had a 10 by n rectangle (i.e m = 10) we had x = 2.5. When we have a 20 by n rectangle we would have x = 5 etc.
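A quick numerical check of this limit (a sketch): taking the volume as V = x(n - 2x)(m - 2x), the optimum x is the smaller root of dV/dx = 12x^{2} - 4(n + m)x + nm = 0, and we can watch it approach m/4 as n grows.

```python
import math

def optimal_x(n, m):
    # smaller root of dV/dx = 12x^2 - 4(n + m)x + nm = 0
    return ((n + m) - math.sqrt((n + m) ** 2 - 3 * n * m)) / 6

for n in (10, 100, 10**4, 10**8):
    print(n, optimal_x(n, 10))  # approaches 10/4 = 2.5 as n grows
```

Note that optimal_x(10, 10) returns 10/6, the classic answer for a square sheet.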

And we’ve finished! See what other things you can explore with this problem.


**Zeno’s Paradox – Achilles and the Tortoise**

This is a very famous paradox from the Greek philosopher Zeno – who argued that a runner (Achilles) who constantly halved the distance between himself and a tortoise would never actually catch the tortoise. The video above explains the concept.

There are two slightly different versions to this paradox. The first version has the tortoise as stationary, and Achilles as constantly halving the distance, but never reaching the tortoise (technically this is called the dichotomy paradox). The second version is where Achilles always manages to run to the point where the tortoise was previously, but by the time he reaches that point the tortoise has moved a little bit further away.

**Dichotomy Paradox**

The first version we can think of as follows:

Say the tortoise is 2 metres away from Achilles. Initially Achilles halves this distance by travelling 1 metre. He halves this distance again by travelling a further 1/2 metre. Halving again he is now 1/4 metres away. This process is infinite, and so Zeno argued that in a finite length of time you would never actually reach the tortoise. Mathematically we can express this idea as an infinite summation of the distances travelled each time:

1 + 1/2 + 1/4 + 1/8 …

Now, this is actually a geometric series – which has first term a = 1 and common ratio r = 1/2. Therefore we can use the infinite summation formula for a geometric series (which was derived about 2000 years after Zeno!):

sum = a/(1-r)

sum = 1/(1-0.5)

sum = 2

This shows that the summation does in fact converge – and so Achilles would actually reach the tortoise that remained 2 metres away. There is still something of a sleight of hand being employed here, however – given an *infinite* number of steps we have shown that Achilles would reach the tortoise, but what about reaching the tortoise in a *finite* length of time? Well, as the distances get ever smaller, the time required to traverse them also gets ever closer to zero, so we can say that as the distance converges to 2 metres, the time taken will also converge to a finite number.

There is an alternative method to showing that this is a convergent series:

S = 1 + 1/2 + 1/4 + 1/8 + 1/16 + …

0.5S = 1/2 + 1/4 + 1/8 + 1/16 + …

S – 0.5S = 1

0.5S = 1

S = 2

Here we notice that in doing S – 0.5S all the terms will cancel out except the first one.
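We can also watch this convergence happen numerically with a short loop (a sketch), summing successive halves:

```python
# partial sums of 1 + 1/2 + 1/4 + ... approach the limit 2
total, term = 0.0, 1.0
for n in range(30):
    total += term
    term /= 2
print(total)  # gets ever closer to 2, never exceeding it
```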

**Achilles and the Tortoise**

The second version also makes use of geometric series. If we say that the tortoise has been given a 10 m head start, and that whilst the tortoise runs at 1 m/s, Achilles runs at 10 m/s, we can try to calculate when Achilles would catch the tortoise. So in the first instance, Achilles runs to where the tortoise was (10 metres away). But because the tortoise runs at 1/10th the speed of Achilles, it is now a further 1 m ahead. So, in the second instance, Achilles runs to where the tortoise now is (a further 1 metre). But the tortoise has now moved 0.1 metres further away. And so on to infinity.

This is represented by a geometric series:

10 + 1 + 0.1 + 0.01 …

Which has first term a = 10 and common ratio r = 0.1. So using the same formula as before:

sum = a/(1-r)

sum = 10/(1-0.1)

sum = 11.11 m

So, again we can show that because this geometric series converges to a finite value (11.11), then after a finite time Achilles will indeed catch the tortoise (11.11 m from where Achilles started).

We often think of mathematics and philosophy as completely distinct subjects – one based on empirical measurement, the other on thought processes – but back in the day of the Greeks there was no such distinction. The resolution of Zeno’s paradox by use of calculus and limits to infinity some 2000 years after it was first posed is a nice reminder of the power of mathematics in solving problems across a wide range of disciplines.

**The Chess Board Problem**

The chess board problem has nothing to do with Zeno (it was first recorded about 1000 years ago), but is nevertheless another interesting example of the power of geometric series. It is explained in the video above. If I put 1 grain of rice on the first square of a chess board, 2 grains of rice on the second square, 4 grains on the third square, and so on, how much rice in total will be on the chess board by the time I finish the 64th square?

The mathematical series will be:

1 + 2 + 4 + 8 + 16 + …

So a = 1 and r = 2

Sum = a(1-r^{64})/(1-r)

Sum = (1-2^{64})/(1-2)

Sum = 2^{64}-1

Sum = 18,446,744,073,709,551,615

This is such a large number that, if the grains were laid end to end, the rice would stretch all the way to the star Alpha Centauri and back twice.
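The sum above can be verified directly – Python's integers have no size limit, so we can simply add up all 64 terms:

```python
# Total grains: 1 + 2 + 4 + ... + 2^63, the geometric series with a = 1, r = 2
grains = sum(2 ** n for n in range(64))
print(grains)  # 18446744073709551615
```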

**IB Revision**

If you’re already thinking about your coursework then it’s probably also time to start planning some revision, either for the end of Year 12 school exams or Year 13 final exams. There’s a really great website that I would strongly recommend students use – you choose your subject (HL/SL/Studies if your exam is in 2020 or Applications/Analysis if your exam is in 2021), and then have the following resources:

The Questionbank takes you to a breakdown of each main subject area (e.g. Algebra, Calculus etc) and each area then has a number of graded questions. What I like about this is that you are given a difficulty rating, as well as a mark scheme and also a worked video tutorial. Really useful!

The Practice Exams section takes you to ready made exams on each topic – again with worked solutions. This also has some harder exams for those students aiming for 6s and 7s and the Past IB Exams section takes you to full video worked solutions to every question on every past paper – and you can also get a prediction exam for the upcoming year.

I would really recommend everyone making use of this – there is a mixture of a lot of free content as well as premium content so have a look and see what you think.

**Fourier Transform**

The Fourier Transform and the associated Fourier series are among the most important mathematical tools in physics. Physicist Lord Kelvin remarked in 1867:

*“Fourier’s theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics.”*

The Fourier Transform deals with time-based waves – and these are one of the fundamental building blocks of the natural world. Sound, light, gravity, radio signals, earthquakes and digital compression are just some of the phenomena that can be understood through waves. It's not an exaggeration therefore to see the study of waves as one of the most important applications of mathematics in our modern life.

Here are some real life applications in a wide range of fields:

JPEG picture and MP3 sound compression – to allow data to be reduced in size.

Analysing DNA sequences – to allow identification of specific regions of genetic code

Apps like Shazam which can recognise a song from a sample of music

Processing mobile phone network data and WIFI data

Signal processing – in everything from acoustic guitar amps to electrical currents through capacitors

Radio telescopes – used to construct images of the night sky

Building’s natural frequencies – architects can design buildings to better withstand earthquakes.

Medical imaging such as MRI scans

There are many more applications – this Guardian article is a good introduction to some others.

So, what is the Fourier Transform? It takes a graph like the graph f(t) = cos(at) below:

and transforms it into:

From the above cosine graph we can see that it is a periodic, time-based function. Time is plotted on the x axis, and this graph will tell us the value of f(t) at any given time. The graph below with 2 spikes represents this same information in a different way. It shows the frequency (plotted on the x axis) of the cosine graph. Now the frequency of a function measures how many times it repeats per second. So for a graph f(t) = cos(at) it can be calculated as the inverse of the period. The period of cos(at) is 2pi/a so it has a frequency of a/2pi.

Therefore the frequency graph for cos(at) will have spikes at a/2pi and -a/2pi.

But how does this new representation help us? Well most real life waves are much more complicated than simple sine or cosine waves – like this trumpet sound wave below:

But the remarkable thing is that every continuous wave can be modelled as the sum of sine and cosine waves. So we can break down the very complicated wave above into (say) cos(t) + sin(2t) + 2cos(4t). This new representation would be much easier to work with mathematically.

The way to find out what these constituent sine and cosine waves are that make up a complicated wave is to use the Fourier Transform. By transforming a function into one which shows the frequency peaks we can work out what the sine and cosine parts are for that function.

For example, this transformed graph above would show which frequency sine and cosine functions to use to model our original function. Each peak represents a sine or cosine function of a specific frequency. Add them all together and we have our function.

The maths behind this does get a little complicated. I’ll try and talk through the method using the function f(t) = cos(at).

So, the function we want to break down into its constituent cosine and sine waves is cos(at). Now, obviously this function can be represented just with cos(at) – but this is a good demonstration of how to use the maths for the Fourier Transform. We already know that this function has a frequency of a/2pi – so let’s see if we can find this frequency using the Transform.

This is the formula for the Fourier Transform. We “simply” replace the f(t) with the function we want to transform – then integrate.

To make this easier we use the exponential formula for cosine. When we have f(t) = cos(at) we can rewrite this as the function above in terms of exponential terms.

We substitute this version of f(t) into the formula.

Next we multiply out the exponential terms in the bracket (remember the laws of indices), and then split the integral into 2 parts. The reason we have grouped the powers in this way is because of the following step.

This is the delta function – a function which is zero for all values apart from when its argument is zero – and, as you can see, it is very closely related to the integrals we have. Multiplying both sides by pi gets our integrals into the correct form.

So, the integral can be simplified as shown above.

So, our function F will be zero for all values except where the delta function's argument is zero. This gives us the above equations.

Therefore solving these equations we get an answer for the frequency of the graph.

This frequency agrees with the frequency we already expected to find for cos(at).
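Written out in full, the derivation sketched in the steps above runs as follows (using the angular-frequency form of the transform, so that a spike at ω = a corresponds to a frequency of a/2π):

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt,
\qquad
\cos(at) = \frac{e^{iat} + e^{-iat}}{2}

F(\omega) = \frac{1}{2}\int_{-\infty}^{\infty} e^{i(a-\omega)t}\, dt
          + \frac{1}{2}\int_{-\infty}^{\infty} e^{-i(a+\omega)t}\, dt

\int_{-\infty}^{\infty} e^{ikt}\, dt = 2\pi\,\delta(k)
\;\;\Longrightarrow\;\;
F(\omega) = \pi\,\delta(\omega - a) + \pi\,\delta(\omega + a)
```

F(ω) is zero everywhere except at ω = ±a – i.e. at a frequency of ±a/2π, exactly as expected.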

A slightly more complicated example would be to follow the same process but this time with the function f(t) = cos(at) + cos(bt). If the Fourier transform works correctly it should recognise that this function is composed of one cosine function with frequency a/2pi and another cosine function of b/2pi. If we follow through exactly the same method as above (we can in effect split the function into cos(at) and cos(bt) and do both separately), we should get:

This therefore is zero for all values except for when we have frequencies of a/2pi and b/2pi. So the Fourier Transform has correctly identified the constituent parts of our function.
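We can also check this numerically. The sketch below (using stdlib Python only, with the illustrative choices of 5 Hz and 12 Hz for the two constituent frequencies) samples cos(at) + cos(bt) for one second and computes a discrete Fourier transform – the two largest spikes in the spectrum land exactly at the constituent frequencies:

```python
import cmath
import math

fs = 100   # sampling rate in Hz; 5 Hz and 12 Hz below are illustrative choices
N = fs     # one second of samples
f = [math.cos(2 * math.pi * 5 * n / fs) + math.cos(2 * math.pi * 12 * n / fs)
     for n in range(N)]

def magnitude(k):
    """Magnitude of the k-th discrete Fourier coefficient (bin k = k Hz here)."""
    return abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n, x in enumerate(f)))

spectrum = [magnitude(k) for k in range(N // 2 + 1)]

# The two largest spikes sit at the constituent frequencies
peaks = sorted(sorted(range(len(spectrum)), key=spectrum.__getitem__)[-2:])
print(peaks)  # [5, 12]
```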

If you want to read more about Fourier Transforms, then the Better Explained article is an excellent start.

**Non Euclidean Geometry – An Introduction**

It wouldn’t be an exaggeration to describe the development of non-Euclidean geometry in the 19th Century as one of the most profound mathematical achievements of the last 2000 years. Ever since Euclid (c. 330-275BC) included in his geometrical proofs an assumption (postulate) about parallel lines, mathematicians had been trying to prove that this assumption was true. In the 1800s however, mathematicians including Gauss started to wonder what would happen if this assumption was false – and along the way they discovered a whole new branch of mathematics. A mathematics where there is an absolute measure of distance, where straight lines can be curved and where angles in triangles don’t add up to 180 degrees. They discovered non-Euclidean geometry.

**Euclid’s parallel postulate (5th postulate)**

Euclid was a Greek mathematician – and one of the most influential men ever to live. Through his collection of books, *Elements*, he created the foundations of geometry as a mathematical subject. Anyone who studies geometry at secondary school will still be using results that directly stem from Euclid's *Elements* – that angles in triangles add up to 180 degrees, that alternate angles are equal, the circle theorems, how to construct line and angle bisectors. Indeed you might find it slightly depressing that you were doing nothing more than re-learning mathematics well understood over 2000 years ago!

All of Euclid’s results were based on rigorous deductive mathematical proof – if A was true, and A implied B, then B was also true. However Euclid did need to make use of a small number of definitions (such as the definition of a line, point, parallel, right angle) before he could begin his first book He also needed a small number of postulates (assumptions given without proof) – such as: * “(It is possible) to draw a line between 2 points”* and “*All right angles are equal”*

Now the first 4 of these postulates are relatively uncontroversial in being assumed as true. The 5th however drew the attention of mathematicians for centuries – as they struggled in vain to *prove* it. It is:

*If a line crossing two other lines makes the interior angles on the same side less than two right angles, then these two lines will meet on that side when extended far enough. *

This might look a little complicated, but it is made easier with the help of the sketch above. We have the line L crossing lines L1 and L2, and we have the angles A and B such that A + B is less than 180 degrees. The postulate says that the lines L1 and L2 will therefore intersect on that side – in other words, lines which are not parallel will intersect.

Euclid’s postulate can be restated in simpler (though not quite logically equivalent language) as:

*In a plane, at most one line can be drawn through any point not on a given line parallel to the given line.*

In other words, if you have a given line (l) and a point (P) not on it, then there is only 1 line (m) you can draw through the point which is parallel to the given line.

Both of these versions do seem pretty self-evident, but equally there seems no reason why they should simply be assumed to be true. Surely they can actually be proved? Well, mathematicians spent the best part of 2000 years trying without success to do so.

**Why is the 5th postulate so important?**

Because Euclid’s proofs in *Elements *were deductive in nature, that means that if the 5th postulate was false, then all the subsequent “proofs” based on this assumption would have to be thrown out. Most mathematicians working on the problem did in fact believe it was true – but were keen to actually prove it.

As an example, the 5th postulate can be used to prove that the angles in a triangle add up to 180 degrees.

The sketch above shows that if A + B is less than 180 degrees the lines will intersect. By symmetry, if A + B is more than 180 degrees, then the pair of angles on the other side of the line will sum to less than 180 degrees and the lines will intersect on that side instead. Therefore a pair of parallel lines must have A + B = 180. This gives us:

This is the familiar diagram you learn at school – with alternate and corresponding angles. If we accept the diagram above as true, we can proceed with proving that the angles in a triangle add up to 180 degrees.

Once we know that the two red angles are equal and the two green angles are equal, then we can use the fact that angles on a straight line add to 180 degrees to conclude that the angles in a triangle add to 180 degrees. But it needs the parallel postulate to be true!

In fact there are geometries in which the parallel postulate is not true – and so we can indeed have triangles whose angles don’t add to 180 degrees. More on this in the next post.

If you enjoyed this you might also like:

Non-Euclidean Geometry II – Attempts to Prove Euclid – The second part in the non-Euclidean Geometry series.

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.

**Statistics to win penalty shoot-outs**

With the World Cup upon us again we can perhaps look forward to yet another heroic defeat on penalties by England. England are in fact the worst of the major footballing nations at taking penalties, having won only 1 of their 7 shoot-outs at the Euros and World Cup. Indeed, of the 35 penalties England have taken in shoot-outs they have missed 12 – a miss rate of over 30%. Germany by comparison have won 5 out of 7 – with a miss rate of only 15%.

With the stakes in penalty shoot-outs so high there have been a number of studies to look at optimum strategies for players.

**Shoot left when ahead**

One study published in Psychological Science looked at all the penalties taken in penalty shoot-outs in the World Cup since 1982. What they found was pretty incredible – goalkeepers have a subconscious bias for diving to the right when their team is behind.

As is clear from the graphic, this is not a small bias towards the right, but a very strong one. When their team is behind the goalkeeper apparently favours his (likely) strong side 71% of the time. The strikers’ shot meanwhile continues to be placed either left or right with roughly the same likelihood as in the other situations. So, this built in bias makes the goalkeeper much less likely to help his team recover from a losing position in a shoot-out.

**Shoot high**

Analysis by Prozone looking at the data from the World Cups and European Championships between 1998 and 2010 compiled the following graphics:

The first graphic above shows the part of the goal that scoring penalties were aimed at. With most strikers aiming bottom left and bottom right it’s no surprise to see that these were the most successful areas.

The second graphic which shows where penalties were saved shows a more complete picture – goalkeepers made nearly all their saves low down. A striker who has the skill and control to lift the ball high makes it very unlikely that the goalkeeper will save his shot.

The last graphic also shows the risk involved in shooting high. This data shows where all the missed penalties (which were off-target) were being aimed. Unsurprisingly strikers who were aiming down the middle of the goal managed to hit the target! Interestingly strikers aiming for the right corner (as the goalkeeper stands) were far more likely to drag their shot off target than those aiming for the left side. Perhaps this is to do with them being predominantly right footed and the angle of their shooting arc?

**Win the toss and go first**

The Prozone data also showed the importance of winning the coin toss – 75% of the teams who went first went on to win. Equally, missing the first penalty is disastrous to a team’s chances – they went on to lose 81% of the time. The statistics also show a huge psychological role as well. Players who needed to score to keep their teams in the competition only scored a miserable 14% of the time. It would be interesting to see how these statistics are replicated over a larger data set.

**Don’t dive**

A different study which looked at 286 penalties from both domestic leagues and international competitions found that goalkeepers are actually best advised to stay in the centre of the goal rather than diving to one side. This had quite a significant effect on their ability to save penalties – increasing the likelihood of a save from around 13% to 33%. So, why don't more goalkeepers stay still? Well, again this might come down to psychology – a diving save looks more dramatic and showcases the goalkeeper's skill more than standing stationary in the centre.

**So, why do England always lose on penalties?**

There are some interesting psychological studies which suggest that England suffer more than other teams because English players are inhibited by their high public status (in other words, there is more pressure on them to perform – and hence that pressure is harder to deal with). One such study noted that the best penalty takers are the ones who compose themselves prior to the penalty. England’s players start to run to the ball only 0.2 seconds after the referee has blown – making them much less composed than other teams.

However, I think you can read too much into the psychology – the answer is probably simpler: other teams beat England because they have technically better players. English footballing culture revolves much less around technical skill than elsewhere in Europe and South America – and when it comes to penalty shoot-outs this has a dramatic effect.

As we can see from the statistics, players who are technically gifted enough to lift their shots into the top corners give the goalkeepers virtually no chance of saving them. England’s less technically gifted players have to rely on hitting it hard and low to the corner – which gives the goalkeeper a much higher percentage chance of saving them.

**Test yourself**

You can test your penalty taking skills with this online game from the Open University – choose which players are best suited to the pressure, decide what advice they need and aim your shot in the best position.

If you liked this post you might also like:

Championship Wages Predict League Position? A look at how statistics can predict where teams finish in the league.

Premier League Wages Predict League Positions? A similar analysis of Premier League teams.
