**Projective Geometry**

Geometry is a discipline which has long been subject to the mathematical fashions of the ages. In classical Greece, Euclid’s *Elements* (Euclid pictured above), with their logical axiomatic base, established the subject as the pinnacle on the “great mountain of Truth” that all other disciplines could but hope to scale. However, the status of the subject fell greatly from such heights, and by the late 18th century it was no longer a fashionable branch to study. The revival of interest in geometry was led by a group of French mathematicians at the start of the 1800s with their work on projective geometry. This then paved the way for the later development of non-Euclidean geometry and led to deep philosophical questions as to geometry’s links with reality and indeed just what exactly geometry was.

Projective geometry is the study of geometrical properties unchanged by projection. It strips away distinctions between conics, angles, distance and parallelism to create a geometry more fundamental than Euclidean geometry. For example the diagram below shows how an ellipse has been projected onto a circle. The ellipse and the circle are therefore projectively equivalent, which means that projective results proved in the circle are also true in ellipses (and other conics).

Projective geometry can be understood in terms of rays of light emanating from a point. In the diagram above, the triangle IJK drawn on the glass screen would be projected to triangle LNO on the ground. This projection preserves neither angles nor side lengths – so the triangle on the ground will have differently sized angles and sides from the one on the screen. This may seem a little strange – after all, we tend to think in terms of angles and sides in geometry – however in projective geometry distinctions about angles and lengths are stripped away (though something called the cross-ratio is still preserved).

We can see in the image above that a projection from the point E creates similar shapes when the 2 planes containing IJKL and ABCD are parallel. Therefore the Euclidean geometrical study of similar shapes can be thought of as a subset of plane positions in projective geometry.

Taking this idea further we can see that congruent shapes can be achieved if we have the centre of projection, E, “sent to infinity”. In projective geometry, parallel lines do indeed meet – at this point at infinity. Therefore with the point E sent to infinity we have a projection above yielding congruent shapes.

Projective geometry can be used with conics to associate every point (pole) with a line (polar), and vice versa. For example the point A has the associated red line, d. To find this we draw the 2 tangents from A to the conic. We then join the 2 points of tangency, B and C. This principle of duality allowed new theorems to be discovered simply by interchanging points and lines.

An example of both the symmetrical attractiveness and the mathematical potential of duality was first provided by Brianchon. In 1806 he used duality to discover the dual theorem of Pascal’s Theorem – simply by interchanging points and lines. Rarely can a mathematical discovery have been both so (mechanically) easy and yet so profoundly beautiful.

**Brianchon’s Theorem**

**Pascal’s Theorem**

**Poncelet**

Poncelet was another French pioneer of projective geometry who used the idea of points and lines being “sent to infinity” to yield some remarkable results when used as a tool for mathematical proof.

**Another version of Pascal’s Theorem:**

Poncelet claimed he could prove Pascal’s theorem (shown above), in which the 3 intersection points of opposite sides of a hexagon inscribed in a conic lie on a common line. He did this by sending the line GH to infinity. To understand this we can note that the previous point of intersection G of lines AB’ and A’B is now at infinity, which means that AB’ and A’B will now be parallel. H being at infinity likewise makes the 2 lines AC’ and A’C parallel. Poncelet now argued that because we could prove through geometrical means that B’C and BC’ were also parallel, the third intersection point I must also lie at infinity. Therefore, by proving the specific case in a circle where the line GHI has been sent to infinity, he argued that projective geometry establishes the general case of Pascal’s theorem in any conic.

**Pascal’s Theorem with intersections at infinity:**

This branch of mathematics developed quickly in the early 1800s, sparking new interest in geometry and leading to a heated debate about whether geometry should retain its “pure” Euclidean roots of diagrammatic proof, or if it was best understood through algebra. The use of points and lines at infinity marked a shift away from geometry representing “reality” as understood from a Euclidean perspective, and by the late 1800s Beltrami, Poincare and others were able to incorporate the ideas of projective geometry and lines at infinity to provide their Euclidean models of non-Euclidean space. The development of projective geometry demonstrated how a small change of perspective could have profound consequences.

**Narcissistic Numbers**

Narcissistic Numbers are defined as follows:

An n digit number is narcissistic if the sum of its digits, each raised to the nth power, equals the original number.

For example with 2 digits, say I choose the number 36:

3^{2} + 6^{2} = 45. Therefore 36 is not a narcissistic number, as my answer is not 36.

For example with 3 digits, say I choose the number 124:

1^{3} + 2^{3} + 4^{3} = 73. Therefore 124 is not a narcissistic number as my answer is not 124.

The question is: how can we find all the narcissistic numbers less than 1000 without checking 1000 different numbers? Let’s start with 1 digit numbers.

**1 digit numbers**

0^{1} = 0

1^{1} = 1

2^{1} = 2 etc.

Therefore all numbers from 0-9 are narcissistic.

**2 digit numbers**

For 2 digit numbers in the form ab we need the following:

a^{2} + b^{2} = 10a + b.

Therefore

a^{2} – 10a + b^{2} – b = 0.

Next if we choose a = 1, 2, 3, 4, 5 we get the following equations:

b^{2} – b – 9 = 0

b^{2} – b – 16 = 0

b^{2} – b – 21 = 0

b^{2} – b – 24 = 0

b^{2} – b – 25 = 0

None of these factorise to give integer solutions, therefore there are no 2 digit solutions from 10 to 59. Trying a = 6, 7, 8, 9 we find that we get the same equations as before. This is because a and 10 – a give equivalent solutions. In other words, when a = 1 we get the equation b^{2} – b – 9 = 0 and when a = 9 we also get the equation b^{2} – b – 9 = 0. This is because:

for:

a^{2} – 10a

if we substitute a = (10 – a) we get

(10 – a)^{2} – 10(10 – a) = a^{2} – 10a.

Therefore we prove that there are no 2 digit narcissistic numbers.
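We can sanity-check this with a short Python sketch of the discriminant argument above: the quadratic b^{2} – b – (10a – a^{2}) = 0 can only have an integer root if its discriminant 1 + 4(10a – a^{2}) is a perfect square.

```python
import math

# b^2 - b - (10a - a^2) = 0 has an integer root only if the
# discriminant 1 + 4(10a - a^2) is a perfect square.
def has_integer_root(a):
    disc = 1 + 4 * (10 * a - a * a)
    return math.isqrt(disc) ** 2 == disc

print(any(has_integer_root(a) for a in range(1, 10)))   # False
```

Note that a = 1 and a = 9 produce the same discriminant, confirming the symmetry between a and 10 – a.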

**3 digit numbers**

First we list the cube numbers:

1^{3} = 1, 2^{3} = 8, 3^{3} = 27, 4^{3} = 64, 5^{3} = 125, 6^{3} = 216, 7^{3} = 343, 8^{3} = 512, 9^{3} = 729.

and then consider 3 digit numbers of the form 1bc first. We need:

1^{3}+ b^{3} + c^{3} = 100 + 10b + c.

If our first digit is 1, then b^{3} + c^{3} need to add up to give us a number in the one hundreds, therefore:

99 ≤ b^{3} + c^{3} ≤ 198.

We can then check the cube numbers and see that the only possible combinations for b and c are 0 5, 5 0, 1 5, 5 1, 2 5, 5 2, 3 5, 5 3, 4 4, 4 5, 5 4. We can check these (we only have to use the calculator for half, as the reversed numbers give equivalent answers) and find that for 153 we do indeed get a narcissistic number i.e:

1^{3}+ 5^{3} + 3^{3} = 153.

Next we consider 3 digit numbers of the form 2bc. We need:

192 ≤ b^{3} + c^{3} ≤ 291

This gives the following possibilities for b and c: 6 0, 0 6, 6 1, 1 6, 2 6, 6 2, 6 3, 3 6, 6 4, 4 6, 5 5.

None of these give narcissistic numbers.

Next we consider 3 digit numbers of the form 3bc. We need:

273 ≤ b^{3} + c^{3} ≤ 372

This gives the following possibilities for b and c: 6 4, 4 6, 6 5, 5 6, 7 1, 1 7, 7 2, 2 7, 7 3, 3 7, 7 0, 0 7.

Checking these we find 2 more narcissistic numbers:

370 = 3^{3}+ 7^{3} + 0^{3}

371= 3^{3}+ 7^{3} + 1^{3}

Using the same method, we can find that the only possibilities for 4bc are: 5 6, 6 5, 6 6, 7 1, 1 7, 7 2, 2 7, 7 3, 3 7, 7 4, 4 7, 7 0, 0 7. Checking these gives us 1 more narcissistic number:

407= 4^{3}+ 0^{3} + 7^{3}

We can carry on with this method, checking up to the form 9bc (it gets easier as there are fewer combinations possible), but we will find no more narcissistic numbers. Therefore we have all the narcissistic numbers less than 1000:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407.
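This list can be double-checked with a short brute-force script (a minimal Python sketch of the definition):

```python
# A number with n digits is narcissistic if the nth powers of its
# digits sum back to the number itself.
def is_narcissistic(n):
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)

print([n for n in range(1000) if is_narcissistic(n)])
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407]
```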

**Is there a limit to how many narcissistic numbers there are?**

Surprisingly there is a limit – there are exactly 88 narcissistic numbers in base 10. To see why we can consider the following:

In 3 digits the biggest number we can choose is 999. That would give 9^{3}+ 9^{3} + 9^{3} (or 3(9)^{3}). This needs to give a number in the hundreds (10^{2}), otherwise it would be impossible to achieve a narcissistic number. Therefore with an n digit number the largest sum we can make is n(9)^{n}, and if we can’t reach a number of size 10^{n-1}, then a narcissistic number is not possible. If we can prove that the inequality:

n(9)^{n} < 10^{n-1}

is true for some values of n, then there will be an upper bound to the narcissistic numbers we can make. We could simply plot this directly, but let’s see if we can convince ourselves it’s true for some n without using graphical software first. Let’s see if we can find an equality:

n(9)^{n} = 10^{n-1}

First we take log base 10 of both sides

log n(9)^{n} = n-1

log(n) + nlog(9) = n-1

n(log9 -1) + logn +1 = 0

Next we make the substitution logn = u and therefore 10^{u} = n. This gives:

10^{u}(log9 -1) + u + 1 = 0.

Now we can clearly see that 10^{u} will grow much larger than u + 1, so any root must occur when u is small. Let’s see: when u = 1 we get a positive number (as log9 – 1 is a negative number close to 0), but when u = 2 we get a negative number. Therefore we have a root between u = 1 and u = 2. Given that we made the substitution logn = u, that means the crossover point is for n somewhere between 10^{1} and 10^{2}, and the inequality n(9)^{n} < 10^{n-1} will hold for all n beyond it.

Using Wolfram we can see that the equality is reached when u = 1.784, i.e when n = 10^{1.784} or approx 60.8. Therefore we can see that when we have more than 60 digit numbers, it is no longer possible to make narcissistic numbers.
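If you want to reproduce the root without Wolfram, a simple bisection sketch of the equation above works (with u = logn as before, and the sign change between u = 1 and u = 2 that we just established):

```python
import math

# g(u) = 10^u (log10(9) - 1) + u + 1, with u = log10(n).
def g(u):
    return 10 ** u * (math.log10(9) - 1) + u + 1

lo, hi = 1.0, 2.0    # g(1) > 0 and g(2) < 0, so the root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)

u = (lo + hi) / 2
print(round(u, 3), round(10 ** u, 1))   # 1.784 60.8
```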

**Quantum Mechanics – Statistical Universe**

Quantum mechanics is the name for the mathematics that can describe physical systems on extremely small scales. When we deal with the macroscopic – i.e scales that we experience in our everyday physical world, then Newtonian mechanics works just fine. However on the microscopic level of particles, Newtonian mechanics no longer works – hence the need for quantum mechanics.

Quantum mechanics is both very complicated and very weird – I’m going to try and give a very simplified (though not simple!) example of how *probabilities* are at the heart of quantum mechanics. Rather than speaking with certainty about the property of an object as we can in classical mechanics, we need to talk about the probability that it holds such a property.

For example, one property of particles is *spin*. We can create a particle with the property of either *up* spin or *down* spin. We can visualise this as an arrow pointing up or down:

We can then create an apparatus (say the slit below parallel to the z axis) which measures whether the particle is in either up state or down state. If the particle is in up spin then it will return a value of +1 and if it is in down spin then it will return a value of -1.

So far so normal. But here is where things get weird. If we then rotate the slit 90 degrees clockwise so that it is parallel to the x axis, we would expect from classical mechanics to get a reading of 0. i.e the “arrow” will not fit through the slit. However that is not what happens. Instead we will still get readings of -1 or +1. However if we run the experiment a large number of times we find that the mean average reading will indeed be 0!

What has happened is that the act of measuring the particle with the slit has changed the state of the particle. Say it was previously +1, i.e in *up* spin, by measuring it with the newly rotated slit we have forced the particle into a new state of either pointing right (*right* spin) or pointing left (*left* spin). Our rotated slit will then return a value of +1 if the particle is in right spin, and will return a value of -1 if the particle is in left spin.

In this case the probability that the apparatus will return a value of +1 is 50% and the probability that the apparatus will return a value of -1 is also 50%. Therefore when we run this experiment many times we get the average value of 0. In this way classical mechanics emerges as a probabilistic approximation of repeated particle interactions.

We can look at a slightly more complicated example – say we don’t rotate the slit 90 degrees, but instead rotate it an arbitrary number of degrees from the z axis as pictured below:

Here the slit was initially parallel to the z axis in the x,z plane (i.e y = 0), and has been rotated Θ degrees. So the question is: what is the probability that our previously *up* spin particle will return a value of +1 when measured through this new slit?

The equations above give the probabilities of returning a +1 spin or a -1 spin depending on the angle of orientation. So in the case of a 90 degree orientation we have both P(+1) and P(-1) = 1/2 as we stated earlier. An orientation of 45 degrees would have P(+1) = 0.85 and P(-1) = 0.15. An orientation of 10 degrees would have P(+1) = 0.99 and P(-1) = 0.01.

The statistical average meanwhile is given by the above formula. If we rotate the slit by Θ degrees from the z axis in the x,z plane, then run the experiment many times, we will get a long term average of cosΘ. As we have seen before, when Θ = 90 this means we get an average value of 0. If Θ = 45 degrees we would get an average reading of √2/2.
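As a sketch (assuming the standard formula P(+1) = cos²(Θ/2), which reproduces the probabilities quoted above), we can check these numbers and simulate the long-run average:

```python
import math, random

# For an 'up' particle measured through a slit rotated theta degrees,
# P(+1) = cos^2(theta/2); the long-run average reading is cos(theta).
def p_plus(theta_deg):
    return math.cos(math.radians(theta_deg) / 2) ** 2

print(round(p_plus(90), 2), round(p_plus(45), 2), round(p_plus(10), 2))
# 0.5 0.85 0.99

# Simulate repeated measurements at 45 degrees: the mean of the
# +1/-1 readings approaches cos(45 degrees), roughly 0.707.
random.seed(0)
n = 200_000
mean = sum(1 if random.random() < p_plus(45) else -1 for _ in range(n)) / n
print(mean)   # close to 0.707
```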

This gives a very small snapshot into the ideas of quantum mechanics and the crucial role that probability plays in understanding quantum states. If you found that difficult, then don’t worry you’re in good company. As Richard Feynman the legendary physicist once said, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

**Modeling hours of daylight**

Desmos has a nice student activity (on teacher.desmos.com) modeling the number of hours of daylight in Florida versus Alaska – which both produce a nice sine curve when plotted on a graph. So let’s see if this relationship also holds between Phuket and Manchester.

First we can find the daylight hours from this site, making sure to convert the times given to decimals of hours.

**Phuket**

Phuket has the following distribution of hours of daylight (taking the reading from the first of each month and setting 1 as January)

**Manchester **

Manchester has much greater variation and is as follows:

Therefore when we plot them together (Phuket in green and Manchester in blue) we get the following 2 curves:

We can see that these very closely fit sine curves, indeed we can see that the following regression lines fit the curves very closely:

**Manchester:**

**Phuket:**

For Manchester I needed to set the value of b (see what happens if you don’t do this!). Because we are working with sine graphs, the value of d will give the equation of the axis of symmetry of the graph, which will also be the average hours of daylight over the year. We can see therefore that even though there is a huge variation between the hours of daylight in the 2 places, they both get on average about the same amount of daylight across the year (12.3 hours versus 12.1 hours).
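We can illustrate this “same average, different amplitude” point with a small sketch. The amplitudes and phase below are illustrative values only, not the fitted regression parameters; the d values are the averages found above.

```python
import math

# Hypothetical models of the form a*sin(2*pi*(m - c)/12) + d, where m
# is the month (1-12). Amplitudes 0.6 and 4.5 are illustrative only;
# d = 12.3 (Phuket) and d = 12.1 (Manchester) are the averages above.
def daylight(m, amplitude, average):
    return amplitude * math.sin(2 * math.pi * (m - 3.5) / 12) + average

phuket = [daylight(m, 0.6, 12.3) for m in range(1, 13)]
manchester = [daylight(m, 4.5, 12.1) for m in range(1, 13)]

# Sampled over a full 12-month period the sine term averages to zero,
# so each city's yearly mean equals its d value.
print(round(sum(phuket) / 12, 1), round(sum(manchester) / 12, 1))  # 12.3 12.1
```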

**Further investigation:**

Does the relationship still hold when looking at hours of sunshine rather than daylight? How many years would we expect our model to be accurate for? It’s possible to use sine waves to model a wide range of natural phenomena, such as tide heights and musical notes – so it’s also possible to investigate in this direction as well.

Cartoon from here

**The Gini Coefficient – Measuring Inequality **

The Gini coefficient is a value ranging from 0 to 1 which measures inequality. 0 represents perfect equality – i.e everyone in a population has exactly the same wealth. 1 represents complete inequality – i.e 1 person has all the wealth and everyone else has nothing. As you would expect, countries will always have a value somewhere between these 2 extremes. The way it’s calculated is best seen through the following graph (from here):

The Gini coefficient is calculated as the area of A divided by the area of A+B. As the area of A decreases then the curve which plots the distribution of wealth (we can call this the Lorenz curve) approaches the line y = x. This is the line which represents perfect equality.

**Inequality in Thailand**

The following graph will illustrate how we can plot a curve and calculate the Gini coefficient. First we need some data. I have taken the following information on income distribution from the 2002 World Bank data on Thailand where I am currently teaching:

Thailand:

The bottom 20% of the population have 6.3% of the wealth

The next 20% of the population have 9.9% of the wealth

The next 20% have 14% of the wealth

The next 20% have 20.8% of the wealth

The top 20% have 49% of the wealth

I can then write this in a cumulative frequency table (converting % to decimals):

Here the x axis represents the cumulative percentage of the population (measured from lowest to highest), and the y axis represents the cumulative wealth. This shows, for example, that the bottom 80% of the population own 51% of the wealth. This can then be plotted as a graph below (using Desmos):

From the graph we can see that Thailand has quite a lot of inequality – after all the top 20% have just under 50% of the wealth. The blue line represents how a perfectly equal society would look.

To find the Gini Coefficient we first need to find the area between the 2 curves. The area underneath the blue line represents the area A +B. This is just the area of a triangle with length and perpendicular height 1, therefore this area is 0.5.

The area under the green curve can be found using the trapezium rule, 0.5(a+b)h. Doing this for the first trapezium we get 0.5(0+0.063)(0.2) = 0.0063. The second trapezium is 0.5(0.063+0.162)(0.2) and so on. Adding these areas all together we get a total trapezium area of 0.3074. Therefore we get the area between the two curves as 0.5 – 0.3074 ≈ 0.1926

The Gini coefficient is then given by 0.1926/0.5 = 0.3852.
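The whole trapezium calculation can be reproduced in a few lines of Python:

```python
# Trapezium-rule estimate of the Gini coefficient from quintile shares
# (Thailand, World Bank 2002 data quoted above).
shares = [0.063, 0.099, 0.14, 0.208, 0.49]

# Build the cumulative wealth points: 0, 0.063, 0.162, 0.302, 0.51, 1.0
cumulative = [0.0]
for s in shares:
    cumulative.append(cumulative[-1] + s)

h = 0.2   # each quintile is 20% of the population
area_under_lorenz = sum(0.5 * (a + b) * h
                        for a, b in zip(cumulative, cumulative[1:]))
gini = (0.5 - area_under_lorenz) / 0.5
print(round(gini, 4))   # 0.3852
```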

The actual World Bank calculation for Thailand’s Gini coefficient in 2002 was 0.42 – so we have slightly underestimated the inequality in Thailand. We would get a more accurate estimate by taking more data points, or by fitting a curve through our plotted points and then integrating. Nevertheless this is a good demonstration of how the method works.

In this graph (from here) we can see a similar plot of wealth distribution – here we have quintiles on the x axis (1st quintile is the bottom 20% etc). This time we can compare Hungary – which shows a high level of equality (the bottom 80% of the population own 62.5% of the wealth) and Namibia – which shows a high level of inequality (the bottom 80% of the population own just 21.3% of the wealth).

**How unequal is the world?**

We can apply the same method to measure world inequality. One way to do this is to calculate the per capita income of all the countries in the world and then to work out the share of the total global per capita income the (say) bottom 20% of the countries have. This information is represented in the graph above (from here). It shows that there was rising inequality (i.e the richer countries were outperforming the poorer countries) in the 2 decades prior to the end of the century, but that there has been a small decline in inequality since then.

If you want to do some more research on the Gini coefficient you can use the following resources:

The intmaths site article on this topic – which goes into more detail and examples of how to calculate the Gini coefficient

The ConferenceBoard site which contains a detailed look at world inequality

The World Bank data on the Gini coefficients of different countries.

**Is Intergalactic space travel possible?**

The Andromeda Galaxy is around 2.5 million light years away – a distance so large that even light, traveling at 300,000,000 m/s, has taken 2.5 million years to arrive. The question is: would it ever be possible for a journey to the Andromeda Galaxy to be completed in a human lifetime? With the speed of light a universal speed limit, it would be reasonable to argue that no journey greater than around 100 light years would be possible in the lifespan of a human – but remarkably this is not the case. We’re going to show that a journey to Andromeda would indeed be possible within a human lifespan. All that’s needed (!) is a rocket which is able to achieve constant acceleration, and we can arrive with plenty of time to spare.

**Time dilation**

To understand how this is possible, we need to understand that as the speed of the journey increases, then time dilation becomes an important factor. The faster the rocket is traveling the greater the discrepancy between the internal body clock of the astronaut on the rocket and an observer on Earth. Let’s see how that works in practice by using the above equation.

Here we have

t(T): The time elapsed from the perspective of an observer on Earth

T: The time elapsed from the perspective of an astronaut on the rocket

c: The speed of light approx 300,000,000 m/s

a: The constant acceleration we assume for our rocket. For this example we will take a = 9.81 m/s^{2}, which is the same as the gravity experienced on Earth. This would be the most natural for a human environment. The acceleration is measured relative to an inertial observer.

Sinh(x): This is the hyperbolic sine function, which can be defined as sinh(x) = (e^{x} – e^{-x})/2.

We should note that all our units are in meters, seconds and m/s^{2} therefore when the astronaut experiences 1 year passing on this rocket, we first need to convert this to seconds: 1 year = 60 x 60 x 24 x 365 = 31,536,000 seconds. Therefore T = 31,536,000 and:

which would give us the time experienced on Earth in seconds, therefore by dividing by (60 x 60 x 24 x 365) we can arrive at the time experienced on Earth in years:

Using either Desmos or Wolfram Alpha this gives an answer of 1.187. This means that 1 year experienced on the rocket is experienced as 1.19 years on Earth. Now we have our formula we can easily calculate other values. Two years is:

which gives an answer of 3.755 years. So 2 years on the rocket is experienced as 3.76 years on Earth. As we carry on with the calculations, and as we see the full effects of time dilation we get some startling results:

After 10 years on the spacecraft, civilization on Earth has advanced (or not) around 15,000 years. After 20 years on the rocket, 445,000,000 years have passed on Earth, and after 30 years around 13,500,000,000,000 years, which is around 1000 times greater than the age of the Universe post Big Bang. So, as we can see, time is no longer a great concern.
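These figures can be reproduced with a minimal sketch of the t(T) formula (here using the exact value c = 299,792,458 m/s, which matches the quoted numbers slightly better than the rounded value):

```python
import math

c = 299792458               # speed of light, m/s
a = 9.81                    # constant acceleration, m/s^2
year = 60 * 60 * 24 * 365   # seconds in a year

# Earth time elapsed, t = (c/a)*sinh(a*T/c), for T years of rocket time.
def earth_years(rocket_years):
    T = rocket_years * year
    return (c / a) * math.sinh(a * T / c) / year

for T in (1, 2, 10, 20, 30):
    print(T, earth_years(T))
```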

**Distance travelled**

Next let’s look at how far we can reach from Earth. This is given by the following equation:

Here we have

x(T): The distance travelled from Earth

T, c and a as before.

Cosh(x): This is the hyperbolic cosine function, which can be defined as cosh(x) = (e^{x} + e^{-x})/2.

Again we note that we are measuring in meters and seconds. Therefore to find the distance travelled in one year we convert 1 year to seconds as before:

Next we note that this will give an answer in meters, so we can convert to light years by dividing by 9.461×10^{15}

Again using Wolfram Alpha or Desmos we find that after one year the spacecraft will be 0.563 light years from Earth. After two years we have:

which gives us 2.91 light years from Earth. Calculating the next values gives us the following table:

We can see that as our spacecraft approaches the speed of light, we will travel the expected number of light years as measured by an observer on Earth.
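A similar sketch evaluates the distance formula, x = (c²/a)(cosh(aT/c) – 1), converting the result from metres to light years:

```python
import math

c = 299792458.0             # speed of light, m/s
a = 9.81                    # constant acceleration, m/s^2
year = 60 * 60 * 24 * 365   # seconds in a year
light_year = 9.461e15       # metres in a light year

# Distance from Earth after T years of rocket time, in light years:
# x = (c^2/a) * (cosh(a*T/c) - 1)
def distance_ly(rocket_years):
    T = rocket_years * year
    return (c ** 2 / a) * (math.cosh(a * T / c) - 1) / light_year

for T in (1, 2, 10, 20, 30):
    print(T, distance_ly(T))
```

After 20 years of rocket time the distance comfortably exceeds the 2.5 million light years to Andromeda.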

So we can see that we would easily reach the Andromeda Galaxy within 20 years on a spacecraft and could have spanned the size of the observable universe within 30 years. Now, all we need is to build a spaceship capable of constant acceleration, and resolve how a human body could cope with such forces and we’re there!

**How likely is this?**

Well, the technology needed to build a spacecraft capable of constant acceleration to get close to light speed is not yet available – but there are lots of interesting ideas about how these could be designed in theory. One of the most popular ideas is to make a “solar sail” – which would collect light from the Sun (or any future nearby stars) to propel it along on its journey. Another alternative would be a laser sail – which rather than relying on the Sun, would receive pin-point laser beams from the Earth.

Equally we are a long way from being able to send humans – much more likely is that the future of spaceflight will be carried out by machines. Scientists have suggested that if the spacecraft payload was around 1 gram (say either a miniaturized robot or digital data depending on the mission’s aim), a solar sail or laser sail could be feasibly built which would be sufficient to achieve 25% the speed of light.

NASA have begun launching continuous acceleration spacecraft powered by the Sun. In 2018 they launched the Near-Earth Asteroid Scout. This will unfurl a solar sail and be propelled to a speed of 28,600 m/s. Whilst this is a long way from near-light speeds, it is a proof of concept and does show one potential way that interstellar travel could be achieved.

You can read more about the current scientific advances on solar sails here, and some more on the mathematics of space travel here.

This is a nice example of using some maths to solve a puzzle from the mindyourdecisions youtube channel (screencaptures from the video).

**How to Avoid The Troll: A Puzzle**

In these situations it’s best to look at the extreme case first so you get some idea of the problem. If you are feeling particularly pessimistic you could assume that the troll is always going to be there. Therefore you would head to the top of the barrier each time. This situation is represented below:

**The Pessimistic Solution:**

Another basic strategy would be the optimistic strategy. Basically head in a straight line hoping that the troll is not there. If it’s not, then the journey is only 2 miles. If it is, then you have to make a lengthy detour. This situation is shown below:

**The Optimistic Solution:**

The expected value was worked out here by doing 0.5 x (2) + 0.5 x (2 + root 2) ≈ 2.71.

The question is now: is there a better strategy than either of these? An obvious possibility is heading for the point halfway along where the barrier might be. This would make a triangle of base 1 and height 1/2, which has a hypotenuse of root (5/4). In the best case scenario we would then have a total distance of 2 x root (5/4). In the worst case scenario we would have a total distance of root (5/4) + 1/2 + root 2. We find the expected value by multiplying both by 0.5 and adding. This gives 2.63 (2 dp). But can we do any better? Yes – by using some algebra and then optimising to find a minimum.

**The Optimisation Solution:**

To minimise this function, we need to differentiate and find when the gradient is equal to zero, or draw a graph and look for the minimum. You could differentiate the function by hand, but here I’ve used Wolfram Alpha to solve it for us. Wolfram Alpha is incredibly powerful – and also very easy to use. Here is what I entered:

and here is the output:

So, when we head for a point exactly 1/(2 root 2) up the potential barrier, we minimise the distance travelled to around 2.62 miles.

So, there we go, we have saved 0.21 miles from our most pessimistic model, and 0.01 miles from our best guess model of heading for the midpoint. Not a huge difference – but nevertheless we’ll save ourselves a few seconds!
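The optimisation can also be done numerically. Here is a sketch of the expected-distance function (assuming the 50/50 troll probability and the unit distances above) with a simple grid search:

```python
import math

# Expected distance if we first head to a point y up the potential
# barrier (0 <= y <= 1), with the troll present half the time.
def expected_distance(y):
    to_barrier = math.sqrt(1 + y ** 2)
    no_troll = to_barrier               # symmetric straight leg onwards
    troll = (1 - y) + math.sqrt(2)      # climb to the top, then descend
    return to_barrier + 0.5 * no_troll + 0.5 * troll

# Simple grid search for the minimum.
best_y = min((y / 10000 for y in range(10001)), key=expected_distance)
print(best_y, expected_distance(best_y))
# best_y is about 1/(2 root 2), roughly 0.3536; the minimum is about 2.62
```

Note that y = 0 recovers the optimistic strategy (about 2.71 miles), y = 1 the pessimistic one (2 root 2, about 2.83 miles), and y = 0.5 the midpoint guess (about 2.63 miles).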

This is a good example of how an exploration could progress – once you get to the end you could then look at changing the question slightly, perhaps the troll is only 1/3 of the distance across? Maybe the troll appears only 1/3 of the time? Could you even generalise the results for when the troll is y distance away or appears z percent of the time?

This is a very famous paradox from the Greek philosopher Zeno – who argued that a runner (Achilles) who constantly halved the distance between himself and a tortoise would never actually catch the tortoise. The video above explains the concept.

There are two slightly different versions to this paradox. The first version has the tortoise as stationary, and Achilles as constantly halving the distance, but never reaching the tortoise (technically this is called the dichotomy paradox). The second version is where Achilles always manages to run to the point where the tortoise was previously, but by the time he reaches that point the tortoise has moved a little bit further away.

**Dichotomy Paradox**

The first version we can think of as follows:

Say the tortoise is 2 metres away from Achilles. Initially Achilles halves this distance by travelling 1 metre. He halves this distance again by travelling a further 1/2 metre. Halving again he is now 1/4 metres away. This process is infinite, and so Zeno argued that in a finite length of time you would never actually reach the tortoise. Mathematically we can express this idea as an infinite summation of the distances travelled each time:

1 + 1/2 + 1/4 + 1/8 …

Now, this is actually a geometric series – which has first term a = 1 and common ratio r = 1/2. Therefore we can use the infinite summation formula for a geometric series (which was derived about 2000 years after Zeno!):

sum = a/(1-r)

sum = 1/(1-0.5)

sum = 2

This shows that the summation does in fact converge – and so Achilles would actually reach the tortoise that was 2 metres away. There is still something of a sleight of hand being employed here, however – given an *infinite* number of steps we have shown that Achilles would reach the tortoise, but what about reaching the tortoise in a *finite* length of time? Well, as the distances get ever smaller, the time required to traverse them also gets ever closer to zero, so we can say that as the distance converges to 2 metres, the time taken will also converge to a finite number.

There is an alternative method to showing that this is a convergent series:

S = 1+ 1/2 + 1/4 + 1/8 + 1/16 + …

0.5S = 1/2+ 1/4 + 1/8 + 1/16 + …

S – 0.5S = 1

0.5S = 1

S = 2

Here we notice that in doing S – 0.5S all the terms will cancel out except the first one.
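A few lines of code show the partial sums creeping up towards the limit:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... approach, but never exceed, 2.
total, term = 0.0, 1.0
for _ in range(20):
    total += term
    term /= 2
print(total)   # within two millionths of 2 after just 20 terms
```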

**Achilles and the Tortoise**

The second version also makes use of geometric series. If we say that the tortoise has been given a 10 m head start, and that whilst the tortoise runs at 1 m/s, Achilles runs at 10 m/s, we can try to calculate when Achilles would catch the tortoise. So in the first instance, Achilles runs to where the tortoise was (10 metres away). But because the tortoise runs at 1/10th the speed of Achilles, he is now a further 1m away. So, in the second instance, Achilles now runs to where the tortoise now is (a further 1 metre). But the tortoise has now moved 0.1 metres further away. And so on to infinity.

This is represented by a geometric series:

10 + 1 + 0.1 + 0.01 …

Which has first term a = 10 and common ratio r = 0.1. So using the same formula as before:

sum = a/(1-r)

sum = 10/(1-0.1)

sum = 11.11m

So, again we can show that because this geometric series converges to a finite value (11.11), after a finite time Achilles will indeed catch the tortoise (11.11 m from where Achilles started).
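As a cross-check, elementary kinematics gives the same answer: Achilles closes the 10 m gap at 10 – 1 = 9 m/s, so he catches the tortoise after 10/9 seconds, having run 100/9 metres.

```python
# The catch point by relative speed agrees with the geometric series.
head_start, v_tortoise, v_achilles = 10, 1, 10

catch_time = head_start / (v_achilles - v_tortoise)       # 10/9 seconds
catch_distance = v_achilles * catch_time                  # 100/9 metres
series_sum = head_start / (1 - v_tortoise / v_achilles)   # a/(1 - r)

print(round(catch_distance, 2), round(series_sum, 2))   # 11.11 11.11
```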

We often think of mathematics and philosophy as completely distinct subjects – one based on empirical measurement, the other on thought processes – but back in the day of the Greeks there was no such distinction. The resolution of Zeno’s paradox by use of calculus and limits to infinity some 2000 years after it was first posed is a nice reminder of the power of mathematics in solving problems across a wide range of disciplines.

**The Chess Board Problem**

The chess board problem is nothing to do with Zeno (it was first recorded about 1000 years ago) but is nevertheless another interesting example of the power of geometric series. If I put 1 grain of rice on the first square of a chess board, 2 grains of rice on the second square, 4 grains on the third square, and keep doubling, how much rice in total will be on the chess board by the time I finish the 64th square?

The mathematical series will be:

1 + 2 + 4 + 8 + 16 + …

So a = 1 and r = 2

Sum = a(1 - r^{64})/(1 - r)

Sum = (1 - 2^{64})/(1 - 2)

Sum = 2^{64} - 1

Sum = 18,446,744,073,709,551,615
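This total is easy to verify by direct summation (a quick check of my own):

```python
# Total grains: direct sum versus the finite geometric series formula
# a(1 - r^n)/(1 - r) with a = 1, r = 2, n = 64.
total = sum(2**k for k in range(64))   # 1 + 2 + 4 + ... + 2^63
formula = (1 - 2**64) // (1 - 2)
assert total == formula == 2**64 - 1

print(total)  # 18446744073709551615
```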

This is such a large number that, if the grains were laid end to end, the rice would reach all the way to the star Alpha Centauri and back 2 times.

**Fourier Transform**

The Fourier Transform and the associated Fourier series is one of the most important mathematical tools in physics. Physicist Lord Kelvin remarked in 1867:

*“Fourier’s theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics.”*

The Fourier Transform deals with time-based waves – and these are one of the fundamental building blocks of the natural world. Sound, light, gravity, radio signals, earthquakes and digital compression are just some of the phenomena that can be understood through waves. It’s not an exaggeration therefore to see the study of waves as one of the most important applications of mathematics in our modern life.

Here are some real life applications in a wide range of fields:

- JPEG picture and MP3 sound compression – allowing data to be reduced in size
- Analysing DNA sequences – allowing identification of specific regions of genetic code
- Apps like Shazam which can recognise a song from a sample of music
- Processing mobile phone network data and WIFI data
- Signal processing – in everything from acoustic guitar amps to electrical currents through capacitors
- Radio telescopes – used to construct images of the night sky
- Buildings’ natural frequencies – architects can design buildings to better withstand earthquakes
- Medical imaging such as MRI scans

There are many more applications – this Guardian article is a good introduction to some others.

So, what is the Fourier Transform? It takes a graph, such as f(t) = cos(at) below:

and transforms it into:

From the above cosine graph we can see that it is a periodic, time-based function. Time is plotted on the x axis, and this graph tells us the value of f(t) at any given time. The graph below with 2 spikes represents the same information in a different way: it shows the frequency (plotted on the x axis) of the cosine graph. The frequency of a function measures how many times it repeats per second, so for a graph f(t) = cos(at) it can be calculated as the inverse of the period. The period of cos(at) is 2pi/a, so it has a frequency of a/2pi.

Therefore the frequency graph for cos(at) will have spikes at a/2pi and -a/2pi.

But how does this new representation help us? Well most real life waves are much more complicated than simple sine or cosine waves – like this trumpet sound wave below:

But the remarkable thing is that every continuous wave can be modelled as the sum of sine and cosine waves. So we can break down the very complicated wave above into (say) cos(x) + sin(2x) + 2cos(4x). This new representation would be much easier to work with mathematically.

The way to find out what these constituent sine and cosine waves are that make up a complicated wave is to use the Fourier Transform. By transforming a function into one which shows the frequency peaks we can work out what the sine and cosine parts are for that function.

For example, this transformed graph above would show which frequency sine and cosine functions to use to model our original function. Each peak represents a sine or cosine function of a specific frequency. Add them all together and we have our function.

The maths behind this does get a little complicated. I’ll try and talk through the method using the function f(t) = cos(at).

So, the function we want to break down into its constituent cosine and sine waves is cos(at). Now, obviously this function can be represented just with cos(at) – but this is a good demonstration of how to use the maths for the Fourier Transform. We already know that this function has a frequency of a/2pi – so let’s see if we can find this frequency using the Transform.

This is the formula for the Fourier Transform. We “simply” replace the f(t) with the function we want to transform – then integrate.
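For reference, the formula being described is, in one common convention (the sign of the exponent and the placement of the 2pi factor vary between textbooks):

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
```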

To make this easier we use the exponential formula for cosine. When we have f(t) = cos(at) we can rewrite this as the function above in terms of exponential terms.
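The exponential form of cosine follows from Euler's formula:

```latex
\cos(at) = \frac{e^{iat} + e^{-iat}}{2}
```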

We substitute this version of f(t) into the formula.

Next we multiply out the exponential terms in the bracket (remember the laws of indices), and then split the integral into 2 parts. The reason we have grouped the powers in this way is because of the following step.
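Written out (a sketch using the standard exponential form of cosine and the transform convention with e^{-iωt}), the two grouped integrals are:

```latex
F(\omega) = \frac{1}{2}\int_{-\infty}^{\infty} e^{i(a-\omega)t}\, dt
          + \frac{1}{2}\int_{-\infty}^{\infty} e^{-i(a+\omega)t}\, dt
```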

This is the delta function: a function which is zero for all values apart from when its argument is zero. As you can see, it is very closely related to the integrals we have – multiplying both sides by pi will get the integral into the correct form.
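One standard way to state the delta function identity being used (a reconstruction, as the original image is not shown here):

```latex
\delta(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega t}\, dt
```

Multiplying both sides by pi gives (1/2) times the integral equal to pi times the delta function, which is exactly the form of each integral in the split expression.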

So, the integral can be simplified as this above.

So, our function F will be zero for all values except where the delta function is zero. This gives us the above equations.
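Putting the steps together, the result described is (my reconstruction of the missing formula):

```latex
F(\omega) = \pi\,\delta(\omega - a) + \pi\,\delta(\omega + a)
```

This is zero except when ω - a = 0 or ω + a = 0, i.e. ω = ±a.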

Therefore solving these equations we get an answer for the frequency of the graph.

This frequency agrees with the frequency we already expected to find for cos(at).
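The same conclusion can be reached numerically. A minimal sketch (my own, with sample values I have chosen for illustration): a direct discrete Fourier transform of samples of cos(at) should peak at the bin corresponding to frequency a/2pi.

```python
import cmath
import math

a = 2 * math.pi * 5                  # chosen so that a/(2*pi) = 5 Hz
fs, T = 100, 2                       # sample rate (Hz) and duration (s)
N = fs * T                           # 200 samples
samples = [math.cos(a * n / fs) for n in range(N)]

def dft_magnitude(k):
    """Magnitude of DFT bin k; bin k corresponds to frequency k/T Hz."""
    return abs(sum(x * cmath.exp(-2j * math.pi * k * n / N)
                   for n, x in enumerate(samples)))

# Search the bins below the Nyquist frequency for the largest spike.
peak_bin = max(range(N // 2), key=dft_magnitude)
peak_freq = peak_bin / T

print(peak_freq)  # 5.0
```

The spike lands at 5 Hz, matching a/2pi for the chosen a.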

A slightly more complicated example would be to follow the same process but this time with the function f(t) = cos(at) + cos(bt). If the Fourier transform works correctly it should recognise that this function is composed of one cosine function with frequency a/2pi and another cosine function of b/2pi. If we follow through exactly the same method as above (we can in effect split the function into cos(at) and cos(bt) and do both separately), we should get:

This therefore is zero for all values except for when we have frequencies of a/2pi and b/2pi. So the Fourier Transform has correctly identified the constituent parts of our function.

If you want to read more about Fourier Transforms, then the Better Explained article is an excellent start.

**Non-Euclidean Geometry – An Introduction**

It wouldn’t be an exaggeration to describe the development of non-Euclidean geometry in the 19th Century as one of the most profound mathematical achievements of the last 2000 years. Ever since Euclid (c. 330–275 BC) included in his geometrical proofs an assumption (postulate) about parallel lines, mathematicians had been trying to prove that this assumption was true. In the 1800s however, mathematicians including Gauss started to wonder what would happen if this assumption was false – and along the way they discovered a whole new branch of mathematics. A mathematics where there is an absolute measure of distance, where straight lines can be curved and where angles in triangles don’t add up to 180 degrees. They discovered non-Euclidean geometry.

**Euclid’s parallel postulate (5th postulate)**

Euclid was a Greek mathematician – and one of the most influential men ever to live. Through his collection of books, *Elements*, he created the foundations of geometry as a mathematical subject. Anyone who studies geometry at secondary school will still be using results that directly stem from Euclid’s *Elements* – that angles in triangles add up to 180 degrees, that alternate angles are equal, the circle theorems, how to construct line and angle bisectors. Indeed you might find it slightly depressing that you were doing nothing more than re-learn mathematics well understood over 2000 years ago!

All of Euclid’s results were based on rigorous deductive mathematical proof – if A was true, and A implied B, then B was also true. However Euclid did need to make use of a small number of definitions (such as the definition of a line, point, parallel, right angle) before he could begin his first book. He also needed a small number of postulates (assumptions given without proof) – such as: *“(It is possible) to draw a line between 2 points”* and *“All right angles are equal”*.

Now the first 4 of these postulates are relatively uncontroversial in being assumed as true. The 5th however drew the attention of mathematicians for centuries – as they struggled in vain to *prove* it. It is:

*If a line crossing two other lines makes the interior angles on the same side less than two right angles, then these two lines will meet on that side when extended far enough.*

This might look complicated, but it is made easier with the help of the sketch above. We have the line L crossing lines L1 and L2, and we have the angles A and B such that A + B is less than 180 degrees. The postulate says that the lines L1 and L2 will then intersect – in other words, lines which are not parallel will intersect.

Euclid’s postulate can be restated in simpler (though not quite logically equivalent) language as:

*In a plane, at most one line parallel to a given line can be drawn through any point not on that line.*

In other words, if you have a given line (l) and a point (P), then there is only 1 line you can draw which is parallel to the given line and through the point (m).

Both of these versions do seem pretty self-evident, but equally there seems no reason why they should simply be assumed to be true. Surely they can actually be proved? Well, mathematicians spent the best part of 2000 years trying without success to do so.

**Why is the 5th postulate so important?**

Because Euclid’s proofs in *Elements* were deductive in nature, if the 5th postulate were false then all the subsequent “proofs” based on this assumption would have to be thrown out. Most mathematicians working on the problem did in fact believe it was true – but were keen to actually prove it.

As an example, the 5th postulate can be used to prove that the angles in a triangle add up to 180 degrees.

The sketch above shows that if A + B is less than 180 degrees the lines will intersect. Therefore, because of symmetry (if one pair adds to more than 180 degrees, then the other side will have a pair adding to less than 180 degrees), a pair of parallel lines will have A + B = 180. This gives us:

This is the familiar diagram you learn at school – with alternate and corresponding angles. If we accept the diagram above as true, we can proceed with proving that the angles in a triangle add up to 180 degrees.

Once we know that the two red angles are equal and the two green angles are equal, we can use the fact that angles on a straight line add to 180 degrees to conclude that the angles in a triangle add to 180 degrees. But this needs the parallel postulate to be true!
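The argument can be written out as a short chain of equalities (the angle labels are my own):

```latex
\text{In } \triangle ABC \text{ with angles } \alpha, \beta, \gamma,\ \text{draw the line through } C \text{ parallel to } AB. \\
\text{Alternate angles (requires the parallel postulate): } \alpha' = \alpha,\quad \beta' = \beta \\
\text{Angles on a straight line at } C: \ \alpha' + \gamma + \beta' = 180^{\circ} \\
\Rightarrow\ \alpha + \beta + \gamma = 180^{\circ}
```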

In fact there are geometries in which the parallel postulate is not true – and so we can indeed have triangles whose angles don’t add to 180 degrees. More on this in the next post.

If you enjoyed this you might also like:

Non-Euclidean Geometry II – Attempts to Prove Euclid – The second part in the non-Euclidean Geometry series.

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.
