This is one of my favourite videos from the legendary Carl Sagan. He explains the consequences of travelling at close to the speed of light.

This topic fits quite well into a number of mathematical topics – from graphing, to real life uses of equations, to standard form and unit conversions. It also challenges our notion of time as we usually experience it and therefore leads onto some interesting questions about the nature of reality. Below we can see the time dilation graph:

which clearly shows that for low speeds there is very little time dilation, but once we get to within 90% of the speed of light there is a very significant time dilation effect. For more accuracy we can work out the exact dilation using the formula t’ = t/√(1 – v²/c²) – where v is the speed travelled, c is the speed of light, t is the time experienced in the traveller’s own frame of reference (say, by looking at his watch) and t’ is the time experienced in a different, stationary frame of reference (say, on Earth). Putting some numbers in for real life examples:

1) A long-serving air steward spends a cumulative total of 5 years in the air – flying at an average speed of 900km/h. How much longer will he live (from a stationary viewpoint) compared to if he had been a bus driver?

2) Voyager 1, launched in 1977 and now about 1.8×10^10 km from Earth, is traveling at around 17km/s. How far does this craft travel in 1 hour? What would the time dilation be for someone on board since 1977?

3) I built a spacecraft capable of traveling at 95% the speed of light. I said goodbye to my twin sister and hopped aboard, flew for a while before returning to Earth. If I experienced 10 years on the space craft, how much younger will I be than my twin?

**Scroll to the bottom for the answers**

Marcus du Sautoy also presents an interesting Horizon documentary on the speed of light, its history and the CERN experiments last year that suggested that some particles may have traveled faster than light:

There is a lot of scope for extra content on this topic – for example, looking at the distance of some stars visible in the night sky. The red supergiant star Betelgeuse is around 600 light years from Earth. (How many kilometres is that?) When we look at Betelgeuse we are actually looking 600 years “back in time” – so does it make sense to use time as a frame of reference for existence?

**Answers**

1) Convert 900km/h into km/s: 900 ÷ 3600 = 0.25km/s. Now substitute this value into the equation, along with the speed of light at 300,000km/s… and the factor √(1 – v²/c²) is so close to 1 that even Google’s calculator displays the denominator as 1. Over 5 years the difference amounts to a tiny fraction of a second – so our air steward gains essentially nothing.

2) With units already in km/s we substitute the values in – and using a high-precision calculator find that the denominator is 0.99999999839. Therefore someone travelling on the ship whose watch recorded 35 years would actually have been recorded as leaving Earth 35.0000000562 years ago – a difference of about 1.78 seconds! So still not much effect.

3) This time we get a denominator of √(1 – 0.95²) = 0.3122498999, and so the time experienced by my twin will be 10/0.3122498999 ≈ 32 years. In effect my sister will have aged 22 years more than me on my return. Amazing!
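All three answers can be checked with a few lines of Python – a minimal sketch using the post’s rounded value c = 300,000 km/s (the function name is my own):

```python
import math

C = 300_000.0  # speed of light in km/s (the rounded value used above)

def dilated_time(t, v):
    """Time elapsed in the stationary frame when the traveller experiences t,
    using t' = t / sqrt(1 - v^2/c^2)."""
    return t / math.sqrt(1 - (v / C) ** 2)

# 1) Air steward: 900 km/h = 0.25 km/s for 5 years -- utterly negligible
print(dilated_time(5, 0.25) - 5)                 # difference in years, ~1.7e-12

# 2) Voyager 1: 17 km/s. Distance in one hour:
print(17 * 3600)                                  # 61,200 km
# Dilation over 35 years, converted to seconds:
print((dilated_time(35, 17) - 35) * 365.25 * 24 * 3600)   # ≈ 1.77 s

# 3) Twin paradox: 10 years of proper time at 0.95c
print(dilated_time(10, 0.95 * C))                 # ≈ 32.0 years
```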

If you enjoyed this topic you might also like:

Michio Kaku – Universe in a Nutshell

Champagne Supernovas and the Birth of the Universe – some amazing pictures from space.

**Even Pigeons Can Do Maths**

This is a really interesting study from a couple of years ago, which shows that even pigeons can deal with numbers as abstract quantities – in the study the pigeons counted groups of objects in their head and then classified the groups in terms of size. From the New York Times Article:

*“Given groups of six and nine, they could pick, or peck, the images in the right order. This is one more bit of evidence of how smart birds really are, and it is intriguing because the pigeons’ performance was so similar to the monkeys’. “I was surprised,” Dr. Scarf said.*

*He and his colleagues wrote that the common ability to learn rules about numbers is an example either of different groups — birds and primates, in this case — evolving these abilities separately, or of both pigeons and primates using an ability that was already present in their last common ancestor.*

*That would really be something, because the common ancestor of pigeons and primates would have been alive around 300 million years ago, before dinosaurs and mammals. It may be that counting was already important, but Dr. Scarf said that if he had to guess, he would lean toward the idea that the numerical ability he tested evolved separately. “I can definitely see why both monkeys and pigeons could profit from this ability,” he said.”*

To find mathematical ability amongst both monkeys and pigeons therefore raises two equally interesting possibilities. Perhaps basic numeracy is a rare trait, but such a fundamentally important skill for life that it emerged hundreds of millions of years ago. Or perhaps basic numeracy is a relatively common trait – which can evolve independently in different species.

Either way, it is clear that there must be an evolutionary benefit for being able to process abstract quantities – most likely in terms of food. A monkey who can look at two piles of coconuts and count 5 in one pile and 6 in the other and know that 6 is a bigger quantity than 5 can then choose the larger pile to sit alongside and eat. Perhaps this evolutionary benefit is the true origin of our ability to do maths.

Another similar experiment looked at the ability of chimpanzees to count, and also demonstrated their remarkable photographic memory.

On the screen the chimps are shown 10 numbers in a flash lasting a fraction of a second, before the numbers are covered up; they then proceed to correctly indicate the positions of all the numbers from 1–10. They are much better at this task than humans. This is a good task to try at school using the online game here and would also make a good IB investigation. Can you beat the chimps?

This all ties into the question about where mathematical ability comes from. If there had been no evolutionary advantage in such abstract abilities with numbers, then perhaps today our brains would be physically incapable of higher level mathematical thinking.

If you enjoyed this post you might also like:

**Maths of Global Warming – Modeling Climate Change**

The above graph is from NASA’s climate change site, and was compiled from analysis of ice core data. Scientists from the National Oceanic and Atmospheric Administration (NOAA) drilled into thick polar ice and then looked at the carbon content of air trapped in small bubbles in the ice. From this we can see that over large timescales we have had large oscillations in the concentration of carbon dioxide in the atmosphere. During the ice ages we have had around 200 parts per million carbon dioxide, rising to around 280 in the inter-glacial periods. However this periodic oscillation has been broken post 1950 – leading to a completely different graph behaviour, and putting us on target for 400 parts per million in the very near future.

**Analysing the data**

One of the fields in which mathematicians are always in demand is data analysis – understanding data, modeling with the data collected and using that data to predict future events. Let’s have a quick look at some very simple modeling. The graph above shows a sine curve, plotted using Desmos, superimposed onto the NOAA data.

y = -0.8sin(3x + 0.1) - 1

Whilst not a perfect fit, it does capture the general trend of the data and its oscillatory behaviour until 1950. Post 1950, however, the model would predict a decline in carbon dioxide – whereas the data shows the reverse: a rise which on our large timescale graph looks close to vertical.

**Damped Sine Wave**

This is a damped sine wave, achieved by multiplying the sine term by e^{-0.06x}. This progressively reduces the amplitude of the sine function. The above graph is:

y = e^{-0.06x} (-0.6sin(3x+0.1) -1 )

This captures the shape in the middle of the graph better than the original sine function, but at the expense of less accuracy at the left and right.
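The two models are easy to compare numerically – a quick sketch (function names are mine; x is in the graph’s own horizontal units, not years):

```python
import numpy as np

# The two candidate models described above:
def sine_model(x):
    return -0.8 * np.sin(3 * x + 0.1) - 1

def damped_model(x):
    # multiplying by e^(-0.06x) progressively shrinks the amplitude
    return np.exp(-0.06 * x) * (-0.6 * np.sin(3 * x + 0.1) - 1)

# Evaluate both over a range to compare their oscillations:
x = np.linspace(0, 8, 9)
print(np.round(sine_model(x), 2))
print(np.round(damped_model(x), 2))
```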

**Polynomial Regression**

We can make use of Desmos’ regression tools to fit curves to points. Here I have entered a table of values and then seen which polynomial gives the best fit:

We can see that the purple cubic fits the first 5 points quite well (with a high R² value). So we should be able to create a piecewise function to describe this graph.

**Piecewise Function**

Here I have restricted the domain of the first polynomial (entered below):

Second polynomial:

Third polynomial:

Fourth polynomial:

Finished model:

Shape of model:

We would then be able to fit this to the original scale by applying a vertical translation (i.e. add 280) and vertical and horizontal stretches. It would probably have been easier to align the scales at the beginning! Nevertheless we have the shape we wanted.

**Analysing the models**

Our piecewise function gives us a good data fit for the domain we were working in – so if we then wanted to use some calculus to look at (say) non-horizontal points of inflection, this would be a good model to use. If we want to analyse what we would have expected to happen without human activity, then the sine models at the very start are more useful in capturing the trend of the oscillations.

**Post 1950s**

Looking on a completely different scale, we can see the general trend of carbon dioxide concentration post 1950 is pretty linear. This time I’ll scale the axis at the start. Here 1960 corresponds with x = 0 and 1970 corresponds with x = 5, etc. – so each unit on the x-axis represents 2 years.

Actually we can see that a quadratic fits the curve better than a linear graph – which is bad news, implying that the rates of change of carbon in the atmosphere will increase. Using our model we can predict that on current trends in 2030 there will be 500 parts per million of carbon in the atmosphere.
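The regression idea can be sketched with NumPy’s polyfit. Note the values below are rough illustrative CO2 figures (ppm, x in years since 1960), not the exact dataset used in the post, so the extrapolated 2030 figure differs from the 500 ppm quoted above – the prediction is sensitive to the data and scaling used:

```python
import numpy as np

# Illustrative values only -- approximate atmospheric CO2 in ppm.
x = np.array([0, 10, 20, 30, 40, 50])      # years since 1960
y = np.array([317, 326, 339, 354, 369, 390])

linear = np.polyfit(x, y, 1)
quadratic = np.polyfit(x, y, 2)

# A positive x^2 coefficient means the rate of increase is itself increasing:
print(round(quadratic[0], 4))

# Extrapolate both models to 2030 (x = 70):
pred_linear = np.polyval(linear, 70)
pred_quad = np.polyval(quadratic, 70)
print(round(pred_linear), round(pred_quad))   # the quadratic predicts more
```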

**Stern Report**

According to the Stern Report, 500ppm is around the upper limit of the level at which we need to aim to stabilise carbon (450ppm–550ppm of carbon equivalent) before the economic and social costs of climate change become economically catastrophic. The Stern Report estimates that it will cost around 1% of global GDP to stabilise in this range. Failure to do so is predicted to lock in massive temperature rises of between 3 and 10 degrees by the end of the century.

If you are interested in doing an investigation on this topic:

- Plus Maths have a range of articles on the maths behind climate change
- The Stern report is a very detailed look at the evidence, graphical data and economic costs.

This is a great puzzle which the Guardian ran last week:

*Fill in the equations below using any of the basic mathematical operations, +, –, x, ÷, and as many brackets as you like, so that they make arithmetical sense.*

*10 9 8 7 6 5 4 3 2 1 = 2017*

There are many different ways of solving this – see if you can find the simplest way!

Scroll down to see some possible answers.

I had a play around with this and this is my effort:

(10 + (9 x 8 x 7) - (6 + 5)) x 4 + 3 + (2 x 1) = 2017

An even nicer answer was provided on the Guardian – which doesn’t even need brackets:

10 x 9 x 8 x 7 / 6 / 5 x 4 x 3 + 2 – 1 = 2017

Any other solutions?
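One way to check the answers above – and to hunt for more bracket-free solutions – is a short brute-force search over the 4⁹ ways of placing an operation between each pair of numbers:

```python
from itertools import product

# Check the two published answers:
mine = (10 + (9 * 8 * 7) - (6 + 5)) * 4 + 3 + (2 * 1)
guardian_expr = "10*9*8*7/6/5*4*3+2-1"
print(mine, eval(guardian_expr))   # 2017 2017.0

# Brute-force all bracket-free operator placements on 10 9 8 ... 1:
found = []
for ops in product("+-*/", repeat=9):
    expr = "10" + "".join(op + str(n) for op, n in zip(ops, range(9, 0, -1)))
    if abs(eval(expr) - 2017) < 1e-9:
        found.append(expr)

print(len(found), found[:3])
```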

**Finger Ratio Predicts Maths Ability?**

Some of the studies on 2D:4D finger ratios (as measured in the picture above) are interesting when considering what factors possibly affect mathematical ability. A 2007 study by Mark Brosnan from the University of Bath found that:

*“Boys with the longest ring fingers relative to their index fingers tend to excel in math. The boys with the lowest ratios also were the ones whose abilities were most skewed in the direction of math rather than literacy.*

*With the girls, there was no correlation between finger ratio and numeracy, but those with higher ratios–presumably indicating low testosterone levels–had better scores on verbal abilities. The link, according to the researchers, is that testosterone levels in the womb influence both finger length and brain development.*

*In men, the ring (fourth) finger is usually longer than the index (second); their so-called 2D:4D ratio is lower than 1. In females, the two fingers are more likely to be the same length. Because of this sex difference, some scientists believe that a low ratio could be a marker for higher prenatal testosterone levels, although it’s not clear how the hormone might influence finger development.”*

In the study, Brosnan photocopied the hands of 74 boys and girls aged 6 and 7. He worked out the 2D:4D finger ratio by dividing the length of the index finger (2D) by the length of the ring finger (4D), then compared the finger ratios with standardised UK maths and English tests. The differences found were small, but significant.

Another study, of 136 men and 137 women, looked at the link between finger ratio and aggression. The results are plotted in the graph above – which clearly shows that the data follow a normal distribution. The men are represented with the blue line, the women with the green line and the overall cohort in red. You can see that the male distribution is shifted to the left, as the men have a lower mean ratio. (Males: mean 0.947, standard deviation 0.029; Females: mean 0.965, standard deviation 0.026.)

The range mean ± 2 standard deviations is 0.889–1.005 for males and 0.913–1.017 for females: roughly 95% of each population falls within these ranges. (Note that this is a population reference range rather than a confidence interval for the mean – a confidence interval for the mean would be far narrower for samples of this size.)
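These ranges are quick to reproduce from the means and standard deviations quoted above (a trivial sketch; the function name is mine):

```python
# Reference range (mean ± 2 standard deviations) covering roughly the
# middle 95% of a normally distributed population.
def reference_range(mean, sd):
    return (round(mean - 2 * sd, 3), round(mean + 2 * sd, 3))

print(reference_range(0.947, 0.029))   # males:   (0.889, 1.005)
print(reference_range(0.965, 0.026))   # females: (0.913, 1.017)
```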

The correlation between digit ratio and everything from personality, sexuality, sporting ability and management has been studied. If a low 2D:4D ratio is indeed due to testosterone exposure in the womb (which is not confirmed), then that raises the question as to why testosterone exposure should affect mathematical ability. And if it is not connected to testosterone, then what is responsible for the correlation between digit ratios and mathematical talent?

I think this would make a really interesting Internal Assessment investigation at either Studies or Standard Level. Also it works well as a class investigation at KS3 and IGCSE into correlation and scatter diagrams. Does the relationship still hold for when you look at algebraic skills rather than numeracy? Or is algebraic talent distinct from numeracy talent?

A detailed academic discussion of the scientific literature on this topic is available here.

If you enjoyed this post you might also like:

**Modelling Radioactive Decay**

We can model radioactive decay of atoms using the following equation:

**N(t) = N _{0} e^{-λt}**

Where:

**N _{0}**: is the initial quantity of the element

**λ**: is the radioactive decay constant

**t**: is time

**N(t)**: is the quantity of the element remaining after time t.

So, for Carbon-14, which has a half life of 5730 years (this means that after 5730 years exactly half of the initial amount of Carbon-14 atoms will have decayed), we can calculate the decay constant **λ**.

After 5730 years, N(5730) will be exactly half of N_{0}, therefore we can write the following:

**N(5730) = 0.5N _{0} = N_{0} e^{-5730λ}**

therefore:

**0.5 = e ^{-5730λ}**

and if we take the natural log of both sides and rearrange we get:

**λ = ln(1/2) / -5730**

**λ ≈0.000121**

We can now use this to solve problems involving Carbon-14 (which is used in Carbon-dating techniques to find out how old things are).

eg. You find an old parchment and after measuring the Carbon-14 content you find that it is just 30% of what a new piece of paper would contain. How old is this paper?

We have

**N(t) = N _{0} e^{-0.000121t}**

**N(t)/N _{0}** = **e ^{-0.000121t}**

**0.30** = **e ^{-0.000121t}**

**t = ln(0.30)/(-0.000121)**

**t = 9950 years old.**
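The whole calculation can be sketched in a few lines (using the post’s rounded decay constant, so the answer matches the 9950 years above; the function name is my own):

```python
import math

HALF_LIFE = 5730                     # years, for Carbon-14
lam = math.log(2) / HALF_LIFE
print(round(lam, 6))                 # 0.000121 -- the decay constant above

def age_from_fraction(remaining, decay_constant=lam):
    """Years elapsed, given the fraction of Carbon-14 remaining,
    from remaining = e^(-lambda * t)."""
    return math.log(remaining) / -decay_constant

# Using the rounded constant, as in the worked example:
print(round(age_from_fraction(0.30, 0.000121)))   # 9950 years
```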

**Probability density functions**

We can also do some interesting maths by rearranging:

**N(t) = N _{0} e^{-λt}**

**N(t)/N _{0}** = **e ^{-λt}**

and then plotting **N(t)/N _{0}** against time.

**N(t)/N _{0}** will have a range between 0 and 1: when t = 0, **N(t)/N _{0}** = 1, and as t increases, **N(t)/N _{0}** approaches 0.

We can then manipulate this into the form of a probability density function – by finding the constant a which makes the area underneath the curve equal to 1:

**∫ _{0}^{∞} ae^{-λt} dt = 1**

Solving this gives a = λ. Therefore the integral

**∫ _{t1}^{t2} λe^{-λt} dt**

will give the fraction of atoms which will have decayed between times t1 and t2.

We could use this integral to work out the half life of Carbon-14 by solving:

**∫ _{0}^{t} λe^{-λt} dt = 0.5**

which gives us t = 5728.5 – which is what we’d expect (given our earlier rounding of the decay constant).

We can also now work out the expected (mean) time that an atom will exist before it decays. To do this we use the equation for finding E(t) of a probability density function:

**E(t) = ∫ _{0}^{∞} t·λe^{-λt} dt**

Integrating this by parts (with u = t and dv = λe^{-λt} dt) gives:

**E(t) = 1/λ**

So the expected (mean) life of an atom is given by 1/λ. In the case of Carbon-14, with a decay constant λ ≈ 0.000121, we have an expected life of a Carbon-14 atom of:

E(t) = 1 /0.000121

E(t) = 8264 years.

Now that may sound a little strange – after all the half life is 5730 years, which means that half of all atoms will have decayed after 5730 years. So why is the mean life so much higher? Well it’s because of the long right tail in the graph – we will have some atoms with very large lifespans – and this will therefore skew the mean to the right.
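This skew is easy to demonstrate by simulation: individual decay times follow an exponential distribution, whose median is the half-life ln2/λ but whose mean is 1/λ. A sketch (the sample size and seed are arbitrary):

```python
import math
import random

lam = 0.000121          # decay constant for Carbon-14 (rounded, as above)
random.seed(42)

# Simulate 200,000 individual atom lifetimes:
lifetimes = sorted(random.expovariate(lam) for _ in range(200_000))
median = lifetimes[len(lifetimes) // 2]
mean = sum(lifetimes) / len(lifetimes)

print(round(math.log(2) / lam))   # theoretical half-life ≈ 5728 years
print(round(1 / lam))             # theoretical mean life  ≈ 8264 years
# The simulated median lands near the half-life and the simulated mean near
# 1/lambda: the long right tail drags the mean well above the median.
print(round(median), round(mean))
```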

**Amanda Knox and Bad Maths in Courts**

This post is inspired by the recent BBC News article, “Amanda Knox and Bad Maths in Courts.” The article highlights the importance of good mathematical understanding when handling probabilities – and how mistakes by judges and juries can sometimes lead to miscarriages of justice.

**A scenario to give to students:**

*A murder scene is found with two types of blood – that of the victim and that of the murderer. As luck would have it, the unidentified blood has an incredibly rare blood disorder, only found in 1 in every million men. The capital and surrounding areas have a population of 20 million – and the police are sure the murderer is from the capital. The police have already started cataloging all citizens’ blood types for their new super crime-database. They already have nearly 1 million male samples in there – and bingo – one man, Mr XY, is a match. He is promptly marched off to trial, there is no other evidence, but the jury are told that the odds are 1 in a million that he is innocent. He is duly convicted. The question is, how likely is it that he did not commit this crime? *

**Answer:**

*We can be around 90% confident that he did not commit this crime. Assuming that there are approximately 10 million men in the capital, then were everyone catalogued on the database we would expect on average 10 positive matches. Given that there is no other evidence, there is therefore only about a 1 in 10 chance that he is guilty. Even though P(match | innocent) = 1/1,000,000, P(innocent | match) ≈ 9/10.*
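The arithmetic of the answer can be sketched directly (variable names are my own):

```python
# Base-rate reasoning for the blood-disorder scenario above.
population_men = 10_000_000       # men in the capital
match_rate = 1 / 1_000_000        # disorder frequency among innocent men

# Expected number of innocent men who also match:
expected_innocent_matches = (population_men - 1) * match_rate   # ≈ 10

# One matching man is the actual murderer; the rest are innocent matches,
# so a randomly chosen match is guilty with probability 1/(1 + 10).
p_guilty_given_match = 1 / (1 + expected_innocent_matches)

print(round(expected_innocent_matches))     # 10
print(round(1 - p_guilty_given_match, 2))   # ≈ 0.91 -- P(innocent | match)
```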

**Amanda Knox**

Eighteen months ago, Amanda Knox and Raffaele Sollecito, who were previously convicted of the murder of British exchange student Meredith Kercher, were acquitted. The judge at the time ruled out re-testing a tiny DNA sample found at the scene, stating that, “The sum of the two results, both unreliable… cannot give a reliable result.”

This logic, however, whilst intuitive, is not mathematically correct. As explained by mathematician Coralie Colmez in the BBC News article, by repeating relatively unreliable tests we can make them more reliable – the larger the pooled sample size, the more confident we can be in the result.

**Sally Clark**

One of the most (in)famous examples of bad maths in the court room is that of Sally Clark – who was convicted of the murder of her two sons in 1999. It has been described as, “one of the great miscarriages of justice in modern British legal history.” Both of Sally Clark’s children died from cot-death whilst still babies. Soon afterwards she was arrested for murder. The case was based on a seemingly incontrovertible statistic – that the chance of 2 children from the same family dying from cot-death was 1 in 73 million. Experts testified to this, the jury were suitably convinced and she was convicted.

The crux of the prosecutor’s case was that it was so statistically unlikely that this had happened by chance, that she must have killed her children. However, this was bad maths – which led to an innocent woman being jailed for four years before her eventual acquittal.

**Independent Events**

The 1 in 73 million figure was arrived at by simply looking at the probability of a single cot-death (1 in 8500) and then squaring it – because it had happened twice. However, this method only works if the two events are independent – and in this case they clearly weren’t. Any biological or social factors which contribute to the death of one child from cot-death also put a sibling at elevated risk.

**Prosecutor’s Fallacy**

Additionally this figure was presented in a way which is known as the “prosecutor’s fallacy” – the 1 in 73 million figure (even if correct) didn’t represent the probability of Sally Clark’s innocence, because it should have been compared against the probability of guilt for a double homicide. In other words, the probability of a false positive is not the same as the probability of innocence. In mathematical language, P(Fail Test/Innocent) is not equal to P(Innocent/Fail test).

Subsequent analysis of the Sally Clark case by a mathematics professor concluded that rather than having a 1 in 73 million chance of being innocent, actually it was about 4-10 times more likely this was due to natural causes rather than murder. Quite a big turnaround – and evidence of why understanding statistics is so important in the courts.

This topic has also been highlighted recently by the excellent ToK website, Lancaster School ToK.

If you enjoyed this topic you might also like:

Golden Balls, hosted by Jasper Carrott, is based on a version of the Prisoner’s Dilemma. For added interest, try and predict what the 2 contestants are going to do. Any psychological cues to pick up on?

Game theory is an interesting branch of mathematics with links across a large number of disciplines – from politics to economics to biology and psychology. The most well known example is the Prisoner’s Dilemma (illustrated below). Two prisoners are taken into custody and held in separate rooms. During interrogation they are told that if they testify to everything (i.e. betray their partner) then they will go free and their partner will get 10 years. However, if they both testify they will both get 5 years, and if they both remain silent then they will both get 6 months in jail.

So, what is the optimum strategy for prisoner A? In this version he should testify – because whichever strategy his partner chooses this gives prisoner A the best possible outcome. Looking at it in reverse, if prisoner B testifies, then prisoner A would have been best testifying (gets 5 years rather than 10). If prisoner B remains silent, then prisoner A would have been best testifying (goes free rather than 6 months).
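The dominance argument above can be checked mechanically (a sketch using the payoffs just described, with 6 months written as 0.5 years):

```python
# Payoff matrix (years in jail -- lower is better).
# Key: (A's move, B's move) -> (A's sentence, B's sentence)
YEARS = {
    ("testify", "testify"): (5, 5),
    ("testify", "silent"):  (0, 10),
    ("silent",  "testify"): (10, 0),
    ("silent",  "silent"):  (0.5, 0.5),
}

def best_response(b_move):
    """A's best move (fewest years in jail) given B's move."""
    return min(["testify", "silent"], key=lambda a: YEARS[(a, b_move)][0])

print(best_response("testify"))  # testify (5 years beats 10)
print(best_response("silent"))   # testify (going free beats 6 months)
```

Because "testify" is the best response to both of B’s moves, it is a dominant strategy – which is exactly the dilemma.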

This brings in an interesting moral dilemma – even if both prisoners are innocent, each is placed in a situation where it is in his best interest to testify against his partner – thus increasing the likelihood of an innocent man being sent to jail. This situation represents a form of plea bargaining – which is more common in America than Europe.

Part of the dilemma arises because if both men know that the optimum strategy is to testify, then they both end up with lengthy 5 year jail sentences. If only they can trust each other to be altruistic rather than selfish – and both remain silent, then they get away with only 6 months each. So does mathematics provide an amoral framework? i.e. in this case mathematically optimum strategies are not “nice,” but selfish.

Game theory became quite popular during the Cold War, as the matrix above represented the state of the nuclear stand-off. The threat of Mutually Assured Destruction (MAD) meant that neither the Americans nor the Russians had any incentive to strike, because that would inevitably lead to a retaliatory strike – with catastrophic consequences. The above matrix uses negative infinity to represent the worst possible outcome, whilst both sides not striking leads to a positive pay-off. Such a game has a very strong Nash Equilibrium – i.e. there is no incentive to deviate from the non-strike policy. Could the optimal maths strategy here be said to be responsible for saving the world?

Game theory can be extended to evolutionary biology – and is covered in Richard Dawkins’ The Selfish Gene in some detail. Basically, whilst it is an optimum strategy to be selfish in a single round of the Prisoner’s Dilemma, in iterated games (i.e. games repeated a number of times) players actually tend towards a co-operative strategy. If someone is nasty to you on round one (i.e. by testifying) then you can punish them the next time. So with the threat of punishment, a mutually co-operative strategy is superior.

You can actually play the iterated Prisoner’s Dilemma game as an applet on the website Game Theory. Alternatively pairs within a class can play against each other.

An interesting extension is this applet, also on Game Theory, which models the evolution of 2 populations – residents and invaders. You can set different responses – and then see what happens to the respective populations. This is a good reflection of interactions in real life – where species can choose to live co-operatively, or to fight for the same resources.

The first stop for anyone interested in more information about Game Theory should be the Maths Illuminated website – which has an entire teacher unit on the subject, complete with different sections, a video and pdf documents. There’s also a great article on Plus Maths – Does it Pay to be Nice? – all about this topic. There are a lot of different games which can be modeled using game theory – and many are listed here. These include the Stag Hunt, Hawk/Dove and the Peace War game. Some of these have direct applicability to population dynamics, and to the geo-politics of war versus peace.

If you enjoyed this post you might also like:

**Graham’s Number – literally big enough to collapse your head into a black hole**

Graham’s Number is a number so big that it would *literally* collapse your head into a black hole were you fully able to comprehend it. And that’s not hyperbole – the informational content of Graham’s Number is so astronomically large that it exceeds the maximum amount of entropy that could be stored in a brain-sized piece of space – i.e. a black hole would form before you could fully process all the data content. This is a great introduction to notation for *really* big numbers. Numberphile have produced a fantastic video on the topic:

Graham’s Number makes use of Knuth’s up-arrow notation (explanation from Wikipedia):

In the series of hyper-operations we have:

1) Multiplication: a × b = a + a + … + a, with b copies of a added together. For example, 2 × 3 = 2 + 2 + 2 = 6.

2) Exponentiation: a↑b = a^b = a × a × … × a, with b copies of a multiplied together. For example, 2↑3 = 2 × 2 × 2 = 8.

3) Tetration: a↑↑b = a^(a^(…^a)), a power tower of b copies of a. For example, 2↑↑3 = 2^(2^2) = 16.

4) Pentation: a↑↑↑b = a↑↑(a↑↑(…↑↑a)), with b copies of a – and so on, with each new operation iterating the one before.

These operations lead to some absolutely huge numbers very quickly – already 3↑↑3 = 3^27 = 7,625,597,484,987, and 3↑↑↑3 is a power tower of more than seven trillion 3s. Graham’s Number – which was arrived at mathematically as an upper bound for a problem relating to vertices on hypercubes – is (explanation from Wikipedia) *G* = *g*_{64}, where *g*_{1} = 3↑↑↑↑3 and the number of *arrows* in each layer, starting at the top layer, is specified by the value of the next layer below it; that is, *g*_{n} = 3↑…↑3 with *g*_{n-1} arrows,

and where a superscript on an up-arrow indicates how many arrows are there. In other words, *G* is calculated in 64 steps: the first step is to calculate *g*_{1} with four up-arrows between 3s; the second step is to calculate *g*_{2} with *g*_{1} up-arrows between 3s; the third step is to calculate *g*_{3} with *g*_{2} up-arrows between 3s; and so on, until finally calculating *G* = *g*_{64} with *g*_{63} up-arrows between 3s.
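The up-arrow recursion is simple to write down, even though anything beyond tiny inputs is hopelessly uncomputable – even *g*_{1} = 3↑↑↑↑3 is far beyond any computer. A sketch (the function name is my own):

```python
def arrow(a, n, b):
    """Knuth's up-arrow operation a ↑^n b, where n is the number of arrows.
    n = 1 is exponentiation; each higher n iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1                    # by convention, a ↑^n 0 = 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))   # 3↑3   = 27
print(arrow(3, 2, 2))   # 3↑↑2  = 3^3 = 27
print(arrow(2, 2, 4))   # 2↑↑4  = 2^2^2^2 = 65536
print(arrow(2, 3, 3))   # 2↑↑↑3 = 2↑↑4 = 65536
# Do NOT try arrow(3, 4, 3): that is g_1, already unimaginably large.
```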

So a number so big it can’t be fully processed by the human brain. This raises some interesting questions about maths and knowledge – Graham’s Number is an example of a number that exists but is beyond full human comprehension, and is therefore an example of an upper bound of human knowledge. Will there always be things in the Universe which are beyond full human understanding? Or can mathematics provide a shortcut to knowledge that would otherwise be inaccessible?

If you enjoyed this post you might also like:

How Are Prime Numbers Distributed? Twin Primes Conjecture – a discussion about the amazing world of prime numbers.

Wau: The Most Amazing Number in the World? – a post which looks at the amazing properties of Wau

This is a really interesting puzzle to study – which fits very well when studying geometric series, proof and the history of maths.

The two most intuitive answers are either that it has no sum or that it sums to zero. If you group the pattern into pairs, then each pair (1, -1) = 0. However, if you first leave the initial 1 and then group the remaining terms into pairs of (-1, 1), you end up with a sum of 1.

Firstly it’s worth seeing why we shouldn’t just use our formula for the sum of an infinite geometric series,

S = a/(1 – r)

with r as the common ratio of -1. This formula requires that the absolute value of r is less than 1 – otherwise the series will not converge.

The series 1, -1, 1, -1, … is called Grandi’s series – after the Italian mathematician Guido Grandi (pictured) – and sparked a few hundred years’ worth of heated mathematical debate as to what the correct summation was.

Using the Cesàro method (explanation pasted from here):

If *a*_{n} = (−1)^{n+1} for *n* ≥ 1, then {*a*_{n}} is the sequence

1, −1, 1, −1, …

Then the sequence of partial sums {*s*_{n}} is

1, 0, 1, 0, …

so whilst the series does not converge, if we calculate the terms of the sequence {(*s*_{1} + … + *s*_{n})/*n*} we get:

1, 1/2, 2/3, 1/2, 3/5, 1/2, 4/7, …

so that

lim_{n→∞} (*s*_{1} + … + *s*_{n})/*n* = 1/2.

So, using different methods we have shown that this series “should” have a summation of 0 (grouping in pairs), or that it “should” have a sum of 1 (grouping in pairs after the first 1), or that it “should” have no sum as it simply oscillates, or that it “should” have a Cesaro sum of 1/2 – no wonder it caused so much consternation amongst mathematicians!
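The Cesàro value of 1/2 is easy to check numerically by averaging the partial sums:

```python
# Cesàro summation of Grandi's series 1 - 1 + 1 - 1 + ...
N = 100_000
partial_sums = []
s = 0
for n in range(1, N + 1):
    s += (-1) ** (n + 1)        # a_n = 1, -1, 1, -1, ...
    partial_sums.append(s)       # s_n = 1, 0, 1, 0, ...

# Cesàro mean: the average of the first N partial sums.
cesaro = sum(partial_sums) / N
print(cesaro)   # 0.5 (exactly, since N is even)
```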

This approach can be extended to the complex series, which is looked at in the blog God Plays Dice

This is a really great example of how different proofs can sometimes lead to different (and unexpected) results. What does this say about the nature of proof?
