You are currently browsing the category archive for the ‘Uncategorized’ category.

This post is inspired by the Quora thread on interesting functions to plot.

1. **The butterfly**

This is a slightly simpler version of the butterfly curve which is plotted using polar coordinates on Desmos as:

Polar coordinates are an alternative way of plotting functions – and are explored a little in HL Maths when looking at complex numbers. The theta value specifies an angle of rotation measured anti-clockwise from the x axis, and the r value specifies the distance from the origin. So for example the polar coordinates (90 degrees, 1) would specify a point 90 degrees anti-clockwise from the x axis and a distance 1 from the origin (i.e. the point (0,1) in our usual Cartesian plane).
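The polar-to-Cartesian conversion described above can be sketched in a few lines (a small helper of my own, not from the original post):

```python
import math

def polar_to_cartesian(theta_deg, r):
    """Convert a polar coordinate (angle in degrees, radius) to Cartesian (x, y)."""
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta))

# The example from the text: (90 degrees, 1) maps to the Cartesian point (0, 1).
x, y = polar_to_cartesian(90, 1)
print(round(x, 10), round(y, 10))  # 0.0 1.0
```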

2. **Fermat’s Spiral**

This is plotted by the polar equation:

The next 3 were all created by my students.

3. **Chaotic spiral (by Laura Y9)**

I like how this graph grows ever more tangled as it coils in on itself. This was created by the polar equation:

4. **The flower (by Felix Y9)**

Some nice rotational symmetries on this one. Plotted by:

5. **The heart (by Tiffany Y9)**

Simple but effective! This was plotted using the usual x,y coordinates:

You can also explore how to draw the Superman and Batman logos using Wolfram Alpha here.

This is a quick example of how using Tracker software can generate a nice physics-related exploration. I took a spring, and attached it to a stand with a weight hanging from the end. I then took a video of the movement of the spring, and then uploaded this to Tracker.

**Height against time**

The first graph I generated was for the height of the spring against time. I started the graph when the spring was released from the low point. To be more accurate here you can calibrate the y axis scale with the actual distance. I left it with the default settings.

You can see we have a very good fit for a sine/cosine curve. This gives the approximate equation:

y = -65cos(10.5(t - 3.4)) - 195

(remembering that the y axis scale is x 100).

This oscillating behaviour is what we would expect from a spring system – in this case the angular frequency of 10.5 rad/s gives a period of 2π/10.5, which is around 0.6 seconds.
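As a quick sanity check on the fitted model (using the constants above; the factor-of-100 axis scale is left as-is):

```python
import math

# Fitted model from Tracker (y-axis scale x100): y = -65*cos(10.5*(t - 3.4)) - 195
def height(t):
    return -65 * math.cos(10.5 * (t - 3.4)) - 195

# At t = 3.4 the cosine is 1, so the spring sits at its lowest point, -260.
print(height(3.4))

# An angular frequency of 10.5 rad/s gives the period T = 2*pi/10.5.
period = 2 * math.pi / 10.5
print(round(period, 2))  # 0.6
```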

**Momentum against velocity**

For this graph I first set the mass as 0.3kg – which was the weight used – and plotted the y direction momentum against the y direction velocity. It then produces the above linear relationship, which has a gradient of around 0.3. Therefore we have the equation:

p = 0.3v

If we look at the theoretical equation linking momentum and velocity:

p = mv

(where m = mass), we can see that we have almost perfectly replicated this theoretical equation.

**Height against velocity**

I generated this graph with the mass set to the default 1kg. It plots the y-direction height against the y-component of velocity. You can see from this graph that the velocity is 0 when the spring is at the top and bottom of its cycle. We can then also see that it reaches its maximum velocity when halfway through its cycle. If we were to model this we could use an ellipse (remembering that both scales are x100 and using x for vy):

If we then wanted to develop this as an investigation, we could look at how changing the weight or the spring extension affected the results and look for some general conclusions for this. So there we go – a nice example of how tracker can quickly generate some nice personalised investigations!

**Finger Ratio Predicts Maths Ability?**

Some of the studies on the 2D:4D finger ratio (as measured in the picture above) are interesting when considering what factors possibly affect mathematical ability. A 2007 study by Mark Brosnan from the University of Bath found that:

*“Boys with the longest ring fingers relative to their index fingers tend to excel in math. The boys with the lowest ratios also were the ones whose abilities were most skewed in the direction of math rather than literacy.*

*With the girls, there was no correlation between finger ratio and numeracy, but those with higher ratios–presumably indicating low testosterone levels–had better scores on verbal abilities. The link, according to the researchers, is that testosterone levels in the womb influence both finger length and brain development.*

*In men, the ring (fourth) finger is usually longer than the index (second); their so-called 2D:4D ratio is lower than 1. In females, the two fingers are more likely to be the same length. Because of this sex difference, some scientists believe that a low ratio could be a marker for higher prenatal testosterone levels, although it’s not clear how the hormone might influence finger development.”*

In the study, Brosnan photocopied the hands of 74 boys and girls aged 6 and 7. He worked out the 2D:4D finger ratio by dividing the length of the index finger (2D) by the length of the ring finger (4D). He then compared the finger ratios with standardised UK maths and English tests. The differences found were small, but significant.

Another study, of 136 men and 137 women, looked at the link between finger ratio and aggression. The results are plotted in the graph above – which clearly shows that the data follow a normal distribution. The men are represented by the blue line, the women by the green line and the overall cohort in red. You can see that the male distribution is shifted to the left, as the men have a lower mean ratio. (Males: mean 0.947, standard deviation 0.029; Females: mean 0.965, standard deviation 0.026.)

The interval 0.889–1.005 for males and 0.913–1.017 for females is the mean plus or minus 2 standard deviations – the range into which roughly 95% of each population will fall. (Strictly speaking this is a reference range for individuals rather than a confidence interval for the average.)
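These intervals are just mean ± 2 standard deviations, which can be checked directly from the figures quoted above:

```python
def ref_range(mean, sd, k=2):
    """Range containing roughly 95% of a normal population: mean ± k standard deviations."""
    return (round(mean - k * sd, 3), round(mean + k * sd, 3))

print(ref_range(0.947, 0.029))  # males:   (0.889, 1.005)
print(ref_range(0.965, 0.026))  # females: (0.913, 1.017)
```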

The correlation between digit ratio and everything from personality, sexuality, sporting ability and management has been studied. If a low 2D:4D ratio is indeed due to testosterone exposure in the womb (which is not confirmed), then that raises the question as to why testosterone exposure should affect mathematical ability. And if it is not connected to testosterone, then what is responsible for the correlation between digit ratios and mathematical talent?

I think this would make a really interesting Internal Assessment investigation at either Studies or Standard Level. Also it works well as a class investigation at KS3 and IGCSE into correlation and scatter diagrams. Does the relationship still hold for when you look at algebraic skills rather than numeracy? Or is algebraic talent distinct from numeracy talent?

A detailed academic discussion of the scientific literature on this topic is available here.

If you enjoyed this post you might also like:

**Amanda Knox and Bad Maths in Courts**

This post is inspired by the recent BBC News article, “Amanda Knox and Bad Maths in Courts.” The article highlights the importance of good mathematical understanding when handling probabilities – and how mistakes by judges and juries can sometimes lead to miscarriages of justice.

**A scenario to give to students:**

*A murder scene is found with two types of blood – that of the victim and that of the murderer. As luck would have it, the unidentified blood has an incredibly rare blood disorder, only found in 1 in every million men. The capital and surrounding areas have a population of 20 million – and the police are sure the murderer is from the capital. The police have already started cataloging all citizens’ blood types for their new super crime-database. They already have nearly 1 million male samples in there – and bingo – one man, Mr XY, is a match. He is promptly marched off to trial, there is no other evidence, but the jury are told that the odds are 1 in a million that he is innocent. He is duly convicted. The question is, how likely is it that he did not commit this crime? *

**Answer:**

*We can be around 90% confident that he did not commit this crime. Assuming that there are approximately 10 million men in the capital, then were everyone cataloged on the database we would have on average 10 positive matches. Given that there is no other evidence, there is therefore only a 1 in 10 chance that he is guilty. Even though P(Fail Test | Innocent) = 1/1,000,000, P(Innocent | Fail Test) = 9/10.*
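The answer's reasoning can be sketched with exact fractions (a sketch of my own; the 10 million men figure is the assumption stated above):

```python
from fractions import Fraction

men_in_capital = 10_000_000          # assumption stated in the answer above
p_disorder = Fraction(1, 1_000_000)  # rate of the rare blood disorder among men

# We expect about 10 men in the capital to share the rare blood type.
expected_carriers = int(men_in_capital * p_disorder)

# With no other evidence, each of those men is equally likely to be the murderer.
p_guilty = Fraction(1, expected_carriers)
p_innocent = 1 - p_guilty

print(expected_carriers, p_innocent)  # 10 9/10
```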

**Amanda Knox**

Eighteen months ago, Amanda Knox and Raffaele Sollecito, who were previously convicted of the murder of British exchange student Meredith Kercher, were acquitted. The judge at the time ruled out re-testing a tiny DNA sample found at the scene, stating that, “The sum of the two results, both unreliable… cannot give a reliable result.”

This logic, however, whilst intuitive, is not mathematically correct. As explained by mathematician Coralie Colmez in the BBC News article, by repeating relatively unreliable tests we can make them more reliable – the larger the pooled sample size, the more confident we can be in the result.

**Sally Clark**

One of the most (in)famous examples of bad maths in the court room is that of Sally Clark – who was convicted of the murder of her two sons in 1999. It has been described as, “one of the great miscarriages of justice in modern British legal history.” Both of Sally Clark’s children died from cot-death whilst still babies. Soon afterwards she was arrested for murder. The case was based on a seemingly incontrovertible statistic – that the chance of 2 children from the same family dying from cot-death was 1 in 73 million. Experts testified to this, the jury were suitably convinced, and she was convicted.

The crux of the prosecutor’s case was that it was so statistically unlikely that this had happened by chance, that she must have killed her children. However, this was bad maths – which led to an innocent woman being jailed for four years before her eventual acquittal.

**Independent Events**

The 1 in 73 million figure was arrived at by simply looking at the probability of a single cot-death (1 in 8,500) and then squaring it – because it had happened twice. However, this method only works if both events are independent – and in this case they clearly weren’t. Any biological or social factors which contribute to the death of a child due to cot-death will also mean that another sibling is at elevated risk.
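The flawed independence calculation is easy to reproduce (my own sketch – note that squaring the rounded 1 in 8,500 gives roughly 1 in 72 million; the quoted 1 in 73 million came from a slightly different base rate):

```python
# The prosecution's (flawed) calculation: treat the two deaths as independent.
p_single = 1 / 8500       # quoted probability of a single cot-death
p_double = p_single ** 2  # squaring assumes independence -- exactly the error

print(round(1 / p_double / 1e6))  # roughly 72 (million to one)
```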

**Prosecutor’s Fallacy**

Additionally this figure was presented in a way which is known as the “prosecutor’s fallacy” – the 1 in 73 million figure (even if correct) didn’t represent the probability of Sally Clark’s innocence, because it should have been compared against the probability of guilt for a double homicide. In other words, the probability of a false positive is not the same as the probability of innocence. In mathematical language, P(Fail Test | Innocent) is not equal to P(Innocent | Fail Test).

Subsequent analysis of the Sally Clark case by a mathematics professor concluded that rather than having a 1 in 73 million chance of being innocent, actually it was about 4-10 times more likely this was due to natural causes rather than murder. Quite a big turnaround – and evidence of why understanding statistics is so important in the courts.

This topic has also been highlighted recently by the excellent ToK website, Lancaster School ToK.

If you enjoyed this topic you might also like:

**Does it Pay to be Nice? Game Theory and Evolution**

Golden Balls, hosted by Jasper Carrott, is based on a version of the Prisoner’s Dilemma. For added interest, try and predict what the 2 contestants are going to do. Are there any psychological cues to pick up on?

Game theory is an interesting branch of mathematics with links across a large number of disciplines – from politics to economics to biology and psychology. The most well known example is that of the Prisoner’s Dilemma. (Illustrated below). Two prisoners are taken into custody and held in separate rooms. During interrogation they are told that if they testify to everything (i.e. betray their partner) then they will go free and their partner will get 10 years. However, if they both testify they will both get 5 years, and if they both remain silent then they will both get 6 months in jail.

So, what is the optimum strategy for prisoner A? In this version he should testify – because whichever strategy his partner chooses this gives prisoner A the best possible outcome. Looking at it in reverse, if prisoner B testifies, then prisoner A would have been best testifying (gets 5 years rather than 10). If prisoner B remains silent, then prisoner A would have been best testifying (goes free rather than 6 months).

This brings in an interesting moral dilemma – i.e. even if both prisoners are innocent, each is placed in a situation where it is in his best interest to testify against his partner – thus increasing the likelihood of an innocent man being sent to jail. This situation represents a form of plea bargaining – which is more common in America than Europe.

Part of the dilemma arises because if both men know that the optimum strategy is to testify, then they both end up with lengthy 5-year jail sentences. If only they could trust each other to be altruistic rather than selfish – and both remain silent – then they would get away with only 6 months each. So does mathematics provide an amoral framework? i.e. in this case the mathematically optimum strategy is not “nice,” but selfish.
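The best-response argument above can be checked mechanically (a sketch of my own, using the jail terms quoted in the text):

```python
# Jail years for prisoner A given (A's choice, B's choice); lower is better.
SILENT, TESTIFY = "silent", "testify"
years_for_A = {
    (SILENT,  SILENT):  0.5,  # both stay silent: 6 months each
    (SILENT,  TESTIFY): 10,   # A silent, B testifies: A gets 10 years
    (TESTIFY, SILENT):  0,    # A testifies, B silent: A goes free
    (TESTIFY, TESTIFY): 5,    # both testify: 5 years each
}

# Whatever B does, testifying minimises A's jail time -- a dominant strategy.
best_response = {}
for b_choice in (SILENT, TESTIFY):
    best_response[b_choice] = min((SILENT, TESTIFY),
                                  key=lambda a: years_for_A[(a, b_choice)])
print(best_response)  # {'silent': 'testify', 'testify': 'testify'}
```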

Game theory became quite popular during the Cold War, as the matrix above represented the state of the nuclear stand-off. The threat of Mutually Assured Destruction (MAD) meant that neither the Americans nor the Russians had any incentive to strike, because that would inevitably lead to a retaliatory strike – with catastrophic consequences. The above matrix uses negative infinity to represent the worst possible outcome, whilst both sides not striking leads to a positive payoff. Such a game has a very strong Nash Equilibrium – i.e. there is no incentive to deviate from the non-strike policy. Could the optimal maths strategy here be said to be responsible for saving the world?

Game theory can be extended to evolutionary biology – and is covered in Richard Dawkins’ The Selfish Gene in some detail. Basically, whilst it is an optimum strategy to be selfish in a single round of the Prisoner’s Dilemma, any iterated games (i.e. games repeated a number of times) actually tend towards a co-operative strategy. If someone is nasty to you on round one (i.e. by testifying) then you can punish them the next time. So with the threat of punishment, a mutually co-operative strategy is superior.
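The iterated version can be sketched too (my own illustrative strategies and a 10-round game, not from the book):

```python
# Jail years for (A's choice, B's choice) -> (A's years, B's years); lower is better.
YEARS = {("silent", "silent"): (0.5, 0.5), ("silent", "testify"): (10, 0),
         ("testify", "silent"): (0, 10), ("testify", "testify"): (5, 5)}

def play(strategy_a, strategy_b, rounds=10):
    """Total jail years for each player over repeated rounds."""
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        ya, yb = YEARS[(a, b)]
        total_a, total_b = total_a + ya, total_b + yb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

tit_for_tat = lambda opponent: "silent" if not opponent else opponent[-1]
always_testify = lambda opponent: "testify"

print(play(tit_for_tat, tit_for_tat))        # mutual cooperation: (5.0, 5.0)
print(play(always_testify, always_testify))  # mutual defection: (50, 50)
```

With repetition, two tit-for-tat players end up far better off than two habitual defectors – the threat of punishment next round sustains cooperation.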

You can actually play the iterated Prisoner’s Dilemma game as an applet on the website Game Theory. Alternatively pairs within a class can play against each other.

An interesting extension is this applet, also on Game Theory, which models the evolution of 2 populations – residents and invaders. You can set different responses – and then see what happens to the respective populations. This is a good reflection of interactions in real life – where species can choose to live co-operatively, or to fight for the same resources.

The first stop for anyone interested in more information about Game Theory should be the Maths Illuminated website – which has an entire teacher unit on the subject, complete with different sections, a video and pdf documents. There’s also a great article on Plus Maths – Does it Pay to be Nice? – all about this topic. There are a lot of different games which can be modelled using game theory – and many are listed here. These include the Stag Hunt, Hawk/Dove and the Peace War game. Some of these have direct applicability to population dynamics, and to the geo-politics of war versus peace.

If you enjoyed this post you might also like:

**Graham’s Number – literally big enough to collapse your head into a black hole**

Graham’s Number is a number so big that it would *literally* collapse your head into a black hole were you fully able to comprehend it. And that’s not hyperbole – the informational content of Graham’s Number is so astronomically large that it exceeds the maximum amount of entropy that could be stored in a brain-sized piece of space – i.e. a black hole would form prior to fully processing all the data content. This is a great introduction to notation for *really* big numbers. Numberphile have produced a fantastic video on the topic:

Graham’s Number makes use of Knuth’s up-arrow notation (explanation from Wikipedia):

In the series of hyper-operations we have:

1) Multiplication:

a × b = a + a + … + a (b copies of a)

For example, 3 × 4 = 3 + 3 + 3 + 3 = 12.

2) Exponentiation:

a↑b = a^b = a × a × … × a (b copies of a)

For example, 3↑4 = 3 × 3 × 3 × 3 = 81.

3) Tetration:

a↑↑b = a↑(a↑(…↑a)) – a power tower of b copies of a

For example, 3↑↑4 = 3^(3^(3^3)) = 3^(3^27) – etc.

4) Pentation:

a↑↑↑b = a↑↑(a↑↑(…↑↑a)) (b copies of a)

and so on.

Examples: 2↑↑3 = 2^(2^2) = 16, whereas 3↑↑3 = 3^27 = 7,625,597,484,987.

These operations clearly can lead to some absolutely huge numbers very quickly. Graham’s Number – which was arrived at mathematically as an upper bound for a problem relating to vertices on hypercubes – is (explanation from Wikipedia):

G = g_{64}, where g_{1} = 3↑↑↑↑3 and g_{n} = 3 ↑^{g_{n−1}} 3 – that is, the number of *arrows* in each layer, starting at the top layer, is specified by the value of the next layer below it, and a superscript on an up-arrow indicates how many arrows there are. In other words, *G* is calculated in 64 steps: the first step is to calculate *g*_{1} with four up-arrows between 3s; the second step is to calculate *g*_{2} with *g*_{1} up-arrows between 3s; the third step is to calculate *g*_{3} with *g*_{2} up-arrows between 3s; and so on, until finally calculating *G* = *g*_{64} with *g*_{63} up-arrows between 3s.
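The up-arrow recursion itself is tiny to write down (my own sketch – note that even g₁ = arrow(3, 4, 3) is already far too large to ever compute, so only tiny arguments are used here):

```python
def arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation, n=2 tetration, n=3 pentation..."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 4))  # 3^4 = 81
print(arrow(2, 2, 4))  # 2↑↑4 = 2^(2^(2^2)) = 65536
print(arrow(3, 2, 3))  # 3↑↑3 = 3^27 = 7625597484987
```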

So we have a number so big it can’t be fully processed by the human brain. This raises some interesting questions about maths and knowledge – Graham’s Number is an example of a number that exists but is beyond full human comprehension, and is therefore an example of an upper bound of human knowledge. Will there always be things in the Universe which are beyond full human understanding? Or can mathematics provide a shortcut to knowledge that would otherwise be inaccessible?

If you enjoyed this post you might also like:

How Are Prime Numbers Distributed? Twin Primes Conjecture – a discussion about the amazing world of prime numbers.

Wau: The Most Amazing Number in the World? – a post which looks at the amazing properties of Wau

**What is the sum of the infinite sequence 1, -1, 1, -1, 1…..?**

This is a really interesting puzzle to study – which fits very well when studying geometric series, proof and the history of maths.

The two most intuitive answers are either that it has no sum or that it sums to zero. If you group the pattern into pairs, then each pair (1, −1) sums to 0. However if you first leave the initial 1, then group pairs of (−1, 1), you end up with a sum of 1.

Firstly it’s worth seeing why we shouldn’t just use our formula for the sum of an infinite geometric series:

S∞ = a/(1 − r)

with r as the common ratio of −1. This formula requires that the absolute value of r is less than 1 – otherwise the series will not converge.
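A quick numerical illustration (my own sketch) of why |r| < 1 matters – partial sums converge for r = 0.5 but just oscillate for r = −1:

```python
def partial_sums(a, r, n):
    """First n partial sums of the geometric series a + ar + ar^2 + ..."""
    sums, total, term = [], 0, a
    for _ in range(n):
        total += term
        sums.append(total)
        term *= r
    return sums

print(partial_sums(1, 0.5, 6))  # heads towards a/(1 - r) = 2
print(partial_sums(1, -1, 6))   # [1, 0, 1, 0, 1, 0] -- oscillates, never settles
```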

The series 1, −1, 1, −1, … is called Grandi’s series – after the Italian mathematician Guido Grandi (1671–1742), who studied it around 1703 – and it sparked a few hundred years’ worth of heated mathematical debate as to what the correct summation was.

Using the Cesàro method (explanation pasted from here):

Let *a*_{n} = (−1)^{n+1} for *n* ≥ 1. That is, {*a*_{n}} is the sequence

1, −1, 1, −1, …

Then the sequence of partial sums {*s*_{n}} is

1, 0, 1, 0, …

so whilst the series does not converge, if we calculate the terms of the sequence {(*s*_{1} + … + *s*_{n})/*n*} we get:

1/1, 1/2, 2/3, 2/4, 3/5, 3/6, 4/7, 4/8, …

so that

lim_{n→∞} (*s*_{1} + … + *s*_{n})/*n* = 1/2.
So, using different methods we have shown that this series “should” have a summation of 0 (grouping in pairs), or that it “should” have a sum of 1 (grouping in pairs after the first 1), or that it “should” have no sum as it simply oscillates, or that it “should” have a Cesaro sum of 1/2 – no wonder it caused so much consternation amongst mathematicians!
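The Cesàro averaging is easy to reproduce numerically (my own sketch, using exact fractions to avoid rounding):

```python
from fractions import Fraction

# Grandi's series 1 - 1 + 1 - 1 + ...: partial sums and their running averages.
N = 1000
partial, cesaro, running = [], [], 0
for n in range(1, N + 1):
    running += (-1) ** (n + 1)                # a_n = +1, -1, +1, ...
    partial.append(running)
    cesaro.append(Fraction(sum(partial), n))  # Cesàro mean of the first n partial sums

print(partial[:6])        # [1, 0, 1, 0, 1, 0]
print(float(cesaro[-1]))  # 0.5
```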

This approach can also be extended to the complex series, which is looked at in the blog God Plays Dice.

This is a really great example of how different proofs can sometimes lead to different (and unexpected) results. What does this say about the nature of proof?

**The Mathematics of Crime and Terrorism**

The ever excellent Numberphile have just released a really interesting video looking at what mathematical models are used to predict terrorist attacks and crime. Whereas a Poisson distribution assumes that events that happen are completely independent, it is actually the case that one (say) burglary in a neighbourhood means that another burglary is much more likely to happen shortly after. Therefore we need a new distribution to model this. The one that Hannah Fry talks about in the video is called the Hawkes process – which gets a little difficult. Nevertheless this is a nice video for showing the need to adapt models to represent real life data.
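As a rough illustration (all parameters here are my own choices, not from the video), a self-exciting Hawkes process can be simulated with Ogata's thinning algorithm – each event temporarily raises the intensity, producing the clustering that a constant-rate Poisson model misses:

```python
import math
import random

random.seed(0)

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0):
    """Simulate a Hawkes process by Ogata's thinning method.

    Intensity: lambda(t) = mu + sum over past events t_i of alpha*exp(-beta*(t - t_i)).
    Each event raises the intensity, so events cluster -- unlike a Poisson
    process, whose rate never reacts to what has already happened."""
    events, t = [], 0.0
    while True:
        # The current intensity is an upper bound until the next event (it only decays).
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)  # candidate event time
        if t >= horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:  # accept with probability lambda(t)/lam_bar
            events.append(t)

events = simulate_hawkes()
print(len(events))  # number of (clustered) events on [0, 50]
```

With alpha/beta < 1 each event spawns fewer than one "offspring" event on average, so the process remains stable.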

This was the last question on the May 2016 Calculus option paper for IB HL. It’s worth nearly a quarter of the entire marks – and is well off the syllabus in its difficulty. You could make a case for this being the most difficult IB HL question ever. As such it was a terrible exam question – but would make a very interesting exploration topic. So let’s try and understand it!

**Part (a)**

First I’m going to go through a solution to the question – this was provided by another HL maths teacher, Daniel – who worked through a very nice answer. For the first part of the question we need to try and understand what is actually happening – we have a sum of definite integrals, where the nth term is the integral of sin(t)/t from nπ to (n+1)π. So when n = 0 we have the single integral from 0 to π of sin(t)/t. When n = 1 we have the single integral from π to 2π of sin(t)/t. The summation of the first n terms adds the answers of the first n integrals together.

This is the plot of y = sin(x)/x from 0 to 6π. Using the GDC we can find that the roots of this function are at x = nπ. This gives us the first mark in the question – as when we are integrating from 0 to π the graph is above the x axis and so the integral is positive. When we integrate from π to 2π the graph is below the x axis and so the integral is negative. Since our sum consists of alternating positive and negative terms, we have an alternating series.
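The alternating behaviour can be verified numerically (a simple trapezium-rule sketch of my own – the integrand is given the value 1 at t = 0, which is its limit there):

```python
import math

def u(n, steps=20000):
    """Trapezium-rule estimate of the integral of sin(t)/t over [n*pi, (n+1)*pi]."""
    a, b = n * math.pi, (n + 1) * math.pi
    h = (b - a) / steps
    f = lambda t: math.sin(t) / t if t != 0 else 1.0  # sin(t)/t -> 1 as t -> 0
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, steps)) + f(b) / 2)

terms = [u(n) for n in range(4)]
print([round(x, 4) for x in terms])  # signs alternate and the magnitudes shrink
```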

**Part (b i)**

This is where it starts to get difficult! You might be tempted to try and integrate sin(t)/t – which is what I presume a lot of students will have done. It looks like integration by parts might work on this. However this was a nasty trap laid by the examiners – integrating by parts is a complete waste of time, as this function has no elementary antiderivative. This means that no elementary function or standard basic integration method will integrate it. (We will look later at how it can be integrated – it gives something called the Si(x) function.) Instead this is how Daniel’s method progresses:

Hopefully the first 2 equalities make sense – we replace n with n+1 and then replace t with T + π. dt becomes dT when we differentiate t = T + π. In the second integral we have also replaced the limits (n+1)π and (n+2)π with nπ and (n+1)π, as we are now integrating with respect to T and so need to change the limits as follows:

t = (n+1)π

T + π = (n+1)π

T = nπ. This is now the lower integral value.

The third integral uses the fact that sin(T + π) = −sin(T).

The fourth integral then uses graphical logic. y = −sin(x)/x looks like this:

This is the same as y = sin(x)/x but reflected in the x axis. Therefore the absolute value of the integral of y = −sin(x)/x will be the same as the absolute value of the integral of y = sin(x)/x. The fourth integral has also noted that we can simply replace T with t to produce an equivalent integral. The last integral then notes that the absolute value of the integral of sin(t)/(t+π) will be less than the absolute value of the integral of sin(t)/t, because its denominator is larger. This then gives us the inequality we desire.

Don’t worry if that didn’t make complete sense – I doubt if more than a handful of IB students in the whole world got that in exam conditions. Makes you wonder what the point of that question was, but let’s move on.

**Part (b ii)**

OK, by now most students will have probably given up in despair – and the next part doesn’t get much easier. First we should note that we have been led to show that we have an alternating series where the absolute value of u_{n+1} is less than the absolute value of u_n. Let’s check the requirements of the alternating series test: the terms must alternate in sign, decrease in absolute value, and have a limit of 0.

We already have shown it’s an absolute decreasing sequence, so we just now need to show the limit of the sequence is 0.

OK – here we start by trying to get a lower and upper bound for u_n. We want to show that as n gets large, the limit of u_n = 0. In the second integral we have used the fact that the absolute value of an integral of a function is always less than or equal to the integral of an absolute value of a function. That might not make any sense, so let’s look graphically:

This graph above is y = sinx/x. If we integrate this function then the parts under the x axis will contribute a negative amount.

But this graph is y = absolute (sinx/x). Here we have no parts under the x axis – and so the integral of absolute (sinx/x) will always be greater than or equal to the integral of y = sinx/x.

To get the third integral we note that absolute (sinx) is bounded between 0 and 1 and so the integral of 1/x will always be greater than or equal to the integral of absolute (sinx)/x.

We can next ignore the absolute value because 1/x is always positive for positive x, and so we integrate 1/x to get ln(x). Substituting the limits of the definite integral gives ln((n+1)π) − ln(nπ) = ln((n+1)/n), which approaches ln(1) = 0 as n approaches infinity. Therefore, as this upper bound approaches 0, and it was always greater than or equal to the absolute value of u_n, the limit of the absolute value of u_n must also be 0.

Therefore we have satisfied the requirements for the Alternating Series test and so the series is convergent.

**Part (c)**

Part (c) is at least accessible for level 6 and 7 students as long as you are still sticking with the question. Here we note that we have been led through steps to prove we have an alternating and convergent series. Now we use the fact that the sum to infinity of a convergent alternating series lies between any 2 successive partial sums. Then we can use the GDC to find the first few partial sums:
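The bracketing can be checked numerically (my own trapezium-rule sketch; the full infinite sum is the Dirichlet integral ∫₀^∞ sin(t)/t dt = π/2 ≈ 1.5708, and each partial sum lands on alternate sides of it):

```python
import math

def u(n, steps=20000):
    """Trapezium-rule estimate of the integral of sin(t)/t over [n*pi, (n+1)*pi]."""
    a, b = n * math.pi, (n + 1) * math.pi
    h = (b - a) / steps
    f = lambda t: math.sin(t) / t if t != 0 else 1.0  # sin(t)/t -> 1 as t -> 0
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, steps)) + f(b) / 2)

total, partial = 0.0, []
for n in range(6):
    total += u(n)
    partial.append(total)

print([round(s, 4) for s in partial])  # successive partial sums straddle the limit
print(round(math.pi / 2, 4))           # 1.5708
```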

And there we are! 14 marks in the bag. Makes you wonder who the IB write their exams for – this was so far beyond sixth form level as to be ridiculous. More about the Si(x) function in the next post.