IB Maths and GCSE Maths Resources from British International School Phuket. Theory of Knowledge (ToK). Maths explorations and investigations.  Real life maths. Maths careers. Maths videos. Maths puzzles and Maths lesson resources.


British International School Phuket

Welcome to the British International School Phuket’s maths website.  I currently work at BISP, and I run this site as the school’s maths resources website for both our own students and students around the world.

We are a British international school located on the tropical island of Phuket in Southern Thailand. We offer a number of scholarships each year, both for sports stars competing at national and international level and for academic excellence. You can find out more about our school here.

Thailand Maths Challenge 

[Photo: finalists’ day]

British International School, Phuket (BISP) and Rangsit University’s College of Information and Communication Technology are proud to announce the winner of the 2015-16 Thailand Maths Challenge. Yu Qing Wu from Bangkok Patana School turned in an incredibly impressive performance to claim first place. Her achievement is all the more impressive given that she is still only in Year 11 at school and was competing with students up to two years older than herself.

The finals round was held at Rangsit University’s Sky Lounge on February 19th.  Students had to complete three rounds of problem-solving questions with a focus on a branch of mathematics called number theory – which requires rigorous proof and mathematical logic.  Assistant Professor Wongsakorn Charoenpanitseri, who was one of the coordinators for the World Mathematics Olympiad held last year in Thailand, helped with the judging process.

Second place was also claimed by a Bangkok Patana student, Benjada Karprasertsri, whilst third place went to Santkorn Gorsagun from Mahidol University Demonstration School. There were also good results from Anglo Singapore and Trinity International School.   The top three students were all offered full and partial scholarships to study at Rangsit University at the College of ICT.

Over 200 of Thailand’s top schools were invited to participate: international schools, bilingual schools, Thai private schools, and state schools in order to find the best young mathematicians in the country.  Well done to all who took part!

 

[Image: measuring the 2D (index) and 4D (ring) finger lengths]

Finger Ratio Predicts Maths Ability?

Some of the studies on 2D:4D finger ratios (as measured in the picture above) are interesting when considering what factors might affect mathematical ability.  A 2007 study by Mark Brosnan from the University of Bath found that:

“Boys with the longest ring fingers relative to their index fingers tend to excel in math. The boys with the lowest ratios also were the ones whose abilities were most skewed in the direction of math rather than literacy.

With the girls, there was no correlation between finger ratio and numeracy, but those with higher ratios–presumably indicating low testosterone levels–had better scores on verbal abilities. The link, according to the researchers, is that testosterone levels in the womb influence both finger length and brain development.

In men, the ring (fourth) finger is usually longer than the index (second); their so-called 2D:4D ratio is lower than 1. In females, the two fingers are more likely to be the same length. Because of this sex difference, some scientists believe that a low ratio could be a marker for higher prenatal testosterone levels, although it’s not clear how the hormone might influence finger development.”

In the study, Brosnan photocopied the hands of 74 boys and girls aged 6 and 7.  He worked out each child’s 2D:4D finger ratio by dividing the length of the index finger (2D) by the length of the ring finger (4D), and then compared the finger ratios with standardised UK maths and English tests.  The differences found were small, but significant.

[Graph: distributions of 2D:4D ratios for men (blue), women (green) and the whole cohort (red)]

Another study, of 136 men and 137 women, looked at the link between finger ratio and aggression.  The results are plotted in the graph above, which shows that the data follow an approximately normal distribution.  The men are represented by the blue line, the women by the green line and the overall cohort in red.  You can see that the male distribution is shifted to the left, as the men have a lower mean ratio.  (Males: mean 0.947, standard deviation 0.029; females: mean 0.965, standard deviation 0.026.)

Roughly 95% of each population lies within two standard deviations of the mean: between 0.889 and 1.005 for males, and between 0.913 and 1.017 for females.  (These are ranges for individual ratios rather than confidence intervals for the mean.)
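As a quick check of those ranges, here is a minimal Python sketch using the means and standard deviations quoted above and the two-standard-deviation rule for a normal distribution:

```python
# Means and standard deviations of the 2D:4D ratio quoted in the study above
groups = {"males": (0.947, 0.029), "females": (0.965, 0.026)}

for name, (mean, sd) in groups.items():
    # roughly 95% of a normally distributed population lies within 2 sd of the mean
    low, high = mean - 2 * sd, mean + 2 * sd
    print(f"{name}: about 95% of ratios lie between {low:.3f} and {high:.3f}")
    # males: 0.889 to 1.005, females: 0.913 to 1.017
```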

The correlation between digit ratio and everything from personality, sexuality and sporting ability to management performance has been studied.  If a low 2D:4D ratio really is due to testosterone exposure in the womb (which is not confirmed), that raises the question of why testosterone exposure should affect mathematical ability.  And if it is not connected to testosterone, then what is responsible for the correlation between digit ratios and mathematical talent?

I think this would make a really interesting Internal Assessment investigation at either Studies or Standard Level.  It also works well as a class investigation into correlation and scatter diagrams at KS3 and IGCSE.   Does the relationship still hold when you look at algebraic skills rather than numeracy?  Or is algebraic talent distinct from numeracy talent?

A detailed academic discussion of the scientific literature on this topic is available here.

If you enjoyed this post you might also like:

Simulations – Traffic Jams and Asteroid Impacts

NASA, Aliens and Binary Codes from the Stars


Modelling Radioactive decay

We can model radioactive decay of atoms using the following equation:

N(t) = N_0 e^{-\lambda t}

Where:

N_0 is the initial quantity of the element,

λ is the radioactive decay constant,

t is time,

N(t) is the quantity of the element remaining after time t.

So, for Carbon-14 which has a half life of 5730 years (this means that after 5730 years exactly half of the initial amount of Carbon-14 atoms will have decayed) we can calculate the decay constant λ.  

After 5730 years, N(5730) will be exactly half of N0, therefore we can write the following:

N(5730) = 0.5N_0 = N_0 e^{-5730\lambda}

therefore:

0.5 = e^{-5730\lambda}

and if we take the natural log of both sides and rearrange we get:

\lambda = \frac{\ln(1/2)}{-5730}

\lambda \approx 0.000121

We can now use this to solve problems involving Carbon-14 (which is used in Carbon-dating techniques to find out how old things are).

e.g.  You find an old parchment, and after measuring its Carbon-14 content you find that it is just 30% of what a new piece of parchment would contain.  How old is the parchment?

We have

N(t) = N_0 e^{-0.000121t}

\frac{N(t)}{N_0} = e^{-0.000121t}

0.30 = e^{-0.000121t}

t = \frac{\ln(0.30)}{-0.000121}

t ≈ 9950 years old.
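Here is a minimal Python sketch of the same calculation (the function name is just for illustration):

```python
import math

HALF_LIFE = 5730  # half-life of Carbon-14 in years

# decay constant: lambda = ln(1/2) / (-half-life)
decay_constant = math.log(0.5) / -HALF_LIFE   # approx 0.000121

def age_from_fraction(fraction_remaining):
    """Solve fraction = e^(-lambda * t) for the age t in years."""
    return math.log(fraction_remaining) / -decay_constant

print(round(decay_constant, 6))         # 0.000121
print(round(age_from_fraction(0.30)))   # about 9950 years (9953 with the unrounded constant)
```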


Probability density functions

We can also do some interesting maths by rearranging:

N(t) = N_0 e^{-\lambda t}

\frac{N(t)}{N_0} = e^{-\lambda t}

and then plotting N(t)/N_0 against time.

[Graph: N(t)/N_0 plotted against time]

N(t)/N_0 will have a range between 0 and 1, since when t = 0 we have N(0) = N_0, which gives N(0)/N_0 = 1.

We can then manipulate this into the form of a probability density function – by finding the constant a which makes the area underneath the curve equal to 1.

\int_0^{\infty} a e^{-\lambda t}\,dt = 1

solving this gives a = λ.  Therefore the following integral:

\int_{t_1}^{t_2} \lambda e^{-\lambda t}\,dt

will give the fraction of atoms which will have decayed between times t1 and t2.

We could use this integral to work out the half life of Carbon-14 as follows:

\int_0^{t} \lambda e^{-\lambda x}\,dx = 0.5

Solving this gives t = 5728.5, which is what we’d expect (given our earlier rounding of the decay constant).

We can also now work out the expected (mean) time that an atom will exist before it decays.  To do this we use the following equation for finding E(x) of a probability density function:

E(x) = \int x\, f(x)\,dx

and if we substitute in our equation we get:

E(t) = \int_0^{\infty} t\, \lambda e^{-\lambda t}\,dt

Now, we can integrate this by parts:

E(t) = \left[-t e^{-\lambda t}\right]_0^{\infty} + \int_0^{\infty} e^{-\lambda t}\,dt = 0 + \left[-\frac{1}{\lambda}e^{-\lambda t}\right]_0^{\infty} = \frac{1}{\lambda}

So the expected (mean) life of an atom is given by 1/λ.  In the case of Carbon-14, with a decay constant λ ≈ 0.000121, the expected life of an atom is:

E(t) = 1/0.000121

E(t) ≈ 8264 years.

Now that may sound a little strange – after all, the half life is 5730 years, which means that half of all atoms will have decayed after 5730 years.  So why is the mean life so much higher?  It’s because of the long right tail in the distribution – some atoms have very large lifespans, and these skew the mean to the right.
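If you would rather see this numerically than via the integral, here is a minimal Python simulation – individual atom lifetimes are drawn from an exponential distribution with the decay constant calculated above:

```python
import numpy as np

decay_constant = 0.000121                 # Carbon-14 decay constant from above
rng = np.random.default_rng(seed=1)

# simulate the lifetimes of a million atoms
lifetimes = rng.exponential(scale=1 / decay_constant, size=1_000_000)

print(round(np.mean(lifetimes)))    # close to 1/lambda  (about 8264 years, the expected life)
print(round(np.median(lifetimes)))  # close to ln(2)/lambda  (about 5728 years, the half life)
```

The long right tail pulls the mean well above the median (the half life), which is exactly the effect described above.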


Amanda Knox and Bad Maths in Courts

This post is inspired by the recent BBC News article, “Amanda Knox and Bad Maths in Courts.”   The article highlights the importance of good mathematical understanding when handling probabilities – and how mistakes by judges and juries can sometimes lead to miscarriages of justice.

A scenario to give to students:

A murder scene is found with two types of blood – that of the victim and that of the murderer.  As luck would have it, the unidentified blood has an incredibly rare blood disorder, found in only 1 in every million men.  The capital and surrounding areas have a population of 20 million – and the police are sure the murderer is from the capital.   The police have already started cataloguing all citizens’ blood types for their new super crime-database.  They already have nearly 1 million male samples in there – and bingo – one man, Mr XY, is a match.  He is promptly marched off to trial; there is no other evidence, but the jury are told that the odds are 1 in a million that he is innocent.  He is duly convicted.   The question is: how likely is it that he did not commit this crime?

Answer:

We can be around 90% confident that he did not commit this crime.  Assuming that there are approximately 10 million men in the capital, then if everyone were catalogued on the database we would expect on average 10 positive matches.  Given that there is no other evidence, there is therefore only about a 1 in 10 chance that he is guilty.  Even though P(Fail Test | Innocent) = 1/1,000,000, P(Innocent | Fail Test) ≈ 9/10.
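A quick Python sketch of this reasoning, using the numbers from the scenario:

```python
men_in_capital = 10_000_000      # approximate number of men in the capital
p_disorder = 1 / 1_000_000       # chance an unrelated man has the rare blood disorder

# expected number of innocent men in the capital who would also match the blood sample
expected_matches = men_in_capital * p_disorder   # = 10

# with no other evidence, each of the ~10 matching men is equally likely to be the murderer
p_guilty_given_match = 1 / expected_matches
print(p_guilty_given_match)      # 0.1 -- so roughly a 90% chance that Mr XY is innocent
```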

Amanda Knox

Eighteen months ago, Amanda Knox and Raffaele Sollecito, who were previously convicted of the murder of British exchange student Meredith Kercher, were acquitted.  The judge at the time ruled out re-testing a tiny DNA sample found at the scene, stating that, “The sum of the two results, both unreliable… cannot give a reliable result.”

This logic, however, whilst intuitive, is not mathematically correct.   As explained by mathematician Coralie Colmez in the BBC News article, by repeating relatively unreliable tests we can make the overall result more reliable – the larger the pooled sample size, the more confident we can be in the result.


Sally Clark

One of the most (in)famous examples of bad maths in the court room is that of Sally Clark – who was convicted of the murder of her two sons in 1999.  It has been described as, “one of the great miscarriages of justice in modern British legal history.”  Both of Sally Clark’s children died from cot-death whilst still babies.  Soon afterwards she was arrested for murder.  The case was based on a seemingly incontrovertible statistic – that the chance of 2 children from the same family dying from cot-death was 1 in 73 million.  Experts testified to this, the jury were suitably convinced and she was convicted.

The crux of the prosecutor’s case was that it was so statistically unlikely that this had happened by chance, that she must have killed her children.  However, this was bad maths – which led to an innocent woman being jailed for four years before her eventual acquittal.

Independent Events

The 1 in 73 million figure was arrived at by simply taking the probability of a single cot-death (about 1 in 8,500) and squaring it – because it had happened twice.  However, this method only works if both events are independent – and in this case they clearly weren’t.  Any biological or social factor which contributes to one child’s death from cot-death also puts a sibling at elevated risk.
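A short Python sketch makes the independence assumption explicit – the factor of 10 in the second calculation is purely illustrative, not a figure from the case:

```python
p_single = 1 / 8500   # quoted chance of a single cot-death

# the prosecution's calculation: square the single-event probability,
# which is only valid if the two deaths are independent events
p_double_independent = p_single ** 2
print(f"1 in {1 / p_double_independent:,.0f}")   # roughly 1 in 72 million

# if shared genetic or environmental factors made a second cot-death in the
# same family, say, 10 times more likely (an illustrative figure only),
# the headline number shrinks dramatically
p_double_dependent = p_single * (10 * p_single)
print(f"1 in {1 / p_double_dependent:,.0f}")     # roughly 1 in 7 million
```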

Prosecutor’s Fallacy

Additionally, this figure was presented in a way known as the “prosecutor’s fallacy” – the 1 in 73 million figure (even if correct) didn’t represent the probability of Sally Clark’s innocence, because it should have been compared against the probability of guilt for a double homicide.   In other words, the probability of a false positive is not the same as the probability of innocence.  In mathematical language, P(Fail Test | Innocent) is not equal to P(Innocent | Fail Test).

Subsequent analysis of the Sally Clark case by a mathematics professor concluded that, far from there being a 1 in 73 million chance of innocence, it was actually about 4–10 times more likely that the deaths were due to natural causes than to murder.  Quite a big turnaround – and evidence of why understanding statistics is so important in the courts.

This topic has also been highlighted recently by the excellent ToK website, Lancaster School ToK.

If you enjoyed this topic you might also like:

Benford’s Law – Using Maths to Catch Fraudsters

The Mathematics of Cons – Pyramid Selling

Does it Pay to be Nice?  Game Theory and Evolution

Golden Balls, hosted by Jasper Carrott, is based on a version of the Prisoner’s Dilemma. For added interest, try and predict what the two contestants are going to do. Are there any psychological cues to pick up on?

Game theory is an interesting branch of mathematics with links across a large number of disciplines – from politics to economics to biology and psychology.  The most well known example is that of the Prisoner’s Dilemma.  (Illustrated below).  Two prisoners are taken into custody and held in separate rooms.  During interrogation they are told that if they testify to everything (ie betray their partner) then they will go free and their partner will get 10 years.  However, if they both testify they will both get 5 years, and if they both remain silent then they will both get 6 months in jail.

                            B stays silent                 B testifies
A stays silent              6 months each                  A: 10 years, B: goes free
A testifies                 A: goes free, B: 10 years      5 years each

So, what is the optimum strategy for prisoner A?  In this version he should testify – because whichever strategy his partner chooses this gives prisoner A the best possible outcome.  Looking at it in reverse, if prisoner B testifies, then prisoner A would have been best testifying (gets 5 years rather than 10).   If prisoner B remains silent, then prisoner A would have been best testifying (goes free rather than 6 months).

This brings in an interesting moral dilemma – even if the prisoner and his partner are innocent, each is placed in a situation where it is in his best interest to testify against the other – thus increasing the likelihood of an innocent man being sent to jail.  This situation represents a form of plea bargaining – which is more common in America than in Europe.

Part of the dilemma arises because if both men know that the optimum strategy is to testify, then they both end up with lengthy 5-year jail sentences.  If only they could trust each other to be altruistic rather than selfish and both remain silent, they would get away with only 6 months each.   So does mathematics provide an amoral framework?  In this case the mathematically optimum strategy is not “nice,” but selfish.
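Here is a quick Python sketch that checks the dominant strategy, using the jail terms above (0.5 stands for the 6-month sentence; lower is better):

```python
# jail time in years for prisoner A, indexed by (A's choice, B's choice)
payoff_A = {
    ("testify", "testify"): 5,
    ("testify", "silent"): 0,
    ("silent", "testify"): 10,
    ("silent", "silent"): 0.5,
}

# whatever B does, compare A's two options: testifying is never worse
for b_choice in ("testify", "silent"):
    best = min(("testify", "silent"), key=lambda a: payoff_A[(a, b_choice)])
    print(f"If B chooses to {b_choice}, A's best response is to {best}")
```

Both lines print “testify”, which is why (testify, testify) is the equilibrium even though (silent, silent) would leave both prisoners better off.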

MAD

Game theory became quite popular during the Cold War, as the matrix above represented the state of the nuclear stand-off.  The threat of Mutually Assured Destruction (MAD) meant that neither the Americans nor the Russians had any incentive to strike first, because that would inevitably lead to a retaliatory strike – with catastrophic consequences.  The above matrix uses negative infinity to represent the worst possible outcome, whilst both sides not striking leads to a positive payoff.  Such a game has a very strong Nash equilibrium – i.e. there is no incentive to deviate from the non-strike policy.  Could the optimal maths strategy here be said to be responsible for saving the world?


Game theory can be extended to evolutionary biology – and is covered in Richard Dawkins’ The Selfish Gene in some detail.  Basically, whilst it is an optimum strategy to be selfish in a single round of the prisoner’s dilemma, iterated games (i.e. games repeated a number of times) actually tend towards a co-operative strategy.  If someone is nasty to you in round one (i.e. by testifying), you can punish them the next time – so with the threat of punishment, a mutually co-operative strategy becomes superior.

You can actually play the iterated Prisoner’s Dilemma game as an applet on the website Game Theory. Alternatively pairs within a class can play against each other.
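For a class with some programming, here is a rough Python sketch of the iterated game – the payoffs are the jail terms from the matrix above, and the strategies are just illustrative:

```python
def always_testify(my_history, their_history):
    return "testify"

def tit_for_tat(my_history, their_history):
    # stay silent first, then simply copy the opponent's previous move
    return "silent" if not their_history else their_history[-1]

# (years for player 1, years for player 2) -- lower is better
payoff = {("testify", "testify"): (5, 5), ("testify", "silent"): (0, 10),
          ("silent", "testify"): (10, 0), ("silent", "silent"): (0.5, 0.5)}

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b, years_a, years_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = payoff[(a, b)]
        years_a, years_b = years_a + pa, years_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))        # (10.0, 10.0): mutual co-operation
print(play(always_testify, always_testify))  # (100, 100): mutual defection
print(play(tit_for_tat, always_testify))     # (105, 95): exploited once, then retaliates
```

Two tit-for-tat players end up far better off than two habitual defectors, which is exactly the point about iterated games made above.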

An interesting extension is this applet, also on Game Theory, which models the evolution of two populations – residents and invaders.  You can set different responses – and then see what happens to the respective populations.  This is a good reflection of interactions in real life – where species can choose to live co-operatively, or to fight for the same resources.

The first stop for anyone interested in more information about game theory should be the Maths Illuminated website – which has an entire teacher unit on the subject, complete with different sections, a video and pdf documents.  There’s also a great article on Plus Maths – Does it Pay to be Nice? – all about this topic.  There are a lot of different games which can be modelled using game theory – and many are listed here.  These include the Stag Hunt, Hawk/Dove and the Peace War game.  Some of these have direct applicability to population dynamics, and to the geo-politics of war versus peace.

If you enjoyed this post you might also like:

Simulations – Traffic Jams and Asteroid Impacts

Langton’s Ant – Order out of Chaos


Graham’s Number – literally big enough to collapse your head into a black hole

Graham’s Number is a number so big that it would literally collapse your head into a black hole were you fully able to comprehend it. And that’s not hyperbole – the informational content of Graham’s Number is so astronomically large that it exceeds the maximum amount of entropy that could be stored in a brain-sized piece of space – i.e. a black hole would form before you could fully process all the data content. This is a great introduction to notation for really big numbers. Numberphile have produced a fantastic video on the topic:

Graham’s Number makes use of Knuth’s up-arrow notation (explanation from Wikipedia):

In the series of hyper-operations we have

1) Multiplication:

   \begin{matrix}    a\times b & = & \underbrace{a+a+\dots+a} \\    & & b\mbox{ copies of }a   \end{matrix}

For example,

   \begin{matrix}   4\times 3 & = & \underbrace{4+4+4} & = & 12\\    & & 3\mbox{ copies of }4   \end{matrix}

2) Exponentiation:

   \begin{matrix}    a\uparrow b= a^b = & \underbrace{a\times a\times\dots\times a}\\    & b\mbox{ copies of }a   \end{matrix}

For example,

   \begin{matrix}    4\uparrow 3= 4^3 = & \underbrace{4\times 4\times 4} & = & 64\\    & 3\mbox{ copies of }4   \end{matrix}

3) Tetration:

   \begin{matrix}    a\uparrow\uparrow b & = {\ ^{b}a}  = & \underbrace{a^{a^{{}^{.\,^{.\,^{.\,^a}}}}}} &     = & \underbrace{a\uparrow (a\uparrow(\dots\uparrow a))}  \\       & & b\mbox{ copies of }a     & & b\mbox{ copies of }a   \end{matrix}

For example,

   \begin{matrix}    4\uparrow\uparrow 3 & = {\ ^{3}4}  = & \underbrace{4^{4^4}} &     = & \underbrace{4\uparrow (4\uparrow 4)} & = & 4^{256} & \approx & 1.34078079\times 10^{154}& \\       & & 3\mbox{ copies of }4     & & 3\mbox{ copies of }4   \end{matrix}
3\uparrow\uparrow 2=3^3=27
3\uparrow\uparrow 3=3^{3^3}=3^{27}=7625597484987
3\uparrow\uparrow 4=3^{3^{3^3}}=3^{3^{27}}=3^{7625597484987}
3\uparrow\uparrow 5=3^{3^{3^{3^3}}}=3^{3^{3^{27}}}=3^{3^{7625597484987}}
etc.

4) Pentation:

   \begin{matrix}    a\uparrow\uparrow\uparrow b= &     \underbrace{a_{}\uparrow\uparrow (a\uparrow\uparrow(\dots\uparrow\uparrow a))}\\     & b\mbox{ copies of }a   \end{matrix}

and so on.

Examples:

3\uparrow\uparrow\uparrow2 = 3\uparrow\uparrow3 = 3^{3^3} = 3^{27}=7,625,597,484,987
   \begin{matrix}     3\uparrow\uparrow\uparrow3 = 3\uparrow\uparrow3\uparrow\uparrow3 = 3\uparrow\uparrow(3\uparrow3\uparrow3) = &     \underbrace{3_{}\uparrow 3\uparrow\dots\uparrow 3} \\    & 3\uparrow3\uparrow3\mbox{ copies of }3   \end{matrix}   \begin{matrix}    = & \underbrace{3_{}\uparrow 3\uparrow\dots\uparrow 3} \\    & \mbox{7,625,597,484,987 copies of 3}   \end{matrix}=\underbrace{3^{3^{3^{3^{\cdot^{\cdot^{\cdot^{\cdot^{3}}}}}}}}}_{7,625,597,484,987}

This can clearly lead to some absolutely huge numbers very quickly. Graham’s Number – which arose mathematically as an upper bound for a problem relating to the vertices of hypercubes – is (explanation from Wikipedia):

[Image: Graham’s number G written as a tower of 64 layers of up-arrows]

where the number of arrows in each layer, starting at the top layer, is specified by the value of the next layer below it; that is,

G = g_{64},\text{ where }g_1=3\uparrow\uparrow\uparrow\uparrow 3,\  g_n = 3\uparrow^{g_{n-1}}3,

and where a superscript on an up-arrow indicates how many arrows there are. In other words, G is calculated in 64 steps: the first step is to calculate g1 with four up-arrows between 3s; the second step is to calculate g2 with g1 up-arrows between 3s; the third step is to calculate g3 with g2 up-arrows between 3s; and so on, until finally calculating G = g64 with g63 up-arrows between 3s.
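To get a feel for how fast up-arrow numbers grow, here is a short Python sketch of the recursion – only tiny inputs are computable, and Graham’s Number itself is hopelessly out of reach:

```python
def up(a, b, arrows=1):
    """Knuth's up-arrow notation a ^(arrows) b, defined recursively:
    one arrow is exponentiation, and each extra arrow iterates the previous operation."""
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, arrows), arrows - 1)

print(up(4, 3))      # 4^3 = 64
print(up(3, 2, 2))   # 3 double-arrow 2 = 3^3 = 27
print(up(3, 3, 2))   # 3 double-arrow 3 = 3^27 = 7625597484987
print(up(3, 2, 3))   # 3 triple-arrow 2 = 3 double-arrow 3 = 7625597484987
# up(3, 3, 3), i.e. 3 triple-arrow 3, is already far too large to evaluate,
# and Graham's number only starts at g1 = 3 quadruple-arrow 3.
```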

So we have a number so big it can’t be fully processed by the human brain.  This raises some interesting questions about maths and knowledge – Graham’s Number is an example of a number that exists but is beyond full human comprehension, and so it gives an example of an upper bound on human knowledge.  Will there always be things in the Universe which are beyond full human understanding?  Or can mathematics provide a shortcut to knowledge that would otherwise be inaccessible?

If you enjoyed this post you might also like:

How Are Prime Numbers Distributed? Twin Primes Conjecture – a discussion about the amazing world of prime numbers.

Wau: The Most Amazing Number in the World? – a post which looks at the amazing properties of Wau


What is the sum of the infinite sequence 1, -1, 1, -1, 1…..?

This is a really interesting puzzle to study – which fits very well when studying geometric series, proof and the history of maths.

The two most intuitive answers are either that it has no sum or that it sums to zero.  If you group the pattern into pairs, then each pair (1 − 1) sums to 0.  However, if you first set aside the leading 1 and then group the remaining terms into pairs (−1 + 1), you end up with a sum of 1.

Firstly it’s worth seeing why we shouldn’t just use our formula for the sum of an infinite geometric series, S∞ = a/(1 − r), with r as the common ratio of −1.  This formula requires that the absolute value of r is less than 1 – otherwise the series will not converge.

The series 1, −1, 1, −1, … is called Grandi’s series – after the 17th century Italian mathematician Guido Grandi – and it sparked a few hundred years’ worth of heated mathematical debate as to what the correct summation was.

Cesàro summation

Using the Cesàro method (explanation pasted from here):

Let a_n = (−1)^{n+1} for n ≥ 1. That is, {a_n} is the sequence

1, -1, 1, -1, \ldots.\,

Then the sequence of partial sums {sn} is

1, 0, 1, 0, \ldots,\,

so whilst the series does not converge, if we calculate the terms of the sequence {(s_1 + … + s_n)/n} we get:

\frac{1}{1}, \,\frac{1}{2}, \,\frac{2}{3}, \,\frac{2}{4}, \,\frac{3}{5}, \,\frac{3}{6}, \,\frac{4}{7}, \,\frac{4}{8}, \,\ldots,

so that

\lim_{n\to\infty} \frac{s_1 + \cdots + s_n}{n} = 1/2.

So, using different methods we have shown that this series “should” have a sum of 0 (grouping in pairs), or that it “should” have a sum of 1 (grouping in pairs after the first 1), or that it “should” have no sum as it simply oscillates, or that it “should” have a Cesàro sum of 1/2 – no wonder it caused so much consternation amongst mathematicians!
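Here is a quick Python sketch of the Cesàro calculation – averaging the partial sums of Grandi’s series:

```python
from itertools import accumulate

N = 10_000
grandi = [(-1) ** n for n in range(N)]        # 1, -1, 1, -1, ...
partial_sums = list(accumulate(grandi))       # 1, 0, 1, 0, ...

# Cesaro mean: the average of the first n partial sums
for n in (1, 2, 3, 4, 10, 100, 10_000):
    print(n, sum(partial_sums[:n]) / n)       # 1.0, 0.5, 0.667, 0.5, ... tending to 0.5
```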

This approach can be extended to the complex series, 1 + i + i^2 + i^3 + i^4 + i^5 + \cdots which is looked at in the blog  God Plays Dice

This is a really great example of how different proofs can sometimes lead to different (and unexpected) results. What does this say about the nature of proof?

The Mathematics of Crime and Terrorism

The ever excellent Numberphile have just released a really interesting video looking at the mathematical models used to predict terrorist attacks and crime.  Whereas a Poisson distribution assumes that events are completely independent, in reality one (say) burglary in a neighbourhood makes another burglary shortly afterwards much more likely.  Therefore we need a different distribution to model this.  The one that Hannah Fry talks about in the video is the Hawkes process – which gets a little difficult.  Nevertheless this is a nice video for showing the need to adapt models to represent real life data.
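For anyone who wants to experiment, here is a minimal Python sketch of a self-exciting (Hawkes-type) process, simulated with Ogata’s thinning method – the parameter values are purely illustrative:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate event times of a Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)),
    so each event temporarily raises the chance of further events.
    Requires alpha < beta for the process to stay stable."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # intensity just after time t is an upper bound, since it only decays until the next event
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:    # accept with probability lambda(t)/lam_bar
            events.append(t)

burglaries = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100)
print(len(burglaries))   # the event times come in clusters, unlike a plain Poisson process
```

In a plain Poisson model the events would be scattered independently; here each event boosts the short-term rate, producing exactly the clustering of burglaries described above.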

The Wason Selection Task – a logical puzzle

The Wason Selection Task is a logical problem designed to show how bad we are at making logical decisions.  Wason first used it in 1968 – and found that only around 10% of people gave the correct answer.  Indeed, around 65% of people make the same error.  Here is the task:

The participants were given the following instructions:

Here is a rule: “every card that has a D on one side has a 3 on the other.” Your task is to select all those cards, but only those cards, which you would have to turn over in order to discover whether or not the rule has been violated.

[Image: four cards showing D, K, 3 and 7]

Give yourself a couple of minutes to work out what you think the answer is before reading on.

The correct answer is to pick the D card and the 7 card.

This result is normally quite unexpected – but it highlights one of the logical fallacies that we often fall into:

A implies B does not mean that B implies A.

All cats have 4 legs (being a cat = A, having 4 legs = B: A implies B)
All 4-legged animals are cats (B implies A)

We can see that here we would make a logical error if we concluded that all 4-legged animals were cats.

In the logic puzzle we need to turn over only 2 cards, D and 7.  This is surprising because most people will also say that you need to turn over the card with a 3.  First we need to be clear about what we are trying to do: we want to find evidence that the rule we are given is false.

If we turn over the D and find a number other than 3, we have evidence that the rule is false – therefore we need to turn over D.

If we turn over the 7 and find a D on the other side, we have evidence that the rule is false – therefore we need to turn over the 7.

But what about the 3?  If we turn over the 3 and find a D then we have no evidence that the rule is false (which is what we are looking for).  If we turn over the 3 and find another letter then this also gives us no evidence that the rule is false.  After all our rule says that all Ds have 3s on the other side, but it doesn’t say that all 3s have Ds on the other side.
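Here is a small Python sketch that brute-forces the puzzle, assuming (as in the standard version of the task) that every card has a letter on one side and a number on the other:

```python
visible = ["D", "K", "3", "7"]
letters, numbers = ["D", "K"], ["3", "7"]

def violates(letter, number):
    # the rule "every card with a D has a 3 on the other side" is broken only by
    # a card with D on one side and something other than 3 on the other
    return letter == "D" and number != "3"

must_turn = []
for card in visible:
    # the hidden face is a number if we can see a letter, and vice versa
    hidden_options = numbers if card in letters else letters
    # the card is worth turning only if some hidden face could break the rule
    if any(violates(card, h) if card in letters else violates(h, card) for h in hidden_options):
        must_turn.append(card)

print(must_turn)   # ['D', '7']
```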


Are mathematicians better at this puzzle than historians?

Given the importance of logical thought in mathematics, people have done studies to see if undergraduate students in maths perform better than humanities students on this task.  Here are the results:

[Table: percentage of each group choosing the correct cards (D and 7)]

You can see that there is a significant difference between the groups.  Maths students correctly guessed the answer D7 29% of the time, but only 8% of history students did.  The maths university lecturers performed best – getting the answer right 43% of the time.

Making different mistakes

[Table: proportions of the different incorrect selections made by each group]

You can also analyse the mistakes that students made – by looking only at the proportions of incorrect selections.  Here again there are significant differences, which show that the groups are thinking about the problem in different ways.  DK7 was chosen by around 1/5 of both maths students and maths lecturers, but by hardly any history students.

You can read about these results in much more depth in the following research paper Mathematicians and the Selection Task – where they also use Chi Squared testing for significance levels.

 

 

A longer look at the Si(x) function

sin(x)/x can’t be integrated to give an elementary function – instead we define:

\text{Si}(x) = \int_0^x \frac{\sin t}{t}\,dt

Where Si(x) is a special function.  This may sound strange – but we have already come across a similar case with the integral of 1/x.  There we define the integral of 1/x as ln(x): ln(x) is a function with its own graph, and we can use it to work out definite integrals of 1/x.  For example, the integral of 1/x from 1 to 5 is ln(5) – ln(1) = ln(5).

The graph of Si(x) looks like this:

[Graph: Si(x)]

Or, on a larger scale:

[Graph: Si(x) over a larger domain]

You can see that it has rotational symmetry about the origin (Si(x) is an odd function), that it oscillates, and that as x gets large it approaches a limit.  In fact this limit is π/2.

Because Si(0) = 0, you can write integrals of sin(x)/x starting at 0 as:

\int_0^a \frac{\sin t}{t}\,dt = \text{Si}(a) - \text{Si}(0) = \text{Si}(a)

How to integrate sinx/x ?

It’s all very well to define a new function – and say that this is the integral of sinx/x – but how was this function generated in the first place?

Well, one way to integrate difficult functions is to use Taylor and Maclaurin expansions.  For example the Maclaurin expansion of sinx/x for values near x=0 is:

\frac{\sin x}{x} = 1 - \frac{x^2}{6} + \frac{x^4}{120} + O(x^6)

This means that in the domain close to x = 0, the function sin(x)/x behaves in a similar way to the polynomial above.  The O(x^6) at the end just means that all the remaining terms in the expansion are of order x^6 or higher.

Graph of sinx/x


Graph of 1 – x^2/6 + x^4/120


In the region close to x = 0 these functions behave in a very similar manner (this would be easier to see with similar scales, so let’s look on a GDC).  For the region 0 ≤ x ≤ 2 the two graphs are virtually indistinguishable.

Therefore if we want to integrate sinx/x for values close to 0 we can just integrate our new function 1 – x^2/6 + x^4/120 and get a good approximation.

Let’s see how accurate this is.  We can use Wolfram Alpha to tell us that:

\int_0^{\pi/2} \frac{\sin x}{x}\,dx \approx 1.37076

and let’s use Wolfram to work out the integral of the polynomial as well:

\int_0^{\pi/2} \left(1 - \frac{x^2}{6} + \frac{x^4}{120}\right)dx \approx 1.37141

Our approximation is accurate to 3 dp, 1.371 in both cases.  If we wanted greater accuracy we would simply use more terms in the Maclaurin expansion.
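We can check these values with a short Python sketch, assuming (as the 1.371 figures suggest) that the integrals above run from 0 to π/2 – scipy’s sici function gives Si(x) directly:

```python
import numpy as np
from scipy.special import sici
from scipy.integrate import quad

x = np.pi / 2

si_exact, _ = sici(x)   # Si(pi/2): the integral of sin(t)/t from 0 to pi/2
poly_approx, _ = quad(lambda t: 1 - t**2 / 6 + t**4 / 120, 0, x)   # integral of the Maclaurin polynomial

print(round(si_exact, 5), round(poly_approx, 5))   # 1.37076 and 1.37141 -- both 1.371 to 3 dp
```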

So, by using the Maclaurin expansion for terms near x = 0 and the Taylor expansion for terms near x = a we can build up information as to the values of the Si(x) function.

 

 

[Screenshot: the final question from the May 2016 IB HL Calculus option paper]

This was the last question on the May 2016 Calculus option paper for IB HL.  It’s worth nearly a quarter of the entire marks – and is well off the syllabus in its difficulty.  You could make a case for this being the most difficult IB HL question ever.  As such it was a terrible exam question – but would make a very interesting exploration topic.  So let’s try and understand it!

Part (a)

First I’m going to go through a solution to the question – this was provided by another HL maths teacher, Daniel, who worked through a very nice answer.  For the first part of the question we need to understand what is actually happening: we are summing a sequence of definite integrals.  When n = 0 we have the single integral from 0 to π of sin(t)/t.  When n = 1 we have the single integral from π to 2π of sin(t)/t.  The sum of the first n terms adds the first n of these integrals together.


This is the plot of y = sin(x)/x from 0 to 6π.  Using the GDC we can find that the roots of this function are at nπ.  This gives us the first mark in the question: when we integrate from 0 to π the graph is above the x axis, so the integral is positive; when we integrate from π to 2π the graph is below the x axis, so the integral is negative.  Since our sum consists of alternating positive and negative terms, we have an alternating series.

Part (b i)

This is where it starts to get difficult!  You might be tempted to try and integrate sin(t)/t – which is what I presume a lot of students will have done.  It looks like integration by parts might work on this.  However this was a nasty trap laid by the examiners – integrating by parts is a complete waste of time, as this function has no elementary antiderivative.  No standard basic integration method will integrate it.  (We will look later at how it can be handled – it gives something called the Si(x) function.)  Instead this is how Daniel’s method progresses:

\left|u_{n+1}\right| = \left|\int_{(n+1)\pi}^{(n+2)\pi} \frac{\sin t}{t}\,dt\right| = \left|\int_{n\pi}^{(n+1)\pi} \frac{\sin (T+\pi)}{T+\pi}\,dT\right| = \left|\int_{n\pi}^{(n+1)\pi} \frac{-\sin T}{T+\pi}\,dT\right| = \left|\int_{n\pi}^{(n+1)\pi} \frac{\sin t}{t+\pi}\,dt\right| < \left|\int_{n\pi}^{(n+1)\pi} \frac{\sin t}{t}\,dt\right| = \left|u_n\right|

Hopefully the first 2 equalities make sense – we replace n with n+1 and then replace t with T + pi.  dt becomes dT when we differentiate t = T + pi.  In the second integral we have also replaced the limits (n+1)pi and (n+2)pi with n(pi) and (n+1)pi as we are now integrating with respect to T and so need to change the limits as follows:

t = (n+1)\pi

T + \pi = (n+1)\pi

T = n\pi.  This is now the lower limit of the integral.

The third integral uses the fact that sin(T + pi) = – sin(T).

The fourth integral then uses graphical logic.  y = -sinx/x looks like this:

[Graph: y = −sin(x)/x]

This is the same as y = sin(x)/x but reflected in the x axis.  Therefore the absolute value of the integral of y = −sin(x)/x will be the same as the absolute value of the integral of y = sin(x)/x.  The fourth integral also notes that we can simply replace T with t to produce an equivalent integral.  The last step then notes that the integral of sin(t)/(t+π) will be less than the integral of sin(t)/t.  This gives us the inequality we want.

Don’t worry if that didn’t make complete sense – I doubt if more than a handful of IB students in the whole world got that in exam conditions.  Makes you wonder what the point of that question was, but let’s move on.

Part (b ii)

OK, by now most students will probably have given up in despair – and the next part doesn’t get much easier.  First we should note that we have been led to show that we have an alternating series where the absolute value of u_{n+1} is less than the absolute value of u_n.  Let’s check the requirements for proving that an alternating series converges:

Alternating series test: if the terms of a series alternate in sign, decrease in absolute value, and tend to 0 as n → ∞, then the series converges.

We already have shown it’s an absolute decreasing sequence, so we just now need to show the limit of the sequence is 0.

0 \le \left|u_n\right| = \left|\int_{n\pi}^{(n+1)\pi} \frac{\sin t}{t}\,dt\right| \le \int_{n\pi}^{(n+1)\pi} \frac{\left|\sin t\right|}{t}\,dt \le \int_{n\pi}^{(n+1)\pi} \frac{1}{t}\,dt = \ln\!\left(\frac{(n+1)\pi}{n\pi}\right) = \ln\!\left(\frac{n+1}{n}\right) \to 0

OK – here we start by trying to get lower and upper bounds for u_n.  We want to show that as n gets large, the limit of u_n is 0.  In the second integral we have used the fact that the absolute value of the integral of a function is always less than or equal to the integral of the absolute value of that function.  That might not make much sense written down, so let’s look at it graphically:

[Graph: y = sin(x)/x]

This graph above is y = sinx/x.  If we integrate this function then the parts under the x axis will contribute a negative amount.

[Graph: y = |sin(x)/x|]

But this graph is y = |sin(x)/x|.  Here there are no parts under the x axis – and so the integral of |sin(x)/x| will always be greater than or equal to the integral of y = sin(x)/x.

To get the third integral we note that |sin(x)| is bounded between 0 and 1, and so the integral of 1/x will always be greater than or equal to the integral of |sin(x)|/x.

We can next drop the absolute value, because 1/x is always positive for positive x, and integrate 1/x to get ln(x). Substituting in the limits of the definite integral gives ln((n+1)π) − ln(nπ) = ln((n+1)/n), which approaches 0 as n approaches infinity.  Since this expression is always greater than or equal to |u_n|, the limit of |u_n| must also be 0.

Therefore we have satisfied the requirements for the Alternating Series test and so the series is convergent.

Part (c)

Part (c) is at least accessible for level 6 and 7 students, as long as you are still sticking with the question.  Here we note that we have been led through steps to prove that we have an alternating, convergent series.  Now we use the fact that the sum to infinity of a convergent alternating series lies between any two successive partial sums.  Then we can use the GDC (or a computer) to find the first few partial sums:
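Here is a short Python sketch of that final step – computing the terms numerically and comparing successive partial sums with the limit, which is the integral of sin(t)/t from 0 to infinity, i.e. π/2:

```python
import numpy as np
from scipy.integrate import quad

def u(n):
    # nth term: the integral of sin(t)/t from n*pi to (n+1)*pi
    integrand = lambda t: np.sin(t) / t if t != 0 else 1.0
    value, _ = quad(integrand, n * np.pi, (n + 1) * np.pi)
    return value

partial_sums = np.cumsum([u(n) for n in range(6)])
print(np.round(partial_sums, 3))   # roughly 1.852, 1.418, 1.675, 1.492, 1.634, 1.518
print(round(np.pi / 2, 3))         # 1.571 -- the limit lies between successive partial sums
```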

And there we are!  14 marks in the bag.  Makes you wonder who the IB write their exams for – this was so far beyond sixth form level as to be ridiculous.  More about the Si(x) function in the next post.

 
