Graham’s Number – literally big enough to collapse your head into a black hole

Graham’s Number is a number so big that it would literally collapse your head into a black hole were you fully able to comprehend it. And that’s not hyperbole – the informational content of Graham’s Number is so astronomically large that it exceeds the maximum amount of entropy that could be stored in a brain-sized piece of space – i.e. a black hole would form before all the data content could be fully processed. This is a great introduction to notation for really big numbers. Numberphile have produced a fantastic video on the topic.

Graham’s Number makes use of Knuth’s up-arrow notation (explanation from Wikipedia):

In the series of hyper-operations we have:

1) Multiplication:

$a \times b = \underbrace{a + a + \cdots + a}_{b \text{ copies of } a}$

For example, $3 \times 4 = 3 + 3 + 3 + 3 = 12$.

2) Exponentiation:

$a \uparrow b = a^b = \underbrace{a \times a \times \cdots \times a}_{b \text{ copies of } a}$

For example, $3 \uparrow 4 = 3^4 = 81$.

3) Tetration:

$a \uparrow\uparrow b = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{b \text{ copies of } a}$

For example, $3 \uparrow\uparrow 4 = 3^{3^{3^3}} = 3^{3^{27}} = 3^{7625597484987}$ – already a number with over 3.6 trillion digits.

4) Pentation:

$a \uparrow\uparrow\uparrow b = \underbrace{a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))}_{b \text{ copies of } a}$

and so on.

Examples: $3 \uparrow\uparrow\uparrow 2 = 3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7625597484987$.

Which clearly can lead to some absolutely huge numbers very quickly. Graham’s Number – which was arrived at mathematically as an upper bound for a problem relating to vertices on hypercubes – is defined (explanation from Wikipedia) as $G = g_{64}$, where

$g_1 = 3 \uparrow\uparrow\uparrow\uparrow 3, \qquad g_n = 3 \uparrow^{g_{n-1}} 3,$

where the number of arrows in each layer, starting at the top layer, is specified by the value of the next layer below it, and where a superscript on an up-arrow indicates how many arrows there are. In other words, G is calculated in 64 steps: the first step is to calculate g1 with four up-arrows between 3s; the second step is to calculate g2 with g1 up-arrows between 3s; the third step is to calculate g3 with g2 up-arrows between 3s; and so on, until finally calculating G = g64 with g63 up-arrows between 3s.
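The up-arrow recursion is simple enough to sketch in code. Here is a minimal Python version of the definition above – though only tiny inputs ever finish: even $g_1 = 3 \uparrow\uparrow\uparrow\uparrow 3$ is hopelessly out of reach of any computer.

```python
# A minimal sketch of Knuth's up-arrow notation, using the recursion
# a ↑^1 b = a^b,  a ↑^k b = a ↑^(k-1) (a ↑^k (b-1)),  a ↑^k 1 = a.
def arrow(a, k, b):
    """Compute a ↑^k b (k up-arrows). Only tiny inputs terminate!"""
    if k == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, k - 1, arrow(a, k, b - 1))

print(arrow(3, 1, 4))  # 3↑4   = 3^4 = 81
print(arrow(3, 2, 3))  # 3↑↑3  = 3^(3^3) = 7625597484987
print(arrow(2, 3, 3))  # 2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4 = 65536
```

Even `arrow(3, 2, 4)` (a number with over 3.6 trillion digits) would exhaust memory, never mind `arrow(3, 4, 3)` for $g_1$.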

So a number so big it can’t be fully processed by the human brain. This raises some interesting questions about maths and knowledge – Graham’s Number is an example of a number that exists but is beyond full human comprehension, and it is therefore an example of an upper bound of human knowledge. Will there always be things in the Universe which are beyond full human understanding? Or can mathematics provide a shortcut to knowledge that would otherwise be inaccessible?

If you enjoyed this post you might also like:

How Are Prime Numbers Distributed? Twin Primes Conjecture – a discussion about the amazing world of prime numbers.

Wau: The Most Amazing Number in the World? – a post which looks at the amazing properties of Wau

What is the sum of the infinite sequence 1, -1, 1, -1, 1…..?

This is a really interesting puzzle to study – which fits very well when studying geometric series, proof and the history of maths.

The two most intuitive answers are either that it has no sum or that it sums to zero.  If you group the pattern into pairs, then each pair (1, -1) = 0.  However if you group the pattern by first leaving the 1, then grouping pairs of (-1,1) you would end up with a sum of 1.

Firstly it’s worth seeing why we shouldn’t just use our formula for the sum of an infinite geometric series,

$S_\infty = \frac{a}{1-r},$

with r as the multiplicative constant of -1. This formula requires that the absolute value of r is less than 1 – otherwise the series will not converge.
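A quick numerical sketch shows why the |r| < 1 condition matters – with r = 0.5 the partial sums settle down towards a/(1-r), but with r = -1 they simply oscillate forever:

```python
# Partial sums of a geometric series with first term a and ratio r.
def partial_sums(a, r, n):
    total, out = 0, []
    term = a
    for _ in range(n):
        total += term
        out.append(total)
        term *= r
    return out

# |r| < 1: partial sums converge towards a/(1-r) = 2
print(partial_sums(1, 0.5, 10)[-1])   # 1.998..., closing in on 2

# r = -1 (Grandi's series): partial sums just oscillate
print(partial_sums(1, -1, 6))         # [1, 0, 1, 0, 1, 0]
```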

The series 1, -1, 1, -1…. is called Grandi’s series – after the Italian mathematician Guido Grandi, who considered it around 1703 – and it sparked a few hundred years’ worth of heated mathematical debate as to what the correct summation was.

Using the Cesaro method (explanation pasted from here):

If $a_n = (-1)^{n+1}$ for $n \geq 1$, then $\{a_n\}$ is the sequence

$1, -1, 1, -1, \ldots$

and the sequence of partial sums $\{s_n\}$ is

$1, 0, 1, 0, \ldots$

So whilst the series does not converge, if we calculate the terms of the sequence $\{(s_1 + \cdots + s_n)/n\}$ we get:

$1, \tfrac{1}{2}, \tfrac{2}{3}, \tfrac{1}{2}, \tfrac{3}{5}, \tfrac{1}{2}, \ldots$

so that

$\lim_{n \to \infty} \frac{s_1 + \cdots + s_n}{n} = \frac{1}{2}.$

So, using different methods we have shown that this series “should” have a summation of 0 (grouping in pairs), or that it “should” have a sum of 1 (grouping in pairs after the first 1), or that it “should” have no sum as it simply oscillates, or that it “should”  have a Cesaro sum of 1/2 – no wonder it caused so much consternation amongst mathematicians!
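The Cesaro method is easy to check numerically – here is a short Python sketch that averages the partial sums of Grandi’s series and watches the averages tend to 1/2:

```python
# Cesaro summation of Grandi's series 1 - 1 + 1 - 1 + ...:
# average the partial sums s_1, ..., s_n as n grows.
def cesaro_means(terms):
    means, s, running = [], 0, 0
    for n, a in enumerate(terms, start=1):
        s += a                 # partial sum s_n
        running += s           # s_1 + ... + s_n
        means.append(running / n)
    return means

grandi = [(-1) ** (n + 1) for n in range(1, 1001)]
means = cesaro_means(grandi)
print(means[:6])   # [1.0, 0.5, 0.666..., 0.5, 0.6, 0.5]
print(means[-1])   # 0.5
```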

This approach can be extended to the complex series, $1 + i + i^2 + i^3 + i^4 + i^5 + \cdots$ which is looked at in the blog  God Plays Dice

This is a really great example of how different proofs can sometimes lead to different (and unexpected) results. What does this say about the nature of proof?

The Mathematics of Crime and Terrorism

The ever excellent Numberphile have just released a really interesting video looking at what mathematical models are used to predict terrorist attacks and crime.  Whereas a Poisson distribution assumes that events that happen are completely independent, it is actually the case that one (say) burglary in a neighbourhood means that another burglary is much more likely to happen shortly after.  Therefore we need a new distribution to model this.  The one that Hannah Fry talks about in the video is called the Hawkes process – which gets a little difficult.  Nevertheless this is a nice video for showing the need to adapt models to represent real life data.

The Wason Selection Task – a logical puzzle

The Wason Selection Task is a logical problem designed to show how bad we are at making logical decisions.  Wason first used it in 1968 – and found that only 10% of the population would get the correct answer.  Indeed around 65% of the population make the same error.  Here is the task:

The participants were given the following instructions:

Here is a rule: “every card that has a D on one side has a 3 on the other.” Your task is to select all those cards, but only those cards, which you would have to turn over in order to discover whether or not the rule has been violated.  Each card has a number on one side and a letter on the other.

Give yourself a couple of minutes to work out what you think the answer is – and then read on.

The correct answer is to pick the D card and the 7 card

This result is normally quite unexpected – but it highlights one of the logical fallacies that we often fall into:

A implies B does not mean that B implies A

All cats have 4 legs (being a cat = A, having 4 legs = B; A implies B)
All 4 legged animals are cats (B implies A)

We can see that here we would make a logical error if we concluded that all 4 legged animals were cats.

In the logic puzzle we need to turn over only 2 cards, D and 7.  This is surprising because most people will also say that you need to turn over card with a 3.  First we need to be clear about what we are trying to do:  We want to find evidence that the rule we are given is false.

If we turn over the D and find a number other than 3, we have evidence that the rule is false – therefore we need to turn over D.

If we turn over the 7 and find a D on the other side, we have evidence that the rule is false – therefore we need to turn over the 7.

But what about the 3?  If we turn over the 3 and find a D then we have no evidence that the rule is false (which is what we are looking for).  If we turn over the 3 and find another letter then this also gives us no evidence that the rule is false.  After all our rule says that all Ds have 3s on the other side, but it doesn’t say that all 3s have Ds on the other side.
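This logic can even be checked by brute force. Here is a Python sketch (the visible faces D, K, 3, 7 are the usual ones in the task; for illustration the hidden faces are assumed to be drawn from the same letters and numbers):

```python
# Brute-force check of the Wason selection task: a card must be turned over
# exactly when some possible hidden face could violate the rule
# "every card with a D on one side has a 3 on the other".
def violates(letter, number):
    return letter == "D" and number != 3

letters, numbers = ["D", "K"], [3, 7]

must_turn = []
for card in ["D", "K", "3", "7"]:
    if card.isalpha():
        # hidden face is a number
        if any(violates(card, n) for n in numbers):
            must_turn.append(card)
    else:
        # hidden face is a letter
        if any(violates(l, int(card)) for l in letters):
            must_turn.append(card)

print(must_turn)   # ['D', '7']
```

The K and the 3 never appear: no hidden face behind them can break the rule, exactly as argued above.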

Are mathematicians better at this puzzle than historians?

Given the importance of logical thought in mathematics, people have done studies to see if undergraduate students in maths perform better than humanities students on this task.  Here are the results:

You can see that there is a significant difference between the groups.  Maths students correctly guessed the answer D7 29% of the time, but only 8% of history students did.  The maths university lecturers performed best – getting the answer right 43% of the time.

Making different mistakes

You can also analyse the mistakes that students made – by looking only at the proportions of incorrect selections.  Here again there are significant differences which show that the groups are thinking about the problem in different ways.  DK7 was chosen by around 1/5 of both maths students and lecturers, but by hardly any history students.

You can read about these results in much more depth in the following research paper Mathematicians and the Selection Task – where they also use Chi Squared testing for significance levels.

A longer look at the Si(x) function

sin x/x doesn’t have an elementary antiderivative – instead we define:

$\mathrm{Si}(x) = \int_0^x \frac{\sin t}{t}\, dt$

where Si(x) is a special function.  This may sound strange – but we have already come across another similar case with the integral of 1/x.  In this case we define the integral of 1/x as ln(x).  ln(x) is a function with its own graph and we can use it to work out definite integrals of 1/x.  For example the integral of 1/x from 1 to 5 will be ln(5) – ln(1) = ln(5).

The graph of Si(x) looks like this:

Or, on a larger scale:

You can see that it is symmetric about the origin (Si(x) is an odd function), has an oscillating motion, and as x gets large approaches a limit.  In fact this limit is π/2.

Because Si(0) = 0, you can write definite integrals starting from 0 simply as:

$\int_0^a \frac{\sin t}{t}\, dt = \mathrm{Si}(a)$

How to integrate sinx/x ?

It’s all very well to define a new function – and say that this is the integral of sinx/x – but how was this function generated in the first place?

Well, one way to integrate difficult functions is to use Taylor and Maclaurin expansions.  For example the Maclaurin expansion of sin x/x for values near x = 0 is:

$\frac{\sin x}{x} = 1 - \frac{x^2}{6} + \frac{x^4}{120} + O(x^6)$

This means that in the domain close to x = 0, the function sin x/x behaves in a similar way to the polynomial above.  The last part of this expression, O(x⁶), just means that everything else in this expansion will be of order x^6 or greater.

Graph of sinx/x

Graph of 1 – x^2/6 + x^4/120

In the region close to x=0 these functions behave in a very similar manner (this would be easier to see with similar scales so let’s look on a GDC):

So for the region above (x between 0 and 2) the 2 graphs are virtually indistinguishable.

Therefore if we want to integrate sinx/x for values close to 0 we can just integrate our new function 1 – x^2/6 + x^4/120 and get a good approximation.

Let’s try how accurate this is.  We can use Wolfram Alpha to tell us that:

and let’s use Wolfram to work out the integral as well:

Our approximation is accurate to 3 dp, 1.371 in both cases.  If we wanted greater accuracy we would simply use more terms in the Maclaurin expansion.
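You can also check this kind of approximation without Wolfram Alpha. Here is a Python sketch (not the same limits as the post’s computation – it compares integrating the Maclaurin polynomial term by term with a direct numerical integration at x = 1):

```python
import math

# Two ways of approximating Si(x) = ∫₀ˣ sin t / t dt: integrating the
# Maclaurin polynomial 1 - t²/6 + t⁴/120 exactly, and a midpoint-rule sum.
def si_maclaurin(x):
    # ∫ (1 - t²/6 + t⁴/120) dt = x - x³/18 + x⁵/600
    return x - x**3 / 18 + x**5 / 600

def si_numeric(x, steps=100_000):
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h          # midpoint rule; sin t / t → 1 as t → 0
        total += math.sin(t) / t
    return total * h

print(si_maclaurin(1))   # 0.94611...
print(si_numeric(1))     # 0.94608..., the true Si(1)
```

Near x = 0 the two agree to about 4 decimal places; further out, more Maclaurin terms would be needed.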

So, by using the Maclaurin expansion for terms near x = 0 and the Taylor expansion for terms near x = a we can build up information as to the values of the Si(x) function.

This was the last question on the May 2016 Calculus option paper for IB HL.  It’s worth nearly a quarter of the entire marks – and is well off the syllabus in its difficulty.  You could make a case for this being the most difficult IB HL question ever.  As such it was a terrible exam question – but would make a very interesting exploration topic.  So let’s try and understand it!

Part (a)

First I’m going to go through a solution to the question – this was provided by another HL maths teacher, Daniel – who worked through a very nice answer.  For the first part of the question we need to try and understand what is actually happening – we have the sum of an integral – where we are summing a sequence of definite integrals.  So when n = 0 we have the single integral from 0 to pi of sint/t.  When n = 1 we have the single integral from pi to 2pi of sint/t.  The summation of the first n terms will add the answers to the first n integrals together.

This is the plot of y = sinx/x from 0 to 6pi.  Using the GDC we can find that the roots of this function are n(pi).  This gives us the first mark in the question – as when we are integrating from 0 to pi the graph is above the x axis and so the integral is positive. When we integrate from pi to 2pi the graph is below the x axis and so the integral is negative.  Since our sum consists of alternating positive and negative terms, then we have an alternating series.

Part (b i)

This is where it starts to get difficult!  You might be tempted to try and integrate sin t/t – which is what I presume a lot of students will have done.  It looks like integration by parts might work on this.  However this was a nasty trap laid by the examiners – integrating by parts is a complete waste of time as this function has no elementary antiderivative.  This means that there is no elementary function or standard basic integration method that will integrate it.  (We will look later at how it can be integrated – it gives something called the Si(x) function).  Instead this is how Daniel’s method progresses:

Hopefully the first 2 equalities make sense – we replace n with n+1 and then replace t with T + pi.  dt becomes dT when we differentiate t = T + pi.  In the second integral we have also replaced the limits (n+1)pi and (n+2)pi with n(pi) and (n+1)pi as we are now integrating with respect to T and so need to change the limits as follows:

t = (n+1)(pi)

T+ pi = (n+1)(pi)

T = n(pi).  This is now the lower integral value.

The third integral uses the fact that sin(T + pi) = – sin(T).

The fourth integral then uses graphical logic.  y = -sinx/x looks like this:

This is the same as y = sinx/x but reflected in the x axis.  Therefore the absolute value of the integral of  y = -sinx/x  will be the same as the absolute integral of y = sinx/x.  The fourth integral has also noted that we can simply replace T with t to produce an equivalent integral.  The last integral then notes that the integral of sint/(t+pi) will be less than the integral of sint/t.  This then gives us the inequality we desire.

Don’t worry if that didn’t make complete sense – I doubt if more than a handful of IB students in the whole world got that in exam conditions.  Makes you wonder what the point of that question was, but let’s move on.

Part (b ii)

OK, by now most students will have probably given up in despair – and the next part doesn’t get much easier.  First we should note that we have been led to show that we have an alternating series where the absolute value of u_n+1 is less than the absolute value of u_n.  Let’s check the requirements for proving an alternating series converges:

We already have shown it’s an absolute decreasing sequence, so we just now need to show the limit of the sequence is 0.

OK – here we start by trying to get a lower and upper bound for u_n.  We want to show that as n gets large, the limit of u_n = 0.  In the second integral we have used the fact that the absolute value of the integral of a function is always less than or equal to the integral of the absolute value of that function.  That might not make any sense, so let’s look graphically:

This graph above is y = sinx/x.  If we integrate this function then the parts under the x axis will contribute a negative amount.

But this graph is y = absolute (sinx/x).  Here we have no parts under the x axis – and so the integral of absolute (sinx/x) will always be greater than or equal to the integral of y = sinx/x.

To get the third integral we note that absolute (sinx) is bounded between 0 and 1, and so the integral of 1/x will always be greater than or equal to the integral of absolute (sinx)/x.

We next can ignore the absolute value because 1/x is always positive for positive x, and so we integrate 1/x to get ln(x). Substituting the values of the definite integral gives us ln((n+1)π) – ln(nπ) = ln((n+1)/n), which approaches 0 as n approaches infinity.  Therefore as this limit approaches 0, and this function was always greater than or equal to absolute u_n, then the limit of absolute u_n must also be 0.

Therefore we have satisfied the requirements for the Alternating Series test and so the series is convergent.

Part (c)

Part (c) is at least accessible for level 6 and 7 students as long as you are still sticking with the question.  Here we note that we have been led through steps to prove we have an alternating and convergent series.  Now we use the fact that the sum to infinity of a convergent alternating series lies between any 2 successive partial sums.  Then we can use the GDC to find the first few partial sums:
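A numerical sketch (not required in the exam, but useful for checking the structure of the question): computing each u_n with a simple midpoint rule shows the alternating, shrinking terms, and partial sums that straddle π/2 – the value of the full integral of sin t/t from 0 to infinity.

```python
import math

# u_n = ∫ from nπ to (n+1)π of sin t / t dt, via a midpoint rule.
def u(n, steps=20_000):
    a, b = n * math.pi, (n + 1) * math.pi
    h = (b - a) / steps
    return sum(math.sin(a + (i + 0.5) * h) / (a + (i + 0.5) * h)
               for i in range(steps)) * h

terms = [u(n) for n in range(8)]
partials = [sum(terms[:k]) for k in range(1, 9)]

# alternating signs with decreasing magnitudes
print([round(t, 4) for t in terms])
# consecutive partial sums bracket the limit π/2 ≈ 1.5708
print([round(s, 4) for s in partials])
```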

And there we are!  14 marks in the bag.  Makes you wonder who the IB write their exams for – this was so far beyond sixth form level as to be ridiculous.  More about the Si(x) function in the next post.

IB HL Calculus P3 May 2016:  The Hardest IB Paper Ever?

IB HL Paper 3 Calculus May 2016 was a very poor paper.  It was unduly difficult and missed off huge chunks of the syllabus.  You can see question 5 posted above. (I work through the solution to this in the next post).  This is so far off the syllabus as to be well into undergraduate maths.  Indeed it wouldn’t look out of place in an end of first year or end of second year undergraduate calculus exam.  So what’s it doing on a sixth form paper for 17-18 year olds?   The examiners completely abandoned their remit to produce a test of the syllabus content – and instead decided that a one hour exam was the time to introduce extensions to that syllabus, whilst virtually ignoring all the core content of the course.

A breakdown of the questions

1) Maclaurin- on the syllabus.  This was reasonable.  As was using it to find the limit of a fraction.  Part (c) requires use of Lagrange error – which students find difficult and forms a very small part of the course.  If this was the upper level of the challenge in the paper then fair enough, but it was far from it.

2) Fundamental Theorem of Calculus – barely on the syllabus – and unpredictable in advance as to what is going to be asked on this.  This has never been asked before on any paper, there is no guidance in the syllabus, there was no support in the specimen paper and most textbooks do not cover this in any detail.  This seems like an all or nothing question – students will either get 7 or 0 on this question.  Part (c) for an extra 3 marks seems completely superfluous.

3) Mean Value Theorem – a small part of the syllabus given disproportionate exam question coverage because the examiners seem to like proof questions.  This seems like an all or nothing question as well – if you get the concept then it’s 7 marks, if not it’ll likely be 0.

4) Differential equations –  This question would have been much better if they had simply been given the integrating factor /separate variables question in part (b), leaving some extra marks to test something else on part (a) – perhaps Euler’s Method?

5) An insane extension to the syllabus which took the question well into undergraduate mathematics – and hid within it a “trap” to make students try to integrate a function that can’t actually be integrated.  This really should have been nowhere near the exam.  At 14 marks this accounted for nearly a quarter of the exam.

Content unassessed

The syllabus is only 48 hours and all schools spend that time ploughing through limits and differentiability of functions, L’Hopital’s rule, Riemann sums, Rolle’s Theorem, standard differential equations, isoclines, slope fields, the squeeze theorem, absolute and conditional convergence, error bounds, indefinite integrals, the ratio test, power series, radius of convergence.  All of these went pretty much unassessed.  I would say that the exam tested around 15% of the syllabus content.  Even the assessment of alternating series convergence was buried inside question 5 – making it effectively inaccessible to all students.

The result of this is that there will be a huge squash in the grade boundaries – perhaps as low as 50-60% for a level 6 and 25-35% for a level 4.  The last 20 marks on the paper will probably be completely useless – separating no students at all.  This then produces huge unpredictability, as dropping 4-5 marks might take you from a level 5 to a level 3, or a level 6 to a level 4.

Teachers no longer have any confidence in the IB HL examiners

One of my fellow HL teachers posted this following the Calculus exam:

At various times throughout the year I joke with my students about how the HL Mathematics examiners must be like a group of comic book villains sitting in a lair, devising new ways to form cruel questions to make students suffer and this exam leads me to believe that this is not too far fetched of a concept.

And I would tend to agree.  Who wants students to be demoralised with low scores and questions they can’t succeed on?  Surely that should not be an aim when creating an exam!

I’ve taught the HL Calculus Option for the last 4 years – I think the course is a good one.  It’s difficult but a rewarding syllabus which introduces some of the tools needed for undergraduate maths.  However I no longer have any confidence in the IB or the IB examiners to produce a fair test to examine this content.  Many other HL teachers feel the same way.  So what choice is left?  Abandon the Calculus option and start again from scratch with another option?  Or continue to put our trust in the IB, when they continue to let teachers (and more importantly the students) down?

Why is IB HL Maths so hard?

This is a question that nearly all students who take the subject will ask themselves at some point during the course – and they’d be right to do so because it’s a question that most teachers ask as well.  The table below shows the students entered for the May 2015 IB exams:

Only 14% of IB students take IB HL – an incredibly low take-up.  Even more remarkably, half of this group will only get a level 2-4.  Looking at the top end, to get a level 6 or 7 at HL you probably need to have a maths ability in the top 3.5% of all global IB students.  Out of a year group of 60 you would expect on average only 2 students to be good enough to get a 5 and 2 more to get a 6 or 7.  Given that universities asking for HL maths will invariably be asking for level 5+, this means that the IB have designed a course which is only really useful for 4 students in every 60.

It’s not as if the IB aren’t aware that they’ve created a course that hardly any students get benefit from.  The most recent release acknowledges that “students struggle to reach their full potential” in the subject – with a plan to reduce the marks on the paper to give students more time.  But this is failing to address the overall cause of the problem, i.e that examiners persist in producing bad exams which don’t take into account the needs of the students taking them.

You can see this failure easily enough by looking at past paper grade boundaries.  Given that only half of HL students will come out with more than a level 4, the grade boundary for a level 4 is normally pitifully low – around 40% on paper 1.  There’s absolutely no justification for this when 70% is low enough to produce a level 7.  What is the final 30% of the paper – too difficult even for level 7 students – hoping to achieve?   This is evidence of a bad exam and nothing more.  And yet examiners seem to be incapable of doing anything to change this.  An exam designed expecting level 4 students to be getting 50% should be an absolute minimum requirement.  Hard questions and low grade boundaries simply result in demotivated and disillusioned students who feel like their 2 years slogging through HL has been wasted.  Is this the legacy that IB want to achieve?  That students who start the course with enthusiasm and love for the subject get gradually crushed and demoralised?  It seems a pretty poor outcome.

The option paper for Calculus also shows the same lamentable failings.  The November 2015 Calculus paper required only 38% to get a level 4 and 56% for a level 6.  Given that so few students achieve level 6 or 7 this means the paper was so badly designed that probably close to 75% of the students taking it got less than 50%.  How can this be anything other than a failure of the examiners to actually produce a fair test?  This bunching up of all grades meant that a slip on one question could cost a student 2 grades.  17 marks out of 60 was a level 2 – but 23 marks a level 4.  Equally a student getting 28 marks or 34 marks would have got level 4 or 6 respectively.  This is a terrible test!  Small mistakes are massively penalised and all students leave the exam room feeling like they have failed.

Examiner Mindset

The examiner mindset especially in evidence in the Calculus option unit is to purposely avoid all topics that they think the students will be able to do, and instead to find parts of the syllabus that they expect to catch students out on.  On P1 and P2 where examiners have 240 marks to play with, these exams are a good reflection of the syllabus content.  For P3 this isn’t the case – large chunks of the syllabus are ignored completely in exams, which makes it all the more unfair when examiners decide to deliberately look for problem areas, rather than concentrating on making a test which reflects the overall content of the course.  There also seems to be the desire to make the Calculus option a university undergraduate level maths paper – as though there is a pride to be taken in making it as difficult and rigorous as possible.  But HL maths is not an undergraduate  course – and the students taking it are not university mathematicians.  Designing a paper which is aimed only at the needs of level 6 and 7 students is incredibly unfair to the rest of the cohort – and yet this appears to be what happens.

IB HL not fit for purpose?

The IB HL examiners have effectively signed the death warrant for IB HL.  From 2020 IB HL will be killed off in its current form – as a direct result of the low numbers and persistent negative feedback from both students and teachers.  It is planned to merge Studies, SL and HL into just 2 subjects – SL and HL – which will of course require a substantial dumbing down of the HL syllabus to ensure that the 2 new courses are accessible to all IB students.  And this in itself is a huge shame – there is a definite need for 3 levels of maths at IB, catering for very different student needs.  However despite the obvious reform being to stop making the HL papers so unnecessarily hard, the entire IB maths offering is going to be hugely dumbed down instead.  What a terrible legacy.

Could Trump be the next President of America?

There is a lot of statistical maths behind polling data to make it as accurate as possible – though poor sampling techniques can lead to unexpected results.  For example, in the UK 2015 general election, even though Labour were predicted to win around 37.5% of the vote, they only polled 34%.  This was a huge political shock and led to a Conservative government when all the pollsters were predicting a hung parliament.  In the postmortem following the fallout of this failure, YouGov concluded that their sampling methods were at fault – leading to big errors in their predictions.

Trump versus Clinton

The graph above from Real Clear Politics shows the current hypothetical face off between Clinton and Trump amongst American voters.  Given that both are now clear favourites to win their respective party nominations, attention has started to turn to how they fare against each other.

Normal distribution

A great deal of statistics dealing with populations is based on the normal distribution.  The normal distribution has the bell curve shape above – with the majority of the population bunched around the mean value, and with symmetrical tails at each end.  For example most men in the UK will be between 5 foot 8 and 6 foot – with symmetrical tails of men much taller and much shorter.  For polling data mathematicians usually use a sample of 1000 people – this is large enough to give a good approximation to the normal distribution whilst not being so large as to be prohibitively expensive to conduct.

A Polling Example

The following example is from the excellent introduction to this topic from the University of Arizona.

So, say we have sampled 1000 people, asking them a simple Yes/No/Don’t Know type question.  Say for example we asked 1000 people if they would vote for Trump, Clinton or if they were undecided.  In our poll 675 people say “Yes” to Trump – so what we want to know is our confidence interval for how accurate this prediction is.  Here is where the normal distribution comes in.  We use the following equations:

μ = np₀,  σ = √(np₀(1 − p₀))

We have μ representing the mean.

n = the number of people we asked, which is 1000

p₀ = our sample probability of “Yes” for Trump, which is 0.675

Therefore μ = 1000 × 0.675 = 675

We can use the same values to calculate the standard deviation σ:

σ = (1000 × 0.675 × (1 − 0.675))^0.5

σ = 14.811

We now can use the following table:

This tells us that when we have a normal distribution, we can be 90% confident that the data will be within +/- 1.645 standard deviations of the mean.

So in our hypothetical poll we are 90% confident that the real number of people who will vote for Trump will be +/- 1.645 standard deviations from our sample mean of 675

This gives us the following:

upper bound estimate = 675 + 1.645(14.811) = 699.4

lower bound estimate  = 675 – 1.645(14.811) = 650.6

Therefore we can convert this back to a percentage – and say that we can be 90% confident that between 65% and 70% of the population will vote for Trump.  We therefore have a prediction of 67.5% with a margin of error of ± 2.5%.  You will see most polls that are published using a ± 2.5% margin of error – which means they are using a sample of 1000 people and a confidence interval of 90%.
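The whole calculation for this hypothetical poll fits in a few lines of Python:

```python
import math

# The post's hypothetical poll: n = 1000 people, 675 "Yes" answers,
# 90% confidence level (z = 1.645).
n, yes, z = 1000, 675, 1.645

p0 = yes / n                              # sample proportion, 0.675
mu = n * p0                               # mean of the count: 675
sigma = math.sqrt(n * p0 * (1 - p0))      # standard deviation: 14.811...

lower = mu - z * sigma                    # lower bound estimate ≈ 650.6
upper = mu + z * sigma                    # upper bound estimate ≈ 699.4

print(round(sigma, 3), round(lower, 1), round(upper, 1))
print(f"{lower/n:.1%} to {upper/n:.1%}")  # roughly 65% to 70%
```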

Real Life

Back to the real polling data on the Clinton, Trump match-up.  We can see that the current trend is a narrowing of the polls between the 2 candidates – 47.3% for Clinton and 40.8% for Trump.  This data is an amalgamation of a large number of polls – so should be reasonably accurate.  You can see some of the original data behind this:

This is a very detailed polling report from CNN – and as you can see above, they used a sample of 1000 adults in order to get a margin of error of around 3%.  However with around 6 months to go it’s very likely these polls will shift.  Could we really have President Trump?  Only time will tell.

This is a nice example of using some maths to solve a puzzle from the mindyourdecisions youtube channel (screencaptures from the video).

How to Avoid The Troll: A Puzzle

In these situations it’s best to look at the extreme case first so you get some idea of the problem.  If you are feeling particularly pessimistic you could assume that the troll is always going to be there.  Therefore you would head to the top of the barrier each time.  This situation is represented below:

The Pessimistic Solution:

Another basic strategy would be the optimistic strategy.  Basically head in a straight line hoping that the troll is not there.  If it’s not, then the journey is only 2 miles.  If it is then you have to make a lengthy detour.  This situation is shown below:

The Optimistic Solution:

The expected value was worked out here by doing 0.5 × 2 + 0.5 × (2 + √2) ≈ 2.71.

The question is now, is there a better strategy than either of these?  An obvious possibility is heading for the point halfway up where the barrier might be.  This would make a triangle of base 1 and height 1/2, which has a hypotenuse of √(5/4).  In the best case scenario we would then have a total distance of 2√(5/4).  In the worst case scenario we would have a total distance of √(5/4) + 1/2 + √2.  We find the expected value by multiplying each by 0.5 and adding, which gives 2.63 (2 dp).  But can we do any better?  Yes – by using some algebra and then optimising to find a minimum.

The Optimisation Solution:

To minimise this function, we need to differentiate and find when the gradient is equal to zero, or draw a graph and look for the minimum.  Hopefully you can remember how to differentiate expressions like this – but here I’ve used Wolfram Alpha to solve it for us.  Wolfram Alpha is incredibly powerful – and also very easy to use.  Here is what I entered:

and here is the output:

So, when we head for a point exactly 1/(2√2) up the potential barrier, we minimise the distance travelled to around 2.62 miles.

So, there we go, we have saved 0.21 miles from our most pessimistic model, and 0.01 miles from our best guess model of heading for the midpoint.  Not a huge difference – but nevertheless we’ll save ourselves a few seconds!
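We can check all three strategies numerically. This Python sketch assumes the geometry described in the puzzle – destination 2 miles straight ahead, a possible barrier halfway along with its top 1 mile up, and the troll present half the time:

```python
import math

# Expected journey length if we aim for a point y up the barrier line.
def expected_distance(y):
    to_barrier = math.sqrt(1 + y * y)   # always walked
    no_troll = math.sqrt(1 + y * y)     # straight on to the destination
    troll = (1 - y) + math.sqrt(2)      # up to the top, then the diagonal
    return to_barrier + 0.5 * no_troll + 0.5 * troll

# crude grid search for the minimising y
ys = [i / 100_000 for i in range(100_001)]
best_y = min(ys, key=expected_distance)

print(f"{best_y:.3f}")                         # ≈ 0.354, i.e. 1/(2√2)
print(round(expected_distance(best_y), 2))     # 2.62
print(round(expected_distance(0), 2))          # optimistic: 2.71
print(round(expected_distance(1), 2))          # pessimistic: 2.83
```

The grid search reproduces the Wolfram Alpha answer without any calculus – handy for exploring the generalisations suggested below.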

This is a good example of how an exploration could progress – once you get to the end you could then look at changing the question slightly, perhaps the troll is only 1/3 of the distance across?  Maybe the troll appears only 1/3 of the time?  Could you even generalise the results for when the troll is y distance away or appears z percent of the time?
