**Ramanujan’s Taxi Cabs and the Sum of 2 Cubes**

The Indian mathematician Ramanujan (picture cite: Wikipedia) is renowned as one of the great self-taught mathematical prodigies. His correspondence with the mathematician G. H. Hardy led to him being invited to study in England, though whilst there he fell sick. Visiting him in hospital, Hardy remarked that the taxi that had brought him there had a “rather dull number” – number 1729. Ramanujan replied, “No, Hardy, it’s a very interesting number! It’s the smallest number expressible as the sum of 2 cubes in 2 different ways!”

Ramanujan was profoundly interested in number theory – the study of the integers and the patterns inherent within them. The general problem referenced above is that of finding integer solutions to the following equation for given values of A:

x^{3} + y^{3} = A

In the case that A = 1729, we have 2 possible ways of finding distinct integer solutions:

1729 = 1^{3} + 12^{3} = 9^{3} + 10^{3}

The smallest number which can be formed through 3 distinct (positive) integer solutions to the equation is A = 87,539,319.
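As a quick check of Ramanujan’s claim, here is a brute-force search (a sketch; the bound of 20 on the cube roots is my own assumption, and it comfortably covers this range):

```python
# Brute-force check that 1729 is the smallest number expressible as the
# sum of 2 positive cubes in 2 different ways.
from itertools import combinations_with_replacement
from collections import defaultdict

sums = defaultdict(list)
for a, b in combinations_with_replacement(range(1, 20), 2):
    sums[a**3 + b**3].append((a, b))

taxicab = min(n for n, pairs in sums.items() if len(pairs) >= 2)
print(taxicab, sums[taxicab])   # 1729 [(1, 12), (9, 10)]
```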

Although this began as a number theory problem, it has close links with both graphs and group theory – and it is from these fields that mathematicians have gained a deeper understanding as to the nature of its solutions. The modern field of elliptic curve cryptography is closely related to the ideas below and provides a very secure method of encrypting data.

We start by sketching the graph of:

x^{3} + y^{3} = A

for some given integer value of A. We will notice that the graph has a line of symmetry around y = x and also an asymptote at y = -x. If we plot the curve for A = 1729:

We can see that both our integer solutions to this problem, (1,12) and (9,10), lie on the curve:

**Group theory**

Groups can be considered as sets which follow a specific set of rules with regard to operations like multiplication, addition etc. Establishing that a set is a group then allows certain properties to be inferred. If we can establish that the following rules hold for a set A and an operation Θ, then we have an Abelian group.

1) **Identity.** For an element e in A, we have a Θ e = a for all a in A.

(For example, 0 is the identity element for addition on the set of integers: a + 0 = a for all integers a.)

2) **Closure**. For all elements a,b in A, a Θ b = c, where c is also in A.

(For example with the addition operation, the sum of 2 integers is still an integer.)

3) **Associativity**. For all elements a,b,c in A, (a Θ b) Θ c = a Θ (b Θ c)

(For example with the addition operation, (1+2) + 3 = 1 + (2+3) )

4) **Inverse**. For each a in A there exists a b in A such that a Θ b = b Θ a = e, where e is the identity.

(For example with the addition operation, 4 + (-4) = (-4) + 4 = 0, where 0 is the identity element for addition.)

5) **Commutativity**. For all elements a,b in A, a Θ b = b Θ a

(For example with the addition operation 1+2 = 2+1).

As we have seen, the set of integers under addition forms an Abelian group.
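Before moving to the curve, we can sanity-check all 5 axioms exhaustively on a small finite example, addition modulo 5 (my choice of example; any modulus works), as a sketch:

```python
# Exhaustively verify the 5 Abelian group axioms for the set {0,1,2,3,4}
# under addition modulo 5 (the modulus 5 is an arbitrary choice).
n = 5
A = range(n)

def op(a, b):
    return (a + b) % n

identity = all(op(a, 0) == a for a in A)          # 0 is the identity
closure = all(op(a, b) in A for a in A for b in A)
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in A for b in A for c in A)
inverse = all(any(op(a, b) == 0 and op(b, a) == 0 for b in A) for a in A)
commutative = all(op(a, b) == op(b, a) for a in A for b in A)

print(identity, closure, associative, inverse, commutative)   # True True True True True
```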

**Establishing a group**

So, let’s see if we can establish an Abelian group based around the rational points on our graph. We can demonstrate with the graph:

We then take 2 points with rational coordinates (i.e. coordinates that can be written as a fraction of integers). In this case A (1,12) and B (9,10).

We then draw the line through A and B. This will intersect the graph at a 3rd point, C (except in a special case to be looked at in a minute).

We then reflect this new point C in the line y = x, giving us C’.

In this case C’ is the point (46/3, -37/3)

We therefore define *addition* (our operation Θ) in this group as:

A + B = C’.

(1,12) + (9,10) = (46/3, -37/3).
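We can reproduce this chord construction exactly. Substituting the line y = mx + k into x^{3} + y^{3} = 1729 gives a cubic in x whose three roots are the x-coordinates of the intersection points, so Vieta’s sum-of-roots formula yields the third intersection. A sketch with exact rational arithmetic (the function name is my own):

```python
# Add A = (1,12) and B = (9,10) on the curve x^3 + y^3 = 1729 using the
# chord construction, in exact rational arithmetic.
from fractions import Fraction as F

def chord_add(P, Q, A_val):
    (x1, y1), (x2, y2) = P, Q
    assert x1**3 + y1**3 == A_val and x2**3 + y2**3 == A_val
    m = (y2 - y1) / (x2 - x1)              # gradient of the chord
    k = y1 - m * x1                        # intercept, so the chord is y = mx + k
    # Substituting y = mx + k into x^3 + y^3 = A gives
    # (1 + m^3)x^3 + 3m^2*k*x^2 + 3m*k^2*x + (k^3 - A) = 0, whose roots
    # are the three intersection x-coordinates; Vieta gives the third root.
    x3 = -3 * m**2 * k / (1 + m**3) - x1 - x2
    y3 = m * x3 + k
    return (y3, x3)                        # reflect C in the line y = x to get C'

P, Q = (F(1), F(12)), (F(9), F(10))
R = chord_add(P, Q, 1729)
print(R)                  # (Fraction(46, 3), Fraction(-37, 3))
print(R[0]**3 + R[1]**3)  # 1729 -- C' lies back on the curve
```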

We now need to deal with the special case when a line joining 2 points on the curve does not intersect the curve again. This will happen whenever the gradient of this line is -1, which will make it parallel to the graph’s asymptote y = -x.

In this case we affix a special point at infinity to the Cartesian (x,y) plane. We define this point as the point through which all lines with gradient -1 intersect. Therefore in our expanded geometry, the line through AB *will* intersect the curve at this point at infinity. Let’s call our special point Φ. Now we have a new geometry, the (x,y) plane affixed with Φ.

We can now create an Abelian group. For any 2 rational points P and Q on the curve we will have:

1) **Identity.** P + Φ = Φ + P = P

2) **Closure**. P + Q = R. (Where R(x,y) is also a rational point on the curve)

3) **Associativity**. (P+Q) + R = P+(Q+R)

4) **Inverse**. P + (-P) = Φ

5) **Commutativity**. P+Q = Q+P

**Understanding the identity**

Let’s see if we can understand some of these. For the identity, take a point A on the curve and the point at infinity Φ. The line through A and Φ has gradient -1, and will therefore intersect the curve again at a point B. Our new point, B’, is created by reflecting B in the line y = x, and since both the curve and any line of gradient -1 are symmetric in y = x, this reflection gets us back to point A. Therefore A + Φ = A as required.

**Understanding the inverse**

With the inverse of our point P(x,y) given as -P = (y,x), we can see that this is the reflection of P in the line y = x. If we join up the 2 points reflected in the line y = x we will have a line with gradient -1, which will intersect the curve at our point at infinity. Therefore P + (-P) = Φ.

Through our graphical understanding the commutativity rule also follows immediately: it doesn’t matter which of the 2 points comes first when we draw the line that connects them, therefore P + Q = Q + P.

**Understanding associativity and closure**

Neither associativity nor closure is obvious from our graph. We could check individual points to show that (P+Q) + R = P+(Q+R), but it would be harder to explain why this always holds. Equally, whilst it’s clear that P+Q will always create a point on the curve, it’s not obvious that this will be a *rational* point.

In fact we do have both associativity and closure for our group as we have the following algebraic definition for our addition operation:

The addition of 2 points is given by:

In the case of our curve:

If we take P = (1,12), then P + P will be given by:

We can check this result graphically. If P and Q are the same point, then the line that passes through both P and Q has to be the tangent to the curve at that point. Therefore we would have:

Here the tangent at A does indeed meet the curve again – at point C, which does reflect in y = x to give us the coordinates above.

We could also find this intersection point algebraically. If we differentiate the original curve to find the gradient when x = 1 we can find the equation of the tangent when x=1 and then substitute this back into the equation of the curve to find the intersection point. This would give us:

We would then reverse the x and y coordinates to reflect in the line y = x. This also gives us the same coordinates.
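The tangent case can be computed the same way: implicit differentiation of x^{3} + y^{3} = A gives dy/dx = -x^{2}/y^{2}, and the point of tangency counts as a double root in the Vieta calculation. A sketch, again with exact fractions (the function name is my own):

```python
# Double the point P = (1,12) on x^3 + y^3 = 1729 using the tangent line.
from fractions import Fraction as F

def double_point(P, A_val):
    x1, y1 = P
    assert x1**3 + y1**3 == A_val          # P must lie on the curve
    m = -x1**2 / y1**2                     # implicit differentiation: dy/dx = -x^2/y^2
    k = y1 - m * x1                        # tangent line y = mx + k
    # x1 is a double root of (1 + m^3)x^3 + 3m^2*k*x^2 + ... = 0, so by
    # Vieta the third root is -3m^2*k/(1 + m^3) - 2*x1.
    x3 = -3 * m**2 * k / (1 + m**3) - 2 * x1
    y3 = m * x3 + k
    return (y3, x3)                        # reflect in the line y = x

P = (F(1), F(12))
R = double_point(P, 1729)
print(R)                  # (Fraction(20760, 1727), Fraction(-3457, 1727))
print(R[0]**3 + R[1]**3)  # 1729
```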

More generally if we have the 2 rational coordinates on the curve:

We have the algebraic formula for addition as:

If P = (1,12) and Q = (9,10), P + Q would give (after much tedious substitution!):

This agrees with the coordinates we found earlier using the much easier geometrical approach. As we can see from this formula, both coordinates will always be rational – as they will be composed of combinations of our original rational coordinates. For any given curve there will be a generator set of coordinates through which we can generate all other rational coordinates on the curve through our addition operation.

So, we seem to have come a long way from our original goal – finding integer solutions to an algebraic equation. Instead we seem to have got sidetracked into studying graphs and establishing groups. However, reinterpreting this problem as one in group theory opens up many new mathematical techniques to help us understand its solutions.

A fuller introduction to this topic is the very readable “Taxicabs and the Sum of Two Cubes” by Joseph Silverman (from which the 2 general equations were taken).

**Waging war with maths: Hollow squares**

The picture above [US National Archives, Wikipedia] shows an example of the hollow square infantry formation which was used in wars over several hundred years. The idea was to have an outer square of men, with an inner empty square. This then allowed the men in the formation to be tightly packed, facing the enemy in all 4 directions, whilst the hollow centre allowed the men flexibility to rotate (and also was a place to hold supplies). It was one of the infantry formations of choice against charging cavalry.

So, the question is, what groupings of men can be arranged into a hollow square? This is a current Nrich investigation, so I thought I’d do a mini-investigation on this.

We can rethink this question as asking which numbers can be written as the difference between 2 squares. For example, consider the following diagram (from the Nrich task Hollow Squares):

We can see that the hollow square formation contains a larger square of 20 by 20 and a smaller hollow square of 8 by 8. Therefore the number of men in this formation is:

20^{2}-8^{2} = 336.

The first question we might ask therefore is how many numbers from 1-50 can be written as the difference between 2 squares? These will all be potential formations for our army.

I wrote some quick Python code to find all these combinations. I included 0 as a square number (though this no longer creates a hollow square, rather just a square!). You can copy this and run it in a Python editor like Repl.it.

```
for k in range(1, 51):
    for a in range(0, 100):
        for b in range(0, 100):
            if a**2 - b**2 == k:
                print(k, a, b)
```

This returned the following results:

```
1 1 0
3 2 1
4 2 0
5 3 2
7 4 3
8 3 1
9 3 0
9 5 4
11 6 5
12 4 2
13 7 6
15 4 1
15 8 7
16 4 0
16 5 3
17 9 8
19 10 9
20 6 4
21 5 2
21 11 10
23 12 11
24 5 1
24 7 5
25 5 0
25 13 12
27 6 3
27 14 13
28 8 6
29 15 14
31 16 15
32 6 2
32 9 7
33 7 4
33 17 16
35 6 1
35 18 17
36 6 0
36 10 8
37 19 18
39 8 5
39 20 19
40 7 3
40 11 9
41 21 20
43 22 21
44 12 10
45 7 2
45 9 6
45 23 22
47 24 23
48 7 1
48 8 4
48 13 11
49 7 0
49 25 24
```

Therefore we can see that the numbers with no solutions found are:

2,6,10,14,18,22,26,30,34,38,42,46,50

which are all clearly in the sequence 4n-2.

Thinking about this, we can see that each of these can be written as 2(2n-1), which is the product of an even number and an odd number. This means that every factor pair of a number in this sequence will consist of one even and one odd number:

eg. 50 can be written as 10 (even) x 5 (odd) or 2 (even) x 25 (odd) etc.

But with a^{2}-b^{2} = (a+b)(a-b), the factors (a+b) and (a-b) differ by 2b, so they are always both even or both odd. We can therefore never produce a factor pair with one odd and one even number, so numbers in the sequence 4n-2 can’t be formed as the difference of 2 squares. There are some nicer (more formal) proofs of this here.
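The factor-pair argument also gives a direct way to generate every representation: for each factorisation n = pq with p ≥ q and p, q of the same parity, set a = (p+q)/2 and b = (p-q)/2. A sketch of this (the function name is my own; b = 0 is allowed, matching the inclusion of 0 as a square above):

```python
# Every way of writing n as a difference of 2 squares comes from a factor
# pair n = p*q with p and q the same parity: a = (p+q)/2, b = (p-q)/2.
def hollow_squares(n):
    solutions = []
    for q in range(1, int(n**0.5) + 1):
        if n % q == 0:
            p = n // q
            if (p - q) % 2 == 0:                 # p and q must share parity
                solutions.append(((p + q) // 2, (p - q) // 2))
    return solutions

print(hollow_squares(336))   # includes (20, 8), the Nrich formation
print(hollow_squares(50))    # [] -- 50 is in the 4n-2 sequence
```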

**A battalion with 960 soldiers**

Next we are asked to find how many different ways of arranging 960 soldiers in a hollow square. So let’s modify the code first:

```
for a in range(0, 1000):
    for b in range(0, 1000):
        if a**2 - b**2 == 960:
            print(a, b)
```

Which gives us the following solutions:

```
31 1
32 8
34 14
38 22
46 34
53 43
64 56
83 77
122 118
241 239
```

**General patterns**

We can notice that when the number of soldiers is 1,3,5,7,9,11 (2n-1) we can always find a solution with the pair n and n-1. For example, 21 can be written as 2n-1 with n = 11. Therefore we have 10 and 11 as our pair of squares. This works because 11^{2}-10^{2} = (11+10)(11-10) returns the factor pair 21 and 1. In general it always returns the factor pair, 2n-1 and 1.

We can also notice that when the number of soldiers is 4,8,12,16,20 (4n) we can always find a solution with the pair n+1 and n-1. For example, 20 can be written as 4n with n = 5. Therefore we have 6 and 4 as our pair of squares. This works because 6^{2}-4^{2} = (6+4)(6-4) returns the factor pair 10 and 2. In general it always returns the factor pair, 2n and 2.

And we have already shown that numbers 2,6,10,14,18,22 (4n-2) will have no solution. These 3 sequences account for all the natural numbers (as 2n-1 incorporates the 2 sequences 4n-3 and 4n-1).
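The two constructions above are easily verified as algebraic identities, as a quick sketch:

```python
# Check the general constructions: 2n-1 = n^2 - (n-1)^2 and
# 4n = (n+1)^2 - (n-1)^2, for a range of n.
for n in range(1, 100):
    assert n**2 - (n - 1)**2 == 2 * n - 1        # odd numbers
    assert (n + 1)**2 - (n - 1)**2 == 4 * n      # multiples of 4

print("Both identities hold for n = 1 to 99")
```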

So, we have found a method of always finding a hollow square formation (if one exists) as well as being able to use some computer code to find other possible solutions. There are lots of other avenues to explore here – could you find a method for finding all possible combinations for a given number of men? What happens when the hollow squares become rectangles?

**Finding the volume of a rugby ball (prolate spheroid)**

With the rugby union World Cup currently underway I thought I’d try and work out the volume of a rugby ball using some calculus. This method works similarly for American football and Australian rules football. The approach is to consider the rugby ball as an ellipse rotated 360 degrees around the x axis to create a volume of revolution. We can find the equation of an ellipse centered at (0,0) by simply looking at the x and y intercepts. An ellipse with y-intercept (0,b) and x-intercept (a,0) will have equation:

x^{2}/a^{2} + y^{2}/b^{2} = 1

Therefore our rugby ball, with a horizontal “radius” (vertex) of 14.2cm and a vertical “radius” (co-vertex) of 8.67cm, will have equation:

x^{2}/14.2^{2} + y^{2}/8.67^{2} = 1

We can see that when we plot this ellipse we get an equation which very closely resembles our rugby ball shape:

Therefore we can now find the volume of revolution by using the following formula:

V = π∫y^{2} dx, evaluated between x = -a and x = a

But we can simplify matters by starting the rotation at x = 0 to find half the volume, before doubling our answer. Therefore:

V = 2π∫y^{2} dx, evaluated between x = 0 and x = a

Rearranging the equation of our ellipse we get:

y^{2} = 8.67^{2}(1 - x^{2}/14.2^{2})

Therefore we have the following integration:

V = 2π∫8.67^{2}(1 - x^{2}/14.2^{2}) dx, evaluated between x = 0 and x = 14.2, which gives V ≈ 4470 cm^{3}

Therefore our rugby ball has a volume of around 4.5 litres. We can compare this with the volume of a football (soccer ball) – which has a radius of around 10.5cm, therefore a volume of around 4800 cubic centimeters.

We can find the general volume of any rugby ball (mathematically defined as a prolate spheroid) by the following generalization:

V = (4/3)πb^{2}a

We can see that this is very closely related to the formula for the volume of a sphere, which makes sense as the prolate spheroid behaves like a sphere deformed across its axes. Our prolate spheroid has “radii” b, b and a – therefore r cubed in the sphere formula becomes b squared a.
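We can check both the volume of revolution and the general formula numerically, assuming the measurements quoted above (a = 14.2 cm, b = 8.67 cm). A sketch:

```python
# Volume of a prolate spheroid V = (4/3)*pi*a*b^2, checked against the
# volume of revolution 2*pi * integral from 0 to a of b^2*(1 - x^2/a^2) dx.
import math

a, b = 14.2, 8.67                          # rugby ball "radii" in cm

# The integral evaluates in closed form to b^2 * (a - a/3) = (2/3)*a*b^2
revolution = 2 * math.pi * b**2 * (a - a / 3)
formula = (4 / 3) * math.pi * a * b**2

print(round(revolution), round(formula))   # both give about 4471 cm^3, ~4.5 litres

football = (4 / 3) * math.pi * 10.5**3     # sphere of radius 10.5 cm
print(round(football))                     # about 4849 cm^3
```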

**Prolate spheroids in nature**

The image above [wiki image NASA] is of the Crab Nebula – a distant Supernova remnant around 6500 light years away. The shape of Crab Nebula is described as a prolate spheroid.

**The Shoelace Algorithm to find areas of polygons**

This is a nice algorithm, formally known as Gauss’s area formula, which allows you to work out the area of any polygon as long as you know the Cartesian coordinates of its vertices. The formula can be shown to work for all triangles, and can then be extended to all polygons by first splitting them into triangles and following the same approach.

Let’s see if we can work out the algorithm ourselves using the construction at the top of the page. We want the area of the triangle (4), and we can see that this will be equivalent to the area of the rectangle minus the area of the 3 triangles (1) (2) (3).

Let’s start by adding some other coordinate points for the rectangle:

Therefore the area of the rectangle will be:

(1) + (2) + (3) + (4): (x_{3}-x_{2})(y_{1}-y_{3})

And the area of triangles will be:

(1): 0.5(x_{3}-x_{2})(y_{2}-y_{3})

(2): 0.5(x_{1}-x_{2})(y_{1}-y_{2})

(3): 0.5(x_{3}-x_{1})(y_{1}-y_{3})

Therefore the area of triangle (4) will be:

Area = (x_{3}-x_{2})(y_{1}-y_{3}) – 0.5(x_{3}-x_{2})(y_{2}-y_{3}) – 0.5(x_{1}-x_{2})(y_{1}-y_{2}) – 0.5(x_{3}-x_{1})(y_{1}-y_{3})

Therefore we have our algorithm! Let’s see if it works with the following coordinates added:

x_{1} = 2, x_{2} = 1, x_{3} = 3

y_{1} = 3, y_{2} = 2, y_{3} = 1

Area = (x_{3}-x_{2})(y_{1}-y_{3}) – 0.5(x_{3}-x_{2})(y_{2}-y_{3}) – 0.5(x_{1}-x_{2})(y_{1}-y_{2}) – 0.5(x_{3}-x_{1})(y_{1}-y_{3})

Area = (3-1)(3-1) – 0.5(3-1)(2-1) – 0.5(2-1)(3-2) – 0.5(3-2)(3-1)

Area = 4 – 1 – 0.5 – 1 = 1.5 units squared

We could check this using Pythagoras to find all 3 sides of the triangle, followed by the Cosine rule to find an angle, followed by the Sine area of triangle formula, but let’s take an easier route and ask Wolfram Alpha (simply type “area of a triangle with coordinates (1,2) (2,3) (3,1)”). This does indeed confirm an area of 1.5 units squared. Our algorithm works. We can of course simplify the area formula by expanding brackets and simplifying. If we were to do this we would get the commonly used version of the area formula for triangles.

**The general case for finding areas of polygons**

The general formula for the area of an n-sided polygon is given above.

For a triangle this gives:

For a quadrilateral this gives:

For a pentagon this gives:

You might notice a nice shoelace-like pattern (hence the name) where the x coordinates criss-cross with the next y coordinate along. To finish off, let’s see if it works for an irregular pentagon.

If we arbitrarily assign our (x_{1}, y_{1}) as (1,1) and then (x_{2}, y_{2}) as (3,2), and continue in a clockwise direction we will get the following:

area = |0.5(1×2 + 3×4 + 3×1 + 4×0 + 2×1 – 3×1 – 3×2 – 4×4 – 2×1 – 1×0)|

area = 4.

Let’s check again with Wolfram Alpha – and yes it does indeed have an area of 4.
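The shoelace formula translates directly into code. A sketch (the function name is my own), checked against the triangle above and the pentagon, whose vertices (1,1), (3,2), (3,4), (4,1), (2,0) I have read off the calculation above:

```python
# Shoelace (Gauss's area) formula: area = |sum of x_i*y_{i+1} - x_{i+1}*y_i| / 2
def shoelace(points):
    n = len(points)
    total = 0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

print(shoelace([(2, 3), (1, 2), (3, 1)]))                  # 1.5
print(shoelace([(1, 1), (3, 2), (3, 4), (4, 1), (2, 0)]))  # 4.0
```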

It could be a nice exploration task to take this further and to explore how many different methods there are to find the area of polygons – and compare their ease of use, level of mathematics required and aesthetic appeal.

**Soap Bubbles and Catenoids**

Soap bubbles form such that they create a shape with the minimum surface area for the given constraints. For a fixed volume the minimum surface area is a sphere, which is why soap bubbles will form spheres where possible. We can also investigate what happens when a soap film is formed between 2 parallel circular lines like in the picture below: [Credit Wikimedia Commons, Blinking spirit]

In this case the shape formed is a catenoid – the surface of minimum area connecting the two circles. The catenoid can be defined in terms of parametric equations:

x = c cosh(t/c)cos(u), y = c cosh(t/c)sin(u), z = t

Where cosh() is the hyperbolic cosine function, which can be defined as:

cosh(x) = (e^{x} + e^{-x})/2

For our parametric equations, t and u are parameters which we vary, and c is a constant that we can change to create different catenoids. We can use Geogebra to plot different catenoids. Below is the code which will plot parametric curves when c = 2 and t varies between -20π and 20π.

We then need to create a slider for u, and turn on the trace button – and for every given value of u (between 0 and 2 pi) it will plot a curve. When we trace through all the values of u it will create a 3D shape – our catenoid.
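We can also sample catenoid points directly from the parametric equations and check the defining relation of the surface, namely that the radius of the circular cross-section at height z is c·cosh(z/c). A sketch, assuming the standard parametrisation given above:

```python
# Sample points on a catenoid and check that every point satisfies the
# radius relation sqrt(x^2 + y^2) = c*cosh(z/c).
import math

def catenoid_point(t, u, c):
    r = c * math.cosh(t / c)       # radius of the cross-section at height t
    return (r * math.cos(u), r * math.sin(u), t)

c = 2
for t in [-1.0, 0.0, 1.5]:
    for u in [0.0, math.pi / 3, math.pi]:
        x, y, z = catenoid_point(t, u, c)
        assert abs(math.hypot(x, y) - c * math.cosh(z / c)) < 1e-9

# The narrowest circle (the "waist") occurs at t = 0, with radius c
print(catenoid_point(0.0, 0.0, c))   # (2.0, 0.0, 0.0)
```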

**Individual curve (catenary)**

**Catenoid when c = 0.1**

**Catenoid when c = 0.5**

**Catenoid when c = 1**

**Catenoid when c = 2**

**Wormholes**

For those of you who know your science fiction, the catenoids above may look similar to a wormhole. That’s because the catenoid is a solution to the hypothesized mathematics of wormholes. These can be thought of as a “bridge” either through curved space-time to another part of the universe (potentially therefore allowing for faster than light travel) or a bridge connecting 2 distinct universes.

Above is the Morris-Thorne bridge wormhole [Credit The Image of a Wormhole].

**Further exploration:**

This is a topic with lots of interesting areas to explore – the individual curves (catenaries) look similar to, but are distinct from, parabolas. These curves appear in bridge building and in many other objects with free-hanging cables. Proving that catenoids form shapes with minimum surface area requires some quite complicated undergraduate maths (variational calculus), but it would be interesting to explore some other features of catenoids, or indeed to explore why the sphere is the minimum surface area for a given volume.

If you want to explore further you can generate your own Catenoids with the Geogebra animation I’ve made here.

**IB Applications and Interpretations**

There is a reasonable cross-over between the current Studies course and the new Applications SL course. However there is quite a lot of new content – and as such the expectation is that this course could be quite a bit more challenging than the current Studies course. The HL Applications course is a rather odd mix of former Maths Studies topics, former SL topics and former IB HL Statistics topics.

**Some key points:**

- The SL Applications course will be a complete sub-set of the HL Applications course, and the HL exam will include some of *the same* questions as the SL exam.
- Both SL and HL will only have calculator papers (and no non-calculator papers like Analysis).
- Both SL and HL will have Paper 1 consisting of short questions and Paper 2 with longer style questions (similar to the current Maths Studies course).
- HL students will do an investigation style Paper 3 – potentially with the use of technology. This will lead students through an investigation on any topic on the syllabus.
- The Exploration coursework will remain – however the guidance is now that it should be 12-20 pages (rather than 6-12 previously).

**What does this all mean for Applications SL?**

If the IB follow through with their stated plan to have both new SL courses (Applications and Analysis) at the same level of difficulty, then either:

(a) The Analysis course will remain at the same level of difficulty as the current Maths SL and therefore many students who up until now have taken Maths Studies will find the new Applications course *extremely* challenging.

(b) The Analysis course will be made easier than the current Maths SL course, so that the new Applications course is also a little more accessible – though still harder than the current Maths Studies course.

I would predict that (b) is the more likely of these two – otherwise there will be a significant cohort of IB students (around 30%) who fail to get even a Level 4 in their maths. At the moment I would advise that all weaker students (IGCSE Grade C and below) should definitely take this course, but it may be the case that it is also a good option for stronger students who have traditionally taken SL rather than Studies.

**What does this all mean for Applications HL?**

This is really hard to work out – if Applications SL remains accessible for students with low IGCSE grades, and Applications HL contains a subset of these questions, then that would suggest that the Applications HL would be significantly easier than the current HL course. However, there are a number of challenging topics on the Applications HL syllabus which could well be used to stretch top students. Again the stated aim of the IB is for the two HL courses to be the same difficulty – so this is one we will really have to wait and see with.

Based on the current information I would advise only students with an A* at IGCSE to take this course. It would appear to be aimed at students who need some mathematical skills for their university courses (such as biology, medicine or business) but who do not want to study mathematics or a field with substantial mathematics in it (such as engineering, physics, computer science etc).

**Resources for teachers and students**

This will be a work in progress – but to get started we have:

**General resources**

- A very useful condensed pdf of the Applications and Interpretations formula book for both SL and HL.
- An excellent overview of the changes to the new syllabus – including more detailed information as to the syllabus changes, differences between the two courses and also what 10 of the leading universities have said with regards to course preferences.
- University acceptance. Information collated by a group of IB teachers on university requirements as to which course they will require for different subjects (this may not be up to date, so please check).

**Investigation resources for Paper 3 [Higher Level]**

- Old IA investigations

**Standard Level**

**[Links removed – hopefully the IB will provide these resources elsewhere]**

(a) All SL IA investigations from 1998 to 2009 : This is an excellent collection to start preparations for the new Paper 3.

(b) Specimen investigations: These are 8 specimen examples of IA investigations from 2006 with student answers and annotations.

(c) SL IA investigations 2011-2012: Some more investigations with teacher guidance.

(d) SL IA investigations 2012-2013: Some more investigations with teacher guidance.

(e) Koch snowflakes: This is a nice investigation into fractals.

**Higher Level**

(a) All HL IA investigations from 1998 to 2009: Lots more excellent investigations – with some more difficult mathematics.

(b) HL IA investigations 2011-2012: Some more investigations with teacher guidance.

(c) HL IA investigations 2012-2013: Some more investigations with teacher guidance.

**IB Analysis and Approaches**

There is a significant cross-over between the current SL and HL courses and the new Analysis courses. The main differences are:

- The SL course will now be a complete sub-set of the HL course, and the HL exam will now include some of *the same* questions as the SL exam. Previously, whilst SL was almost a complete sub-set of the HL course, the questions on the HL paper were never the same as SL (and usually all significantly harder).
- There are a few small additions to the HL Analysis syllabus compared to the old HL syllabus – such as binomials with fractional indices, partial fractions and regression. SL will be largely the same, except that the unit on vectors has been taken out.
- The HL option unit has gone – and some of the old HL Calculus option has been added to the core syllabus (though only a relatively small proportion of it).
- HL students will instead do an investigation style Paper 3 – potentially with the use of technology. This will lead students through an investigation on any topic on the syllabus.
- The Exploration coursework will remain – however the guidance is now that it should be 12-20 pages (rather than 6-12 previously).

**What does this all mean?**

Until we start to see some past papers it’s difficult to be too confident on this – but based on the syllabus and specimen paper I would say that the two new courses remain pitched at the same level as for the old SL and HL courses. Therefore the Analysis and Approaches HL course is only suitable for the very best mathematicians who are looking to study either mathematics or a field with substantial mathematics in it (such as engineering, physics, computer science etc). These students would usually have an A* at IGCSE and have also studied Additional Mathematics prior to starting the course. The Analysis and Approaches SL course looks like it will still be a good quality mathematics course – and so will be aimed at students who need some mathematical skills for their university courses (such as biology, medicine or business). These students would usually have an A* – B at IGCSE.

**Resources for teachers and students**

This will be a work in progress – but to get started we have:

**General resources:**

1) A very useful condensed pdf of the Analysis and Approaches formula book for both SL and HL.

2) An excellent overview of the changes to the new syllabus – including more detailed information as to the syllabus changes, differences between the two courses and also what 10 of the leading universities have said with regards to course preferences.

3) University acceptance. Information collated by a group of IB teachers on university requirements as to which course they will require for different subjects (this may not be up to date, so please check).

**Specific resources for the new HL and SL syllabus content:**

**1. Linear correlation (previously only SL, now SL and HL)**

a) A worksheet (docx file) on using a GDC to calculate regression lines and r values.

**2. Equation of regression line of x on y. (SL and HL)**

**3. Sampling (SL and HL)**

**4. Simple deductive proof (SL and HL)**

a) A deductive proof worksheet (docx file) with some simple examples of deductive proof.

**5. Partial fractions. (HL)**

a) A Partial Fractions worksheet (docx file) with notes and some partial fraction questions.

**6. Binomial expansion with fractional and negative indices (HL)**

a) A binomial expansion worksheet (docx file) requiring the use of fractional and negative indices, as well as use of the Maclaurin expansion.

**7. More rational functions (HL)**

**8. Graphing [f(x)]^{2}**

**9. L’Hopital’s rule (Previously on the Calculus option now on HL)**

a) A Limits of functions worksheet (docx file) with some examples of simple limits and uses of L’Hopital’s rule. Markscheme here.

**10. Euler method for differential equations (Previously on the Calculus option now on HL)**

a) A worksheet (docx file) with some questions using Euler’s method to solve differential equations.

**11. Separating variables to solve differential equations (Previously on the Calculus option now on HL)**

a) A worksheet (docx file) with some questions separating variables to solve differential equations. Markscheme here.

**12. Solving differential equations by substitution (Previously on the Calculus option now on HL)**

a) A worksheet (docx file) with some questions using substitution to solve homogenous differential equations. Markscheme here.

**13. Solving differential equations by the integrating factor method (Previously on the Calculus option now on HL)**

a) A worksheet (docx file) with some questions using the integrating factor to solve differential equations. Markscheme here.

**14. Maclaurin series (Previously on the Calculus option now on HL)**

a) A worksheet (docx file) with some questions using the Maclaurin series. Markscheme here.

**Investigation resources for Paper 3 [Higher Level]**

- Old IA investigations

**Standard Level**

**[Links removed – hopefully the IB will provide these resources elsewhere]**

(a) All SL IA investigations from 1998 to 2009 : This is an excellent collection to start preparations for the new Paper 3.

(b) Specimen investigations: These are 8 specimen examples of IA investigations from 2006 with student answers and annotations.

(c) SL IA investigations 2011-2012: Some more investigations with teacher guidance.

(d) SL IA investigations 2012-2013: Some more investigations with teacher guidance.

(e) Koch snowflakes: This is a nice investigation into fractals.

**Higher Level**

(a) All HL IA investigations from 1998 to 2009: Lots more excellent investigations – with some more difficult mathematics.

(b) HL IA investigations 2011-2012: Some more investigations with teacher guidance.

(c) HL IA investigations 2012-2013: Some more investigations with teacher guidance.

**Simulating a Football Season**

This is a nice example of how statistics are used in modeling – similar techniques are used when gambling companies are creating odds or when computer game designers are making football manager games. We start with some statistics. The soccer stats site has the data we need from the 2018-19 season, and we will use this to predict the outcome of the 2019-20 season (assuming teams stay at a similar level, and that no-one was relegated in 2018-19).

**Attack and defense strength**

For each team we need to calculate:

- Home attack strength
- Away attack strength
- Home defense strength
- Away defense strength.

For example for Liverpool (LFC)

LFC Home attack strength = (LFC home goals in 2018-19 season)/(average home goals in 2018-19 season)

LFC Away attack strength = (LFC away goals in 2018-19 season)/(average away goals in 2018-19 season)

LFC Home defense strength = (LFC home goals conceded in 2018-19 season)/(average home goals conceded in 2018-19 season)

LFC Away defense strength = (LFC away goals conceded in 2018-19 season)/(average away goals conceded in 2018-19 season)
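Each of these is a simple ratio, so the calculation is easy to code. A minimal sketch in Python – the goal totals below are made-up placeholders rather than the real 2018-19 figures:

```python
# Hypothetical goal totals - substitute the real 2018-19 data here
home_goals = {"LFC": 55, "Arsenal": 42}
league_avg_home_goals = 30   # hypothetical league-average home goals per team

def home_attack_strength(team):
    # the team's home goals divided by the league average
    return home_goals[team] / league_avg_home_goals

print(home_attack_strength("LFC"))
```

The other three strengths follow the same pattern, dividing each team’s totals by the corresponding league average.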

**Calculating lambda**

We can then use a Poisson model to work out some probabilities. First, though, we need to find our lambda value. To make life easier we can use the fact that the lambda value of a Poisson distribution is its mean – and use this to give an approximate expected score.

So, for example, if Liverpool are playing at home to Arsenal we work out Liverpool’s lambda value as:

LFC home lambda = league average home goals per game x LFC home attack strength x Arsenal away defense strength.

We would work out Arsenal’s away lambda as:

Arsenal away lambda = league average away goals per game x Arsenal away attack strength x Liverpool home defense strength.

Putting in some values gives a home lambda for Liverpool of 3.38 and an away lambda for Arsenal of 0.68. So we would expect Liverpool to win 3-1 (rounding each to the nearest integer).

**Using Excel**

I then used an Excel spreadsheet to work out the home goals in each fixture in the league season (green column represents the home teams)

and then used the same method to work out the away goals in each fixture in the league (yellow column represents the away team)

I could then round these numbers to the nearest integer and fill in the scores for each match in the table:

Then I was able to work out the point totals to produce a predicted table:

Here we had both Liverpool and Manchester City on 104 points, but with Manchester City having a better goal difference, so winning the league again.

**Using a Poisson model.**

The Poisson model allows us to calculate exact probabilities. The model is:

P(k goals) = (e^{-λ}λ^{k})/k!

λ is the lambda value which we calculated before.

So, for example, with Liverpool at home to Arsenal we calculate:

Liverpool’s home lambda = league average home goals per game x LFC home attack strength x Arsenal away defense strength.

**Liverpool’s home lambda = 1.57 x 1.84 x 1.17 = 3.38**

Therefore

P(Liverpool score 0 goals) = (e^{-3.38}3.38^{0})/0! = 0.034

P(Liverpool score 1 goal) = (e^{-3.38}3.38^{1})/1! = 0.12

P(Liverpool score 2 goals) = (e^{-3.38}3.38^{2})/2! = 0.19

P(Liverpool score 3 goals) = (e^{-3.38}3.38^{3})/3! = 0.22

P(Liverpool score 4 goals) = (e^{-3.38}3.38^{4})/4! = 0.19

P(Liverpool score 5 goals) = (e^{-3.38}3.38^{5})/5! = 0.13 etc.

**Arsenal’s away lambda = 1.25 x 1.30 x 0.42 = 0.68**

P(Arsenal score 0 goals) = (e^{-0.68}0.68^{0})/0! = 0.51

P(Arsenal score 1 goal) = (e^{-0.68}0.68^{1})/1! = 0.34

P(Arsenal score 2 goals) = (e^{-0.68}0.68^{2})/2! = 0.12

P(Arsenal score 3 goals) = (e^{-0.68}0.68^{3})/3! = 0.03 etc.

**Probability that Arsenal win**

Arsenal can win if:

Liverpool score 0 goals and Arsenal score 1 or more

Liverpool score 1 goal and Arsenal score 2 or more

Liverpool score 2 goals and Arsenal score 3 or more etc.

i.e. the approximate probability of Arsenal winning is:

0.034 x 0.49 + 0.12 x 0.15 + 0.19 x 0.03 = 0.04.

Using the same method we could work out the probability of a draw and a Liverpool win. This is the sort of method that bookmakers will use to calculate the probabilities that ensure they make a profit when offering odds.
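The whole calculation can be sketched in Python. A minimal version, truncating each team at 10 goals (scorelines beyond that have negligible probability):

```python
from math import exp, factorial

def poisson(lam, k):
    # P(k goals) = e^(-lam) * lam^k / k!
    return exp(-lam) * lam**k / factorial(k)

lam_home = 3.38   # Liverpool's home lambda from above
lam_away = 0.68   # Arsenal's away lambda from above

# add up P(home scores i) x P(away scores j) over all scorelines
p_home_win = p_draw = p_away_win = 0
for i in range(11):
    for j in range(11):
        p = poisson(lam_home, i) * poisson(lam_away, j)
        if i > j:
            p_home_win += p
        elif i == j:
            p_draw += p
        else:
            p_away_win += p

print(round(p_home_win, 2), round(p_draw, 2), round(p_away_win, 2))
```

This reproduces the approximate 0.04 probability of an Arsenal win found above, and gives the draw and Liverpool-win probabilities at the same time.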

**IB Revision**

If you’re already thinking about your coursework then it’s probably also time to start planning some revision, either for the end of Year 12 school exams or Year 13 final exams. There’s a really great website that I would strongly recommend students use – you choose your subject (HL/SL/Studies if your exam is in 2020 or Applications/Analysis if your exam is in 2021), and then have the following resources:

The Questionbank takes you to a breakdown of each main subject area (e.g. Algebra, Calculus etc) and each area then has a number of graded questions. What I like about this is that you are given a difficulty rating, as well as a mark scheme and also a worked video tutorial. Really useful!

The Practice Exams section takes you to ready made exams on each topic – again with worked solutions. This also has some harder exams for those students aiming for 6s and 7s and the Past IB Exams section takes you to full video worked solutions to every question on every past paper – and you can also get a prediction exam for the upcoming year.

I would really recommend everyone making use of this – there is a mixture of a lot of free content as well as premium content so have a look and see what you think.


**The Van Eck Sequence**

This is a nice sequence as discussed in the Numberphile video above. There are only 2 rules:

- If the last number in the sequence has not appeared earlier, add a 0 to the sequence.
- If the last number has appeared before, count how many steps back you last saw it – that count is the next term.

You start with a 0.

0

You have never seen a 0 before, so the next number is 0.

00

You have seen a 0 before, and it was 1 step ago, so the next number is 1.

001

You have never seen a 1 before, so the next number is 0.

0010

You have seen a 0 before, it was 2 steps ago, so the next number is 2.

00102.

etc.

I can run a quick Python program (adapted from the entry in the Online Encyclopedia of Integer Sequences here) to find the first 100 terms.

```
A181391 = [0, 0]
for n in range(1, 10**2):
    for m in range(n-1, -1, -1):
        if A181391[m] == A181391[n]:
            A181391.append(n-m)
            break
    else:
        A181391.append(0)
print(A181391)
```

This returns:

[0, 0, 1, 0, 2, 0, 2, 2, 1, 6, 0, 5, 0, 2, 6, 5, 4, 0, 5, 3, 0, 3, 2, 9, 0, 4, 9, 3, 6, 14, 0, 6, 3, 5, 15, 0, 5, 3, 5, 2, 17, 0, 6, 11, 0, 3, 8, 0, 3, 3, 1, 42, 0, 5, 15, 20, 0, 4, 32, 0, 3, 11, 18, 0, 4, 7, 0, 3, 7, 3, 2, 31, 0, 6, 31, 3, 6, 3, 2, 8, 33, 0, 9, 56, 0, 3, 8, 7, 19, 0, 5, 37, 0, 3, 8, 8, 1, 46, 0, 6, 23]

I then assigned each term an x coordinate value, i.e.:

0 , 0

1 , 0

2 , 1

3 , 0

4 , 2

5 , 0

6 , 2

7 , 2

8 , 1

9 , 6

10 , 0

11 , 5

12 , 0

13 , 2

14 , 6

15 , 5

16 , 4

17 , 0

18 , 5

19 , 3

20 , 0

etc.

This means that you can then plot the sequence as a line graph, with the y values corresponding to the sequence terms. As you can see, every time we hit a new peak the following value is 0, leading to the peaks and troughs seen below:

Let’s extend the sequence to the first 1000 terms:

We can see that the line y = x provides an upper bound for this data. In fact this bound always holds: each term measures the gap back to an earlier occurrence, so the term at position n can never exceed n.

What is not known is whether every number actually appears somewhere in the sequence – this is still an open question.
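We can check this bound computationally: each term records the gap back to an earlier occurrence, so the term at position n can never exceed n. Running the check over the first 1000 terms:

```python
# generate the first 1000 or so Van Eck terms (same method as above)
A = [0, 0]
for n in range(1, 1000):
    for m in range(n-1, -1, -1):
        if A[m] == A[n]:
            A.append(n - m)   # a gap of n - m, which is at most n
            break
    else:
        A.append(0)

print(all(A[i] <= i for i in range(len(A))))   # → True
```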

**Length of steps before new numbers appear.**

We can also investigate how long we have to wait to see each number for the first time by running the following Python code:

```
A181391 = [0, 0]
seen = set()
for n in range(1, 10**3):
    for m in range(n-1, -1, -1):
        if A181391[m] == A181391[n]:
            A181391.append(n-m)
            break
    else:
        A181391.append(0)
    # record only the first time each number from 1 to 49 appears
    if 1 <= A181391[n] < 50 and A181391[n] not in seen:
        seen.add(A181391[n])
        print(A181391[n], ",", n+1)
```

This returns the following data:

1 , 3

2 , 5

6 , 10

5 , 12

4 , 17

3 , 20

9 , 24

14 , 30

15 , 35

17 , 41

11 , 44

8 , 47

42 , 52

20 , 56

32 , 59

18 , 63

7 , 66

31 , 72

33 , 81

19 , 89

etc.

The first coordinate tells us the number we are interested in, and the second tells us how many terms we have to wait until it first appears. So (1 , 3) means that we have to wait until the 3rd term of the sequence to see the number 1 for the first time.

Plotting this for numbers 1-50 on a graph returns the following:

So, we can see (for example) that we wait 66 terms to first see a 7, and 173 terms to first see a 12. There seems to be a general trend that as the numbers get larger we have to wait longer to see them. Testing this with a linear regression we can see a weak to moderate correlation:

Checking for the numbers up to 300 we get the following:

For example this shows that we have to wait 9700 terms until we see the number 254 for the first time. Testing this with a linear correlation we have a weaker positive correlation than previously.
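As a rough check we can compute Pearson’s r directly. The sketch below uses a dictionary of last-seen positions (an equivalent but faster way to generate the sequence) and correlates each number from 1 to 50 with the position of its first appearance in the first 10,000 terms:

```python
# generate 10,000 Van Eck terms with a last-seen dictionary
A = [0]
last = {}
for n in range(10**4):
    term = A[-1]
    A.append(n - last[term] if term in last else 0)
    last[term] = n

# position (1-based) at which each number 1-50 first appears
first = {}
for i, term in enumerate(A):
    if 1 <= term <= 50 and term not in first:
        first[term] = i + 1

xs = sorted(first)
ys = [first[m] for m in xs]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sx = sum((x - mean_x)**2 for x in xs) ** 0.5
sy = sum((y - mean_y)**2 for y in ys) ** 0.5
r = cov / (sx * sy)
print(round(r, 2))
```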

So, a nice and quick investigation using a combination of sequences, coding, graphing and regression, with lots of areas this could be developed further.

]]>

Computers can brute force a lot of simple mathematical problems, so I thought I’d try and write some code to solve some of them. In nearly all these cases there’s probably a more elegant way of coding the problem – but these all do the job! You can run all of these with a Python editor such as Repl.it. Just copy and paste the below code and see what happens.

1) **Happy Numbers.**

Happy numbers are defined by the rule that you start with any positive integer, square each of the digits, then add them together. Now do the same with the new number. Happy numbers eventually reach 1. Numbers that never reach 1 are called unhappy numbers.

As an example, say we start with the number 23. Next we do 2²+3² = 13. Now, 1²+3² = 10. Now 1²+0² = 1. 23 is therefore a happy number.

```
k = int(input("type a 2 digit number "))
for step in range(20):
    # split k into its digits (handles numbers of up to 3 digits)
    a = k % 10
    c = k // 100
    b = k // 10 - 10*c
    k = a**2 + b**2 + c**2
    print(k)
```

2) **Sum of 3 cubes**

Most (though not all) numbers can be written as the sum of 3 cubes. For example:

1^{3} + 2^{3} + 2^{3} = 17. Therefore 17 can be written as the sum of 3 cubes.

This program allows you to see all the combinations possible when using the integers -10 to 10 and trying to make all the numbers up to 29.

```
for k in range(1, 30):
    for a in range(-10, 11):
        for b in range(-10, 11):
            for c in range(-10, 11):
                if a**3 + b**3 + c**3 == k:
                    print(k, a, b, c)
```

3) **Narcissistic Numbers**

A 3 digit narcissistic number is one for which the sum of the cubes of its digits equals the original number. This program allows you to see all 3 digit narcissistic numbers.

```
for a in range(1, 10):   # the leading digit of a 3 digit number can't be 0
    for b in range(0, 10):
        for c in range(0, 10):
            if a**3 + b**3 + c**3 == 100*a + 10*b + c:
                print(100*a + 10*b + c)
```

4) **Pythagorean triples**

Pythagorean triples are integer solutions to Pythagoras’ Theorem. For example:

3^{2} + 4^{2} = 5^{2} is an integer solution to Pythagoras’ Theorem.

This code allows you to find all integer solutions to Pythagoras’ Theorem for the numbers in the range you specify.

```
k = 100
for a in range(1, k):
    for b in range(1, k):
        for c in range(1, 2*k):
            if a**2 + b**2 == c**2:
                print(a, b, c)
```

5) **Perfect Numbers**

Perfect numbers are numbers whose proper factors (factors excluding the number itself) add to the number. This is easier to see with an example.

6 is a perfect number because its proper factors are 1,2,3 and 1+2+3 = 6

8 is not a perfect number because its proper factors are 1,2,4 and 1+2+4 = 7

Perfect numbers have been known about for about 2000 years – however they are exceptionally rare. The first 4 perfect numbers are 6, 28, 496, 8128. These were all known to the Greeks. The next perfect number wasn’t discovered until around 1500 years later – and not surprisingly as it’s 33,550,336.

The code below will find all the perfect numbers less than 10,000.

```
for n in range(1, 10000):
    divisors = []
    for i in range(1, n):
        if n % i == 0:
            divisors.append(i)
    if sum(divisors) == n:
        print(n)
```

**Friendly Numbers**

Friendly numbers are numbers which share a common ratio with at least one other number. They require the use of σ(a), called the divisor function, which means the sum of all the factors of a. For example σ(7) = 1 + 7 = 8 and σ(10) = 1 + 2 + 5 + 10 = 18.

Friendly numbers therefore satisfy:

σ(a)/a = σ(b)/b

As an example:

σ(6) / 6 = (1+2+3+6) / 6 = 2,

σ(28) / 28 = (1+2+4+7+14+28) / 28 = 2

σ(496)/496 = (1+2+4+8+16+31+62+124+248+496)/496 = 2

Therefore 28 and 6 are friendly numbers because they share a common relationship.

This code will help find some Friendly numbers (though these are very difficult to find, as we need to check against every other integer until we find a relationship).

The code below will find some Friendly numbers less than 200, and their friendly pair less than 5000:

```
for n in range(1, 5000):
    divisors = []
    for i in range(1, n+1):
        if n % i == 0:
            divisors.append(i)
    sigma_n = sum(divisors)
    for m in range(1, 200):
        divisors2 = []
        for j in range(1, m+1):
            if m % j == 0:
                divisors2.append(j)
        sigma_m = sum(divisors2)
        if sigma_m/m == sigma_n/n and n != m:
            print(n, m)
```

**Hailstone numbers**

Hailstone numbers are created by the following rules:

if n is even: divide by 2

if n is odd: multiply by 3 and add 1

We can then generate a sequence from any starting number. For example, starting with 10:

10, 5, 16, 8, 4, 2, 1, 4, 2, 1…

we can see that this sequence loops into an infinitely repeating 4,2,1 sequence. Trying another number, say 58:

58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1…

and we see the same loop of 4,2,1.

The question is, does every number end in this loop? Well, we don’t know. Every number mathematicians have checked does indeed lead to this loop, but that is not a proof. Perhaps there is a counter-example – we just haven’t found it yet.

Run the code below, and by changing the value of n you can see how quickly the number enters the 4,2,1 loop.

```
n = 300
for k in range(1, 40):
    if n % 2 == 0:
        n = n // 2   # integer division keeps n a whole number
    else:
        n = 3*n + 1
    print(n)
```

**Generating the Golden ratio**

The Golden ratio can be approximated by dividing any 2 successive terms of the Fibonacci sequence. As we divide ever larger successive terms we get a better approximation for the Golden ratio. This code returns successive terms of the Fibonacci sequence and the corresponding approximation for the Golden ratio.

```
a = 0
b = 1
print(a)
print(b)
for k in range(1, 30):
    a = a + b
    b = a + b
    print(a, b, b/a)
```

**Partial sums**

We can use programs to see if sums to infinity converge. For example with the sequence 1/n, if I add the terms together I get: 1/1 + 1/2 + 1/3 + 1/4…In this case the series (surprisingly) diverges. The code below shows that the sum of the sequence 1/n^{2} converges to a number (pi^{2}/6).

```
terms = []
for n in range(1, 100):
    terms.append(1/n**2)
print(sum(terms))
```

**Returning to 6174**

This is a nice number trick. You take any 4 digit number, then rearrange the digits so that you make the largest number possible and also the smallest number possible. You then take away the smallest number from the largest number, and then start again. For example with the number 6785, the largest number we can make is 8765 and the smallest is 5678. So we do 8765 – 5678 = 3087. We then carry on with the same method. Eventually we will arrive at the number 6174!

```
k = int(input("type a 4 digit number "))
for n in range(10):
    # sort the digits, then subtract the smallest arrangement from the largest
    digits = sorted([k // 1000, (k // 100) % 10, (k // 10) % 10, k % 10])
    smallest = 1000*digits[0] + 100*digits[1] + 10*digits[2] + digits[3]
    largest = 1000*digits[3] + 100*digits[2] + 10*digits[1] + digits[0]
    k = largest - smallest
    print(k)
```

**Maximising the volume of a cuboid**

If we take a square sheet of side length n, and cut squares of side x from each corner (folding up the sides to make an open box), what value of x will give the maximum volume? This code will look at initial squares of size 10×10 up to 90×90 and find the value of x for each which gives the maximum volume.

```
def compute(a):
    volumes = []
    k = 6
    step = 10**(k-1)
    z = int(0.5 * a * 10**k)
    for x in range(1, z):
        cut = x / step
        volumes.append((10*a - 2*cut) * (10*a - 2*cut) * cut)
    print("length of original side is", 10*a)
    best = max(volumes)
    print("maximum volume is", best)
    q = volumes.index(best)
    print("length of square removed from corner is", (q+1)/step)

for a in range(1, 10):
    compute(a)
```