
**Soap Bubbles and Catenoids**

Soap bubbles form so as to create the shape with the minimum surface area for the given constraints. For a fixed volume the shape of minimum surface area is a sphere, which is why soap bubbles form spheres where possible. We can also investigate what happens when a soap film is formed between two parallel circular rings, as in the picture below: [Credit Wikimedia Commons, Blinking spirit]

In this case the shape formed is a catenoid – the surface with the minimum area among all surfaces connecting the two circles. The catenoid can be defined in terms of parametric equations:

Where cosh() is the hyperbolic cosine function which can be defined as:
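The equations themselves appeared as images in the original post; the standard forms, consistent with the description of t, u and c that follows, are:

```latex
x(t,u) = c\cosh\!\left(\tfrac{t}{c}\right)\cos u,\qquad
y(t,u) = c\cosh\!\left(\tfrac{t}{c}\right)\sin u,\qquad
z(t,u) = t,
\qquad\text{where}\qquad
\cosh x = \frac{e^{x}+e^{-x}}{2}.
```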

For our parametric equations, t and u are parameters which we vary, and c is a constant that we can change to create different catenoids. We can use Geogebra to plot different catenoids. Below is the code which will plot parametric curves when c = 2 and t varies between -20pi and 20pi.

We then need to create a slider for u, and turn on the trace button – and for every given value of u (between 0 and 2 pi) it will plot a curve. When we trace through all the values of u it will create a 3D shape – our catenoid.
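The original Geogebra commands aren't reproduced in the text, but the same construction can be sketched in Python (my own illustration, not the original code – `catenoid_point` is a hypothetical helper name):

```python
import math

# Point on a catenoid using the standard parametrisation
# x = c·cosh(t/c)·cos(u), y = c·cosh(t/c)·sin(u), z = t.
def catenoid_point(t, u, c=2):
    """Return the (x, y, z) point for parameters t (height) and u (angle)."""
    r = c * math.cosh(t / c)   # radius of the horizontal circle at height t
    return (r * math.cos(u), r * math.sin(u), t)

# Tracing u from 0 to 2*pi for each fixed t sweeps out the full surface,
# just like the slider-and-trace approach described above.
```

At t = 0 the surface has its narrowest "neck", of radius c.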

**Individual curve (catenary)**

**Catenoid when c = 0.1**

**Catenoid when c = 0.5**

**Catenoid when c = 1**

**Catenoid when c = 2**

**Wormholes**

For those of you who know your science fiction, the catenoids above may look similar to a wormhole. That’s because the catenoid is a solution to the hypothesized mathematics of wormholes. These can be thought of as a “bridge” either through curved space-time to another part of the universe (potentially therefore allowing for faster-than-light travel) or a bridge connecting two distinct universes.

Above is the Morris-Thorne bridge wormhole [Credit The Image of a Wormhole].

**Further exploration:**

This is a topic with lots of interesting areas to explore – the individual curves (catenaries) look similar to, but are distinct from, parabolas. These curves appear in bridge building and in many other objects with free-hanging cables. Proving that catenoids form shapes with minimum surface area requires some quite complicated undergraduate maths (the calculus of variations), but it would be interesting to explore some other features of catenoids, or indeed to explore why the sphere has the minimum surface area for a given volume.

If you want to explore further you can generate your own Catenoids with the Geogebra animation I’ve made here.

**Normal Numbers – and random number generators**

Numberphile have a nice new video where Matt Parker discusses all different types of numbers – including “normal numbers”. Normal numbers are irrational numbers in which every 1-digit string occurs with the same limiting frequency, every 2-digit string occurs with the same limiting frequency, and so on. For example, in the normal number 0.12345678910111213141516…, if I choose any digit in the entire number at random then P(0) = P(1) = P(2) = … = P(9) = 1/10. Equally, if I choose any 2-digit string at random then P(10) = P(11) = P(12) = … = P(99) = 1/100.

It is incredibly hard to find normal numbers, but there is a formula to find some of them.

In base 10, we are restricted to choosing a value of c such that 10 and c are relatively prime (i.e. they share no common factors apart from 1). So if we choose c = 3 this gives:
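The formula itself appeared as an image; a series consistent with the decimal expansion quoted below is:

```latex
\sum_{n=1}^{\infty}\frac{1}{c^{n}\,10^{c^{n}}}
\;\overset{c=3}{=}\;
\frac{1}{3\cdot 10^{3}}+\frac{1}{9\cdot 10^{9}}+\frac{1}{27\cdot 10^{27}}+\dots
```

The first term gives the repeating 3s, adding the second term turns subsequent digits into 4s, and so on – matching 0.000333333444444…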

We can now put this into Wolfram Alpha and see what number this gives us:

We can then put the first few digits into an online frequency counter to find the distribution of digits:

*0.0003333334444444444444481481481481481481481481481481481481481491482716049382716049382716049382716049382716049382716049382716049382716049382716049382716049382716049382716049382716049382716049382716049827160493827160493827160479423863312757201646090534979423868312757201646090534979423868312757201646090534979423868312757201646090534979423868312757201646090534979 42*

4: 61

1: 41

8: 40

3: 38

0: 36

2: 33

7: 33

9: 33

6: 32

5: 10

We can see that we are already getting a reasonably even distribution of single digits, though with 4 and 5 as outliers. As the number progresses we would expect these distributions to even out (otherwise it would not be a normal number).
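The frequency count above can be reproduced with a few lines of Python (a sketch; `digit_frequencies` is my own helper name, not from the post):

```python
from collections import Counter

# Count how often each digit character appears in a decimal string -
# the same job as the online character frequency counter used above.
def digit_frequencies(s):
    return Counter(ch for ch in s if ch.isdigit())
```

Feeding in the digit string above should reproduce the counts listed (61 fours, 41 ones, and so on).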

One of the potential uses of normal numbers is in random number generators – if you can use a normal number and specify a digit (or number of digits) at random then this should give an equal chance of returning each number.

To finish off this, let’s prove that the infinite series:

does indeed converge to a number (if it diverged then it could not represent a real number). To do that we can use the ratio test (only worry about this bit if you have already studied the Calculus option for HL!):

We can see that in the last limit the factor 10 to the power 3^(n+1) in the denominator grows far faster than the factor 10 to the power 3^n in the numerator, so as n increases the ratio approaches 0. Since 0 is less than 1, by the ratio test the series converges to a real number.
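Assuming the series is the one matching the decimal expansion above, the ratio test computation can be written out as:

```latex
\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_{n}}\right|
= \lim_{n\to\infty}\frac{3^{n}\,10^{3^{n}}}{3^{n+1}\,10^{3^{n+1}}}
= \lim_{n\to\infty}\frac{1}{3}\cdot 10^{\,3^{n}-3^{n+1}}
= \lim_{n\to\infty}\frac{1}{3}\cdot 10^{-2\cdot 3^{n}} = 0 < 1.
```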

**Is pi normal?**

Interestingly we don’t know if numbers like e, pi and ln(2) are normal or not. We can analyse large numbers of digits of pi – and it looks like it will be normal, but as yet there is no proof. Here is the distribution of the first 100,000 digits of pi:

1: 10137

6: 10028

3: 10026

5: 10026

7: 10025

0: 9999

8: 9978

4: 9971

2: 9908

9: 9902

We can see that these are all very close to the expected value of 10,000 (within around ±1%).

So, next I copied the first 1 million digits of pi into a character frequency counter which gives the following:

5: 100359

3: 100230

4: 100230

9: 100106

2: 100026

8: 99985

0: 99959

7: 99800

1: 99758

6: 99548

This is even closer to the expected value of 100,000, with most digits within ±0.25%.

Proving that pi is normal would be an important result in number theory – perhaps you could be the one to do it!

**IB Revision**

If you’re already thinking about your coursework then it’s probably also time to start planning some revision, either for the end of Year 12 school exams or Year 13 final exams. There’s a really great website that I would strongly recommend students use – you choose your subject (HL/SL/Studies if your exam is in 2020 or Applications/Analysis if your exam is in 2021), and then have the following resources:

The Questionbank takes you to a breakdown of each main subject area (e.g. Algebra, Calculus etc) and each area then has a number of graded questions. What I like about this is that you are given a difficulty rating, as well as a mark scheme and also a worked video tutorial. Really useful!

The Practice Exams section takes you to ready made exams on each topic – again with worked solutions. This also has some harder exams for those students aiming for 6s and 7s and the Past IB Exams section takes you to full video worked solutions to every question on every past paper – and you can also get a prediction exam for the upcoming year.

I would really recommend everyone making use of this – there is a mixture of a lot of free content as well as premium content so have a look and see what you think.


**The Gini Coefficient – Measuring Inequality**

The Gini coefficient is a value ranging from 0 to 1 which measures inequality. 0 represents perfect equality – i.e. everyone in a population has exactly the same wealth. 1 represents complete inequality – i.e. one person has all the wealth and everyone else has nothing. As you would expect, countries always fall somewhere between these two extremes. The way it’s calculated is best seen through the following graph (from here):

The Gini coefficient is calculated as the area of A divided by the area of A + B. As the area of A decreases, the curve which plots the distribution of wealth (called the Lorenz curve) approaches the line y = x – the line which represents perfect equality.

**Inequality in Thailand**

The following graph will illustrate how we can plot a curve and calculate the Gini coefficient. First we need some data. I have taken the following information on income distribution from the 2002 World Bank data on Thailand where I am currently teaching:

Thailand:

The bottom 20% of the population have 6.3% of the wealth

The next 20% of the population have 9.9% of the wealth

The next 20% have 14% of the wealth

The next 20% have 20.8% of the wealth

The top 20% have 49% of the wealth

I can then write this in a cumulative frequency table (converting % to decimals):

x (cumulative proportion of population): 0, 0.2, 0.4, 0.6, 0.8, 1

y (cumulative proportion of wealth): 0, 0.063, 0.162, 0.302, 0.51, 1

Here the x axis represents the cumulative percentage of the population (measured from lowest to highest), and the y axis represents the cumulative wealth. This shows, for example, that the bottom 80% of the population own 51% of the wealth. This can then be plotted as a graph below (using Desmos):

From the graph we can see that Thailand has quite a lot of inequality – after all the top 20% have just under 50% of the wealth. The blue line represents how a perfectly equal society would look.

To find the Gini Coefficient we first need to find the area between the 2 curves. The area underneath the blue line represents the area A +B. This is just the area of a triangle with length and perpendicular height 1, therefore this area is 0.5.

The area under the green curve can be found using the trapezium rule, 0.5(a+b)h. Doing this for the first trapezium we get 0.5(0+0.063)(0.2) = 0.0063. The second trapezium is 0.5(0.063+0.162)(0.2) and so on. Adding these areas all together we get a total trapezium area of 0.3074. Therefore we get the area between the two curves as 0.5 – 0.3074 ≈ 0.1926

The Gini coefficient is then given by 0.1926/0.5 = 0.3852.
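The trapezium-rule calculation above can be reproduced in a few lines of Python (a sketch using the cumulative figures quoted):

```python
# Cumulative wealth shares for Thailand (from the table above), with
# population quintiles on the x axis in steps of h = 0.2.
y = [0, 0.063, 0.162, 0.302, 0.510, 1.0]
h = 0.2

# Area under the Lorenz curve via the trapezium rule 0.5(a+b)h
area_B = sum(0.5 * (y[i] + y[i + 1]) * h for i in range(len(y) - 1))

area_A = 0.5 - area_B      # area between the line of equality and the curve
gini = area_A / 0.5        # ≈ 0.3852, as above
```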

The actual World Bank calculation for Thailand’s Gini coefficient in 2002 was 0.42 – so we have slightly underestimated the inequality in Thailand. We would get a more accurate estimate by taking more data points, or by fitting a curve through our plotted points and then integrating. Nevertheless this is a good demonstration of how the method works.

In this graph (from here) we can see a similar plot of wealth distribution – here we have quintiles on the x axis (1st quintile is the bottom 20% etc). This time we can compare Hungary – which shows a high level of equality (the bottom 80% of the population own 62.5% of the wealth) and Namibia – which shows a high level of inequality (the bottom 80% of the population own just 21.3% of the wealth).

**How unequal is the world?**

We can apply the same method to measure world inequality. One way to do this is to calculate the per capita income of all the countries in the world and then to work out the share of the total global per capita income the (say) bottom 20% of the countries have. This information is represented in the graph above (from here). It shows that there was rising inequality (i.e the richer countries were outperforming the poorer countries) in the 2 decades prior to the end of the century, but that there has been a small decline in inequality since then.

If you want to do some more research on the Gini coefficient you can use the following resources:

The intmaths site article on this topic – which goes into more detail and examples of how to calculate the Gini coefficient

The ConferenceBoard site which contains a detailed look at world inequality

The World Bank data on the Gini coefficients of different countries.


**Is Intergalactic space travel possible?**

The Andromeda Galaxy is around 2.5 million light years away – a distance so large that, even traveling at the speed of light (300,000,000 m/s), it has taken 2.5 million years for that light to arrive. The question is, would it ever be possible for a journey to the Andromeda Galaxy to be completed in a human lifetime? With the speed of light a universal speed limit, it would be reasonable to argue that no journey of more than around 100 light years would be possible in the lifespan of a human – but remarkably this is not the case. We’re going to show that a journey to Andromeda would indeed be possible within a human lifespan. All that’s needed (!) is a rocket able to sustain constant acceleration, and we can arrive with plenty of time to spare.

**Time dilation**

To understand how this is possible, we need to understand that as the speed of the journey increases, then time dilation becomes an important factor. The faster the rocket is traveling the greater the discrepancy between the internal body clock of the astronaut on the rocket and an observer on Earth. Let’s see how that works in practice by using the above equation.
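The equation was shown as an image in the original post; the standard result for constant proper acceleration, consistent with the worked values below, is:

```latex
t(T) = \frac{c}{a}\,\sinh\!\left(\frac{aT}{c}\right),
\qquad\text{where}\qquad
\sinh x = \frac{e^{x}-e^{-x}}{2}.
```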

Here we have

t(T): The time elapsed from the perspective of an observer on Earth

T: The time elapsed from the perspective of an astronaut on the rocket

c: The speed of light approx 300,000,000 m/s

a: The constant acceleration we assume for our rocket. For this example we will take a = 9.81 m/s^{2}, the same as the gravitational acceleration experienced on Earth – the most natural for a human environment. The acceleration is measured relative to an inertial observer.

Sinh(x): This is the hyperbolic sine function which can be defined as:

We should note that all our units are in meters, seconds and m/s^{2} therefore when the astronaut experiences 1 year passing on this rocket, we first need to convert this to seconds: 1 year = 60 x 60 x 24 x 365 = 31,536,000 seconds. Therefore T = 31,536,000 and:

which would give us the time experienced on Earth in seconds, therefore by dividing by (60 x 60 x 24 x 365) we can arrive at the time experienced on Earth in years:

Using either Desmos or Wolfram Alpha this gives an answer of 1.187. This means that 1 year experienced on the rocket is experienced as 1.19 years on Earth. Now we have our formula we can easily calculate other values. Two years is:

which gives an answer of 3.755 years. So 2 years on the rocket is experienced as 3.76 years on Earth. As we carry on with the calculations, and as we see the full effects of time dilation we get some startling results:

After 10 years on the spacecraft, civilization on Earth has advanced (or not) around 15,000 years. After 20 years on the rocket, around 445,000,000 years have passed on Earth, and after 30 years around 13,500,000,000,000 years – roughly 1,000 times the age of the Universe since the Big Bang. So, as we can see, time is no longer a great concern.
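These figures can be checked with a short Python sketch (my own, assuming the standard constant-acceleration relation t(T) = (c/a)·sinh(aT/c) and the constants above):

```python
import math

C = 300_000_000              # speed of light in m/s (value used above)
A = 9.81                     # constant proper acceleration in m/s^2
YEAR = 60 * 60 * 24 * 365    # seconds in a year

def earth_years(rocket_years):
    """Earth time elapsed for a given proper time on the rocket,
    via t(T) = (c/a)·sinh(a·T/c)."""
    T = rocket_years * YEAR
    return (C / A) * math.sinh(A * T / C) / YEAR
```

`earth_years(1)` gives roughly 1.187 and `earth_years(2)` roughly 3.75, matching the values above; `earth_years(10)` is close to 15,000.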

**Distance travelled**

Next let’s look at how far we can reach from Earth. This is given by the following equation:

Here we have

x(T): The distance travelled from Earth

T, c and a as before.

Cosh(x): This is the hyperbolic cosine function which can be defined as:
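Again the equations appeared as images; the standard constant-acceleration result, matching the values below, is:

```latex
x(T) = \frac{c^{2}}{a}\left(\cosh\!\left(\frac{aT}{c}\right)-1\right),
\qquad\text{where}\qquad
\cosh x = \frac{e^{x}+e^{-x}}{2}.
```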

Again we note that we are measuring in meters and seconds. Therefore to find the distance travelled in one year we convert 1 year to seconds as before:

Next we note that this will give an answer in meters, so we can convert to light years by dividing by 9.461×10^{15} (the number of meters in a light year).

Again using Wolfram Alpha or Desmos we find that after one year the spacecraft will be 0.563 light years from Earth. After two years we have:

which gives us 2.91 light years from Earth. Calculating the next values gives us the following table:
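The table values can be checked with a similar sketch (again my own, assuming x(T) = (c²/a)(cosh(aT/c) − 1)):

```python
import math

C = 300_000_000              # speed of light in m/s (value used above)
A = 9.81                     # constant proper acceleration in m/s^2
YEAR = 60 * 60 * 24 * 365    # seconds in a year
LIGHT_YEAR = 9.461e15        # metres in a light year

def light_years_travelled(rocket_years):
    """Distance from Earth after a given proper time on the rocket,
    via x(T) = (c^2/a)·(cosh(a·T/c) - 1)."""
    T = rocket_years * YEAR
    return (C**2 / A) * (math.cosh(A * T / C) - 1) / LIGHT_YEAR
```

`light_years_travelled(1)` gives roughly 0.563 and `light_years_travelled(2)` roughly 2.91, as above.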

We can see that as our spacecraft approaches the speed of light, we will travel the expected number of light years as measured by an observer on Earth.

So we can see that we would easily reach the Andromeda Galaxy within 20 years on a spacecraft and could have spanned the size of the observable universe within 30 years. Now, all we need is to build a spaceship capable of constant acceleration, and resolve how a human body could cope with such forces and we’re there!

**How likely is this?**

Well, the technology needed to build a spacecraft capable of constant acceleration to get close to light speed is not yet available – but there are lots of interesting ideas about how these could be designed in theory. One of the most popular ideas is to make a “solar sail” – which would collect light from the Sun (or any future nearby stars) to propel it along on its journey. Another alternative would be a laser sail – which rather than relying on the Sun, would receive pin-point laser beams from the Earth.

Equally we are a long way from being able to send humans – much more likely is that the future of spaceflight will be carried out by machines. Scientists have suggested that if the spacecraft payload was around 1 gram (say either a miniaturized robot or digital data depending on the mission’s aim), a solar sail or laser sail could be feasibly built which would be sufficient to achieve 25% the speed of light.

NASA have begun launching continuous acceleration spacecraft powered by the Sun. In 2018 they launched the Near-Earth Asteroid Scout. This will unfurl a solar sail and be propelled to a speed of 28,600 m/s. Whilst this is a long way from near-light speeds, it is a proof of concept and does show one potential way that interstellar travel could be achieved.

You can read more about the current scientific advances on solar sails here, and some more on the mathematics of space travel here.


**The Folium of Descartes**

The folium of Descartes is a famous curve named after the French philosopher and mathematician Rene Descartes (pictured top right). As well as significant contributions to philosophy (“I think therefore I am”) he was also the father of modern geometry through the development of the x,y coordinate system of plotting algebraic curves. As such the Cartesian plane (as we call the x,y coordinate system) is named after him.

**Pascal and Descartes**

Descartes was studying what is now known as the folium of Descartes (folium coming from the Latin for leaf) in the first half of the 1600s. Prior to the invention of calculus, the ability to calculate the gradient at a given point was a real challenge. He placed a wager with Pierre de Fermat, a contemporary French mathematician (of Fermat’s Last Theorem fame) that Fermat would be unable to find the gradient of the curve – a challenge that Fermat took up and succeeded with.

**Calculus – implicit differentiation:**

Today, armed with calculus and the method of implicit differentiation, finding the gradient at a point for the folium of Descartes is more straightforward. The original Cartesian equation is:

which can be differentiated implicitly to give:
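The two equations appeared as images; the folium’s standard Cartesian form and its implicit derivative are:

```latex
x^{3}+y^{3} = 3axy
\;\Rightarrow\;
3x^{2}+3y^{2}\frac{dy}{dx} = 3ay+3ax\frac{dy}{dx}
\;\Rightarrow\;
\frac{dy}{dx} = \frac{ay-x^{2}}{y^{2}-ax}.
```

With a = 1 at the point (1.5, 1.5) this gives (1.5 − 2.25)/(2.25 − 1.5) = −1.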

Therefore if we take (say) a =1 and the coordinate (1.5, 1.5) then we will have a gradient of -1.

**Parametric equations**

It’s sometimes easier to express a curve in a different way to the usual Cartesian equation. Two alternatives are polar coordinates and parametric coordinates. The parametric equations for the folium are given by:
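The parametric equations appeared as an image; the standard parametrisation, consistent with the t = 1 calculation below, is:

```latex
x(t) = \frac{3at}{1+t^{3}},\qquad y(t) = \frac{3at^{2}}{1+t^{3}}.
```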

In order to use parametric equations we simply choose a value of t (say t = 1) and substitute this into both equations to arrive at a coordinate pair in the x,y plane. If we choose t = 1 and have set a = 1 as well, this gives:

x(1) = 3/2

y(1) = 3/2

therefore the point (1.5, 1.5) is on the curve.

You can read a lot more about famous curves and explore the maths behind them with the excellent “50 famous curves” from Bloomsburg University.


**The Remarkable Dirac Delta Function**

This is a brief introduction to the Dirac Delta function – named after the legendary Nobel prize winning physicist Paul Dirac. Dirac was one of the founding fathers of the mathematics of quantum mechanics, and is widely regarded as one of the most influential physicists of the 20th Century. This topic is only recommended for students confident with the idea of limits and was inspired by a Quora post by Arsh Khan.

Dirac defined the delta function as having the following 2 properties:

The first property as defined above is that the delta function is 0 for all values of t, except for t = 0, when it is infinite.

The second property defined above is that the integral of the delta function – the area under the graph between any two points either side of 0 – is 1. We can use the version where we integrate from negative to positive infinity, as this will be more useful later.

The delta function (technically not a function in a normal sense!) can be represented as the following limit:
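The limit shown in the original image is, in one common “nascent delta” form consistent with the arctan integral used later in the post:

```latex
\delta(t) = \lim_{\varepsilon\to 0^{+}}\;\frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^{2}+t^{2}}.
```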

Whilst this looks a little intimidating, it just means that we take the limit of the function as epsilon (ε) approaches 0. Given this definition of the delta function we can check that the 2 properties outlined above hold.

For the first limit above we set t not equal to 0. Then, because it is a continuous function when t is not equal to 0, we can effectively replace epsilon with 0 in the first limit above to get a limit of 0. In the second limit when t = 0 we get a limit of infinity. Therefore the first property holds.

To show that the second property holds, we start with the following integral identity from HL Calculus:

Hopefully this will look similar to the function we are interested in. Let’s play a little fast and loose with the mathematics and ignore the limit of the function and just consider the following integral:

Therefore (using the fact that the graph of arctanx has horizontal asymptotes at positive and negative pi/2 for the final part) :
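Reconstructed, the computation is:

```latex
\int_{-\infty}^{\infty}\frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^{2}+t^{2}}\,dt
= \frac{1}{\pi}\Big[\arctan\!\left(\frac{t}{\varepsilon}\right)\Big]_{-\infty}^{\infty}
= \frac{1}{\pi}\left(\frac{\pi}{2}-\left(-\frac{\pi}{2}\right)\right) = 1.
```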

So we have shown above that the integral of every function of this form will have an integral of 1, regardless of the value of epsilon, thus satisfying our second property.

**The use of the Dirac Function**

So far so good. But what is so remarkable about the Dirac function? Well, it allows objects to be described in terms of a single zero width (and infinitely high) spike, but despite having zero width, this spike still has an area of 1. This then allows the representation of elementary particles which have zero size but finite mass (and other finite properties such as charge) to be represented mathematically. With the area under the curve = 1 it can also be thought of in terms of a probability density function – i.e representing the quantum world in terms of probability wave functions.

**A graphical representation:**

This is easier to understand graphically. Say for example we choose a value epsilon (ε) and gradually make it smaller (i.e we find the limit as ε approaches 0). When ε = 5 we have the following:

When ε = 1 we have the following:

When ε = 0.1 we have the following:

When ε = 0.01 we have the following:

You can see that as ε approaches 0 we get a function which is close to 0 everywhere except for a spike at zero. The total area under the function remains at 1 for all ε.

Therefore we can represent the Dirac Delta function with the above graph. In it we have a point with zero width but with infinite height – and still with an area under the curve of 1!


**Maths of Global Warming – Modeling Climate Change**

The above graph is from NASA’s climate change site, and was compiled from analysis of ice core data. Scientists from the National Oceanic and Atmospheric Administration (NOAA) drilled into thick polar ice and then looked at the carbon content of air trapped in small bubbles in the ice. From this we can see that over large timescales we have had large oscillations in the concentration of carbon dioxide in the atmosphere. During the ice ages we have had around 200 parts per million carbon dioxide, rising to around 280 in the inter-glacial periods. However this periodic oscillation has been broken post 1950 – leading to a completely different graph behaviour, and putting us on target for 400 parts per million in the very near future.

**Analysing the data**

One of the fields in which mathematicians are always in demand is data analysis: understanding data, modeling with the data collected, and using that data to predict future events. Let’s have a quick look at some very simple modeling. The graph above shows a sine graph (plotted using Desmos) superimposed onto the NOAA data.

y = -0.8sin(3x +0.1) – 1

Whilst not a perfect fit, it does capture the general trend of the data and its oscillatory behaviour until 1950. We can see that post 1950 we would then expect to be seeing a decline in carbon dioxide rather than the reverse – which on our large timescale graph looks close to vertical.

**Damped Sine Wave**

This is a damped sine wave, achieved by multiplying the sine term by an exponentially decaying factor. This progressively reduces the amplitude of the sine function. The above graph is:

y = e^{-0.06x} (-0.6sin(3x+0.1) -1 )

This captures the shape in the middle of the graph better than the original sine function, but at the expense of less accuracy at the left and right.

**Polynomial Regression**

We can make use of Desmos’ regression tools to fit curves to points. Here I have entered a table of values and then seen which polynomial gives the best fit:

We can see that the purple cubic fits the first 5 points quite well (with a high R² value). So we should be able to create a piecewise function to describe this graph.
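Desmos hides the regression machinery; the same workflow can be sketched in Python with numpy. The data points here are purely hypothetical (the post’s actual table isn’t reproduced), so this only illustrates the method:

```python
import numpy as np

# Hypothetical (x, y) points - NOT the NOAA data from the post.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 0.1 * x**3 - 0.9 * x + 2.0       # values taken from a known cubic

coeffs = np.polyfit(x, y, 3)         # least-squares cubic fit
y_fit = np.polyval(coeffs, x)

# R^2 = 1 - SS_res / SS_tot, the goodness-of-fit figure Desmos reports
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_res / ss_tot
```

Because the points lie exactly on a cubic, the fit recovers the coefficients and R² is essentially 1; with real noisy data R² measures how much of the variation the polynomial explains.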

**Piecewise Function**

Here I have restricted the domain of the first polynomial (entered below):

Second polynomial:

Third polynomial:

Fourth polynomial:

Finished model:

Shape of model:

We would then be able to fit this to the original scale by applying a vertical translation (i.e. adding 280) and vertical and horizontal stretches. It would probably have been easier to align the scales at the beginning! Nevertheless we have the shape we wanted.

**Analysing the models**

Our piecewise function gives us a good data fit for the domain we were working in – so if we then wanted to use some calculus to look at non-horizontal points of inflection (say), this would be a good model to use. If we want to analyse what we would have expected to happen without human activity, then the sine models at the very start are more useful in capturing the trend of the oscillations.

**Post 1950s**

Looking on a completely different scale, we can see the general trend of carbon dioxide concentration post 1950 is pretty linear. This time I’ll scale the axes at the start. Here 1960 corresponds with x = 0, and 1970 corresponds with x = 5 etc.

Actually we can see that a quadratic fits the curve better than a linear graph – which is bad news, implying that the rates of change of carbon in the atmosphere will increase. Using our model we can predict that on current trends in 2030 there will be 500 parts per million of carbon in the atmosphere.

**Stern Report**

According to the Stern Report, 500 ppm is around the upper limit of where we need to aim to stabilise carbon levels (450–550 ppm of carbon-dioxide equivalent) before the economic and social costs of climate change become catastrophic. The Stern Report estimates that it will cost around 1% of global GDP to stabilise in this range. Failure to do so is predicted to lock in massive temperature rises of between 3 and 10 degrees by the end of the century.

If you are interested in doing an investigation on this topic:

- Plus Maths have a range of articles on the maths behind climate change
- The Stern report is a very detailed look at the evidence, graphical data and economic costs.


**A longer look at the Si(x) function**

The function sinx/x can’t be integrated to give an elementary function – instead we define:

Where Si(x) is a special function. This may sound strange – but we have already come across a similar case with the integral of 1/x. There we define the integral of 1/x as ln(x). ln(x) is a function with its own graph and we can use it to work out definite integrals of 1/x. For example, the integral of 1/x from 1 to 5 is ln(5) – ln(1) = ln(5).
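Written out, the definition (shown as an image in the original) is:

```latex
\operatorname{Si}(x) = \int_{0}^{x}\frac{\sin t}{t}\,dt,
\qquad\text{just as}\qquad
\ln x = \int_{1}^{x}\frac{1}{t}\,dt.
```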

The graph of Si(x) looks like this:

Or, on a larger scale:

You can see that it has rotational symmetry about the origin (Si(x) is an odd function), that it oscillates, and that as x gets large it approaches a limit. In fact this limit is pi/2.

Because Si(0) = 0, you can write the following integrals as:

**How to integrate sinx/x ?**

It’s all very well to define a new function – and say that this is the integral of sinx/x – but how was this function generated in the first place?

Well, one way to integrate difficult functions is to use Taylor and Maclaurin expansions. For example the Maclaurin expansion of sinx/x for values near x=0 is:

This means that in the domain close to x = 0, the function sinx/x behaves in a similar way to the polynomial above. The last part of this expression, O( ), just means that everything else in this expansion involves terms of order x^6 or higher.
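Written out, the expansion is:

```latex
\frac{\sin x}{x} = 1-\frac{x^{2}}{6}+\frac{x^{4}}{120}+O\!\left(x^{6}\right).
```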

**Graph of sinx/x**

**Graph of 1 – x^2/6 + x^4/120**

In the region close to x=0 these functions behave in a very similar manner (this would be easier to see with similar scales so let’s look on a GDC):

So for the region above (x between 0 and 2) the 2 graphs are virtually indistinguishable.

Therefore if we want to integrate sinx/x for values close to 0 we can just integrate our new function 1 – x^2/6 + x^4/120 and get a good approximation.
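The exact limits used in the Wolfram screenshots aren’t recoverable from the text, so as an illustration take the interval [0, 2] from the GDC comparison above. A quick numerical check (my own sketch) compares the polynomial’s antiderivative against a direct numerical integration of sinx/x:

```python
import math

def sinc(x):
    """sin(x)/x with the removable singularity at x = 0 filled in."""
    return 1.0 if x == 0 else math.sin(x) / x

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def poly_integral(x):
    """Antiderivative of the truncated series 1 - x^2/6 + x^4/120."""
    return x - x**3 / 18 + x**5 / 600

si_2 = simpson(sinc, 0, 2)     # numerical value of Si(2), about 1.6054
approx = poly_integral(2)      # series approximation, about 1.6089
```

Over [0, 2] the two agree to within about 0.003; taking more Maclaurin terms closes the gap further, exactly as the post notes.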

Let’s try how accurate this is. We can use Wolfram Alpha to tell us that:

and let’s use Wolfram to work out the integral as well:

Our approximation is accurate to 3 dp, 1.371 in both cases. If we wanted greater accuracy we would simply use more terms in the Maclaurin expansion.

So, by using the Maclaurin expansion for terms near x = 0 and the Taylor expansion for terms near x = a we can build up information as to the values of the Si(x) function.


This was the last question on the May 2016 Calculus option paper for IB HL. It’s worth nearly a quarter of the entire marks – and is well off the syllabus in its difficulty. You could make a case for this being the most difficult IB HL question ever. As such it was a terrible exam question – but would make a very interesting exploration topic. So let’s try and understand it!

**Part (a)**

First I’m going to go through a solution to the question – this was provided by another HL maths teacher, Daniel – who worked through a very nice answer. For the first part of the question we need to try and understand what is actually happening – we have a sum of integrals, where we are summing a sequence of definite integrals over consecutive intervals of length pi. So when n = 0 we have the single integral from 0 to pi of sint/t. When n = 1 we have the single integral from pi to 2pi of sint/t. The summation of the first n terms adds the answers to the first n integrals together.

This is the plot of y = sinx/x from 0 to 6pi. Using the GDC we can find that the roots of this function are at x = n(pi) for positive integers n. This gives us the first mark in the question – when we are integrating from 0 to pi the graph is above the x axis and so the integral is positive. When we integrate from pi to 2pi the graph is below the x axis and so the integral is negative. Since our sum consists of alternating positive and negative terms, we have an alternating series.
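We can check this alternating behaviour numerically. The Python sketch below (helper names are mine) computes the first few terms u_n with Simpson's rule:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def sinc(x):
    # sin(x)/x, with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

# u_n = integral of sin(t)/t from n*pi to (n+1)*pi
u = [simpson(sinc, n * math.pi, (n + 1) * math.pi) for n in range(4)]
print([round(x, 4) for x in u])  # signs alternate: +, -, +, -
```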

**Part (b i)**

This is where it starts to get difficult! You might be tempted to try and integrate sint/t – which is what I presume a lot of students will have done. It looks like integration by parts might work on this. However this was a nasty trap laid by the examiners – integrating by parts is a complete waste of time because this function has no elementary antiderivative. This means that no combination of elementary functions or standard basic integration methods will integrate it. (We will look later at how it can be integrated – it gives something called the Si(x) function). Instead this is how Daniel’s method progresses:

Hopefully the first 2 equalities make sense – we replace n with n+1 and then replace t with T + pi. dt becomes dT when we differentiate t = T + pi. In the second integral we have also replaced the limits (n+1)pi and (n+2)pi with n(pi) and (n+1)pi as we are now integrating with respect to T and so need to change the limits as follows:

t = (n+1)(pi)

T+ pi = (n+1)(pi)

T = n(pi). This is now the lower limit of integration.

The third integral uses the fact that sin(T + pi) = – sin(T).

The fourth integral then uses graphical logic. y = -sinx/x looks like this:

This is the same as y = sinx/x but reflected in the x axis. Therefore the absolute value of the integral of y = -sinx/x will be the same as the absolute value of the integral of y = sinx/x. The fourth integral has also noted that we can simply replace T with t to produce an equivalent integral. The last integral then notes that the integral of sint/(t+pi) will be less than the integral of sint/t, because the denominator t + pi is larger at every point of the interval. This then gives us the inequality we desire.
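The conclusion – that the absolute values of the terms strictly decrease – can also be verified numerically. A Python sketch (helper names are mine):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def sinc(x):
    # sin(x)/x, with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

# u_n = integral of sin(t)/t from n*pi to (n+1)*pi
u = [simpson(sinc, n * math.pi, (n + 1) * math.pi) for n in range(8)]

# the inequality shown above: |u_(n+1)| < |u_n| for every n
assert all(abs(u[n + 1]) < abs(u[n]) for n in range(7))
print("absolute values strictly decrease")
```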

Don’t worry if that didn’t make complete sense – I doubt if more than a handful of IB students in the whole world got that in exam conditions. Makes you wonder what the point of that question was, but let’s move on.

**Part (b ii)**

OK, by now most students will have probably given up in despair – and the next part doesn’t get much easier. First we should note that we have been led to show that we have an alternating series where the absolute value of u_n+1 is less than the absolute value of u_n. Let’s check the requirements for proving an alternating series converges:

We have already shown that the absolute values of the terms are decreasing, so we now just need to show that the limit of the sequence is 0.

OK – here we start by trying to get a lower and upper bound for u_n. We want to show that as n gets large, the limit of u_n = 0. In the second integral we have used the fact that the absolute value of an integral of a function is always less than or equal to the integral of an absolute value of a function. That might not make any sense, so let’s look graphically:

This graph above is y = sinx/x. If we integrate this function then the parts under the x axis will contribute a negative amount.

But this graph is y = absolute (sinx/x). Here we have no parts under the x axis – and so the integral of absolute (sinx/x) will always be greater than or equal to the integral of y = sinx/x.

To get the third integral we note that absolute (sinx) is bounded between 0 and 1 and so the integral of 1/x will always be greater than or equal to the integral of absolute (sinx)/x.

We can next ignore the absolute value because 1/x is always positive for positive x, and so we integrate 1/x to get ln(x). Substituting the limits of the definite integral gives ln((n+1)pi) - ln(n(pi)) = ln((n+1)/n), which approaches ln(1) = 0 as n approaches infinity. Therefore, as this bound approaches 0 and was always greater than or equal to absolute u_n, the limit of absolute u_n must also be 0.
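Numerically, the bound ln((n+1)/n) does indeed dominate absolute u_n. A Python sketch (helper names are mine; we start at n = 1 so the bound is defined):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def sinc(x):
    # sin(x)/x, with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

# |u_n| <= integral of 1/x over [n*pi, (n+1)*pi] = ln((n+1)/n), which -> 0
for n in range(1, 6):
    u_n = abs(simpson(sinc, n * math.pi, (n + 1) * math.pi))
    bound = math.log((n + 1) / n)
    print(n, round(u_n, 4), "<=", round(bound, 4))
    assert u_n <= bound
```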

Therefore we have satisfied the requirements for the Alternating Series test and so the series is convergent.

**Part (c)**

Part (c) is at least accessible for level 6 and 7 students as long as you are still sticking with the question. Here we note that we have been led through steps to prove we have an alternating and convergent series. Now we use the fact that the sum to infinity of a convergent alternating series lies between any 2 successive partial sums. Then we can use the GDC to find the first few partial sums:
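The GDC screenshot is missing here, but the partial sums are easy to compute. The sum to infinity of this series is the integral of sinx/x from 0 to infinity, which is known to equal pi/2, so we can check that it lies between successive partial sums. A Python sketch (helper names are mine):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def sinc(x):
    # sin(x)/x, with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

u = [simpson(sinc, n * math.pi, (n + 1) * math.pi) for n in range(6)]
partial = [sum(u[:k + 1]) for k in range(6)]
print([round(s, 4) for s in partial])

# pi/2 (the sum to infinity) lies between any two successive partial sums
for k in range(5):
    lo, hi = sorted((partial[k], partial[k + 1]))
    assert lo < math.pi / 2 < hi
```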

And there we are! 14 marks in the bag. Makes you wonder who the IB write their exams for – this was so far beyond sixth form level as to be ridiculous. More about the Si(x) function in the next post.


**IB HL Calculus P3 May 2016: The Hardest IB Paper Ever?**

IB HL Paper 3 Calculus May 2016 was a very poor paper. It was unduly difficult and missed off huge chunks of the syllabus. You can see question 5 posted above. (I work through the solution to this in the next post). This is so far off the syllabus as to be well into undergraduate maths. Indeed it wouldn’t look out of place in an end of first year or end of second year undergraduate calculus exam. So what’s it doing on a sixth form paper for 17-18 year olds? The examiners completely abandoned their remit to produce a test of the syllabus content – and instead decided that a one hour exam was the time to introduce extensions to that syllabus, whilst virtually ignoring all the core content of the course.

**A breakdown of the questions**

1) Maclaurin – on the syllabus. This was reasonable, as was using it to find the limit of a fraction. Part (c) requires use of Lagrange error – which students find difficult and which forms a very small part of the course. If this was the upper level of the challenge in the paper then fair enough, but it was far from it.

2) Fundamental Theorem of Calculus – barely on the syllabus – and unpredictable in advance as to what is going to be asked on this. This has never been asked before on any paper, there is no guidance in the syllabus, there was no support in the specimen paper and most textbooks do not cover this in any detail. This seems like an all or nothing question – students will either get 7 or 0 on this question. Part (c) for an extra 3 marks seems completely superfluous.

3) Mean Value Theorem – a small part of the syllabus given disproportionate exam question coverage because the examiners seem to like proof questions. This seems like an all or nothing question as well – if you get the concept then it’s 7 marks, if not it’ll likely be 0.

4) Differential equations – This question would have been much better if they had simply been given the integrating factor /separate variables question in part (b), leaving some extra marks to test something else on part (a) – perhaps Euler’s Method?

5) An insane extension to the syllabus which took the question well into undergraduate mathematics – and hid within it a “trap” to make students try to integrate a function that can’t actually be integrated. This really should have been nowhere near the exam. At 14 marks this accounted for nearly a quarter of the exam.

**Content unassessed**

The syllabus is only 48 hours and all schools spend that time ploughing through limits and differentiability of functions, L’Hopital’s rule, Riemann sums, Rolle’s Theorem, standard differential equations, isoclines, slope fields, the squeeze theorem, absolute and conditional convergence, error bounds, indefinite integrals, the ratio test, power series, radius of convergence. All of these went pretty much unassessed. I would say that the exam tested around 15% of the syllabus content. Even the assessment of alternating series convergence was buried inside question 5 – making it effectively inaccessible to all students.

The result of this is that there will be a huge squash in the grade boundaries – perhaps as low as 50-60% for a Level 6 and 25-35% for a Level 4. The last 20 marks on the paper will probably be completely useless – separating no students at all. This then produces huge unpredictability, as dropping 4-5 marks might take a student from a level 5 to a level 3, or from a level 6 to a level 4.

**Teachers no longer have any confidence in the IB HL examiners**

One of my fellow HL teachers posted this following the Calculus exam:

*At various times throughout the year I joke with my students about how the HL Mathematics examiners must be like a group of comic book villains sitting in a lair, devising new ways to form cruel questions to make students suffer and this exam leads me to believe that this is not too far fetched of a concept.*

And I would tend to agree. Who wants students to be demoralised with low scores and questions they can’t succeed on? Surely that should not be an aim when creating an exam!

I’ve taught the HL Calculus Option for the last 4 years – I think the course is a good one. It’s a difficult but rewarding syllabus which introduces some of the tools needed for undergraduate maths. However I no longer have any confidence in the IB or the IB examiners to produce a fair test to examine this content. Many other HL teachers feel the same way. So what choice is left? Abandon the Calculus option and start again from scratch with another option? Or continue to put our trust in the IB, when they continue to let teachers (and more importantly the students) down?