
The Folium of Descartes

The folium of Descartes is a famous curve named after the French philosopher and mathematician René Descartes.  As well as making significant contributions to philosophy (“I think therefore I am”), he was also the father of modern geometry through his development of the x,y coordinate system for plotting algebraic curves.  As such, the Cartesian plane (as we call the x,y coordinate system) is named after him.

[Image: Pascal and Descartes]

Descartes was studying what is now known as the folium of Descartes (folium coming from the Latin for leaf) in the first half of the 1600s.  Prior to the invention of calculus, finding the gradient of a curve at a given point was a real challenge.  He placed a wager with Pierre de Fermat, a contemporary French mathematician (of Fermat’s Last Theorem fame), that Fermat would be unable to find the gradient of the curve – a challenge that Fermat took up and succeeded with.

Calculus – implicit differentiation:

Today, armed with calculus and the method of implicit differentiation, finding the gradient at a point for the folium of Descartes is more straightforward.  The original Cartesian equation is:

x³ + y³ = 3axy

which can be differentiated implicitly to give:

dy/dx = (3ay - 3x²) / (3y² - 3ax)

Therefore if we take (say) a = 1 and the point (1.5, 1.5), then we have a gradient of -1.
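As a quick check, a computer algebra system reproduces both the implicit derivative and the gradient at this point. Here is a minimal sketch using Python's sympy library (a = 1 and the point (1.5, 1.5) are taken from the example above):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

# Folium of Descartes written as F(x, y) = 0
F = x**3 + y**3 - 3*a*x*y

# Implicit differentiation: dy/dx = -F_x / F_y
dydx = -sp.diff(F, x) / sp.diff(F, y)
print(sp.simplify(dydx))                 # equivalent to (ay - x^2) / (y^2 - ax)

# Gradient at (1.5, 1.5) with a = 1
print(dydx.subs({a: 1, x: sp.Rational(3, 2), y: sp.Rational(3, 2)}))   # -1
```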

Parametric equations

It’s sometimes easier to express a curve in a different way to the usual Cartesian equation.  Two alternatives are polar coordinates and parametric coordinates.  The parametric equations for the folium are given by:

x = 3at/(1 + t³),  y = 3at²/(1 + t³)

To use the parametric equations we simply choose a value of t (say t = 1) and substitute it into both equations to arrive at a coordinate pair in the x,y plane.  If we choose t = 1 and have also set a = 1 then this gives:

x(1) = 3/2

y(1) = 3/2

therefore the point (1.5, 1.5) is on the curve.
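A short script makes it easy to generate as many points as we like and to confirm that each one also satisfies the Cartesian equation. This is a minimal sketch in Python (a = 1 as above; the chosen parameter values are just examples):

```python
def folium_point(t, a=1):
    """Point on the folium of Descartes for parameter t (not defined at t = -1)."""
    x = 3 * a * t / (1 + t**3)
    y = 3 * a * t**2 / (1 + t**3)
    return x, y

for t in [0.5, 1, 2, 5]:
    x, y = folium_point(t)
    on_curve = abs(x**3 + y**3 - 3 * 1 * x * y) < 1e-9   # check x^3 + y^3 = 3axy with a = 1
    print(t, (round(x, 3), round(y, 3)), on_curve)
```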

You can read a lot more about famous curves and explore the maths behind them with the excellent “50 famous curves” from Bloomsburg University.

[Graph: the phases of a classic asset bubble]

Spotting Asset Bubbles

Asset bubbles are formed when a service, product or company becomes massively over-valued only to crash, taking with it most of its investors’ money.  There are many examples of asset bubbles in history – the Dutch tulip bulb mania and the South Sea bubble are two of the most famous historical examples.  In the tulip mania bubble of 1636-37, the price of tulip bulbs became astronomically high – as people speculated that prices would keep rising yet further.  At its peak a single tulip bulb was changing hands for around 10 times the annual wage of a skilled artisan, before crashing to become virtually worthless.

More recent bubbles include the Dotcom crash of the early 2000s – when investors piled in, trying to spot in what ways the internet would revolutionise businesses.  Huge numbers of internet companies tried to ride this wave by going public with share offerings.  This led to massive overvaluation and a crash when investors realised that many of these companies were worthless.  Pets.com is often given as an example of this exuberance – its stock collapsed from $11 to $0.19 in just 6 months, taking with it $300 million of venture capital.

Therefore spotting the next bubble is something which economists take very seriously.  You want to spot the next bubble, but equally not to miss out on the next big thing – a difficult balancing act!  The graph at the top of the page is given as a classic bubble.  It contains all the key phases – an initial slow take-off, a steady increase as institutional investors like banks and hedge funds get involved, an exponential growth phase as the public get involved, followed by a crash and a return to its long term mean value.

Comparing the Bitcoin graph to an asset bubble

[Graph: Bitcoin price over the previous year]

The above graph charts the last year of Bitcoin growth.  We can see several similarities – so let’s try to plot this on the same axes as the model.  The orange dots represent data points for the initial model – and then I’ve fitted the Bitcoin graph over the top:

[Graph: Bitcoin price data overlaid on the asset bubble model]

It’s not a bad fit – if this was going to follow the asset bubble model then it would be about to crash rapidly before returning to the long term mean of around $4000.  Whether that happens or it continues to rise, you can guarantee that there will be thousands of economists and stock market analysts around the world doing this sort of analysis (albeit somewhat more sophisticated!) to decide whether Bitcoin really will become the future of money – or yet another example of an asset bubble to be studied in economics textbooks of the future.

 

Measuring the Distance to the Stars

This is a very nice example of some very simple mathematics achieving something which  for centuries appeared impossible – measuring the distance to the stars.  Before we start we need a few definitions:

  • 1  Astronomical Unit (AU) is the average distance from the Sun to the Earth.  This is around 150,000,000km.
  • 1 Light Year is the distance that light travels in one year.  This is around 9,500,000,000,000km.  We have around 63000AU = 1 Light Year.
  • 1 arc second is a measurement for very small angles and is 1/3600 of one degree.
  • Parallax is the angular difference in measurement when viewing an object from different locations.  In astronomy, parallax is used to mean half the angle formed when a star is viewed from opposite sides of the Earth’s solar orbit.

With those definitions it is easy to then find the distance to stars.  The parallax method requires that you take a measurement of the angle to a given star, and then wait until 6 months later and take the same measurement.  The two angles will be slightly different – divide this difference by 2 and you have the parallax.

Let’s take 61 Cygni – which Friedrich Bessel first used this method on in the early 1800s.  This has a parallax of 287/1000 arc seconds.  This is equivalent to 287/1000 x 1/3600 degrees, or approximately 0.000080 degrees.  So now we can simply use trigonometry – we have a right angled triangle with opposite side = 1 AU and angle = 0.000080 degrees.  Therefore the distance is given by:

tanΦ = opp/adj

tan(0.000080) = 1/d

d = 1/tan(0.000080)

d = 720000 AU

which is approximately 720000/63000 = 11 light years away.
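The same calculation is easy to wrap up as a function. A minimal sketch in Python, using the conversions defined above (parallax in arc seconds, roughly 63,000 AU to a light year):

```python
import math

AU_PER_LIGHT_YEAR = 63000   # approximate conversion used above

def distance_from_parallax(parallax_arcsec):
    """Distance to a star, in AU and light years, from its parallax in arc seconds."""
    angle_degrees = parallax_arcsec / 3600                    # arc seconds -> degrees
    distance_au = 1 / math.tan(math.radians(angle_degrees))   # opposite side is 1 AU
    return distance_au, distance_au / AU_PER_LIGHT_YEAR

# 61 Cygni, with a parallax of 287/1000 arc seconds
au, light_years = distance_from_parallax(0.287)
print(round(au), round(light_years))   # roughly 720,000 AU, about 11 light years
```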

That’s pretty incredible!  Using this method, and armed with nothing more than a telescope and knowledge of the Earth’s orbital diameter, astronomers were able to judge the distances to stars far beyond our solar system – and parallax remains the first rung on the “cosmic distance ladder” of methods used to measure ever greater distances across the universe.

Orion’s Belt

The constellation of Orion is one of the most striking in the Northern Hemisphere.  It contains the “belt” of 3 stars in a line, along with the brightly shining Rigel and the red super giant Betelgeuse.  The following 2 graphics are taken from the great student resource from the Royal Observatory Greenwich:

The angles marked in the pictures are in arc seconds – so to convert them into degrees we need to multiply by 1/3600.  For example, Betelgeuse the red super giant has a parallax of 0.0051 arc seconds, which is 0.0051 x 1/3600 = 0.0000014 (2sf) degrees.  Therefore the distance to Betelgeuse is:

tanΦ = opp/adj

tan(0.0000014) = 1/d

d = 1/tan(0.0000014)

d = 41,000,000 AU

which is approximately 41,000,000/63000 = 651 light years away.  If we were more accurate with our rounding we would get 643 light years.  That means that when we look into the sky we are seeing Betelgeuse as it was 643 years ago.


The Rise of Bitcoin

Bitcoin is in the news again as it hits $10,000 a coin – the online crypto-currency has seen huge growth over the past 1 1/2 years, and there are reports that hedge funds are now investing part of their portfolios in the currency.  So let’s have a look at some regression techniques to predict the future price of the currency.

[Graph: Bitcoin price chart inserted into Desmos with the scales aligned]

Here the graph has been inserted into Desmos and the scales aligned.  1 on the y axis corresponds to $1000 and 1 on the x axis corresponds to 6 months.  2013 is aligned with  (0,0).

[Graph: data points plotted over the Bitcoin price chart]

Next, I plot some points to fit the curve through.

[Graph: exponential regression curve fitted through the points]

Next, we use Desmos’ regression for y = ae^(bx) + d.  This gives the line above with equation:

y = 5.10 × 10⁻⁷ e^(1.67x) + 0.432

I included the vertical translation (d) because without it the graph didn’t fit the early data points well.

So, if I want to predict what the price will be in December 2019, I use x = 12:

y = 5.10 × 10⁻⁷ e^(1.67 × 12) + 0.432 ≈ 258

and as my scale has 1 unit on the y axis equal to $1000, this is equal to $258,000.
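The prediction is simply a matter of substituting into the fitted model. A minimal sketch in Python using the Desmos coefficients above (x in 6-month units from 2013, y in units of $1000):

```python
import math

# Coefficients from the Desmos regression y = a*e^(bx) + d
a, b, d = 5.10e-7, 1.67, 0.432

def bitcoin_model(x):
    """Modelled price in units of $1000, with x in 6-month units from 2013."""
    return a * math.exp(b * x) + d

print(round(bitcoin_model(12)))   # about 258, i.e. roughly $258,000
```

Because the model is exponential, every extra unit of x multiplies the price by e^1.67 ≈ 5.3 – which is why the projection grows so dramatically.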

So what does this show?  Well it shows that Bitcoin is currently in a very steep exponential growth curve – which if sustained even over the next 12 months would result in astronomical returns.  However we also know that exponential growth models are very poor at predicting long term trends – as they become unfeasibly large very quickly.   The two most likely scenarios are:

  1. continued growth following a polynomial rather than exponential model
  2. a price crash

Predicting which of these 2 outcomes is more likely is probably best left to the experts!  If you do choose to buy bitcoins you should be prepared for significant price fluctuations – which could be down as well as up.  I’ll revisit this post in a few months and see what has happened.

If you are interested in some more of the maths behind Bitcoin, you can read about the method that is used to encrypt these currencies (a method called elliptic curve cryptography).

 


This is a quick example of how using Tracker software can generate a nice physics-related exploration.  I took a spring, and attached it to a stand with a weight hanging from the end.  I then took a video of the movement of the spring, and then uploaded this to Tracker.

Height against time

The first graph I generated was for the height of the spring against time.  I started the graph when the spring was released from the low point.  To be more accurate here you can calibrate the y axis scale with the actual distance.  I left it with the default settings.

[Graph: height of the spring against time, with a fitted cosine curve]

You can see we have a very good fit for a sine/cosine curve.  This gives the approximate equation:

y = -65cos(10.5(t - 3.4)) - 195

(remembering that the y axis scale is x 100).

This oscillating behaviour is what we would expect from a spring system – in this case we have a period of 2π/10.5 ≈ 0.6 seconds.
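For reference, the fitted model and its period can be checked with a couple of lines of code. A minimal sketch in Python using the equation above (t in seconds, y in the same scaled units as the graph):

```python
import math

def spring_height(t):
    """Fitted Tracker model: y = -65cos(10.5(t - 3.4)) - 195 (y-axis scale x100)."""
    return -65 * math.cos(10.5 * (t - 3.4)) - 195

# Angular frequency is 10.5 rad/s, so the period is 2*pi/10.5
period = 2 * math.pi / 10.5
print(round(period, 2))   # about 0.6 seconds, matching the graph
```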

Momentum against velocity

[Graph: y-direction momentum plotted against y-direction velocity]

For this graph I first set the mass as 0.3kg – which was the weight used – and plotted the y direction momentum against the y direction velocity.  It then produces the above linear relationship, which has a gradient of around 0.3.  Therefore we have the equation:

p = 0.3v

If we look at the theoretical equation linking momentum and velocity:

p = mv

(where m = mass), we can see that we have almost perfectly replicated this theoretical equation.

Height against velocity

[Graph: height plotted against y-direction velocity]

I generated this graph with the mass set to the default 1kg.  It plots the y displacement against the y component of velocity.  You can see from this graph that the velocity is 0 when the spring is at the top and bottom of its cycle.  We can also see that it reaches its maximum velocity when halfway through its cycle.  If we were to model this we could use an ellipse (remembering that both scales are ×100 and using x for vy):

[Equation and graph: ellipse model fitted to the height–velocity curve]

If we then wanted to develop this as an investigation, we could look at how changing the weight or the spring extension affects the results and look for some general conclusions.  So there we go – a nice example of how Tracker can quickly generate some personalised investigations!

Predicting the UK election using linear regression

The above data is the latest opinion poll data from the Guardian.  The UK will have (another) general election on June 8th.  So can we use the current opinion poll data to predict the outcome?

Longer term data trends

Let’s start by looking at the longer term trend following the aftermath of the Brexit vote on June 23rd 2016.  I’ll plot some points for Labour and the Conservatives and see what kind of linear regression we get.  To keep things simple I’ve looked at randomly chosen poll data approximately every 2 weeks – assigning 0 to July 1st 2016, 1 to mid July, 2 to August 1st etc.  This has then been plotted using the fantastic Desmos.

Labour

You can see that this is not a very good fit – it’s a very weak correlation.  Nevertheless let’s see what we would get if we used this regression line to predict the outcome in June.  With the x axis scale I’ve chosen, mid June 2017 equates to 23 on the x axis.  Therefore we predict the percentage as

y = -0.130(23) + 30.2

y  = 27%

Clearly this would be a disaster for Labour – but our model is not especially accurate so perhaps nothing to worry about just yet.

Conservatives

As with Labour we have a weak correlation – though this time we have a positive rather than negative correlation.  If we use our regression model we get a prediction of:

y = 0.242(23) + 38.7

y = 44%

So, we are predicting a crushing victory for the Conservatives – but could we get some more accurate models to base this prediction on?

Using moving averages

The Guardian’s poll tracker at the top of the page uses moving averages to smooth out poll fluctuations between different polls and to arrive at an averaged poll figure.  Using this provides a stronger correlation:

Labour

This model doesn’t take into account a (possible) late surge in support for Labour but does fit better than our last graph.  Using the equation we get:

y = -0.0764(23) + 28.8

y = 27%

Conservatives

We can have more confidence in using this regression line to predict the election.  Putting in the numbers we get:

y = 0.411(23) + 36.48

y = 46%
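Both moving-average predictions come from evaluating a straight line at x = 23, so they are easy to reproduce. A minimal sketch in Python using the two regression equations above:

```python
def predict(gradient, intercept, x):
    """Evaluate the regression line y = gradient * x + intercept."""
    return gradient * x + intercept

x_mid_june_2017 = 23   # half-month units counted from July 1st 2016

labour = predict(-0.0764, 28.8, x_mid_june_2017)
conservatives = predict(0.411, 36.48, x_mid_june_2017)

print(round(labour), round(conservatives))   # roughly 27 and 46 (percent)
```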

Conclusion

Our more accurate models merely confirm what we found earlier – and indeed what all the pollsters are predicting – a massive win for the Conservatives.  Even allowing for a late narrowing of the polls, the Conservatives could be on target for winning by over 10 percentage points – which would result in a very large majority.  Let’s see what happens!

Modelling a Nuclear War 

With the current sabre-rattling from Donald Trump on the Korean peninsula and the instability of North Korea under Kim Jong Un (incidentally a former IB student!) the threat of nuclear war is once again in the headlines.  Post Cold War we’ve got somewhat used to the peace afforded by the idea of mutually assured destruction – but this peace only holds with rational actors in charge of pushing the buttons.  The closest we have got to a nuclear war between 2 nuclear powers was the 1962 Cuban missile crisis – and given the enormous nuclear arsenals of the US and the then USSR this could have pretty much ended civilisation as we know it.  In that period, modelling of the effects of nuclear war was a real priority.  So let’s have a look at some current modelling predictions for the effects of a nuclear war.  Those of a nervous disposition may wish to look away!

Nuclear blast radius

The picture at the top of the post is the nuclear blast radius calculated from this site.  It shows the effects of a 100 megaton airburst (equivalent to 100 million tonnes of TNT explosive) – the design yield of the biggest nuclear bomb the USSR ever built.  If dropped on London it would have a fireball radius of 6km, an air blast radius of 33 km (destroying most buildings) and a thermal radiation radius of 74km.  The site estimates that this single bomb would cause 6 million deaths and another 6 million injuries.  And remember this is a single bomb – there are collectively around 15,000 nuclear weapons in the world (the majority shared between the US and Russia).

Nuclear Winter

Whilst the effects of a single bomb would be absolutely catastrophic both for a country and for the global economy, it would not be an extinction event for humanity.  However, scientists have modelled the consequences of a nuclear war which would affect the climate to such an extent that it could lead to global mass extinctions.

Let’s have a look at one of those papers – the pessimistically titled:

Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences.

In this paper the authors look at 2 scenarios – the long term climate effect of (a) the detonation of 1/3 of the world’s arsenal of nuclear weapons and (b) the detonation of the full arsenal of the world’s nuclear weapons.  Let’s leave to the side that this would almost certainly end civilisation as we know it – but what would be in store for those lucky (?) enough to survive such an event?

Changes to global temperature and rainfall

The above graphic is a double line graph – with the red lines relating to changes in temperature and the black lines corresponding to the changes in precipitation.  The middle 2 lines relate to case (a) and the bottom 2 lines relate to case (b).  The x axis is in years.  You can see from this graph that a large nuclear war in which 1/3 of the nuclear arsenal was released would have a significant effect on both global temperatures and rainfall.  5 years after the detonations you would have a global temperature 3-4 degrees lower than normal, and even a decade later it would still be a degree lower than normal.  For the full nuclear arsenal case the effects would be catastrophic – an average global drop in temperature of close to 9 degrees 2-3 years after the event.  To put this in context – the last ice age had global temperatures around 5 degrees lower than present.  Meanwhile the average rainfall would drop by around 1.6mm/day, equivalent to a 45% global drop in rainfall.

Localised effects of changes in precipitation 

The above graphic shows the distribution of the effects on precipitation one year after the detonation of the full nuclear arsenal.  You can see that not all parts of the globe are equally affected.  The countries near the equator see a massive drop in rainfall (more than 3.5mm/day), along with large parts of North America and Western Europe.

Localised effects of changes in temperature:

The above graphic shows the distribution of the effects on temperature one year after the detonation of the full nuclear arsenal.  As with the rainfall, you can see startling changes – parts of North America would be 20-30 degrees colder than average, parts of Russia 30-35 degrees colder.  You can see the misleading nature of global temperature averages here.  The global average temperature drop after 1 year was “only” 5 degrees – but the parts where the majority of people live see temperature drops many times this.  The global average is brought up by the relatively small change in global ocean temperatures.

Results of a nuclear winter

These changes to the climate alone would be sufficient to destroy agricultural production in the global food chain for a number of years.  One gloomy assessment from 1986, referenced in the paper, is that the majority of people who had somehow survived the nuclear bombs and radiation would in any case die in the following years of starvation as crops failed across the globe.  So, in short, given that we have the ability to cause our own extinction event, let’s hope those with their fingers on the nuclear buttons are rational enough never to press them.


Simulations -Traffic Jams and Asteroid Impacts

This is a really good online Java app which has been designed by a German mathematician to study the mathematics behind traffic flow.  Why do traffic jams form?  How does the speed limit or traffic lights or the number of lorries on the road affect road conditions?   You can run a number of different simulations – looking at ring road traffic, lane closures and how robust the system is by applying an unexpected perturbation (like an erratic driver).

There is a lot of scope for investigation – with some prompts on the site.  For example, just looking at one variable – the speed limit – what happens in the lane closure model?  Interestingly, with a homogeneous speed of 80 km/h there is no traffic congestion – but if the speed is increased to 140 km/h then large congestion builds up quickly as cars are unable to change lanes.  This is why reduced speed limits are applied on motorways during lane closures.

Another investigation is looking at how the style of driving affects the models.  You can change the politeness of the drivers – do they change lanes recklessly?  How many perturbations (erratic incidents) do you need to add to the simulation to cause a traffic jam?

This is a really good example of mathematics used in a real life context – and also provides some good opportunities for a computer based investigation, altering one parameter at a time and noting the consequences.


Another good simulation is on the Impact: Earth page.  This allows you to investigate the consequences of various asteroid impacts on Earth – choosing from different parameters such as diameter, velocity, density and angle of impact.  It then shows a detailed breakdown of the consequences – such as crater size and energy released.  You can also model some famous impacts from history and see their effects.  Lots of scope for mathematical modelling – and also for links with physics.  There is also the possibility of a discussion about the logarithmic Richter scale – why is this useful?

Student Handout

Asteroid Impact – Why is this important?
Comets and asteroids impact Earth all the time – but most are so small that we don’t even notice. On a cosmic scale, however, the Earth has seen some massive impacts – which, were they to happen again today, could wipe out civilisation as we know it.

The website Impact Earth allows us to model what would happen if a comet or asteroid hit us again. Jay Melosh, professor of Physics and Earth Science, says that we can expect “fairly large” impact events about every century. The last major one was at Tunguska, Siberia, in 1908 – which flattened an estimated 80 million trees over an area of 2000 square km. The force unleashed has been compared to around 1000 Hiroshima nuclear bombs. Luckily this impact was in one of the remotest places on Earth – had the impact been near a large city the effects could have been catastrophic.

Jay says that “The biggest threat in our near future is the asteroid Apophis, which has a small chance of striking the Earth in 2036. It is about one-third of a mile in diameter.”

Task 1: Watch the above video on a large asteroid impact – make some notes.

Task 2: Research Apophis – including the dimensions and likely speed of the asteroid and the probability of collision. Enter this data into the Impact Earth simulation and predict the damage that this asteroid could do.

Task 3: Investigate the Tunguska Event. When did it happen? What was its diameter? Likely speed? Use the data to model this collision on the Impact Earth Simulation. Additional: What are the possible theories about Tunguska? Was it a comet? Asteroid? Death Ray?

Task 4: Conduct your own investigation on the Impact Earth website into what factors affect the size of craters left by impacts. To do this you need to change one variable and keep all the other variables constant.  The most interesting one to explore is the angle of impact.  Keep everything else the same and see what happens to the crater size as the angle changes from 10 degrees to 90 degrees.  What angle would you expect to cause the most damage?  Were you correct?  Plot the results as a graph.

If you enjoyed this post you might also like:

Champagne Supernovas and the Birth of the Universe – some amazing photos from space.

Fractals, Mandelbrot and the Koch Snowflake – using maths to model infinite patterns.

[Graph: NASA ice core data showing the concentration of carbon dioxide in the atmosphere over time]

Maths of Global Warming – Modeling Climate Change

The above graph is from NASA’s climate change site, and was compiled from analysis of ice core data. Scientists from the National Oceanic and Atmospheric Administration (NOAA) drilled into thick polar ice and then looked at the carbon content of air trapped in small bubbles in the ice. From this we can see that over large timescales we have had large oscillations in the concentration of carbon dioxide in the atmosphere. During the ice ages we have had around 200 parts per million carbon dioxide, rising to around 280 in the inter-glacial periods. However this periodic oscillation has been broken post 1950 – leading to a completely different graph behaviour, and putting us on target for 400 parts per million in the very near future.

Analysing the data

[Graph: sine curve superimposed on the NOAA data in Desmos]

One of the fields in which mathematicians are always in demand is data analysis: understanding data, modelling with the data collected, and using that data to predict future events. Let’s have a quick look at some very simple modelling. The graph above shows a sine graph, plotted using Desmos, superimposed onto the NOAA data.

y = -0.8sin(3x +0.1) – 1

Whilst not a perfect fit, it does capture the general trend of the data and its oscillatory behaviour until 1950. We can see that post 1950 we would then expect to be seeing a decline in carbon dioxide rather than the reverse – which on our large timescale graph looks close to vertical.

Damped sine wave

[Graph: damped sine wave fitted to the data]

This is a damped sine wave, achieved by multiplying the sine term by e^(-x).  This progressively reduces the amplitude of the sine function.  The above graph is:

y = e^(-0.06x) (-0.6sin(3x + 0.1) - 1)

This captures the shape in the middle of the graph better than the original sine function, but at the expense of less accuracy at the left and right.
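The two models are easy to compare side by side. A minimal sketch in Python using the equations above (x is in the same scaled units as the Desmos graphs, not in years):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 12, 500)

# Simple sine model of the carbon dioxide oscillations
sine_model = -0.8 * np.sin(3 * x + 0.1) - 1

# Damped version: the e^(-0.06x) factor progressively shrinks the amplitude
damped_model = np.exp(-0.06 * x) * (-0.6 * np.sin(3 * x + 0.1) - 1)

plt.plot(x, sine_model, label="sine model")
plt.plot(x, damped_model, label="damped sine model")
plt.legend()
plt.show()
```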

Polynomial Regression

[Graph: polynomial regression curves fitted to the data points in Desmos]

We can make use of Desmos’ regression tools to fit curves to points.  Here I have entered a table of values and then seen which polynomial gives the best fit:

[Table of values and Desmos regression output showing the fitted polynomials]

We can see that the purple cubic fits the first 5 points quite well (with a high R² value).  So we should be able to create a piecewise function to describe this graph.

Piecewise Function

Here I have restricted the domain of the first polynomial, then entered the second, third and fourth polynomials in the same way – each over its own restricted domain – to build up the finished model and its overall shape.

[Graphs: Desmos input and output for each polynomial piece, the finished model, and its overall shape]

We would then be able to fit this to the original model scale by applying a vertical translation (i.e. add 280) and a vertical and horizontal stretch.  It would probably have been easier to align the scales at the beginning!  Nevertheless we have the shape we wanted.
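Building a piecewise model like this in code is straightforward. The sketch below is a Python illustration only – the cubic coefficients are hypothetical placeholders, not the actual fitted values from Desmos:

```python
import numpy as np

# Hypothetical cubic coefficients (highest power first) - placeholders only,
# not the actual regression output from Desmos.
pieces = [
    ((0, 2),  [0.1, -0.3, 0.2, -1.0]),
    ((2, 4),  [-0.2, 1.5, -3.0, 0.5]),
    ((4, 7),  [0.05, -0.8, 3.0, -4.0]),
    ((7, 10), [-0.1, 2.0, -12.0, 20.0]),
]

def piecewise_model(x):
    """Evaluate whichever cubic piece covers x (nan outside every domain)."""
    for (lo, hi), coeffs in pieces:
        if lo <= x <= hi:
            return np.polyval(coeffs, x)
    return float('nan')

print(piecewise_model(1.5), piecewise_model(5.0))
```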

Analysing the models

Our piecewise function gives us a good data fit for the domain we were working in – so if we then wanted to use some calculus to look at non horizontal inflections (say), this would be a good model to use.  If we want to analyse what we would have expected to happen without human activity, then the sine models at the very start are more useful in capturing the trend of the oscillations.

Post 1950s

[Graph: carbon dioxide concentration post 1950]

Looking on a completely different scale, we can see that the general trend of carbon dioxide concentration post 1950 is pretty linear.  This time I’ll scale the axes at the start.  Here 1960 corresponds with x = 0, and 1970 corresponds with x = 5 etc.

[Graphs: linear and quadratic regression fits to the post-1950 data]

Actually we can see that a quadratic fits the curve better than a linear graph – which is bad news, implying that the rate of change of carbon dioxide in the atmosphere will increase.  Using our model we can predict that on current trends there will be 500 parts per million of carbon dioxide in the atmosphere by 2030.
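Desmos’ regression can be mimicked with numpy’s polyfit to compare a linear and a quadratic fit. A minimal sketch in Python – the data points below are approximate annual values standing in for those read off the graph, and x uses the same scale as above (1960 = 0, one unit = 2 years):

```python
import numpy as np

# Approximate CO2 concentrations (ppm), standing in for values read off the graph.
# x is on the same scale as above: 1960 = 0, and one unit = 2 years (so 1970 = 5).
x = np.array([0, 5, 10, 15, 20, 25])          # 1960, 1970, ..., 2010
y = np.array([317, 326, 339, 354, 369, 390])  # approximate ppm values

linear = np.polyfit(x, y, 1)      # straight-line fit
quadratic = np.polyfit(x, y, 2)   # quadratic fit

x_2030 = 35                        # 2030 on this scale
print(np.polyval(linear, x_2030), np.polyval(quadratic, x_2030))
```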

Stern Report

According to the Stern Report, 500ppm is around the upper limit of the range at which we need to aim to stabilise carbon levels (450ppm-550ppm of carbon dioxide equivalent) before the economic and social costs of climate change become economically catastrophic.  The Stern Report estimates that it will cost around 1% of global GDP to stabilise in this range.  Failure to do so is predicted to lock in massive temperature rises of between 3 and 10 degrees by the end of the century.

If you are interested in doing an investigation on this topic:

  1. Plus Maths have a range of articles on the maths behind climate change
  2. The Stern report is a very detailed look at the evidence, graphical data and economic costs.


IB Maths Revision

I’d strongly recommend starting your revision of topics from Y12 – certainly if you want to target a top grade in Y13.  My favourite revision site is Revision Village – which has a huge amount of great resources – questions graded by level, full video solutions, practice tests, and even exam predictions.  Standard Level students and Higher Level students have their own revision areas.  Have a look!

 
