

Fourier Transform

The Fourier Transform and the associated Fourier series is one of the most important mathematical tools in physics. Physicist Lord Kelvin remarked in 1867:

“Fourier’s theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics.”

The Fourier Transform deals with time based waves – and these are one of the fundamental building blocks of the natural world. Sound, light, gravity, radio signals, Earthquakes and digital compression are just some of the phenomena that can be understood through waves. It’s not an exaggeration therefore to see the study of waves as one of the most important applications of mathematics in our modern life.

Here are some real life applications in a wide range of fields:

JPEG picture and MP3 sound compression – to allow data to be reduced in size.

Analysing DNA sequences – to allow identification of specific regions of genetic code

Apps like Shazam which can recognise a song from a sample of music

Processing mobile phone network data and WIFI data

Signal processing – in everything from acoustic guitar amps or electrical currents through capacitors

Radio telescopes – used to construct images of the night sky

Building’s natural frequencies – architects can design buildings to better withstand earthquakes.

Medical imaging such as MRI scans

There are many more applications – this Guardian article is a good introduction to some others.

So, what is the Fourier Transform? It takes a graph like the graph f(t) = cos(at) below:

 

[Graph: f(t) = cos(at)]

and transforms it into:

[Graph: the frequency spikes of cos(at)]

From the above cosine graph we can see that it is a periodic, time-based function. Time is plotted on the x axis, and the graph tells us the value of f(t) at any given time. The graph below with 2 spikes represents this same information in a different way: it shows the frequency (plotted on the x axis) of the cosine graph. The frequency of a function measures how many times it repeats per second, so for a graph f(t) = cos(at) it can be calculated as the inverse of the period. The period of cos(at) is 2π/a, so it has a frequency of a/2π.

Therefore the frequency graph for cos(at) will have spikes at a/2π and -a/2π.

But how does this new representation help us? Well, most real-life waves are much more complicated than simple sine or cosine waves – like this trumpet sound wave below:

[Graph: a trumpet sound wave]

But the remarkable thing is that every continuous wave can be modelled as a sum of sine and cosine waves. So we can break down the very complicated wave above into (say) cos(x) + sin(2x) + 2cos(4x). This new representation is much easier to work with mathematically.

The way to find the constituent sine and cosine waves that make up a complicated wave is to use the Fourier Transform. By transforming a function into one which shows its frequency peaks, we can work out which sine and cosine parts the function is made of.

[Graph: frequency peaks identifying the constituent waves]

For example, the transformed graph above shows which frequencies of sine and cosine functions to use to model our original function. Each peak represents a sine or cosine function of a specific frequency. Add them all together and we have our function.

The maths behind this does get a little complicated. I’ll try and talk through the method using the function f(t) = cos(at).

\[1. \quad f(t) = \cos(at)\]

So, the function we want to break down into its constituent cosine and sine waves is cos(at). Now, obviously this function can be represented just with cos(at) – but it makes a good demonstration of how to use the maths of the Fourier Transform. We already know that this function has a frequency of a/2π – so let's see if we can find this frequency using the Transform.

\[2. \quad F(\xi) = \int_{-\infty}^{\infty} f(t)\,e^{-2\pi i \xi t}\,dt\]

This is the formula for the Fourier Transform. We “simply” replace the f(t) with the function we want to transform – then integrate.

\[3. \quad f(t) = \tfrac{1}{2}\left(e^{iat} + e^{-iat}\right)\]

To make this easier we use the exponential form of cosine: when f(t) = cos(at) we can rewrite it as the expression above, in terms of exponentials.

\[4. \quad F(\xi) = \tfrac{1}{2}\int_{-\infty}^{\infty} \left(e^{iat} + e^{-iat}\right)e^{-2\pi i \xi t}\,dt\]

We substitute this version of f(t) into the formula.

\[5. \quad F(\xi) = \tfrac{1}{2}\int_{-\infty}^{\infty} e^{it(a - 2\pi\xi)}\,dt + \tfrac{1}{2}\int_{-\infty}^{\infty} e^{it(-a - 2\pi\xi)}\,dt\]

Next we multiply out the exponential terms in the brackets (remember the laws of indices), and then split the integral into 2 parts. The reason we have grouped the powers in this way is because of the following step.

\[6. \quad \delta(a - 2\pi\xi) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{it(a - 2\pi\xi)}\,dt\]

This is the delta function – which, as you can see, is very closely related to the integrals we have. Multiplying both sides by 2π puts the integral into the form we need, so each of our two integrals equals 2π times a delta function (and, since the delta function is even, δ(-a-2πξ) = δ(a+2πξ)). The delta function is a function which is zero everywhere except when its argument is zero.

\[7. \quad F(\xi) = \pi\left[\delta(a - 2\pi\xi) + \delta(a + 2\pi\xi)\right]\]

So the integral simplifies to the expression above.

\[8. \quad a - 2\pi\xi = 0 \ \text{ or } \ a + 2\pi\xi = 0\]

So, our function F(ξ) will be zero for all values of ξ except those which make the argument of one of the delta functions zero. This gives us the above equations.

\[9. \quad \xi = \pm\frac{a}{2\pi}\]

Therefore solving these equations we get an answer for the frequency of the graph.

\[10. \quad \text{frequency of } \cos(at) = \frac{a}{2\pi}\]

This frequency agrees with the frequency we already expected to find for cos(at).

A slightly more complicated example follows the same process, but this time with the function f(t) = cos(at) + cos(bt). If the Fourier Transform works correctly it should recognise that this function is composed of one cosine function with frequency a/2π and another cosine function with frequency b/2π. If we follow through exactly the same method as above (we can in effect split the function into cos(at) and cos(bt) and transform each separately), we should get:

\[7. \quad F(\xi) = \pi\left[\delta(a - 2\pi\xi) + \delta(a + 2\pi\xi) + \delta(b - 2\pi\xi) + \delta(b + 2\pi\xi)\right]\]

This is therefore zero for all values except those corresponding to frequencies of a/2π and b/2π. So the Fourier Transform has correctly identified the constituent parts of our function.
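We can also check this numerically. Below is a minimal sketch in Java (my own illustration, not part of the original derivation): it samples f(t) = cos(2π·5t) + cos(2π·12t) and runs a naive discrete Fourier transform, printing the frequencies where the spectrum spikes. The frequencies 5 Hz and 12 Hz, the sample rate, the sample count and the spike threshold are all illustrative choices.

// Sample f(t) = cos(2*pi*5t) + cos(2*pi*12t) and locate its frequency
// spikes with a naive discrete Fourier transform.
public class FourierSpikes {
    public static void main(String[] args) {
        int n = 512;              // number of samples
        double rate = 64.0;       // samples per second
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double t = i / rate;
            x[i] = Math.cos(2 * Math.PI * 5 * t) + Math.cos(2 * Math.PI * 12 * t);
        }
        // Bin k corresponds to frequency k * rate / n
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int i = 0; i < n; i++) {
                double angle = -2 * Math.PI * k * i / n;
                re += x[i] * Math.cos(angle);
                im += x[i] * Math.sin(angle);
            }
            double mag = Math.sqrt(re * re + im * im) / n;
            if (mag > 0.25) {     // print only the spikes
                System.out.println("Spike at " + (k * rate / n) + " Hz");
            }
        }
    }
}

Running this prints spikes at 5.0 Hz and 12.0 Hz – exactly the two constituent frequencies.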

If you want to read more about Fourier Transforms, then the Better Explained article is an excellent start.


Non Euclidean Geometry – An Introduction

It wouldn’t be an exaggeration to describe the development of non-Euclidean geometry in the 19th Century as one of the most profound mathematical achievements of the last 2000 years.  Ever since Euclid (c. 330-275BC) included in his geometrical proofs an assumption (postulate) about parallel lines, mathematicians had been trying to prove that this assumption was true.  In the 1800s however, mathematicians including Gauss started to wonder what would happen if this assumption was false – and along the way they discovered a whole new branch of mathematics.  A mathematics where there is an absolute measure of distance, where straight lines can be curved and where angles in triangles don’t add up to 180 degrees.  They discovered non-Euclidean geometry.

Euclid’s parallel postulate (5th postulate)

Euclid was a Greek mathematician – and one of the most influential men ever to live.  Through his collection of books, Elements, he created the foundations of geometry as a mathematical subject.  Anyone who studies geometry at secondary school will still be using results that directly stem from Euclid's Elements – that angles in triangles add up to 180 degrees, that alternate angles are equal, the circle theorems, how to construct line and angle bisectors.  Indeed you might find it slightly depressing that you were doing nothing more than re-learning mathematics well understood over 2000 years ago!

All of Euclid’s results were based on rigorous deductive mathematical proof – if A was true, and A implied B, then B was also true.  However Euclid did need to make use of a small number of definitions (such as the definition of a line, point, parallel, right angle) before he could begin his first book  He also needed a small number of postulates (assumptions given without proof) – such as:  “(It is possible) to draw a line between 2 points” and “All right angles are equal”

Now the first 4 of these postulates are relatively uncontroversial to assume as true.  The 5th however drew the attention of mathematicians for centuries – as they struggled in vain to prove it.  It is:

If a line crossing two other lines makes the interior angles on the same side less than two right angles, then these two lines will meet on that side when extended far enough. 

[Sketch: line L crossing lines L1 and L2, with interior angles A and B]

This might look a little complicated, but it is made easier with the help of the sketch above.  We have the line L crossing lines L1 and L2, and angles A and B such that A + B is less than 180 degrees.  The postulate says that the lines L1 and L2 must then intersect on that side: lines which are not parallel will, when extended far enough, meet.

Euclid’s postulate can be restated in simpler (though not quite logically equivalent) language as:

In a plane, at most one line parallel to a given line can be drawn through any point not on that line.

[Sketch: point P not on line l, with the parallel line m through P]

In other words, if you have a given line (l) and a point (P), then there is only 1 line (m) you can draw through the point which is parallel to the given line.

Both of these versions do seem pretty self-evident, but equally there seems no reason why they should simply be assumed to be true.  Surely they can actually be proved?  Well, mathematicians spent the best part of 2000 years trying without success to do so.

Why is the 5th postulate so important? 

Because Euclid’s proofs in Elements were deductive in nature, if the 5th postulate were false then all the subsequent “proofs” based on this assumption would have to be thrown out.  Most mathematicians working on the problem did in fact believe it was true – but were keen to actually prove it.

As an example, the 5th postulate can be used to prove that the angles in a triangle add up to 180 degrees.

[Sketch: line L crossing lines L1 and L2, with interior angles A and B]

The sketch above shows that if A + B is less than 180 degrees the lines will intersect.  Therefore, by symmetry (if the pair of angles on one side sums to more than 180 degrees, the pair on the other side must sum to less than 180 degrees), a pair of parallel lines must have A + B = 180.  This gives us:

[Diagram: alternate and corresponding angles between parallel lines]

This is the familiar diagram you learn at school – with alternate and corresponding angles.   If we accept the diagram above as true, we can proceed with proving that the angles in a triangle add up to 180 degrees.

[Diagram: triangle angle sum proof using a line parallel to the base]

Once we know that the two red angles are equal and the two green angles are equal, we can use the fact that angles on a straight line add to 180 degrees to conclude that the angles in a triangle add to 180 degrees.  But this needs the parallel postulate to be true!

In fact there are geometries in which the parallel postulate is not true  – and so we can indeed have triangles whose angles don’t add to 180 degrees.  More on this in the next post.

If you enjoyed this you might also like:

Non-Euclidean Geometry II – Attempts to Prove Euclid – The second part in the non-Euclidean Geometry series.

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.


Statistics to win penalty shoot-outs

With the World Cup upon us again we can perhaps look forward to yet another heroic defeat on penalties for England. England are in fact the worst of the major footballing nations at taking penalties, having won only 1 of their 7 shoot-outs at the Euros and World Cup. Indeed, of the 35 penalties England have taken in shoot-outs they have missed 12 – a miss rate of over 30%. Germany by comparison have won 5 out of 7 – and have a miss rate of only 15%.

With the stakes in penalty shoot-outs so high there have been a number of studies to look at optimum strategies for players.

Shoot left when ahead

One study published in Psychological Science looked at all the penalties taken in penalty shoot-outs in the World Cup since 1982. What they found was pretty incredible – goalkeepers have a subconscious bias for diving to the right when their team is behind.

[Graphic: direction of goalkeeper dives by match situation]

As is clear from the graphic, this is not a small bias towards the right, but a very strong one. When their team is behind, the goalkeeper apparently favours his (likely) strong side 71% of the time. The strikers’ shots meanwhile continue to be placed either left or right with roughly the same likelihood as in the other situations. So, this built-in bias makes the goalkeeper much less likely to help his team recover from a losing position in a shoot-out.

Shoot high

Analysis by Prozone looking at the data from the World Cups and European Championships between 1998 and 2010 compiled the following graphics:

[Graphic: where scored penalties were aimed]

The first graphic above shows where scored penalties were aimed. With most strikers aiming bottom left and bottom right, it’s no surprise to see that these were the most successful areas.

[Graphic: where saved penalties were aimed]

The second graphic, which shows where penalties were saved, paints a more complete picture – goalkeepers made nearly all their saves low down. A striker who has the skill and control to lift the ball high makes it very unlikely that the goalkeeper will save his shot.

[Graphic: where off-target penalties were aimed]

The last graphic also shows the risk involved in shooting high. It plots where the missed (off-target) penalties were aimed. Unsurprisingly, the strikers who were aiming down the middle of the goal at least managed to hit the target! Interestingly, strikers aiming for the right corner (as the goalkeeper stands) were far more likely to drag their shot off target than those aiming for the left side. Perhaps this is to do with them being predominantly right footed and the angle of their shooting arc?

Win the toss and go first

The Prozone data also showed the importance of winning the coin toss – 75% of the teams who went first went on to win. Equally, missing the first penalty is disastrous for a team’s chances – such teams went on to lose 81% of the time. The statistics also point to a huge psychological element: players who needed to score to keep their team in the competition scored only a miserable 14% of the time. It would be interesting to see whether these statistics are replicated over a larger data set.

Don’t dive

A different study, which looked at 286 penalties from both domestic leagues and international competitions, found that goalkeepers are actually best advised to stay in the centre of the goal rather than diving to one side. This had quite a significant effect on their ability to save penalties – increasing the likelihood from around 13% to 33%. So why don’t more goalkeepers stay still? Again this might come down to psychology – a diving save looks more dramatic and showcases the goalkeeper’s skill more than standing stationary in the centre.


So, why do England always lose on penalties?

There are some interesting psychological studies which suggest that England suffer more than other teams because English players are inhibited by their high public status (in other words, there is more pressure on them to perform – and hence that pressure is harder to deal with).  One such study noted that the best penalty takers are the ones who compose themselves prior to the penalty.  England’s players start to run to the ball only 0.2 seconds after the referee has blown – making them much less composed than other teams.

However, I think you can place too much weight on psychology – the answer is probably simpler: other teams beat England because they have technically better players.  English footballing culture revolves much less around technical skill than elsewhere in Europe and South America – and when it comes to penalty shoot-outs this has a dramatic effect.

As we can see from the statistics, players who are technically gifted enough to lift their shots into the top corners give the goalkeepers virtually no chance of saving them.  England’s less technically gifted players have to rely on hitting it hard and low to the corner – which gives the goalkeeper a much higher percentage chance of saving them.

Test yourself

You can test your penalty taking skills with this online game from the Open University – choose which players are best suited to the pressure, decide what advice they need and aim your shot in the best position.

If you liked this post you might also like:

Championship Wages Predict League Position? A look at how statistics can predict where teams finish in the league.

Premier League Wages Predict League Positions? A similar analysis of Premier League teams.


Modelling more Chaos

This post was inspired by Rachel Thomas’ Nrich article on the same topic.  I’ll carry on the investigation suggested in the article.  We’re going to explore chaotic behaviour – where small changes to initial conditions lead to widely different outcomes.  Chaotic behaviour is what makes modelling (say) weather patterns so complex.

f(x) = sin(x)

This time let’s carry out the same iterative process as in the Modelling Chaos post below – repeatedly applying the function to its own output – with f(x) = sin(x).

[Iteration table for f(x) = sin(x)]

Starting value of x = 0.2

[Graph: the first 40 iterations from x = 0.2]

Starting value of x = 0.2001

[Graph: the first 40 iterations from x = 0.2001]

Both graphs superimposed

[Graph: both orbits superimposed]

This time the graphs do not show any chaotic behaviour over the first 40 iterations – a small difference in initial conditions has made a negligible difference to the output.  Even after 200 iterations the 2 values are x = 0.104488151 and x = 0.104502319.

f(x) = tan(x)

Now this time with f(x) = tan(x).

[Iteration table for f(x) = tan(x)]

Starting value of x = 0.2

[Graph: the first 40 iterations from x = 0.2]

Starting value of x = 0.2001

[Graph: the first 40 iterations from x = 0.2001]

Both graphs superimposed

[Graph: both orbits superimposed]

This time both graphs remained largely the same up until around the 38th data point – with large divergence after that.  Let’s see what would happen over the next 50 iterations:

 

[Graph: the next 50 iterations]

Therefore we can see that tan(x) is much more susceptible to small changes in initial conditions than sin(x).  This makes sense by considering the graphs of tan(x) and sin(x): sin(x) remains bounded between -1 and 1, whereas tan(x) is unbounded, with asymptotic behaviour as we approach π/2.
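As a quick numerical experiment (a sketch of my own – the seeds, tolerance and iteration cap are illustrative, not taken from the article), we can iterate each function from the two nearby seeds 0.2 and 0.2001 and report the first step at which the orbits differ by more than 0.01:

import java.util.function.DoubleUnaryOperator;

// Iterate f from two nearby seeds and report the first step at which
// the two orbits differ by more than the given tolerance.
public class SensitivityCheck {
    static int firstDivergence(DoubleUnaryOperator f, double tol) {
        double a = 0.2, b = 0.2001;
        for (int step = 1; step <= 200; step++) {
            a = f.applyAsDouble(a);
            b = f.applyAsDouble(b);
            if (Math.abs(a - b) > tol) return step;
        }
        return -1; // no divergence within 200 iterations
    }

    public static void main(String[] args) {
        System.out.println("sin(x): " + firstDivergence(Math::sin, 0.01));
        System.out.println("tan(x): " + firstDivergence(Math::tan, 0.01));
    }
}

With these choices sin(x) should report no divergence at all (both orbits shrink towards 0 together), while tan(x) should diverge after a few dozen steps – matching the graphs above.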


Modelling Chaos

This post was inspired by Rachel Thomas’ Nrich article on the same topic.  I’ll carry on the investigation suggested in the article.  We’re going to explore chaotic behaviour – where small changes to initial conditions lead to widely different outcomes.  Chaotic behaviour is what makes modelling (say) weather patterns so complex.

Let’s start as in the article with the function:

f(x) = 4x(1-x)

We can then start an iterative process: choose an initial value, calculate f(x), and then use this answer as the next input, and so on. For example, when I choose x = 0.2, f(0.2) = 0.64. I then use this value to find a new value, f(0.64) = 0.9216. I used a spreadsheet to plot 40 iterations for the starting values x = 0.2 and x = 0.2001. This generated the following spreadsheet (cut to show the first 10 terms):

[Spreadsheet: the first 10 iterations for x = 0.2 and x = 0.2001]

I then imported this table into Desmos to map how the change in the starting value from 0.2 to 0.2001 affected the resultant graph.
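The same iteration can also be reproduced in a few lines of code – a minimal sketch (my own, mirroring the spreadsheet rather than reproducing it):

// Apply f(x) = 4x(1-x) repeatedly to the seeds 0.2 and 0.2001
// and print the first 40 iterations of each orbit side by side.
public class LogisticOrbits {
    public static void main(String[] args) {
        double a = 0.2, b = 0.2001;
        for (int step = 1; step <= 40; step++) {
            a = 4 * a * (1 - a);
            b = 4 * b * (1 - b);
            System.out.printf("%2d  %.6f  %.6f%n", step, a, b);
        }
    }
}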

Starting value of x = 0.2

[Graph: the orbit from x = 0.2]

Starting value of x = 0.2001

[Graph: the orbit from x = 0.2001]

Both graphs superimposed 

[Graph: both orbits superimposed]

We can see that for the first 10 terms the graphs are virtually the same – but then we get a wild divergence, before the graphs seem to synchronize more closely again.  One thing we notice is that the data is bounded between 0 and 1.  Can we prove why this is?

If we start with a value of x such that:

0 < x < 1

then when we plot f(x) = 4x – 4x² we can see that the graph has a maximum of f(1/2) = 1 at x = 1/2:

[Graph: f(x) = 4x – 4x², with maximum 1 at x = 1/2]

Therefore any starting value of x between 0 and 1 will also return a new value bounded between 0 and 1.  Starting values with x > 1 or x < 0 will tend to negative infinity, because the 4x² term grows much more rapidly than the 4x term.

f(x) = ax(1-x)

Let’s now explore what happens as we change the value of a whilst keeping our initial starting values of x = 0.2 and x = 0.2001

a = 0.8

[Graph: both orbits for a = 0.8]

Both graphs are superimposed and are identical at the scale we are using.  We can see that both orbits are attracted to 0 (we can say that 0 is an attractor for our system).

a = 1.2

[Graph: both orbits for a = 1.2]

Again both graphs are superimposed and identical at the scale we are using.  We can see that both orbits are attracted to 1/6 (we can say that 1/6 is an attractor for our system).

In general, for f(x) = ax(1-x) with -1 ≤ x ≤ 1, the attractors are given by x = 0 and x = 1 – 1/a, but it depends on the starting conditions whether an orbit actually ends up being attracted to either point.

f(x) = 0.8x(1-x)

So, let’s look at f(x) = 0.8x(1-x) for different starting values -1 ≤ x ≤ 1.  Our candidate attractors are given by x = 0 and x = 1 – 1/0.8 = -0.25.

When our initial value is x = 0 we remain at the point x = 0.

When our initial value is x = -0.25 we remain at the point x = -0.25.

When our initial value is x < -0.25 we tend to negative infinity.

When our initial value is  -0.25 < x ≤ 1 we tend towards x = 0.

Starting value of x = -0.249999:

[Graph: the orbit from x = -0.249999]

Therefore we can say that x = 0 is a stable attractor: initial values close to x = 0 will still tend to 0.

However x = -0.25 is a fixed point rather than a stable attractor, as:

x = -0.250001 will tend to negative infinity very rapidly,

x = -0.25 stays at x = -0.25,

x = -0.249999 will tend towards 0.

Therefore there is a stable equilibrium at x = 0 and an unstable equilibrium at x = -0.25.
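A small numerical check of these claims (my own sketch; the seeds are illustrative):

// Iterate f(x) = 0.8x(1-x) from seeds near the unstable fixed point
// x = -0.25 and see where each orbit ends up.
public class AttractorCheck {
    public static void main(String[] args) {
        double[] seeds = {-0.250001, -0.25, -0.249999, 0.5};
        for (double seed : seeds) {
            double x = seed;
            int steps = 0;
            // Stop after 200 iterations, or sooner if the orbit blows up
            while (steps < 200 && Math.abs(x) < 1e9) {
                x = 0.8 * x * (1 - x);
                steps++;
            }
            System.out.println("start " + seed + " -> " + x + " after " + steps + " steps");
        }
    }
}

The orbit from -0.250001 blows up towards negative infinity, -0.25 stays put (at least with this order of floating-point operations), and -0.249999, like 0.5, decays towards the stable attractor at 0.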

 

The Folium of Descartes

The folium of Descartes is a famous curve named after the French philosopher and mathematician René Descartes.  As well as making significant contributions to philosophy (“I think therefore I am”) he was also the father of modern geometry through the development of the x,y coordinate system for plotting algebraic curves.  As such the Cartesian plane (as we call the x,y coordinate system) is named after him.


Fermat and Descartes

Descartes was studying what is now known as the folium of Descartes (folium coming from the Latin for leaf) in the first half of the 1600s.  Prior to the invention of calculus, the ability to calculate the gradient at a given point was a real challenge.  He placed a wager with Pierre de Fermat, a contemporary French mathematician (of Fermat’s Last Theorem fame) that Fermat would be unable to find the gradient of the curve – a challenge that Fermat took up and succeeded with.

Calculus – implicit differentiation:

Today, armed with calculus and the method of implicit differentiation, finding the gradient at a point for the folium of Descartes is more straightforward.  The original Cartesian equation is:

\[x^3 + y^3 = 3axy\]

which can be differentiated implicitly to give:

\[\frac{dy}{dx} = \frac{ay - x^2}{y^2 - ax}\]

Therefore if we take (say) a =1 and the coordinate (1.5, 1.5) then we will have a gradient of -1.

Parametric equations

It’s sometimes easier to express a curve in a different way to the usual Cartesian equation.  Two alternatives are polar coordinates and parametric coordinates.  The parametric equations for the folium are given by:

\[x(t) = \frac{3at}{1+t^3}, \qquad y(t) = \frac{3at^2}{1+t^3}\]

In order to use the parametric equations we simply choose a value of t (say t = 1) and substitute it into both equations to arrive at a coordinate pair in the x,y plane.  If we choose t = 1 and have set a = 1 as well then this gives:

x(1) = 3/2

y(1) = 3/2

therefore the point (1.5, 1.5) is on the curve.
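A tiny sketch of my own checking both results – the parametric point at t = 1, and the implicit gradient at (1.5, 1.5) with a = 1 (both equations as reconstructed above):

// Evaluate the parametric equations at t = 1 and the implicit
// gradient (ay - x^2)/(y^2 - ax) at the resulting point.
public class FoliumCheck {
    public static void main(String[] args) {
        double a = 1, t = 1;
        double x = 3 * a * t / (1 + t * t * t);      // x(1) = 1.5
        double y = 3 * a * t * t / (1 + t * t * t);  // y(1) = 1.5
        double gradient = (a * y - x * x) / (y * y - a * x);
        System.out.println("point (" + x + ", " + y + "), gradient " + gradient);
        // prints: point (1.5, 1.5), gradient -1.0
    }
}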

You can read a lot more about famous curves and explore the maths behind them with the excellent “50 famous curves” from Bloomsburg University.

Project Euler: Coding to Solve Maths Problems

Project Euler, named after one of the greatest mathematicians of all time, has been designed to bring together the twin disciplines of mathematics and coding.  Computers have become ever more integral in the field of mathematics – and creative coding can now be a method of solving mathematics problems just as much as creative mathematics has always been.

The first problem on the Project Euler Page is as follows:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

Find the sum of all the multiples of 3 or 5 below 1000.

This is a reasonably straightforward maths problem which we can solve using the summation of arithmetic sequences (I’ll solve it below!), but more interesting is how code can be written to solve the same problem.  Given that I am something of a coding novice, I went to the Project Nayuki website, which has an archive of solutions.  Here is a slightly modified version of the solution given on Project Nayuki, designed to run in Java:
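(The original listing is not reproduced here; the following is a minimal sketch of my own in the same spirit, not Project Nayuki’s exact code.)

// Sum every natural number below 1000 that is a multiple of 3 or 5.
public class p001 {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 1; i < 1000; i++) {
            if (i % 3 == 0 || i % 5 == 0) {
                sum += i;
            }
        }
        System.out.println(sum); // prints 233168
    }
}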

 

The original file can be copied from here; I then pasted it into jdoodle, an online Java compiler. The only modification necessary was to replace:

public final class p001 implements EulerSolution with public class p001

Then after hitting execute you get the following result:

i.e. the solution is returned as 233,168. Amazing!

But before we get carried away, let’s check the answer using some more old-fashioned maths. We can break the question down into finding the sum of the multiples of 3 under 1000, plus the sum of the multiples of 5 under 1000, minus the sum of the multiples of 15 under 1000 (as these will have been double counted). i.e:

(3 + 6 + 9 + … 999)  +  (5 + 10 + 15 + … 995)  – (15 + 30 + 45 + …990)

This gives:

S_333 = 333/2 (2(3) + 332(3)) = 166,833
+
S_199 = 199/2 (2(5) + 198(5)) = 99,500
–
S_66 = 66/2 (2(15) + 65(15)) = 33,165

166,833 + 99,500 – 33,165 = 233,168 as required.

Now that we have seen that this works we can modify the original code.  For example if we replace:

if (i % 3 == 0 || i % 5 == 0)

with

if (i % 5 == 0 || i % 7 == 0)

this will find the sum of all the multiples of 5 or 7 below 1000, which returns the answer 156,361.

Replacing the same line with:

if (i % 5 == 0 || i % 7 == 0 || i % 3 == 0)

will find the sum of all the multiples of 3, 5 or 7 below 1000, which returns the answer 271,066.  To find this using the previous method we would have to do:

Sum of 3s + Sum of 5s + Sum of 7s – Sum of 15s – Sum of 21s – Sum of 35s + Sum of 105s (adding back the multiples of 105, which have been subtracted one time too many) – which starts to show why using a computer makes life easier.
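Alternatively, the inclusion-exclusion arithmetic itself can be automated. Here is a sketch (my own helper, not from the original post) using the closed form k + 2k + … + nk = kn(n+1)/2:

// Inclusion-exclusion for the sum of multiples of 3, 5 or 7 below 1000,
// using the arithmetic-series formula instead of a loop.
public class MultiplesSum {
    // Sum of the multiples of k strictly below limit
    static long sumOfMultiples(long k, long limit) {
        long n = (limit - 1) / k;   // number of multiples below limit
        return k * n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        long s = sumOfMultiples(3, 1000) + sumOfMultiples(5, 1000) + sumOfMultiples(7, 1000)
               - sumOfMultiples(15, 1000) - sumOfMultiples(21, 1000) - sumOfMultiples(35, 1000)
               + sumOfMultiples(105, 1000);
        System.out.println(s); // prints 271066
    }
}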

This would be a nice addition to any investigation on Number Theory – or indeed a good project for anyone interested in Computer Science as a possible future career.


Spotting Asset Bubbles

Asset bubbles are formed when a service, product or company becomes massively over-valued only to crash, taking with it most of its investors’ money.  There are many examples of asset bubbles in history – the Dutch tulip bulb mania and the South Sea bubble are two of the most famous historical examples.  In the tulip mania bubble of 1636-37, the price of tulip bulbs became astronomically high – as people speculated that the rising prices would keep rising yet further.  At its peak a single tulip bulb was changing hands for around 10 times the annual wage of a skilled artisan, before crashing to become virtually worthless.

More recent bubbles include the Dotcom crash of the early 2000s – where investors piled in, trying to spot the ways in which the internet would revolutionise businesses.  Huge numbers of internet companies tried to ride this wave by going public with share offerings.  This led to massive overvaluation and a crash when investors realised that many of these companies were worthless.  Pets.com is often given as an example of this exuberance – its stock collapsed from $11 to $0.19 in just 6 months, taking with it $300 million of venture capital.

Therefore spotting the next bubble is something which economists take very seriously.  You want to spot the next bubble, but equally not to miss out on the next big thing – a difficult balancing act!  The graph at the top of the page is given as a classic bubble.  It contains all the key phases – an initial slow take-off, a steady increase as institutional investors like banks and hedge funds get involved, an exponential growth phase as the public get involved, followed by a crash and a return to its long term mean value.

Comparing the Bitcoin graph to an asset bubble

[Chart: Bitcoin price over the past year]

The above graph charts the last year of Bitcoin’s growth.  We can see several similarities – so let’s try to plot it on the same axes as the model.  The orange dots represent data points for the initial model – and then I’ve fitted the Bitcoin graph over the top:

[Chart: the Bitcoin data overlaid on the classic bubble model]

It’s not a bad fit – if Bitcoin were to follow the asset bubble model then it would be about to crash rapidly, before returning to the long term mean of around $4000.  Whether that happens or it continues to rise, you can guarantee that there will be thousands of economists and stock market analysts around the world doing this sort of analysis (albeit somewhat more sophisticated!) to decide whether Bitcoin really will become the future of money – or yet another example of an asset bubble to be studied in the economics textbooks of the future.

 


The Remarkable Dirac Delta Function

This is a brief introduction to the Dirac Delta function – named after the legendary Nobel prize winning physicist Paul Dirac. Dirac was one of the founding fathers of the mathematics of quantum mechanics, and is widely regarded as one of the most influential physicists of the 20th Century.  This topic is only recommended for students confident with the idea of limits and was inspired by a Quora post by Arsh Khan.

Dirac defined the delta function as having the following 2 properties:

\[\delta(t) = \begin{cases} \infty & t = 0 \\ 0 & t \neq 0 \end{cases}\]

The first property as defined above is that the delta function is 0 for all values of t, except for t = 0, when it is infinite.

\[\int_{-\epsilon}^{\epsilon} \delta(t)\,dt = 1, \qquad \int_{-\infty}^{\infty} \delta(t)\,dt = 1\]

The second property defined above is that the integral of the delta function – the area under the graph between 2 points either side of 0 – is 1.  We can take the second integral, where we integrate from negative to positive infinity, as this will be more useful later.

The delta function (technically not a function in the normal sense!) can be represented as the following limit:

\[\delta(t) = \lim_{\epsilon \to 0^{+}} \frac{1}{\pi}\cdot\frac{\epsilon}{t^{2}+\epsilon^{2}}\]

Whilst this looks a little intimidating, it just means that we take the limit of the function as epsilon (ε) approaches 0.  Given this definition of the delta function we can check that the 2 properties outlined above hold.

\[\lim_{\epsilon \to 0^{+}} \frac{1}{\pi}\cdot\frac{\epsilon}{t^{2}+\epsilon^{2}} = 0 \quad (t \neq 0), \qquad \lim_{\epsilon \to 0^{+}} \frac{1}{\pi}\cdot\frac{\epsilon}{\epsilon^{2}} = \lim_{\epsilon \to 0^{+}} \frac{1}{\pi\epsilon} = \infty \quad (t = 0)\]

 

For the first limit above we set t not equal to 0.  Then, because the expression is continuous when t is not equal to 0, we can effectively replace epsilon with 0 to get a limit of 0.  In the second limit, when t = 0, we get a limit of infinity.  Therefore the first property holds.

To show that the second property holds, we start with the following integral identity from HL Calculus:

\[\int \frac{1}{a^{2}+x^{2}}\,dx = \frac{1}{a}\arctan\left(\frac{x}{a}\right) + C\]

Hopefully this will look similar to the function we are interested in.  Let’s play a little fast and loose with the mathematics, ignore the limit, and just consider the following integral:

\[\int_{-\infty}^{\infty} \frac{1}{\pi}\cdot\frac{\epsilon}{t^{2}+\epsilon^{2}}\,dt\]

Therefore (using, for the final part, the fact that the graph of arctan x has horizontal asymptotes at positive and negative π/2):

\[\int_{-\infty}^{\infty} \frac{1}{\pi}\cdot\frac{\epsilon}{t^{2}+\epsilon^{2}}\,dt = \frac{1}{\pi}\left[\arctan\left(\frac{t}{\epsilon}\right)\right]_{-\infty}^{\infty} = \frac{1}{\pi}\left(\frac{\pi}{2} - \left(-\frac{\pi}{2}\right)\right) = 1\]

So we have shown that every function of this form has an integral of 1, regardless of the value of epsilon, thus satisfying our second property.

The use of the Dirac Function

So far so good.  But what is so remarkable about the Dirac function?  Well, it allows objects to be described in terms of a single zero-width (and infinitely high) spike which, despite having zero width, still has an area of 1.  This allows elementary particles which have zero size but finite mass (and other finite properties such as charge) to be represented mathematically.  With the area under the curve equal to 1, it can also be thought of in terms of a probability density function – i.e. representing the quantum world in terms of probability wave functions.

A graphical representation:

This is easier to understand graphically.  Say for example we choose a value of epsilon (ε) and gradually make it smaller (i.e. we find the limit as ε approaches 0).  When ε = 5 we have the following:

[Graphs: (1/π)·ε/(t² + ε²) with ε = 5]

When ε = 1 we have the following:

[Graphs: ε = 1]

When ε = 0.1 we have the following:

[Graphs: ε = 0.1]

When ε = 0.01 we have the following:

[Graphs: ε = 0.01]

You can see that as ε approaches 0 we get a function which is close to 0 everywhere except for a spike at zero.  The total area under the function remains 1 for all ε.

[Graph: the Dirac delta function drawn as a single spike at t = 0]

Therefore we can represent the Dirac Delta function with the graph above.  In it we have a point with zero width but infinite height – and still with an area under the curve of 1!
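As a final numerical sanity check (a sketch of my own; the truncation of the infinite integral to [-1000, 1000] and the step size are illustrative), we can integrate (1/π)·ε/(t² + ε²) with the midpoint rule and confirm the area stays close to 1 as ε shrinks:

// Numerically integrate (1/pi) * eps / (t^2 + eps^2) over [-1000, 1000]
// with the midpoint rule, for the same epsilon values as the graphs above.
public class DeltaArea {
    public static void main(String[] args) {
        double[] epsilons = {5, 1, 0.1, 0.01};
        for (double eps : epsilons) {
            double a = -1000, b = 1000, area = 0;
            int n = 2_000_000;
            double h = (b - a) / n;
            for (int i = 0; i < n; i++) {
                double t = a + (i + 0.5) * h; // midpoint of each strip
                area += h / Math.PI * eps / (t * t + eps * eps);
            }
            System.out.printf("eps = %-5s area = %.4f%n", eps, area);
        }
    }
}

Each area comes out at just under 1 (the tiny shortfall being the tail beyond ±1000), and it creeps closer to 1 as ε shrinks.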

 

 


The Rise of Bitcoin

Bitcoin is in the news again as it hits $10,000 a coin – the online crypto-currency has seen huge growth over the past 18 months, and there are reports that hedge funds are now investing part of their portfolios in the currency.  So let’s have a look at some regression techniques to predict its future price.

[Screenshot: the Bitcoin price chart inserted into Desmos]

Here the graph has been inserted into Desmos and the scales aligned.  1 on the y axis corresponds to $1000 and 1 on the x axis corresponds to 6 months.  2013 is aligned with  (0,0).

[Screenshot: data points plotted over the chart]

Next, I plot some points to fit the curve through.

[Screenshot: the fitted regression curve]

Next, we use Desmos’ regression for y = ae^(bx) + d. This gives the line above with equation:

y = 5.10 × 10^(-7) e^(1.67x) + 0.432

I included the vertical translation (d) because without it the graph didn’t fit the early data points well.

So, if I want to predict what the price will be in December 2019, I use x = 12:

y = 5.10 × 10^(-7) e^(1.67(12)) + 0.432 ≈ 258

and as my scale has 1 unit on the y axis equal to $1000, this is equal to $258,000.
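For completeness, here is that calculation as a couple of lines of code (a sketch using the regression constants quoted above):

// Evaluate the fitted model y = 5.10e-7 * e^(1.67x) + 0.432,
// where x is in 6-month units from 2013 and y is in $1000s.
public class BitcoinModel {
    public static void main(String[] args) {
        double x = 12; // December 2019
        double y = 5.10e-7 * Math.exp(1.67 * x) + 0.432;
        System.out.printf("y = %.0f, i.e. about $%.0f,000%n", y, y);
    }
}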

So what does this show?  Well, it shows that Bitcoin is currently on a very steep exponential growth curve – which, if sustained even over the next 12 months, would result in astronomical returns.  However we also know that exponential growth models are very poor at predicting long term trends – they become unfeasibly large very quickly.  The two most likely scenarios are:

  1. continued growth following a polynomial rather than exponential model
  2. a price crash

Predicting which of these 2 outcomes is more likely is probably best left to the experts!  If you do choose to buy bitcoins you should be prepared for significant price fluctuations – which could be down as well as up.  I’ll revisit this post in a few months and see what has happened.

If you are interested in some more of the maths behind Bitcoin, you can read about the method that is used to encrypt these currencies (a method called elliptic curve cryptography).

 
