**The Remarkable Dirac Delta Function**

This is a brief introduction to the Dirac delta function – named after the legendary Nobel Prize-winning physicist Paul Dirac. Dirac was one of the founding fathers of the mathematics of quantum mechanics, and is widely regarded as one of the most influential physicists of the 20th century. This topic is only recommended for students confident with the idea of limits, and was inspired by a Quora post by Arsh Khan.

Dirac defined the delta function δ(t) as having the following 2 properties:

1. δ(t) = 0 for all values of t, except t = 0, where it is infinite.
2. The integral of δ(t) over any interval containing 0 (including from -∞ to ∞) is 1.

The first property as defined above is that the delta function is 0 for all values of t, except for t = 0, when it is infinite.

The second property defined above is that the integral of the delta function – i.e. the area under the graph between 2 points either side of 0 – is 1. We can take the bottom version, where we integrate from negative to positive infinity, as this will be more useful later.

The delta function (technically not a function in the normal sense!) can be represented as the following limit:

δ(t) = lim_{ε→0} ε/(π(t^{2} + ε^{2}))

Whilst this looks a little intimidating, it just means that we take the limit of the function as epsilon (ε) approaches 0. Given this definition of the delta function we can check that the 2 properties outlined above hold.

For the first property, take t not equal to 0. Then, because the function is continuous in ε when t is not equal to 0, we can simply substitute ε = 0 to get a limit of 0. When t = 0 the function equals 1/(πε), which tends to infinity as ε approaches 0. Therefore the first property holds.

To show that the second property holds, we start with the following integral identity from HL Calculus:

∫ 1/(1+x^{2}) dx = arctan(x) + c

Hopefully this will look similar to the function we are interested in. Let’s play a little fast and loose with the mathematics and ignore the limit of the function and just consider the following integral:

Therefore (using the fact that the graph of arctan x has horizontal asymptotes at y = π/2 and y = -π/2 for the final part):

So we have shown above that every function of this form has an integral of 1, regardless of the value of epsilon, thus satisfying our second property.
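We can also check this numerically. The sketch below (Python; it assumes the Lorentzian form ε/(π(t^{2} + ε^{2})), which is consistent with the arctan antiderivative used in this post) approximates the integral with a Riemann sum for a couple of values of ε:

```python
import math

def nascent_delta(t, eps):
    # Lorentzian approximation to the delta function: eps / (pi * (t^2 + eps^2))
    return eps / (math.pi * (t * t + eps * eps))

def area(eps, T=500.0, n=200_000):
    # Midpoint Riemann sum for the integral over [-T, T]
    h = 2 * T / n
    return sum(nascent_delta(-T + (i + 0.5) * h, eps) * h for i in range(n))

for eps in (1, 0.1):
    print(eps, round(area(eps), 3))
```

Whatever ε we choose, the computed area comes out very close to 1 (the small shortfall is the tail beyond ±500 that the finite sum misses).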

**The use of the Dirac Function**

So far so good. But what is so remarkable about the Dirac function? Well, it allows objects to be described as a single zero-width (and infinitely high) spike which, despite having zero width, still has an area of 1. This allows elementary particles, which have zero size but finite mass (and other finite properties such as charge), to be represented mathematically. With the area under the curve equal to 1, it can also be thought of as a probability density function – i.e. representing the quantum world in terms of probability wave functions.

**A graphical representation:**

This is easier to understand graphically. Say for example we choose a value epsilon (ε) and gradually make it smaller (i.e. we find the limit as ε approaches 0). When ε = 5 we have the following:

When ε = 1 we have the following:

When ε = 0.1 we have the following:

When ε = 0.01 we have the following:

You can see that as ε approaches 0 we get a function which is close to 0 everywhere except for a spike at zero. The total area under the function remains at 1 for all ε.

Therefore we can represent the Dirac Delta function with the above graph. In it we have a point with zero width but with infinite height – and still with an area under the curve of 1!


**The Rise of Bitcoin**

Bitcoin is in the news again as it hits $10,000 a coin – the online crypto-currency has seen huge growth over the past 18 months, and there are now reports that hedge funds are investing part of their portfolios in the currency. So let’s have a look at some regression techniques to predict the future price of the currency.

Here the graph has been inserted into Desmos and the scales aligned. 1 on the y axis corresponds to $1000 and 1 on the x axis corresponds to 6 months. 2013 is aligned with (0,0).

Next, I plot some points to fit the curve through.

Next, we use Desmos’ regression for y = ae^{bx} + d. This gives the line above with equation:

y = 5.10 × 10^{-7}e^{1.67x} + 0.432.

I included the vertical translation (d) because without it the graph didn’t fit the early data points well.

So, if I want to predict what the price will be in December 2019, I use x = 12:

y = 5.10 × 10^{-7}e^{1.67(12)} + 0.432 ≈ 258

and as my scale has 1 unit on the y axis equal to $1000, this is equal to $258,000.
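As a quick check of this arithmetic, here is the regression model evaluated in Python (the function name is just for illustration):

```python
import math

def price_model(x):
    # Desmos regression: y is in units of $1000, x in units of 6 months since 2013
    return 5.10e-7 * math.exp(1.67 * x) + 0.432

y = price_model(12)          # December 2019
print(f"${y * 1000:,.0f}")   # roughly $258,000
```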

So what does this show? Well, it shows that Bitcoin is currently on a very steep exponential growth curve – which, if sustained even over the next 12 months, would result in astronomical returns. However, we also know that exponential growth models are very poor at predicting long-term trends – they become unfeasibly large very quickly. The two most likely scenarios are:

- continued growth following a polynomial rather than exponential model
- a price crash

Predicting which of these 2 outcomes is more likely is probably best left to the experts! If you do choose to buy bitcoins you should be prepared for significant price fluctuations – which could be down as well as up. I’ll revisit this post in a few months and see what has happened.

If you are interested in some more of the maths behind Bitcoin, you can read about the method that is used to encrypt these currencies (a method called elliptic curve cryptography).


1. **The butterfly**

This is a slightly simpler version of the butterfly curve which is plotted using polar coordinates on Desmos as:

Polar coordinates are an alternative way of plotting functions – and are explored a little in HL Maths when looking at complex numbers. The theta value specifies an angle of rotation measured anti-clockwise from the x axis, and the r value specifies the distance from the origin. So for example the polar coordinates (90 degrees, 1) would specify a point 90 degrees anti-clockwise from the x axis and a distance 1 from the origin (i.e. the point (0,1) in our usual Cartesian plane).
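The conversion from polar to Cartesian coordinates can be sketched in a couple of lines of Python, using x = r cos θ and y = r sin θ:

```python
import math

def polar_to_cartesian(r, theta_deg):
    # theta is measured anti-clockwise from the positive x axis
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

x, y = polar_to_cartesian(1, 90)   # the point (0, 1) in Cartesian coordinates
```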

2. **Fermat’s Spiral**

This is plotted by the polar equation:

The next 3 were all created by my students.

3. **Chaotic spiral (by Laura Y9)**

I like how this graph grows ever more tangled as it coils in on itself. This was created by the polar equation:

4. **The flower (by Felix Y9)**

Some nice rotational symmetries on this one. Plotted by:

5. **The heart (by Tiffany Y9)**

Simple but effective! This was plotted using the usual x,y coordinates:

You can also explore how to draw the Superman and Batman logos using Wolfram Alpha here.

There is more than one way to define the mean of a set of numbers. The arithmetic mean is the mean we learn at secondary school – for 2 numbers a and b it is:

(a + b)/2.

The geometric mean on the other hand is defined as:

(x_{1}.x_{2}.x_{3}…x_{n})^{1/n}

So for example with the numbers 1,2,3 the geometric mean is (1 x 2 x 3)^{1/3}.

With 2 numbers, a and b, the geometric mean is (ab)^{1/2}.

We can then use the above diagram to prove that (a + b)/2 ≥ (ab)^{1/2} for all a and b. Indeed this inequality holds more generally, and it can be proved that the arithmetic mean ≥ geometric mean.

Step (1) We draw a triangle as above, with the line MQ a diameter, and therefore angle MNQ a right angle (from the circle theorems). Let MP be the length a, and let PQ be the length b.

Step (2) We can find the length of the green line OR, because this is the radius of the circle. Given that the length a+b was the diameter, then (a+b) /2 is the radius.

Step (3) We then attempt to find an equation for the length of the purple line PN.

We find MN using Pythagoras: (MN)^{2} = a^{2} + x^{2}

We find NQ using Pythagoras: (NQ)^{2} = b^{2} + x^{2}

Therefore the length MQ can also be found by Pythagoras:

(MQ)^{2} = (MN)^{2} + (NQ)^{2}

(MQ)^{2} = a^{2} + x^{2} + b^{2} + x^{2}

But MQ = a + b. Therefore:

(a + b)^{2} = a^{2} + x^{2} + b^{2} + x^{2}

a^{2} + b^{2} + 2ab = a^{2} + x^{2} + b^{2} + x^{2}

2ab = x^{2} + x^{2}

ab = x^{2}

x = (ab)^{1/2}

Therefore our green line represents the arithmetic mean of the 2 numbers, (a+b)/2, and our purple line represents the geometric mean, (ab)^{1/2}. The green line will always be greater than the purple line (except when a = b, which gives equality), therefore we have a geometrical proof of our inequality.
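A quick numerical sanity check of the inequality (a Python sketch; the helper names are my own):

```python
import math
import random

def arithmetic_mean(a, b):
    return (a + b) / 2

def geometric_mean(a, b):
    return math.sqrt(a * b)

# The green line (AM) is never shorter than the purple line (GM)
random.seed(1)
for _ in range(10_000):
    a, b = random.uniform(0.01, 100), random.uniform(0.01, 100)
    assert arithmetic_mean(a, b) >= geometric_mean(a, b)

# Equality when a = b
assert arithmetic_mean(7, 7) == geometric_mean(7, 7)
```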

There is a more rigorous proof of the general case using induction you may wish to explore as well.

**Euler’s 9 Point Circle**

This is a nice introduction to some of the beautiful constructions of geometry. This branch of mathematics goes in and out of favour – back in the days of Euclid, constructions using lines and circles were a cornerstone of mathematical proof; interest was later revived in the 1800s through Poncelet’s projective geometry, later leading to the new field of non-Euclidean geometry. It’s once again somewhat out of fashion – but more accessible than ever due to programs like Geogebra (on which the diagrams below were plotted). The 9 point circle (or at least the 6 point circle) was discovered by the German mathematician Karl Wilhelm von Feuerbach in the 1820s. Unfortunately for Feuerbach it’s often instead called the Euler Circle – after one of the greatest mathematicians of all time, Leonhard Euler.

So, how do you draw Euler’s 9 Point Circle? It’s a bit involved, so don’t give up!

Step 1: Draw a triangle:

Step 2: Draw the perpendicular bisectors of the 3 sides, and mark the point where they all intersect (D).

Step 3: Draw the circle centred at D which passes through the vertices A, B and C.

Step 4: From each side of the triangle, draw the perpendicular line through the opposite vertex. For example, for the side AC, draw the line through B perpendicular to AC. (These are the altitudes of the triangle.) The 3 altitudes will meet at a point (E).

Step 5: Join up the midpoint of each side of the triangle with the opposite vertex. For example, find the midpoint of AC and join this point to B. (These are the median lines of the triangle.) Label the point where the 3 lines meet as F.

Step 6: Remove all the construction lines. You can now see we have 3 points in a line. D is the centre of the circle through the points A, B, C; E is where the altitudes of the triangle meet (the orthocentre of ABC); and F is where the median lines meet (the centroid of ABC).

Step 7: Join up the 3 points – they are collinear (on the same line).

Step 8: Enlarge the circle through points A, B, C by a scale factor of -1/2, centred on point F.

Step 9: We now have the 9 point circle. Look at the points where the inner circle intersects the triangle ABC. You can see that the points M, N, O mark the feet of the altitudes (from step 4).

The points P, Q, R mark the midpoints of the sides AB, AC, BC (i.e. where the perpendicular bisectors from step 2 start).

We also have the points S, T, U on the circle, which mark the midpoints of the lines between E and the vertices A, B, C.

Step 10: We can drag the vertices of the triangle and the above relationships will still hold.

In the second case we have both E and D outside the triangle.

In the third case we have E and F at the same point.

In the fourth case we have D and E on opposite sides of the triangle.

So there we go – who says maths isn’t beautiful?

**Log Graphs to Plot Planetary Patterns**

This post is inspired by the excellent Professor Stewart’s latest book, Calculating the Cosmos. In it he looks at some of the mathematics behind our astronomical knowledge.

**Astronomical investigations**

In the late 1760s and early 1770s, two astronomers, Titius and Bode, both noticed something quite strange – there seemed to be a relationship in the distances of the planets from the Sun. There was no obvious reason why there would be – but nevertheless it appeared to be true. Here are the orbital distances from the Sun of the 6 planets known about in the 1760s:

Mercury: 0.39 AU

Venus: 0.72 AU

Earth: 1.00 AU

Mars: 1.52 AU

Jupiter: 5.20 AU

Saturn: 9.54 AU

In astronomy, 1 astronomical unit (AU) is defined as the mean distance from the centre of the Earth to the centre of the Sun (149.6 million kilometres).

Now, at first glance there does not appear to be any obvious relationship here – it’s definitely not linear, but how about geometric? Well, dividing each distance by the one above it in the list, we get r values of:

1.8, 1.4, 1.5, 3.4, 1.8

4 of the numbers are broadly similar – and then we have an outlier of 3.4. So either there was no real pattern – or there was an undetected planet somewhere between Mars and Jupiter. And was there another planet beyond Saturn?

**Planet X**

Mercury: 0.39 AU

Venus: 0.72 AU

Earth: 1.00 AU

Mars: 1.52 AU

Planet X: x AU

Jupiter: 5.20 AU

Saturn: 9.54 AU

Planet Y: y AU

For a geometric sequence we would therefore want x/1.52 = 5.20/x. This gives x = 2.8 AU – so a missing planet should be 2.8 AU away from the Sun. This would give us r values of 1.8, 1.4, 1.5, 1.8, 1.9, 1.8. Let’s take r = 1.8, which would give Planet Y a distance of 17 AU.
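Solving x/1.52 = 5.20/x gives x as the geometric mean of the neighbouring distances, which we can verify directly:

```python
import math

# Planet X is the geometric mean of its neighbours: x/1.52 = 5.20/x
x = math.sqrt(1.52 * 5.20)     # about 2.8 AU

# Taking the common ratio r = 1.8, Planet Y lies beyond Saturn at
y = 9.54 * 1.8                 # about 17 AU
print(round(x, 1), round(y, 1))
```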

So we predict a planet around 2.8 AU from the Sun, and another one around 17 AU from the Sun. In 1781, Uranus was discovered – 19.2 AU from the Sun, and in 1801 Ceres was discovered at 2.8 AU. Ceres is what is now classified as a dwarf planet – the largest object in the asteroid belt between Jupiter and Mars.

**Log Plots**

Plotting a graph is a good way to see relationships. Given that we have a geometric relationship of the form d = ab^n, with a and b constants, we can use the laws of logs to rearrange to give log d = log a + n log b.

Therefore we can plot log d on the y axis and n on the x axis. If there is a geometric relationship we will see a linear relationship on the graph, with log a being the y intercept and log b being the gradient.

(n=1) Mercury: d = 0.39 AU. log d = -0.41

(n=2) Venus: d = 0.72 AU. log d = -0.14

(n=3) Earth: d = 1.00 AU. log d = 0

(n=4) Mars: d = 1.52 AU. log d = 0.18

(n=5) Ceres (dwarf): d = 2.8 AU. log d = 0.45

(n=6) Jupiter: d = 5.20 AU. log d = 0.72

(n=7) Saturn: d = 9.54 AU. log d = 0.98

(n=8) Uranus: d = 19.2 AU. log d = 1.28

We can use Desmos’ regression tool to find a very strong linear correlation – with y intercept as -0.68 and gradient as 0.24. Given that log a is the y intercept, this gives:
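The same fit can be reproduced with a short least-squares calculation (a Python sketch using the distances above; Desmos gives essentially the same coefficients):

```python
import math

distances = [0.39, 0.72, 1.00, 1.52, 2.8, 5.20, 9.54, 19.2]
n_vals = range(1, 9)
log_d = [math.log10(d) for d in distances]

# Simple least-squares fit of log d against n
n_mean = sum(n_vals) / len(distances)
y_mean = sum(log_d) / len(log_d)
sxy = sum((n - n_mean) * (y - y_mean) for n, y in zip(n_vals, log_d))
sxx = sum((n - n_mean) ** 2 for n in n_vals)
gradient = sxy / sxx                      # log b, about 0.24
intercept = y_mean - gradient * n_mean    # log a, about -0.68
print(round(gradient, 2), round(intercept, 2))
```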

log a = -0.68

a = 0.21

and given that log b is the gradient this gives:

log b = 0.24

b = 1.74

So our final formula for the relationship for the spacing of the n ordered planets is:

d = ab^n

distance = 0.21 x (1.74)^n.

**Testing the formula**

So, using this formula we can predict what the next planetary distance would be. When n = 9 we would expect a distance of 30.7 AU. Indeed we find that Neptune is 30.1 AU – success! How about Pluto? Given that Pluto has a very eccentric (elliptical) orbit we might not expect this to be as accurate. When n = 10 we get a prediction of 53.4 AU. Pluto’s average distance from the Sun is 39.5 AU – so our formula does not work well for Pluto. But looking a little more closely, we notice that Pluto’s distance from the Sun varies wildly – from 29.7 AU to 49.3 AU – so perhaps it is not surprising that this doesn’t follow our formula well.
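The predictions above can be reproduced directly from the fitted formula:

```python
def titius_bode(n):
    # Our fitted relationship: distance (AU) = 0.21 * 1.74^n
    return 0.21 * 1.74 ** n

neptune_prediction = titius_bode(9)    # about 30.7 AU (Neptune: 30.1 AU)
pluto_prediction = titius_bode(10)     # about 53.4 AU (Pluto average: 39.5 AU)
print(round(neptune_prediction, 1), round(pluto_prediction, 1))
```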

**Other log relationships**

Interestingly other distances in the solar system show this same relationship. Plotting the ordered number of the planets against the log of their orbital period produces a linear graph, as does plotting the ordered moons of Uranus against their log distance from the planet. Why these relationships exist is still debated. Perhaps they are a coincidence, perhaps they are a consequence of resonance in orbital periods. Do some research and see what you find!

This is a quick example of how using Tracker software can generate a nice physics-related exploration. I took a spring and attached it to a stand, with a weight hanging from the end. I then took a video of the movement of the spring, and uploaded this to Tracker.

**Height against time**

The first graph I generated was for the height of the spring against time. I started the graph when the spring was released from the low point. To be more accurate here you can calibrate the y axis scale with the actual distance. I left it with the default settings.

You can see we have a very good fit for a sine/cosine curve. This gives the approximate equation:

y = -65cos(10.5(t – 3.4)) – 195

(remembering that the y axis scale is x 100).

This oscillating behaviour is what we would expect from a spring system – in this case we have a period of around 0.6 seconds.
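The period quoted here comes straight from the fitted equation, since for y = a cos(b(t – c)) + d the period is 2π/b:

```python
import math

# Fitted model from Tracker: y = -65*cos(10.5*(t - 3.4)) - 195
b = 10.5                      # angular frequency in rad/s
period = 2 * math.pi / b
print(round(period, 2))       # about 0.6 seconds
```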

**Momentum against velocity**

For this graph I first set the mass as 0.3kg – which was the weight used – and plotted the y direction momentum against the y direction velocity. It then produces the above linear relationship, which has a gradient of around 0.3. Therefore we have the equation:

p = 0.3v

If we look at the theoretical equation linking momentum:

p = mv

(Where m = mass). We can see that we have almost perfectly replicated this theoretical equation.

**Height against velocity**

I generated this graph with the mass set to the default 1kg. It plots the y direction against the y component velocity. You can see from this graph that the velocity is 0 when the spring is at the top and bottom of its cycle. We can then also see that it reaches its maximum velocity when halfway through its cycle. If we were to model this we could use an ellipse (remembering that both scales are x100 and using x for vy):

If we then wanted to develop this as an investigation, we could look at how changing the weight or the spring extension affected the results and look for some general conclusions for this. So there we go – a nice example of how tracker can quickly generate some nice personalised investigations!

**Predicting the UK election using linear regression**

The above data is the latest opinion poll data from the Guardian. The UK will have (another) general election on June 8th. So can we use the current opinion poll data to predict the outcome?

**Longer term data trends**

Let’s start by looking at the longer term trend following the aftermath of the Brexit vote on June 23rd 2016. I’ll plot some points for Labour and the Conservatives and see what kind of linear regression we get. To keep things simple I’ve looked at randomly chosen poll data approximately every 2 weeks – assigning 0 to July 1st 2016, 1 to mid July, 2 to August 1st etc. This has then been plotted using the fantastic Desmos.

**Labour**

You can see that this is not a very good fit – it’s a very weak correlation. Nevertheless let’s see what we would get if we used this regression line to predict the outcome in June. With the x axis scale I’ve chosen, mid June 2017 equates to 23 on the x axis. Therefore we predict the percentage as

y = -0.130(23) + 30.2

y = 27%

Clearly this would be a disaster for Labour – but our model is not especially accurate so perhaps nothing to worry about just yet.

**Conservatives**

As with Labour we have a weak correlation – though this time we have a positive rather than negative correlation. If we use our regression model we get a prediction of:

y = 0.242(23) + 38.7

y = 44%

So, we are predicting a crushing victory for the Conservatives – but could we get some more accurate models to base this prediction on?

**Using moving averages**

The Guardian’s poll tracker at the top of the page uses moving averages to smooth out poll fluctuations between different polls and to arrive at an averaged poll figure. Using this provides a stronger correlation:

**Labour**

This model doesn’t take into account a (possible) late surge in support for Labour, but does fit better than our last graph. Using the equation we get:

y = -0.0764(23) + 28.8

y = 27%

**Conservatives**

We can have more confidence in using this regression line to predict the election. Putting in the numbers we get:

y = 0.411(23) + 36.48

y = 46%
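All four regression predictions can be reproduced in a few lines (a Python sketch; on my scale, x = 23 corresponds to mid June 2017):

```python
# The four regression lines from above, evaluated at x = 23 (mid June 2017)
models = {
    "Labour (raw polls)":         lambda x: -0.130 * x + 30.2,
    "Conservatives (raw polls)":  lambda x: 0.242 * x + 38.7,
    "Labour (moving avg)":        lambda x: -0.0764 * x + 28.8,
    "Conservatives (moving avg)": lambda x: 0.411 * x + 36.48,
}
for name, f in models.items():
    print(f"{name}: {f(23):.0f}%")
```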

**Conclusion**

Our more accurate models merely confirm what we found earlier – and indeed what all the pollsters are predicting – a massive win for the Conservatives. Even allowing for a late narrowing of the polls, the Conservatives could be on target to win by over 10 percentage points – which would result in a very large majority. Let’s see what happens!

A farmer has 40m of fencing. What is the maximum area he can enclose?

**Case 1: The rectangle:**

Reflection – the rectangle turns out to be a square, with sides 10m by 10m. Therefore the area enclosed is 100 metres squared.

**Case 2: The circle:**

Reflection: The area enclosed is greater than that of the square – this time we have around 127 metres squared enclosed.

**Case 3: The isosceles triangle:**

Reflection – our isosceles triangle turns out to be an equilateral triangle, and it only encloses an area of around 77 metres squared.

**Case 4, the n sided regular polygon**

Reflection: Given that the 3 sided and 4 sided cases gave us regular shapes, it made sense to look at the n-sided regular polygon case. If we plot the graph of the area against n we can see that for n ≥ 3 the graph has no maximum but gets closer and closer to an asymptote. By looking at the limit of this area (using Wolfram Alpha) as n gets large, we can see that the limiting case is the circle. This makes sense, as regular polygons become closer to circles the more sides they have.
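A regular n-gon with perimeter 40 splits into n isosceles triangles, giving an area of 400/(n tan(π/n)) – here is a sketch of the area function, which recovers the earlier cases and the circular limit of 400/π:

```python
import math

def polygon_area(n, perimeter=40):
    # A regular n-gon splits into n isosceles triangles with base s = perimeter/n
    # and height (apothem) (s/2)/tan(pi/n), giving total area 400/(n*tan(pi/n))
    s = perimeter / n
    apothem = (s / 2) / math.tan(math.pi / n)
    return n * s * apothem / 2

print(round(polygon_area(3), 1))        # 77.0 (the equilateral triangle)
print(round(polygon_area(4), 1))        # 100.0 (the square)
print(round(400 / math.pi, 1))          # 127.3 (the circular limit)
```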

**Proof of the limit using L’Hospital’s Rule**

Here we can prove that the limit is indeed 400/pi by using L’Hospital’s rule. We have to use it twice and also use a trig identity for sin(2x) – but pleasingly it agrees with Wolfram Alpha.

So, a simple example of how an investigation can develop – from a simple case, getting progressively more complex and finishing with some HL Calculus Option mathematics.

**Cracking ISBN and Credit Card Codes**

ISBN codes are used on all books published worldwide. It’s a very powerful and useful code, because it has been designed so that if you enter the wrong ISBN code the computer will immediately know – so that you don’t end up with the wrong book. There is lots of information stored in this number. The first numbers tell you which country published it, the next the identity of the publisher, then the book reference.

**Here is how it works:**

Look at the 10 digit ISBN number. The first digit is 1 so we do 1×1. The second digit is 9 so we do 2×9. The third digit is 3 so we do 3×3. We do this all the way until 10×3. We then add all the totals together. If we have a proper ISBN number then this final total will divide exactly by 11. If we have made a mistake it won’t. This is a very important branch of coding called error detection and error correction. We can use it to interpret codes correctly even when errors have been made.

If we do this for the barcode above we should get 286. 286/11 = 26 so we have a genuine barcode.
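Here is the checking algorithm as a short Python sketch (the function name is my own; it follows the weighting described above, and doesn’t handle real-world ISBNs whose final check digit is X):

```python
def is_valid_isbn10(isbn):
    # Multiply the 1st digit by 1, the 2nd by 2, ..., the 10th by 10,
    # then check whether the total divides exactly by 11
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 10:
        return False
    total = sum(i * d for i, d in enumerate(digits, start=1))
    return total % 11 == 0
```

You can use this to check the codes in the exercises below.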

**Check whether the following are ISBNs**

1) 0-13165332-6

2) 0-1392-4191-4

3) 07-028761-4

**Challenge (harder!):** The following ISBN code has a number missing, what is it?

1) 0-13-1?9139-9

Answers in white text at the bottom, highlight to reveal!

Credit cards use a different algorithm – but one based on the same principle – that if someone enters a digit incorrectly the computer can immediately know that this credit card does not exist. This is obviously very important to prevent bank errors. The method is a little more complicated than for the ISBN code and is given below from computing site Hacktrix:
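The credit card check is the standard Luhn algorithm: working from the rightmost digit, double every second digit (adding the digits of any two-digit result), sum everything, and a genuine number gives a total divisible by 10. A Python sketch:

```python
def luhn_valid(card_number):
    digits = [int(c) for c in card_number if c.isdigit()]
    total = 0
    # Work from the rightmost digit; double every second digit
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9   # same as adding the two digits of the doubled result
        total += d
    return total % 10 == 0
```

Applying this function to the three card numbers below will identify the genuine one.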

You can download a worksheet for this method here. Try and use this algorithm to validate which of the following 3 numbers are genuine credit cards:

1) 5184 8204 5526 6425

2) 5184 8204 5526 6427

3) 5184 8204 5526 6424

Answers in white text at the bottom, highlight to reveal!

ISBN:

1) Yes

2) Yes

3) No

1) 3 – using x as the missing number we end up with 5x + 7 = 0 mod 11. So 5x = 4 mod 11. When x = 3 this is solved.

Credit Card: The second one is genuine

If you liked this post you may also like:

NASA, Aliens and Binary Codes from the Stars – a discussion about how pictures can be transmitted across millions of miles using binary strings.

Cracking Codes Lesson – an example of 2 double period lessons on code breaking
