
NASA, Aliens and Binary Codes from the Stars

The Drake Equation was intended by astronomer Frank Drake to spark a dialogue about the odds of intelligent life on other planets. He was one of the founding members of SETI – the Search for Extraterrestrial Intelligence – which has spent the past 50 years scanning the stars looking for signals that could be messages from other civilisations.

In the following video, Carl Sagan explains the Drake Equation:

The Drake equation is:
N = R* × fp × ne × fl × fi × fc × L

where:

N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone);
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space
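Since the equation is just a chain of multiplied factors, it is very easy to experiment with in code.  Here is a minimal Python sketch – every parameter value below is a made-up guess for illustration (which is rather the point of the equation):

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative guesses, not measurements.
R_star = 7.0   # average rate of star formation per year in our galaxy
f_p = 0.5      # fraction of stars with planets
n_e = 2.0      # planets that can support life, per star with planets
f_l = 0.3      # fraction of those planets that develop life
f_i = 0.1      # fraction of those that develop intelligent life
f_c = 0.1      # fraction of civilisations releasing detectable signals
L = 10000.0    # years for which such civilisations are detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # 210 communicating civilisations, under these guesses
```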

The encoding and decoding of messages is a very important branch of mathematics – with direct applications to all digital communications – from mobile phones to TVs and the internet.

All data content can be encoded using binary strings. A very simple code could be to have 1 signify “black” and 0 signify “white” – and this could then be used to send a picture. Data strings can be sent whose length is the product of 2 primes – so that the recipient knows the dimensions of the rectangle in which to fill in the colours.

If this sounds complicated, here is an example from the excellent Maths Illuminated handout on codes:

[Image: a mystery 77-digit binary message]

If this mystery message was received from space, how could we interpret it? Well, we would start by noticing that it is 77 digits long – which is the product of 2 prime numbers, 7 and 11. Prime numbers are universal and so we would expect any advanced civilisation to know about their properties. This gives us either a 7×11 or 11×7 rectangular grid to fill in. By trying both possibilities we see that an 11×7 grid gives the message below.

[Image: the decoded 11×7 grid]
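The decoding process itself is completely mechanical, so it is easy to sketch in Python.  The short message below is a made-up example (15 = 3 × 5 digits), not the one from the handout:

```python
def decode(message: str, rows: int, cols: int) -> str:
    """Arrange a binary string into a rows x cols grid of black/white cells."""
    assert len(message) == rows * cols, "length must equal rows * cols"
    grid = []
    for r in range(rows):
        row = message[r * cols:(r + 1) * cols]
        grid.append("".join("#" if bit == "1" else "." for bit in row))
    return "\n".join(grid)

msg = "010001110101110"  # hypothetical 15-digit message (15 = 3 x 5)
print(decode(msg, 3, 5))  # one orientation...
print()
print(decode(msg, 5, 3))  # ...and the other - only one should look like a picture
```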

More examples can be downloaded from the Maths Illuminated section on Primes (go to the facilitator pdf).

A puzzle to try:

“If the following message was received from outer space, what would we conjecture that the aliens sending it looked like?”

0011000 0011000 1111111 1011001 0011001 0111100 0100100 0100100 0100100 1100110

Hint: also 77 digits long.

This is an excellent example of the universality of mathematics in communicating across all languages and indeed species. Prime strings and binary represent an excellent means of communicating data that all advanced civilisations would easily understand.

Answer in white text below (highlight to read)

Arrange the code into a rectangular array – i.e. an 11 rows by 7 columns rectangle. The first 7 numbers represent the 7 boxes in the first row etc. A 0 represents white and 1 represents black. Filling in the boxes, we end up with an alien with 2 arms and 2 legs – though with one arm longer than the other!
If you enjoyed this post you may also like:

Cracking Codes Lesson – a double period lesson on using and breaking codes

Cracking ISBN and Credit Card Codes – the mathematics behind ISBN codes and credit card codes

Benford’s Law – Using Maths to Catch Fraudsters

Benford’s Law is a very powerful and counter-intuitive mathematical rule which determines the distribution of leading digits (i.e. the first digit in any number).  You would probably expect the distribution to be uniform – that a number 9 occurs as often as a number 1.  But this, whilst intuitive, is false for a large number of datasets.  Accountants looking for fraudulent activity and investigators looking for falsified data use Benford’s Law to catch criminals.

The probability function for Benford’s Law is:

P(d) = log10(1 + 1/d),  for d = 1, 2, …, 9

[Graph: the Benford’s Law distribution of leading digits]

This clearly shows that a 1 is by far the most likely leading digit to occur – and that you have around a 60% chance of the leading digit being 1, 2 or 3.  Any criminal trying to make up data who didn’t know this law would be easily caught out.
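The distribution is quick to verify in Python:

```python
import math

def benford(d: int) -> float:
    """Probability that the leading digit of a Benford-distributed number is d."""
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(d, round(benford(d), 3))

# Chance that the leading digit is 1, 2 or 3:
print(sum(benford(d) for d in (1, 2, 3)))  # about 0.602
```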

Scenario for students 1:

You are a corrupt bank manager who is secretly writing cheques to your own account.  You can write cheques for any amount – but you want them to appear natural so as not to arouse suspicion.  Write yourself 20 cheque amounts.  Try not to get caught!

Look at the following fraudulent cheques that were written by an Arizona manager – can you see why he was caught?

[Image: the fraudulent cheque amounts]

Scenario for students 2:

Use the formula for the probability function to find the probability of each leading digit.  Then look at the leading digits of the first 50 Fibonacci numbers.  Does the law hold?

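As a sketch of how scenario 2 might be checked in Python, here the leading digits of the first 50 Fibonacci numbers are tallied for comparison with the Benford probabilities:

```python
from collections import Counter

# Generate the first 50 Fibonacci numbers
fibs = [1, 1]
while len(fibs) < 50:
    fibs.append(fibs[-1] + fibs[-2])

# Tally the leading digit of each number
counts = Counter(int(str(n)[0]) for n in fibs)
for d in range(1, 10):
    print(d, counts[d], counts[d] / len(fibs))
```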

There is also an excellent Numberphile video on Benford’s Law.  Wikipedia has a lot more on the topic, as does the Journal of Accountancy.

If you enjoyed this topic you might also like:

Amanda Knox and Bad Maths in Courts – some other examples of mathematics and the criminal justice system.

Cesaro Summation: Does 1 – 1 + 1 – 1 … = 1/2? – another surprising mathematical result.


Simulations – Traffic Jams and Asteroid Impacts

This is a really good online Java app which has been designed by a German mathematician to study the mathematics behind traffic flow.  Why do traffic jams form?  How does the speed limit or traffic lights or the number of lorries on the road affect road conditions?   You can run a number of different simulations – looking at ring road traffic, lane closures and how robust the system is by applying an unexpected perturbation (like an erratic driver).

There is a lot of scope for investigation – with some prompts on the site.  For example, just looking at one variable – the speed limit – what happens in the lane closure model?  Interestingly, with a homogeneous speed of 80 km/h there is no traffic congestion – but if the speed is increased to 140 km/h then large congestion builds up quickly as cars are unable to change lanes.  This is why reduced speed limits are applied on motorways during lane closures.

Another investigation is looking at how the style of driving affects the models.  You can change the politeness of the drivers – do they change lanes recklessly?  How many perturbations (erratic incidents) do you need to add to the simulation to cause a traffic jam?

This is a really good example of mathematics used in a real life context – and also provides some good opportunities for a computer based investigation, altering one parameter at a time to note the consequences.


Another good simulation is on the Impact: Earth page.  This allows you to investigate the consequences of various asteroid impacts on Earth – choosing from different parameters such as diameter, velocity, density and angle of impact.  It then shows a detailed breakdown of the consequences – such as crater size and energy released.  You can also model some famous impacts from history and see their effects.  Lots of scope for mathematical modelling – and also for links with physics.  There is also possible discussion of the logarithmic Richter scale – why is this useful?

Student Handout

Asteroid Impact – Why is this important?
Comets and asteroids impact with Earth all the time – but most are so small that we don’t even notice. On a cosmic scale however, the Earth has seen some massive impacts – which, were they to happen again today, could wipe out civilisation as we know it.

The website Impact Earth allows us to model what would happen if a comet or asteroid hit us again. Jay Melosh, professor of Physics and Earth Science, says that we can expect “fairly large” impact events about every century. The last major one was in Tunguska, Siberia in 1908 – which flattened an estimated 80 million trees over an area of 2000 square km. The force unleashed has been compared to around 1000 Hiroshima nuclear bombs. Luckily this impact was in one of the remotest places on Earth – had the impact been near a large city the effects could have been catastrophic.

Jay says that, “The biggest threat in our near future is the asteroid Apophis, which has a small chance of striking the Earth in 2036. It is about one-third of a mile in diameter.”

Task 1: Watch the above video on a large asteroid impact – make some notes.

Task 2: Research Apophis – including the dimensions and likely speed of the asteroid and the probability of collision. Enter this data into the Impact Earth simulation and predict the damage that this asteroid could do.

Task 3: Investigate the Tunguska Event. When did it happen? What was its diameter? Likely speed? Use the data to model this collision on the Impact Earth Simulation. Additional: What are the possible theories about Tunguska? Was it a comet? Asteroid? Death Ray?

Task 4: Conduct your own investigation on the Impact Earth website into what factors affect the size of craters left by impacts. To do this you need to change one variable and keep all the other variables constant.  The most interesting one to explore is the angle of impact.  Keep everything else the same and see what happens to the crater size as the angle changes from 10 degrees to 90 degrees.  What angle would you expect to cause the most damage?  Were you correct?  Plot the results as a graph.

If you enjoyed this post you might also like:

Champagne Supernovas and the Birth of the Universe – some amazing photos from space.

Fractals, Mandelbrot and the Koch Snowflake – using maths to model infinite patterns.

Time Travel and the Speed of Light

This is one of my favourite videos from the legendary Carl Sagan. He explains the consequences of travelling at close to the speed of light.

This topic fits quite well into a number of mathematical topics – from graphing, to real life uses of equations, to standard form and unit conversions. It also challenges our notion of time as we usually experience it and therefore leads onto some interesting questions about the nature of reality. Below we can see the time dilation graph:

[Graph: time dilation factor against speed]

which clearly shows that for low speeds there is very little time dilation, but when we start getting to within 90% of the speed of light, there is a very significant time dilation effect. For more accuracy we can work out the exact dilation using the formula t’ = t/√(1 – v²/c²) – where v is the speed travelled, c is the speed of light, t is the time experienced in the traveller’s own frame of reference (say, by looking at his watch) and t’ is the time experienced in a different, stationary frame of reference (say on Earth). Putting some numbers in for real life examples:

1) A long-serving air steward spends a cumulative total of 5 years in the air – flying at an average speed of 900 km/h. How much longer will he live (from a stationary viewpoint) compared with if he had been a bus driver?

2) Voyager 1, launched in 1977 and currently about 1.8×10^10 km away from Earth, is travelling at around 17 km/s. How far does this craft travel in 1 hour? What would the time dilation be for someone onboard since 1977?

3) I built a spacecraft capable of travelling at 95% of the speed of light. I said goodbye to my twin sister and hopped aboard, and flew for a while before returning to Earth. If I experienced 10 years on the spacecraft, how much younger will I be than my twin?

Scroll to the bottom for the answers

Marcus du Sautoy also presents an interesting Horizon documentary on the speed of light, its history and the CERN experiments in 2011 that suggested some particles may have travelled faster than light:

There is a lot of scope for extra content on this topic – for example, looking at the distance of some stars visible in the night sky. The red supergiant star Betelgeuse is around 600 light years from Earth. (How many kilometres is that?) When we look at Betelgeuse we are actually looking 600 years “back in time” – so does it make sense to use time as a frame of reference for existence?

Answers

1) Convert 900 km/h into km/s = 0.25 km/s. Now substitute this value into the equation, along with the speed of light at 300,000 km/s… and even using Google’s calculator we get a difference so negligible that the denominator rounds to 1.

2) With units already in km/s we substitute the values in – and using a powerful calculator find that the denominator is 0.99999999839. Therefore someone travelling on the ship for what their watch recorded as 35 years would actually have been recorded as leaving Earth 35.0000000562 years ago. The difference is about 1.78 seconds! So still not much effect.

3) This time we get a denominator of 0.3122498999 and so the time experienced by my twin will be 32 years. In effect my sister will have aged 22 years more than me on my return. Amazing!
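All three answers are easy to check with a few lines of Python, using t’ = t/√(1 – v²/c²):

```python
import math

C = 300000.0  # speed of light in km/s, as used above

def denominator(v: float) -> float:
    """The time dilation factor sqrt(1 - v^2/c^2) for a speed v in km/s."""
    return math.sqrt(1 - (v / C) ** 2)

print(denominator(0.25))           # air steward at 0.25 km/s: rounds to 1
print(35 / denominator(17))        # Voyager 1: about 35.0000000562 years
print(10 / denominator(0.95 * C))  # twin at 0.95c: about 32 years
```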

If you enjoyed this topic you might also like:

Michio Kaku – Universe in a Nutshell

Champagne Supernovas and the Birth of the Universe – some amazing pictures from space.


Even Pigeons Can Do Maths

This is a really interesting study from a couple of years ago, which shows that even pigeons can deal with numbers as abstract quantities – in the study the pigeons counted groups of objects in their heads and then classified the groups in terms of size. From the New York Times article:

“Given groups of six and nine, they could pick, or peck, the images in the right order. This is one more bit of evidence of how smart birds really are, and it is intriguing because the pigeons’ performance was so similar to the monkeys’. “I was surprised,” Dr. Scarf said.

He and his colleagues wrote that the common ability to learn rules about numbers is an example either of different groups — birds and primates, in this case — evolving these abilities separately, or of both pigeons and primates using an ability that was already present in their last common ancestor.

That would really be something, because the common ancestor of pigeons and primates would have been alive around 300 million years ago, before dinosaurs and mammals. It may be that counting was already important, but Dr. Scarf said that if he had to guess, he would lean toward the idea that the numerical ability he tested evolved separately. “I can definitely see why both monkeys and pigeons could profit from this ability,” he said.”

To find mathematical ability amongst both monkeys and pigeons therefore raises two equally interesting possibilities.  Perhaps basic numeracy is a rare trait, but such a fundamentally important skill for life that it emerged  hundreds of millions of years ago.  Or perhaps basic numeracy is a relatively common trait – which can evolve independently in different species.

Either way, it is clear that there must be an evolutionary benefit for being able to process abstract quantities – most likely in terms of food.  A monkey who can look at two piles of coconuts and count 5 in one pile and 6 in the other and know that 6 is a bigger quantity than 5 can then choose the larger pile to sit alongside and eat.   Perhaps this evolutionary benefit is the true origin of our ability to do maths.

Another similar experiment looked at the ability of chimpanzees to both count numbers, and also demonstrated their remarkable photographic memory.

On the screen the chimps are given a flash of 10 numbers for a fraction of a second, before the numbers are covered up, and they then proceed to correctly show the position of all numbers from 1-10.  They are much better at this task than humans.  This is a good task to try at school using the online game here – and it would also make a good IB investigation.  Can you beat the chimps?

This all ties into the question about where mathematical ability comes from. If there had been no evolutionary advantage to such abstract abilities with numbers, then perhaps today our brains would be physically incapable of higher level mathematical thinking.

If you enjoyed this post you might also like:

Does it Pay to be Nice? Game Theory and Evolution

Langton’s Ant – Order out of Chaos

[Graph: atmospheric carbon dioxide concentration from ice core data, NASA]

Maths of Global Warming – Modeling Climate Change

The above graph is from NASA’s climate change site, and was compiled from analysis of ice core data. Scientists from the National Oceanic and Atmospheric Administration (NOAA) drilled into thick polar ice and then looked at the carbon content of air trapped in small bubbles in the ice. From this we can see that over large timescales we have had large oscillations in the concentration of carbon dioxide in the atmosphere. During the ice ages we have had around 200 parts per million carbon dioxide, rising to around 280 in the inter-glacial periods. However this periodic oscillation has been broken post 1950 – leading to a completely different graph behaviour, and putting us on target for 400 parts per million in the very near future.

Analysing the data

[Graph: sine model superimposed on the NOAA data in Desmos]

One of the fields that mathematicians are always in demand for is data analysis. Understanding data, modeling with the data collected and using that data to predict future events. Let’s have a quick look at some very simple modeling. The graph above shows a superimposed sine graph plotted using Desmos onto the NOAA data.

y = -0.8sin(3x + 0.1) - 1

Whilst not a perfect fit, it does capture the general trend of the data and its oscillatory behaviour until 1950. We can see that post 1950 we would then expect to be seeing a decline in carbon dioxide rather than the reverse – which on our large timescale graph looks close to vertical.

Dampened Sine wave

[Graph: dampened sine model]

This is a dampened sine wave, achieved by multiplying the sine term by e^(-0.06x).  This progressively reduces the amplitude of the sine function.  The above graph is:

y = e^(-0.06x)(-0.6sin(3x + 0.1) - 1)

This captures the shape in the middle of the graph better than the original sine function, but at the expense of less accuracy at the left and right.

Polynomial Regression

[Graph: polynomial regression in Desmos]

We can make use of Desmos’ regression tools to fit curves to points.  Here I have entered a table of values and then seen which polynomial gives the best fit:

[Desmos screenshots: table of values and polynomial regression output]

We can see that the purple cubic fits the first 5 points quite well (with a high R² value).  So we should be able to create a piecewise function to describe this graph.
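Desmos handles the regression for us, but the same cubic fit can be sketched in Python with numpy.polyfit.  The data points below are illustrative placeholders rather than the actual values from my table:

```python
import numpy as np

# Hypothetical (x, y) data standing in for the table entered in Desmos
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([-1.0, -1.6, -1.8, -1.3, -0.6])

p = np.poly1d(np.polyfit(x, y, 3))  # least-squares cubic fit

# R^2 = 1 - (residual sum of squares / total sum of squares)
ss_res = np.sum((y - p(x)) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
print(p)
print("R^2 =", 1 - ss_res / ss_tot)
```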

Piecewise Function

[Graph: the piecewise function]

Here I have restricted the domain of the first polynomial (entered below):

[Desmos screenshot: first polynomial with restricted domain]

Second polynomial:

[Desmos screenshots: second polynomial]

Third polynomial:

[Desmos screenshots: third polynomial]

Fourth polynomial:

[Desmos screenshots: fourth polynomial]

Finished model:

[Desmos screenshot: the finished piecewise model]

Shape of model:

[Graph: shape of the piecewise model]

We would then be able to fit this to the original scale by applying a vertical translation (i.e. add 280) and a vertical and horizontal stretch.  It would probably have been easier to align the scales at the beginning!  Nevertheless we have the shape we wanted.

Analysing the models

Our piecewise function gives us a good data fit for the domain we were working in – so if we then wanted to use some calculus to look at non-horizontal points of inflection (say), this would be a good model to use.  If we want to analyse what we would have expected to happen without human activity, then the sine models at the very start are more useful in capturing the trend of the oscillations.

Post 1950s

[Graph: carbon dioxide concentration post 1950]

Looking on a completely different scale, we can see the general trend of carbon dioxide concentration post 1950 is pretty linear.  This time I’ll scale the axis at the start.  Here 1960 corresponds with x = 0, and 1970 corresponds with x = 5 etc.

[Desmos screenshots: linear and quadratic regression on the post-1950 data]

Actually we can see that a quadratic fits the curve better than a linear graph – which is bad news, implying that the rate of change of carbon dioxide in the atmosphere will increase.  Using our model we can predict that on current trends there will be 500 parts per million of carbon dioxide in the atmosphere by 2030.
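The same idea can be sketched in Python – fit a quadratic to (x, ppm) readings on the scale above and evaluate it at x = 35, i.e. the year 2030.  The readings below are rough stand-ins for the NOAA data, so the exact prediction depends on the values you enter:

```python
import numpy as np

# Rough stand-in readings: x = (year - 1960) / 2, y = CO2 in ppm
x = np.array([0, 5, 10, 15, 20, 25])          # 1960, 1970, ..., 2010
y = np.array([317, 326, 339, 354, 370, 390])  # approximate concentrations

quadratic = np.poly1d(np.polyfit(x, y, 2))    # least-squares quadratic fit
print(quadratic(35))                          # the model's prediction for 2030
```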

Stern Report

According to the Stern Report, 500 ppm is around the upper limit of where we need to aim to stabilise carbon levels (450 ppm–550 ppm of carbon equivalent) before the economic and social costs of climate change become economically catastrophic.  The Stern Report estimates that it will cost around 1% of global GDP to stabilise in this range.  Failure to do so is predicted to lock in massive temperature rises of between 3 and 10 degrees by the end of the century.

If you are interested in doing an investigation on this topic:

  1. Plus Maths have a range of articles on the maths behind climate change
  2. The Stern report is a very detailed look at the evidence, graphical data and economic costs.


Modelling Radioactive decay

We can model radioactive decay of atoms using the following equation:

N(t) = N0e^(-λt)

Where:

N0 is the initial quantity of the element

λ is the radioactive decay constant

t is time

N(t) is the quantity of the element remaining after time t.

So, for Carbon-14 which has a half life of 5730 years (this means that after 5730 years exactly half of the initial amount of Carbon-14 atoms will have decayed) we can calculate the decay constant λ.  

After 5730 years, N(5730) will be exactly half of N0, therefore we can write the following:

N(5730) = 0.5N0 = N0e^(-5730λ)

therefore:

0.5 = e^(-5730λ)

and if we take the natural log of both sides and rearrange we get:

λ = ln(1/2) / -5730

λ ≈ 0.000121

We can now use this to solve problems involving Carbon-14 (which is used in Carbon-dating techniques to find out how old things are).

e.g.  You find an old parchment and after measuring the Carbon-14 content you find that it is just 30% of what a new piece of paper would contain.  How old is this paper?

We have

N(t) = N0e^(-0.000121t)

N(t)/N0 = e^(-0.000121t)

0.30 = e^(-0.000121t)

t = ln(0.30)/(-0.000121)

t ≈ 9950 years old.
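The whole carbon dating calculation takes only a few lines of Python:

```python
import math

HALF_LIFE = 5730.0                     # half life of Carbon-14 in years
decay_const = math.log(2) / HALF_LIFE  # lambda, about 0.000121

def age(fraction_remaining: float) -> float:
    """Years elapsed, given the fraction of Carbon-14 remaining."""
    return math.log(fraction_remaining) / -decay_const

print(decay_const)  # about 0.000121
print(age(0.30))    # about 9950 years
```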


Probability density functions

We can also do some interesting maths by rearranging:

N(t) = N0e^(-λt)

N(t)/N0 = e^(-λt)

and then plotting N(t)/N0 against time.

[Graph: N(t)/N0 against time]

N(t)/N0 will have a range between 0 and 1, as when t = 0, N(0) = N0, which gives N(0)/N0 = 1.

We can then manipulate this into the form of a probability density function – by finding the constant a which makes the area underneath the curve equal to 1.

∫0^∞ ae^(-λt) dt = 1

solving this gives a = λ.  Therefore the following integral:

∫t1^t2 λe^(-λt) dt

will give the fraction of atoms which will have decayed between times t1 and t2.

We could use this integral to work out the half life of Carbon-14 as follows:

∫0^t λe^(-λt) dt = 0.5

Solving this gives t = 5728.5, which is what we’d expect (given our earlier rounding of the decay constant).

We can also now work out the expected (mean) time that an atom will exist before it decays.  To do this we use the following equation for finding E(x) of a probability density function:

E(t) = ∫0^∞ t·f(t) dt

and if we substitute in our equation we get:

E(t) = ∫0^∞ λte^(-λt) dt

Now, we can integrate this by parts:

E(t) = [-te^(-λt)]0^∞ + ∫0^∞ e^(-λt) dt = 0 + 1/λ = 1/λ

So the expected (mean) life of an atom is given by 1/λ.  In the case of Carbon-14, with a decay constant λ ≈ 0.000121, we have an expected life of a Carbon-14 atom as:

E(t) = 1 /0.000121

E(t) = 8264 years.

Now that may sound a little strange – after all the half life is 5730 years, which means that half of all atoms will have decayed after 5730 years.  So why is the mean life so much higher?  Well it’s because of the long right tail in the graph – we will have some atoms with very large lifespans – and this will therefore skew the mean to the right.
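Both results can be checked numerically.  Radioactive lifetimes follow an exponential distribution, so simulating a large number of Carbon-14 atoms should give a median life near the half life and a mean life near 1/λ:

```python
import numpy as np

decay_const = 0.000121
rng = np.random.default_rng(seed=1)

# Simulate one million atom lifetimes from the exponential distribution
lifetimes = rng.exponential(scale=1 / decay_const, size=1_000_000)

print(np.median(lifetimes))  # about 5730 years - the half life
print(np.mean(lifetimes))    # about 8264 years - the mean life, 1/lambda
```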


Could Trump be the next President of America?

There is a lot of statistical maths behind polling data to make it as accurate as possible – though poor sampling techniques can lead to unexpected results.  For example, in the UK 2015 general election, even though Labour were predicted to win around 37.5% of the vote, they only polled 34%.  This was a huge political shock and led to a Conservative government when all the pollsters were predicting a hung parliament.  In the post-mortem following this failure, YouGov concluded that their sampling methods were at fault – leading to big errors in their predictions.

Trump versus Clinton

[Graph: Real Clear Politics polling average for Clinton versus Trump]

The graph above from Real Clear Politics shows the current hypothetical face off between Clinton and Trump amongst American voters.  Given that both are now clear favourites to win their respective party nominations, attention has started to turn to how they fare against each other.

Normal distribution

[Graph: the normal distribution]

A great deal of statistics dealing with populations is based on the normal distribution.  The normal distribution has the bell curve shape above – with the majority of the population bunched around the mean value, and with symmetrical tails at each end.  For example most men in the UK will be between 5 foot 8 and 6 foot – with symmetrical tails of men much taller and much shorter.  For polling data mathematicians usually use a sample of 1000 people – this is large enough to give a good approximation to the normal distribution whilst not being so large as to be prohibitively expensive to conduct.

A Polling Example

The following example is from the excellent introduction to this topic from the University of Arizona.

So, say we sample 1000 people, asking them a simple Yes/No/Don’t Know type question.  Say for example we asked 1000 people if they would vote for Trump, Clinton or if they were undecided.  In our poll 675 people say “Yes” to Trump – so what we want to know is the confidence interval for how accurate this prediction is.  Here is where the normal distribution comes in.  We use the following equations:

μ = np0  and  σ = (np0(1 - p0))^0.5

We have μ representing the mean.

n = the number of people we asked which is 1000

p0 = our sample probability of “Yes” for Trump which is 0.675

Therefore  μ = 1000 x 0.675 = 675

We can use the same values to calculate the standard deviation σ:

σ = (1000(0.675)(1 - 0.675))^0.5

σ = 14.811

We now can use the following table:

[Table: z-values for different confidence levels]

This tells us that when we have a normal distribution, we can be 90% confident that the data will be within +/- 1.645 standard deviations of the mean.

So in our hypothetical poll we are 90% confident that the real number of people who will vote for Trump will be +/- 1.645 standard deviations from our sample mean of 675

This gives us the following:

upper bound estimate = 675 + 1.645(14.811) = 699.4

lower bound estimate  = 675 – 1.645(14.811) = 650.6

Therefore we can convert this back to a percentage – and say that we can be 90% confident that between 65% and 70% of the population will vote for Trump.  We therefore have a prediction of 67.5% with a margin of error of +/- 2.5%.  You will see most published polls using a +/- 2.5% margin of error – which means they are using a sample of 1000 people and a confidence interval of 90%.
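The whole confidence interval calculation can be reproduced in a few lines of Python:

```python
import math

n = 1000    # sample size
p0 = 0.675  # sample proportion saying "Yes" to Trump
z = 1.645   # z-value for 90% confidence

mu = n * p0                           # mean: 675
sigma = math.sqrt(n * p0 * (1 - p0))  # standard deviation: about 14.8

lower = (mu - z * sigma) / n
upper = (mu + z * sigma) / n
print(f"90% confident: between {lower:.1%} and {upper:.1%}")  # 65.1% to 69.9%
```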

Real Life

[Graph: Real Clear Politics polling average for Clinton versus Trump]

Back to the real polling data on the Clinton–Trump match-up.  We can see that the current trend is a narrowing of the polls between the 2 candidates – 47.3% for Clinton and 40.8% for Trump.  This data is an amalgamation of a large number of polls – so should be reasonably accurate.  You can see some of the original data behind this:

[Image: detailed polling report from CNN]

This is a very detailed polling report from CNN – and as you can see above, they used a sample of 1000 adults in order to get a margin of error of around 3%.  However with around 6 months to go it’s very likely these polls will shift.  Could we really have President Trump?  Only time will tell.

[Cartoon from here]

The Gini Coefficient – Measuring Inequality 

The Gini coefficient is a value ranging from 0 to 1 which measures inequality. 0 represents perfect equality – i.e. everyone in a population has exactly the same wealth.  1 represents complete inequality – i.e. one person has all the wealth and everyone else has nothing.  As you would expect, countries will always have a value somewhere between these 2 extremes.  The way it’s calculated is best seen through the following graph (from here):

[Graph: Lorenz curve with areas A and B]

The Gini coefficient is calculated as the area of A divided by the area of A+B.  As the area of A decreases, the curve which plots the distribution of wealth (we can call this the Lorenz curve) approaches the line y = x.  This is the line which represents perfect equality.

Inequality in Thailand

The following graph illustrates how we can plot a curve and calculate the Gini coefficient.  First we need some data.  I have taken the following information on income distribution from the 2002 World Bank data on Thailand, where I am currently teaching:

Thailand:

The bottom 20% of the population have 6.3% of the wealth
The next 20% of the population have 9.9% of the wealth
The next 20%  have 14% of the wealth
The next 20% have 20.8% of the wealth
The top 20% have 49% of the wealth

I can then write this in a cumulative frequency table (converting % to decimals):

Cumulative population (x): 0, 0.2, 0.4, 0.6, 0.8, 1
Cumulative wealth (y): 0, 0.063, 0.162, 0.302, 0.51, 1

Here the x axis represents the cumulative percentage of the population (measured from lowest to highest), and the y axis represents the cumulative wealth.  This shows, for example, that the bottom 80% of the population own 51% of the wealth.  This can then be plotted as a graph below (using Desmos):

[Graph: Lorenz curve for Thailand plotted in Desmos]

From the graph we can see that Thailand has quite a lot of inequality – after all the top 20% have just under 50% of the wealth.  The blue line represents how a perfectly equal society would look.

To find the Gini coefficient we first need to find the area between the 2 curves.  The area underneath the blue line represents the area A+B.  This is just the area of a triangle with base and perpendicular height 1, therefore this area is 0.5.

The area under the green curve can be found using the trapezium rule, 0.5(a+b)h.  Doing this for the first trapezium we get 0.5(0+0.063)(0.2) = 0.0063.  The second trapezium is 0.5(0.063+0.162)(0.2) and so on.  Adding these areas all together we get a total trapezium area of 0.3074.  Therefore we get the area between the two curves as 0.5 – 0.3074  ≈ 0.1926

The Gini coefficient is then given by 0.1926/0.5  = 0.3852.

The actual World Bank calculation for Thailand’s Gini coefficient in 2002 was 0.42 – so we have slightly underestimated the inequality in Thailand.  We would get a more accurate estimate by taking more data points, or by fitting a curve through our plotted points and then integrating.  Nevertheless this is a good demonstration of how the method works.
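The trapezium rule calculation above is easily reproduced in Python:

```python
# Cumulative population share (x) and cumulative wealth share (y) for Thailand
x = [0, 0.2, 0.4, 0.6, 0.8, 1.0]
y = [0, 0.063, 0.162, 0.302, 0.51, 1.0]

# Area under the Lorenz curve, using the trapezium rule 0.5(a + b)h
area_under_lorenz = sum(
    0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1)
)

gini = (0.5 - area_under_lorenz) / 0.5
print(area_under_lorenz)  # 0.3074
print(gini)               # 0.3852
```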

[Graph: wealth distribution by quintile for Hungary and Namibia]

In this graph (from here) we can see a similar plot of wealth distribution – here we have quintiles on the x axis (1st quintile is the bottom 20% etc).  This time we can compare Hungary – which shows a high level of equality (the bottom 80% of the population own 62.5% of the wealth) and Namibia – which shows a high level of inequality (the bottom 80% of the population own just 21.3% of the wealth).

How unequal is the world?

[Graph: world inequality over time]

We can apply the same method to measure world inequality.  One way to do this is to calculate the per capita income of all the countries in the world and then to work out the share of total global per capita income that (say) the bottom 20% of countries have.  This information is represented in the graph above (from here).  It shows that there was rising inequality (i.e. the richer countries were outperforming the poorer countries) in the 2 decades prior to the end of the century, but that there has been a small decline in inequality since then.

If you want to do some more research on the Gini coefficient you can use the following resources:

The intmaths site article on this topic – which goes into more detail and examples of how to calculate the Gini coefficient

The ConferenceBoard site which contains a detailed look at world inequality

The World Bank data on the Gini coefficients of different countries.

This is a classic puzzle, discussed in more detail in the excellent Wired article.  The puzzle is best represented by the picture below.  We have a hunter who, whilst in the jungle, stumbles across a monkey on a tree branch.  However he knows that the monkey, being clever, will drop from the branch as soon as it hears the shot being fired.  The question is therefore: at what angle should the hunter aim so that he still hits the monkey?

[Image: the hunter aiming at the monkey on the branch]

(picture from the Wired article – originally from a UCLA physics textbook)

The surprising conclusion is that counter to what you would expect, you should actually still aim at the monkey on the branch – and in this way your bullet’s trajectory will still hit the monkey as it falls.  You can see a video of this experiment at the top of the page.

You can use tracking software (such as the free software Tracker) to show this working graphically.  Tracker provides a video demo with the falling monkey experiment:

[Image: Tracker still frame of the falling monkey experiment]

As you can see from the still frame, we have the gun in the bottom left corner, lined up with the origin, the red trace representing the bullet and the blue trace representing the falling monkey.

[Graph: height of the bullet and the monkey against time]

We can then generate a graph to represent this data.  The red line is the height of the bullet with respect to time.  The faint blue line (with yellow dots) is the height of the monkey with respect to time.  We can see clearly that the red line can be modeled as a quadratic.  The blue line should in theory also be a quadratic (see below):

[Image: the theoretical quadratic for the falling monkey]

but in our model, the blue line is so flat as to be better modeled as a linear approximation – which is shown in pink.  Now we can use regression technology to find the equation of both of these lines, to show not only that they do intersect, but also the time of that intersection.

We have the linear approximation as y = -18.5t + 14.5
and the quadratic approximation as y = -56t^2 + 39t + 0.1

So the 2 graphs will indeed intersect when -18.5t + 14.5 = -56t^2 + 39t + 0.1,

which will be around 0.43 seconds after the gun is fired.
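We can confirm the intersection with numpy by finding the roots of the difference between the two models:

```python
import numpy as np

# bullet (quadratic): y = -56t^2 + 39t + 0.1
# monkey (linear):    y = -18.5t + 14.5
# They meet where the difference is zero:
#   -56t^2 + 39t + 0.1 - (-18.5t + 14.5) = -56t^2 + 57.5t - 14.4 = 0
roots = np.roots([-56, 57.5, -14.4])
print(roots)  # about 0.59 and 0.43 - the bullet first crosses at around 0.43 s
```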

[Image: a more humane version, also from Wired – where we throw the monkey a banana]

Newtonian Mathematics

The next question is: can we prove this using some algebra?  Of course!  The key point is that the force of gravity will affect both the bullet and the falling monkey equally (it will not be affected by the different weights of the two – see the previous post here about throwing cannonballs from the Leaning Tower of Pisa).  So even though the bullet deviates from the straight line path lined up in the gun sights, the distance the bullet deviates will be exactly the same distance that the monkey falls.  So they still collide.  Mathematically we have:

The vertical height of the bullet given by:

y = V0t - 0.5gt^2

where V0 is the initial vertical speed, t is the time, and g is the gravitational acceleration (9.8 m/s²).

The vertical height of the monkey is given by:

y = h - 0.5gt^2

where h is the initial vertical height of the monkey.

Therefore these will intersect when:

V0t - 0.5gt^2 = h - 0.5gt^2
V0t = h
t = h/V0

And for any given non-zero value of V0 we will have a t value – which represents the time of collision.
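The same algebra can be confirmed symbolically with sympy:

```python
import sympy as sp

t, V0, h, g = sp.symbols("t V0 h g", positive=True)
bullet = V0 * t - g * t**2 / 2  # vertical height of the bullet
monkey = h - g * t**2 / 2       # vertical height of the monkey

print(sp.solve(sp.Eq(bullet, monkey), t))  # [h/V0]
```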

Well done – you have successfully shot the monkey!

If you like this you might also like:

Throwing cannonballs off the Leaning tower of Pisa – why weight doesn’t affect falling velocity

War Maths – how cannon operators used projectile motion to win wars

 
