**Sierpinski Triangle: A picture of infinity**

The Sierpinski triangle pictured above was generated by a simple iterative program. I made it by modifying the code I previously used to plot the Barnsley Fern. You can run the code I used on repl.it. What we are seeing is the result of 30,000 iterations of a simple algorithm. The algorithm is as follows:

**Transformation 1:**

x_{i+1} = 0.5x_{i}

y_{i+1}= 0.5y_{i}

**Transformation 2:**

x_{i+1} = 0.5x_{i} + 0.5

y_{i+1}= 0.5y_{i}+0.5

**Transformation 3:**

x_{i+1} = 0.5x_{i} +1

y_{i+1}= 0.5y_{i}

So, I start with (0,0) and then use a random number generator to decide which transformation to use. I can run a generator from 1-3 and assign 1 for transformation 1, 2 for transformation 2, and 3 for transformation 3. Say I generate the number 2 – therefore I will apply transformation 2.

x_{i+1} = 0.5(0) + 0.5

y_{i+1}= 0.5(0)+0.5

and my new coordinate is (0.5,0.5). I mark this on my graph.

I then repeat this process – say this time I generate the number 3. This tells me to do transformation 3. So:

x_{i+1} = 0.5(0.5) +1

y_{i+1}= 0.5(0.5)

and my new coordinate is (1.25, 0.25). I mark this on my graph and carry on again. The graph above was generated with 30,000 iterations.
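The whole process can be sketched in a few lines of Python (a minimal version of the chaos game described above – the original repl.it code may differ):

```python
import random

def sierpinski(n=30000):
    """Run the chaos game using the three transformations above."""
    transformations = [
        lambda x, y: (0.5 * x, 0.5 * y),              # transformation 1
        lambda x, y: (0.5 * x + 0.5, 0.5 * y + 0.5),  # transformation 2
        lambda x, y: (0.5 * x + 1, 0.5 * y),          # transformation 3
    ]
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        # pick transformation 1, 2 or 3 with equal probability
        transform = random.choice(transformations)
        x, y = transform(x, y)
        points.append((x, y))
    return points

points = sierpinski()
# To see the triangle, scatter-plot the points, e.g. with matplotlib:
#   xs, ys = zip(*points)
#   plt.scatter(xs, ys, s=0.1)
```

Every point lands inside the triangle with vertices (0,0), (1,1) and (2,0) – the fixed points of the three transformations.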

**Altering the algorithm**

We can alter the algorithm so that we replace all the 0.5 coefficients of x and y with another number, *a*.

When a = 0.3 we have disconnected triangles:

When a = 0.7 we still have a triangle:

By a = 0.9 the triangle is starting to degenerate:

By a = 0.99 we start to see the emergence of a line “tail”:

By a = 0.999 we see the line dominate:

And when a = 1 we get a straight line:

When a is greater than 1 the coordinates quickly become extremely large and so the scale required to plot points means the disconnected points are not visible.

If instead I alter transformations 2 and 3 so that I add b for transformation 2 and 2b for transformation 3 (rather than 0.5 and 1 respectively), then we can see that we simply change the scale of the triangle.

When b = 10 we can see the triangle width is now 40 (we changed b from 0.5 to 10 and so made the triangle 20 times bigger in length):

**Fractal mathematics**

This triangle is an example of a self-similar pattern – i.e one which looks the same at different scales. You could zoom into a detailed picture and see the same patterns repeating. Amazingly, fractal patterns don’t fit into our usual understanding of 1 dimensional, 2 dimensional or 3 dimensional space. Fractals can instead be thought of as having fractional dimensions.

The Hausdorff dimension is a measure of the “roughness” or “crinkliness” of a fractal. It’s given by the formula:

D = log(N)/log(S)

For the Sierpinski triangle, doubling the size (i.e S = 2) creates 3 copies of itself (i.e N = 3).

This gives:

D = log(3)/log(2)

Which gives a fractal dimension of about 1.59. This means it has a higher dimension than a line, but a lower dimension than a 2 dimensional shape.
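As a quick check of the arithmetic:

```python
import math

# Doubling the size (S = 2) creates 3 copies (N = 3)
N, S = 3, 2
D = math.log(N) / math.log(S)
print(round(D, 2))  # 1.58
```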


**Sphere packing problem: Pyramid design**

Sphere packing problems are a class of maths problems which have been considered over many centuries – they concern the optimal way of packing spheres so that the wasted space is minimised. You can achieve an average packing density of around 74% when you stack many spheres together, but today I want to explore the packing density of 4 spheres (pictured above) enclosed in a pyramid.

**Considering 2 dimensions**

First I’m going to consider the 2D cross section of the base 3 spheres. Each sphere will have a radius of 1. I will choose A so that it is at the origin. Using some basic Pythagoras this will give the following coordinates:

**Finding the centre**

Next I will stack my single sphere on top of these 3, with the centre of this sphere directly in the middle. Therefore I need to find the coordinate of D. I can use the fact that ABC is an equilateral triangle and so:

**3D coordinates**

Next I can convert my 2D coordinates into 3D coordinates. I define the centre of the 3 base circles to have 0 height, therefore I can add z coordinates of 0. E will be the coordinate point with the same x and y coordinates as D, but with a height, *a*, which I don’t yet know:

In order to find *a* I do a quick sketch, seen below:

Here I can see that I can find the length AD using trig, and then the height DE (which is my *a* value) using Pythagoras:
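Since the diagrams aren't shown here, a quick numerical sketch of these coordinates (my reconstruction: the base centres are 2 apart because the unit spheres touch, and AE = 2 because the top sphere touches the base spheres):

```python
import math

# Centres of the three base spheres: unit spheres that touch,
# so each pair of centres is 2 apart (z = 0 for the base)
A = (0, 0, 0)
B = (2, 0, 0)
C = (1, math.sqrt(3), 0)

# D is the centroid of triangle ABC; E sits directly above D
D = (1, math.sqrt(3) / 3, 0)
AD = math.dist(A, D)

# The top sphere touches the base spheres, so AE = 2;
# Pythagoras then gives the height a = DE
a = math.sqrt(2 ** 2 - AD ** 2)
E = (1, math.sqrt(3) / 3, a)

print(AD, a)  # AD ≈ 1.1547, a ≈ 1.6330
```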

**Drawing spheres**

The general equation for spheres with centre coordinate (a,b,c) and radius 1 is:

Therefore the equations of my spheres are:

Plotting these on Geogebra gives:

**Drawing a pyramid**

Next I want to try to draw a pyramid such that it encloses the spheres. This is quite difficult to do algebraically – so I’ll use some technology and a bit of trial and error.

First I look at creating a base for my pyramid. I’ll try and construct an equilateral triangle which is a tangent to the spheres:

This gives me an equilateral triangle with lengths 5.54. I can then find the coordinate points of F,G,H and plot them in 3D. I’ll choose point E so that it remains in the middle of the shape, and also has a height of 5.54 from the base. This gives the following:

As we can see, this pyramid does not enclose the spheres fully. So, let’s try again, this time making the base a little bit larger than the 3 spheres:

This gives me an equilateral triangle with lengths 6.6. Taking the height of the pyramid to also be 6.6 gives the following shape:

This time we can see that it fully encloses the spheres. So, let’s find the density of this packing. We have:

Therefore this gives:

and we also have:

Therefore the density of our packing is:

Given our diagram this looks about right – we are only filling less than half of the available volume with our spheres.
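The volume ratio can be sketched numerically (assuming, as above, unit spheres, an equilateral base of side 6.6 and a pyramid height of 6.6 – the precise figure depends on how the pyramid is measured, so this sketch need not match the working above exactly):

```python
import math

r = 1          # radius of each sphere
side = 6.6     # side length of the equilateral base
height = 6.6   # height of the pyramid

# Total volume of the 4 unit spheres
spheres = 4 * (4 / 3) * math.pi * r ** 3

# Pyramid volume: (1/3) x base area x height,
# with an equilateral triangle as the base
base_area = (math.sqrt(3) / 4) * side ** 2
pyramid = base_area * height / 3

density = spheres / pyramid
print(density)  # ≈ 0.40
```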

**Comparison with real data**

[Source: Minimizing the object dimensions in circle and sphere packing problems]

We can see that this task has been attempted before using computational power – the table above shows the average density for a variety of 2D and 3D shapes. The pyramid here was found to have a density of 46% – so our result of 44% looks pretty close to what we should be able to achieve. We could tweak our measurements to see if we could improve this density.

So, a nice mixture of geometry, graphical software, and trial and error gives us a nice result. You could explore the densities for other 2D and 3D shapes and see how close you get to the results in the table.

**Martingale II and Currency Trading**

We can use computer coding to explore game strategies and also to help understand the underlying probability distribution functions. Let’s start with a simple game where we toss a coin 4 times, stake 1 counter each toss and always call heads. This would give us a binomial distribution with 4 trials and the probability of success fixed as 1/2.

**Tossing a coin 4 times [simple strategy]**

For example, the only way of losing 4 counters is a 4 coin streak of T,T,T,T. The probability of this happening is 1/16. We can see from this distribution that the most likely outcome is 0 (i.e no profit and no loss). If we work out the expected value E(X), by multiplying each profit/loss by its probability and summing the results, we get E(X) = 0. Therefore this is a fair game (we expect to neither make a profit nor a loss).
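We can check both the distribution and the expected value by enumerating all 16 equally likely sequences in Python (a quick sketch, not the original code):

```python
from itertools import product
from collections import Counter

# All 16 equally likely sequences of 4 tosses, staking 1 counter
# on heads each time
profits = Counter()
for seq in product("HT", repeat=4):
    profit = sum(1 if toss == "H" else -1 for toss in seq)
    profits[profit] += 1

print(dict(profits))  # {4: 1, 2: 4, 0: 6, -2: 4, -4: 1}

expected = sum(p * count / 16 for p, count in profits.items())
print(expected)  # 0.0
```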

**Tossing a coin 4 times [Martingale strategy]**

This is a more complicated strategy which goes as follows:

a) You stake 1 counter on heads.

b) If you lose, you stake 2 counters on heads.

c) If you lose again, you stake 4 counters on heads.

d) If you lose again, you stake 8 counters on heads.

If you win, your next stake always goes back to 1 counter.

**For example, for the sequence H,H,T,T:**

First you bet 1 counter on heads. You win 1 counter

Next you bet 1 counter on heads. You win 1 counter

Next you bet 1 counter on heads. You lose 1 counter

Next you bet 2 counters on heads. You lose 2 counters

[overall loss is 1 counter]

**For example, for the sequence T,T,T,H:**

First you bet 1 counter on heads. You lose 1 counter

Next you bet 2 counters on heads. You lose 2 counters

Next you bet 4 counters on heads. You lose 4 counters

Next you bet 8 counters on heads. You win 8 counters

[overall profit is 1 counter]

This leads to the following probabilities:

Once again we will have E(X) = 0, but a very different distribution to the simple 4 coin toss. We can see we have an 11/16 chance of making a profit after 4 tosses – but the small chance of catastrophic loss (15 counters) means that the overall expectation is still zero.
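Enumerating all 16 sequences again, this time with the doubling strategy, confirms both figures (again just a sketch):

```python
from itertools import product

def martingale_profit(seq):
    """Profit from one 4-toss sequence under the doubling strategy."""
    stake, profit = 1, 0
    for toss in seq:
        if toss == "H":
            profit += stake
            stake = 1       # after a win, go back to staking 1
        else:
            profit -= stake
            stake *= 2      # after a loss, double the stake
    return profit

outcomes = [martingale_profit(seq) for seq in product("HT", repeat=4)]
p_profit = sum(1 for p in outcomes if p > 0) / 16
expected = sum(outcomes) / 16
print(p_profit)  # 0.6875 (= 11/16)
print(expected)  # 0.0
```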

**Iterated Martingale:**

Here we can do a computer simulation. This is the scenario this time:

We start with 100 counters, we toss a coin for a maximum of 3 times. We then define a completed round as when we get to a shaded box. We then repeat this process through 999 rounds, and model what happens. Here I used a Python program to simulate a player using this strategy.
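A minimal version of such a simulation might look like this (the original program isn't shown, so details such as how a round is scored are my assumptions):

```python
import random

def simulate(rounds=999, start=100, multiplier=2, max_tosses=3):
    """Track one player's account value over a number of rounds.

    Each round: stake 1 counter, multiply the stake after each loss,
    and stop the round at the first win or after max_tosses tosses.
    """
    account = start
    history = [account]
    for _ in range(rounds):
        stake = 1
        for _ in range(max_tosses):
            if random.random() < 0.5:   # heads: win the stake, round over
                account += stake
                break
            account -= stake            # tails: lose the stake
            stake *= multiplier
        history.append(account)
    return history

random.seed(1)
history = simulate()
print(history[-1])  # final account value for this particular run
```

The history list can then be pasted into Desmos (or plotted with matplotlib) to produce graphs like those described here.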

We can see that we have periods of linear growth followed by steep falls – which is a very familiar pattern across many investment types. We can see that the initial starting 100 counters was built up to around 120 at the peak, but was closer to just 40 when we finished the simulation.

Let’s do another simulation to see what happens this time:

Here we can see that the 2nd player was actually performing significantly worse after around 600 rounds, but this time ended up with a finishing total of around 130 counters.

**Changing the multiplier**

We can also see what happens when rather than doubling stakes on losses we follow some other multiple. For example we might choose to multiply our stake by 5. This leads to much greater volatility as we can see below:

**Multiplier x5**

Here we have 2 very different outcomes for 2 players using the same model. Player 1 (in blue) may believe they have found a sure-fire method of making huge profits, but player 2 (green) went bankrupt after around 600 rounds.

**Multiplier x1.11**

Here we can see that if the multiplier is close to 1 we have much less volatility (as you would expect because your maximum losses per round are much smaller).

We can run the simulation across 5000 rounds – and here we can see that we have big winning and losing streaks, but that over the long run the account value oscillates around the starting value of 100 counters.

**Forex charts**

We can see similar graphs when we look at forex (currency exchange) charts. For example:

In this graph (from here) we plot the exchange rate between the US dollar and the Thai baht. We can see the same sort of graph movements – with runs of gains and losses leading to a similar jagged shape. This is not surprising, as forex trades can also be thought of in terms of 2 binary outcomes like tossing a coin, and indeed huge amounts of forex trading are done through computer programs, some of which do use the Martingale system as a basis.

**The effect of commission on the model**

So, to finish off we can modify our system slightly so that we try to replicate forex trading. We will follow the same model as before, but this time we have to pay a very small commission for every trade we make. This now gives us:

E(X) = -0.000175 (0.0001 counters commission per trade)

E(X) = -0.00035 (0.0002 counters commission per trade)

Because E(X) is very slightly negative, in the long run we would expect to lose money. With the 0.0002 counters commission we would expect to lose around 20 counters over 50,000 rounds. The simulation graph above was run with 0.0002 counters commission – and in this case it led to bankruptcy before 3000 rounds.
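We can check these figures with a short enumeration. Assuming a round ends at the first head or after 3 tosses (my reading of the scenario, since the flow diagram isn't shown here), the expected number of trades per round is 1.75, and each trade pays the commission:

```python
from itertools import product

def round_length(seq):
    """Number of tosses in a round: stop at the first head,
    or after a maximum of 3 tosses."""
    for i, toss in enumerate(seq, start=1):
        if toss == "H":
            return i
    return 3

# Average over the 8 equally likely 3-toss sequences
expected_tosses = sum(round_length(s) for s in product("HT", repeat=3)) / 8
print(expected_tosses)  # 1.75

# Each toss (trade) pays the commission, so per round:
for commission in (0.0001, 0.0002):
    print(-expected_tosses * commission)
```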

**Computer code**

The Python code above can be used to generate data which can then be copied into Desmos. The above code simulates 1 player playing 999 rounds, starting with 100 counters, with a multiplier of 5. If you know a little bit about coding you can try and play with this yourselves!

I’ve also just added a version of this code onto repl. You can run this code – and also generate the graph direct (click on the graph png after running). It creates some beautiful images like that shown above.

**Time dependent gravity and cosmology!**

In our universe we have a gravitational constant – i.e gravity is not dependent on time. If gravity changed with respect to time then the gravitational force exerted by the Sun on Earth would lessen (or increase) over time with all other factors remaining the same.

Interestingly, time-dependent gravity was first explored by Dirac, and some physicists have tried to incorporate time dependent gravity into cosmological models. As yet we have no proof that gravity is not constant, but let’s imagine a universe where it is dependent on time.

**Inversely time dependent gravity**

The standard models for cosmology use G, where G is the gravitational constant. This fixes the gravitational force as a constant. However if gravity is inversely proportional to time we could have a relationship such as:

Where a is a constant. Let’s look at a very simple model, where we have a piecewise function as below:

This would create the graph at the top of the page. This is one (very simplistic) way of explaining the Big Bang. In the first few moments after t = 0, gravity would be negative and thus repulsive [and close to infinitely strong], which could explain the initial incredible universal expansion before “regular” attractive gravity kicked in (after t = 1). The gravitational constant has only been measured to 4 significant figures:

G = 6.674 x 10^{-11}m^{3}kg^{-1}s^{-2}.

Therefore if there is a very small variation over time it is *possible* that we simply haven’t the accuracy to test this yet.

**Universal acceleration with a time dependent gravitational force**

Warning: This section is going to touch on some seriously complicated maths – not for the faint hearted! We’re going to explore whether having a gravitational force which decreases over time still allows us to have an accelerating expansion of the universe.

We can start with the following equation:

To work through an example:

This would show that when t = 1 the universe had an expansion scale factor of 2. Now, based on current data measured by astronomers we have evidence that the universe is both expanding and accelerating in its expansion. If the universal scale factor is accelerating in expansion that requires that we have:

**Modelling our universe**

We’re going to need 4 equations to model what happens when gravity is time dependent rather than just a constant.

**Equation 1**

This equation models a relationship between pressure and density in our model universe. We assume that our universe is homogenous (i.e the same) throughout.

**Equation 2**

This is one of the Friedmann equations governing the expansion of space. We will take c = 1 [i.e we choose units, such as light years and years, in which the speed of light is 1].

**Equation 3**

This is another one of the Friedmann equations governing the expansion of space. The original equation has P/(c squared) – but we simplify again by taking c = 1.

**Equation 4**

This is our time dependent version of gravity.

**Finding alpha**

We can separate variables to solve equation (3).

**Substitution**

We can use this result, along with the equations (1) and (4) to substitute into equation (2).

**Our result**

Now, remember that if the second differential of r is positive then the universal expansion rate is accelerating. If Lambda is negative then the second differential of r will be positive. However, all our constants G_0, a, B, t, r are greater than 0. Therefore in order for Lambda to be negative we need:

What this shows is that even in a universe where gravity is time dependent (and decreasing), we would still be able to have an accelerating universe like we see today. The only factor that determines whether the universal expansion is accelerating is the value of gamma, not our gravity function.

This means that a time dependent gravity function can still give us a result consistent with our experimental measurements of the universe.

**A specific case**

Solving the equation for the second differential of r is extremely difficult, so let’s look at a very simple case where we choose some constants to make life as easy as possible:

Substituting these into our equation (2) gives us:

We can then solve this to give:

So, finally we have arrived at our final equation. This gives us the universal expansion scale factor at time t, for a universe in which gravity follows the equation G(t) = 1/t.

For this universe we can then see that when t = 5 for example, we would have a universal expansion scale factor of 28.5.

So, there we go – very complicated maths, way beyond IB level, so don’t worry if you didn’t follow that. And that’s just a simplified introduction to some of the maths in cosmology! You can read more about time dependent gravity here (also not for the faint hearted!)


**The Tusi couple – A circle rolling inside a circle**

Numberphile have done a nice video where they discuss some beautiful examples of trigonometry and circular motion and where they present the result shown above: a circle rolling within a circle, with the individual points on the small circle showing linear motion along the diameters of the large circle. So let’s see what maths we need to create the image above.

**Projection of points**

We can start with the equation of a unit circle centred at the origin:

and we can then define a point on this circle parametrically by the coordinate:

Here *t* is the angle measured from the horizontal.

If we then want to see the projection of this point along the y-axis we can also plot:

and to see the projection of this point along the x-axis we can also plot:

Varying *t* from 0 to 2π then gives the animation above – where the black dot moves around the circle and there is a projection of its x and y coordinates on the axes.

**Projection along angled lines**

I can then add a line through the origin at angle *a* to the horizontal:

and this time I can project so that the line joining up the black point on the edge of the large circle intersects the dotted line in a right angle.

In order to find the parametric coordinate of this point projection I can use some right angled triangles as follows:

The angle from the horizontal to my point A is *t*. The angle from the horizontal to the slanted line is *a*. The length of my radius BA is 1. This gives me the length of BC.

But I have the identity:

Therefore this gives:

And using some more basic trigonometry gives the following diagram:

Therefore the parametric form of the projection of the point can be given as:

**Adding more lines**

I can add several more slanted lines through the origin. You can see that each dot on the line is the right angle projection between the line and the point on the circle. As we do this we can notice that the points on the lines look as though they form a circle. By noticing that the new smaller circle is half the size of the larger circle, and that the centre of the smaller circle is half-way between the origin and the point on the large circle, we get:
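We can check numerically that the projected point always lies on a circle of half the radius, centred halfway between the origin and the point on the large circle. Here I take the projection of the point (cos t, sin t) onto the line at angle a to be cos(t - a)(cos a, sin a), which is my reading of the triangles above:

```python
import math
import random

def projection(t, a):
    """Right-angle projection of the point (cos t, sin t) on the big
    circle onto the line through the origin at angle a."""
    d = math.cos(t - a)               # the length BC from the triangles
    return d * math.cos(a), d * math.sin(a)

random.seed(0)
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    a = random.uniform(0, 2 * math.pi)
    px, py = projection(t, a)
    # centre of the small circle: halfway to the point on the big circle
    cx, cy = math.cos(t) / 2, math.sin(t) / 2
    assert abs(math.hypot(px - cx, py - cy) - 0.5) < 1e-12
print("every projection lies on the half-size circle")
```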

We can then vary the position of the point on the large circle to create our final image:

This gives a connection between linear motion and circular motion, and creates the impression of a circle rolling inside another.

You can play around with this Desmos graph here. All you need to do is either drag the black point around the circle, or press play for the *t* slider.

**More ideas on projective geometry:**

Ferenc Beleznay has made this nice geogebra file here which shows a different way of drawing a connection between a moving point on a large circle and a circle half the size. Here we connect the red dot with the origin and draw the perpendicular from this line to the other edge of the small circle. The point of intersection of the two lines is always on the small circle.

**Further exploration**

There is a lot more you can explore – start by looking into the Tusi couple – which is what we have just drawn – and the more general case, the hypocycloid.


The **IB Maths Exploration Guide** and the **IB Maths Modelling and Statistics Exploration Guide** are suitable for all IB students.

They are both written by an IB teacher with an MSc. in Mathematics, 10 years’ experience teaching IB Standard and Higher Level, and who has worked as an IB examiner.

**Resource Number 1**

The **Exploration Guide** talks you through:

- An introduction to the essentials about the investigation,
- The new marking criteria,
- How to choose a topic,
- Examples of around 70 topics that could be investigated,
- Useful websites for use in the exploration,
- A student checklist for completing a good investigation,
- Common mistakes that students make and how to avoid them,
- General stats projects advice,
- A selection of some interesting exploration topics explored in more depth,
- Teacher advice for marking,
- Templates for draft submissions,
- Advice on how to use Geogebra, Desmos and Tracker in your exploration,
- Some examples of beautiful maths using Geogebra and Desmos.

Exploration Guide

A comprehensive 63 page pdf guide to help you get excellent marks on your maths investigation. [Will be emailed within the same day as ordered].

$5.50

**Resource Number 2**

The **Modelling and Statistics Guide** talks you through various techniques useful for statistical and modelling explorations. Topics included are:

- Linear regression
- Quadratic regression
- Cubic regression
- Exponential regression
- Linearisation using log scales
- Trigonometric regression
- Pearson’s Product investigations: Height and arm span
- Binomial investigations: ESP powers
- Poisson investigations: Customers in a shop
- 2 sample t tests: Reaction times
- Paired t tests: Reaction times
- Chi Squared: Efficiency of vaccines
- Bernoulli trials: Polling confidence intervals
- Spearman’s rank: Taste preference of cola
- Sampling techniques and experiment design.

Modelling and Statistics Guide

A 60 page pdf guide full of advice to help with modelling and statistics explorations. [Will be emailed within the same day as ordered].

$5.50

**Resource Number 3**

The Exploration Guide and the Modelling and Statistics Guide can be purchased together for a discount.

Exploration Guide AND the Modelling and Statistics Guide

Both guides included together for a discount. [Will be emailed within the same day as ordered].

$8.50

**The Martingale system**

The Martingale system was first used in French gambling halls in the 1700s and remains in use today in some trading strategies. I’ll look at some of the mathematical ideas behind this and why it has remained popular over several centuries despite having a long term expected return of zero.

**The scenario**

You go to a fair ground and play a simple heads-or-tails game. The probability of heads is 1/2 and tails is also 1/2. You place a stake of counters on heads. If you guess correctly you win that number of counters. If you lose, you double your stake of counters and then the coin is tossed again. Every time you lose you double up your stake of counters and stop when you finally win.

**Infinitely deep pockets model:**

You can see that in the example above we always have a 0.5 chance of getting heads on the first go, which gives a profit of 1 counter. But we also have a 0.5 chance of a profit of 1 counter as long as we keep doubling up our stake, and as long as we do indeed eventually throw heads. In the example here you can see that the string of losing throws doesn’t matter [when we win is arbitrary – we could win on the 2nd, 3rd, 4th etc throw]. By doubling up, when you do finally win you wipe out your cumulative losses and end up with a 1 counter profit.
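We can see the fixed 1 counter profit directly: if the first head comes on throw n, the stake on that throw is 2^{n-1}, while the accumulated losses are 1 + 2 + ... + 2^{n-2} = 2^{n-1} - 1. The profit is therefore 2^{n-1} - (2^{n-1} - 1) = 1 counter, whichever throw the win arrives on.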

This leads to something of a paradoxical situation: despite only having a 1/2 chance of guessing heads, we end up with an expected value of 1 counter profit for every 1 counter that we *initially* stake in this system.

So what’s happening? This will always work but it requires that you have access to infinitely deep pockets (to keep your infinite number of counters) and also the assumption that if you keep throwing long enough you will indeed finally get a head (i.e you don’t throw an infinite number of tails!)

**Finite pockets model:**

Real life intrudes on the infinite pockets model – because in reality there will be a limit to how many counters you have which means you will need to bail out after a given number of tosses. Even if the probability of this string of tails is very small, the losses if it does occur will be catastrophic – and so the expected value for this system is still 0.

**Finite pockets model capped at 4 tosses:**

In the example above we only have a 1/16 chance of losing – but when we do we lose 15 counters. This gives an expected value of:
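With a 15/16 chance of ending up with a 1 counter profit and a 1/16 chance of losing 15 counters, this is:

E(X) = (15/16)(1) + (1/16)(-15) = 0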

**Finite pockets model capped at n tosses:**

If we start with a 1 counter stake then we can represent the pattern we can see above for E(X) as follows:

Here we use the fact that the losses from n throws are the sum of the first (n-1) powers of 2. We can then notice that both of these are geometric series, and use the relevant formula to give:

Therefore the expected value for the finite pockets model is indeed always still 0.
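Written out with a 1 counter initial stake, the working is:

E(X) = [(1/2) + (1/2)^{2} + ... + (1/2)^{n}](1) - (1/2)^{n}(2^{n} - 1)

The first bracket is a geometric series summing to 1 - (1/2)^{n}, and the second term expands to 1 - (1/2)^{n}, so the two cancel and E(X) = 0 for any n.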

**So why does this system remain popular?**

So, given that the real world version of this has an expected value of 0, why has it retained popularity over the past few centuries? Well, the system will on average return constant linear growth – up until a catastrophic loss. Let’s say you have 100,000 counters and stake 1 counter initially. You can afford a total of 16 consecutive losses. The probability of this is only:

but when you do lose, you’ll lose a total of:
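That is, the probability of 16 consecutive losses is (1/2)^{16} = 1/65536 ≈ 0.0015%, while the total loss when it happens is 1 + 2 + 4 + ... + 2^{15} = 2^{16} - 1 = 65535 counters.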

So, the system creates a model that mimics linear growth, but really the small risk of catastrophic loss means that the system still has E(X) = 0. In the short term you would expect to see the following very simple linear relationship for profit:

With 100,000 counters and a base trading stake of 1 counter, if you made 1000 initial 1 counter trades a day you would expect a return of 1000 counters a day (i.e 1% return on your total counters per day). However the longer you continue this strategy the more likely you are to see a run of 16 tails – and see all your counters wiped out.

**Computer model**

I wrote a short Python code to give an idea as to what is happening. Here I started 9 people off with 1000 counters each. They have a loss limit of 10 consecutive losses. They made starting stakes of 1 counter each time, and then I recorded how long it took before they hit a run of 10 losses in a row.

For anyone interested in the code here it is:
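The original listing isn't reproduced here, but a minimal sketch of the same experiment might look like this (details such as the random seed are my choices):

```python
import random

def trades_until_ruin(loss_limit=10):
    """Count completed 1-counter trades before a run of
    loss_limit consecutive losses ends the experiment."""
    trades = 0
    while True:
        losses = 0
        while random.random() < 0.5:     # tails: another consecutive loss
            losses += 1
            if losses == loss_limit:
                return trades
        trades += 1                      # heads: trade completed, +1 counter

random.seed(0)
for _ in range(9):
    n = trades_until_ruin()
    # each completed trade gained 1 counter, then 1023 were lost at once
    print(n, 1000 + n - (2 ** 10 - 1))
```

Each printed line is a (trades, new account value) pair in the same format as the results below.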

The program returned the following results. The first number is the number of starting trades until they tossed 10 tails in a row. The second number was their new account value (given that they had started with 1000 counters, every previous trade had increased their account by 1 counter and that they had then just lost 1023 counters).

1338, 1315

1159, 1136

243, 220

1676, 1653

432, 409

1023, 1000

976, 953

990, 967

60, 37

This was then plotted on Desmos. The red line is the trajectory their accounts were following before their loss. The horizontal dotted line is at y = 1000 which represents the initial account value. As you can see 6 people are now on or below their initial starting account value. You can also see that all these new account values are themselves on a line parallel to the red line but translated vertically down.

From this very simple simulation, we can see that on average a person was left with 884 counters after hitting 10 tails – i.e below the initial starting account value. Running this again with 99 players gave an average of 869.

**999 players**

I ran this again with 999 players – counting what their account value would be after their first loss. All players started with 1000 counters. The results were:

31 players bankrupt: 3%

385 players left with less than half their account value (less than 500): 39%

600 players with less than their original account value (less than 1000): 60%

51 players at least tripled their account (more than 3000): 5%

The top player ended up with 6903 counters after their first loss.

The average account this time was above starting value (1044.68). You can see clearly that the median is below 1000 – but that a small number of very lucky players at the top end skewed the mean above 1000.

**Second iteration**

I then ran the simulation again – with players continuing with their current stake. This would have been slightly off because my model allowed players who were bankrupt from the first round to carry on [in effect being loaned 1 counter to start again]. Nevertheless it now gave:

264 players bankrupt: 26%

453 players left with less than half their account value (less than 500): 45%

573 players with less than their original account value (less than 1000): 57%

95 players at least tripled their account (more than 3000): 10%

The top player ended up with 9583 counters after their second loss.

We can see a dramatic rise in bankruptcies – now over a quarter of all players. This would suggest the long term trend is towards a majority of players being bankrupted, though the lucky few at the top end may be able to escape this fate.

This carries on our exploration of projectile motion – this time we will explore what happens if gravity is not fixed, but is instead a function of time. (This idea was suggested and worked through by fellow IB teachers Daniel Hwang and Ferenc Beleznay). In our universe we have a gravitational constant – i.e gravity is not dependent on time. If gravity changed with respect to time then the gravitational force exerted by the Sun on Earth would lessen (or increase) over time with all other factors remaining the same.

Interestingly, time-dependent gravity was first explored by Dirac, and some physicists have tried to incorporate time dependent gravity into cosmological models. As yet we have no proof that gravity is not constant, but let’s imagine a universe where it is dependent on time.

**Projectile motion when gravity is time dependent**

We can start off with the standard parametric equations for projectile motion. Here v is the initial velocity, theta is the angle of launch, t is the time parameter and g is the gravitational constant (9.81 on Earth). We can see that the value for the vertical acceleration is the negative of the gravitational constant. So the question to explore is, what if the gravitational constant was time dependent? Another way to think about this is that gravity varies with respect to time.

**Linear relationship**

If we have the simplest time dependent relationship we can say that:

where **a is a constant**. If a is greater than 0 then gravity linearly increases as time increases; if a is less than 0 then gravity linearly decreases as time increases. For matters of slight convenience I’ll define gravity (or the vertical acceleration) as -3at. The following can then be arrived at by integration:

This will produce the following graph when we fix v = 10, a = 2 and vary theta:
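We can sanity-check the maximum point numerically. Here I assume the integration above gives y = vt sin(theta) - 0.5at^{3} (from the vertical acceleration -3at), so the maximum occurs when v sin(theta) = 1.5at^{2}:

```python
import math

# Assumed equations of motion from integrating the acceleration -3at
# twice (launch from the origin):
#   x(t) = v t cos(theta)
#   y(t) = v t sin(theta) - 0.5 a t^3

def y(t, v, theta, a):
    return v * t * math.sin(theta) - 0.5 * a * t ** 3

v, a = 10, 2
theta = math.radians(60)

# Setting dy/dt = 0 gives v sin(theta) = 1.5 a t^2
t_max = math.sqrt(2 * v * math.sin(theta) / (3 * a))

# Check numerically that this really is the maximum height
h = 1e-6
assert y(t_max, v, theta, a) >= y(t_max - h, v, theta, a)
assert y(t_max, v, theta, a) >= y(t_max + h, v, theta, a)
print(t_max, y(t_max, v, theta, a))
```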

Now we can use the same method as in our Projectile Motion Investigation II to explore whether these maximum points lie in a curve. (You might wish to read that post first for a step by step approach to the method).

therefore we can substitute back into our original parametric equations for x and y to get:

We can plot this with theta as a parameter. If we fix v = 4 and a =2 we get the following graph:

Compare this to the graph from Projectile Motion Investigation II, where we did this with gravity constant (and with v fixed as 10):

The Projectile Motion Investigation II formed a perfect ellipse, but this time it’s more of an egg-shaped elliptical curve – with a flat base. But it’s interesting to see that even with time dependent gravity we still have a similar relationship to before!

**Inverse relationship**

Let’s also look at what would happen if gravity was inversely related to time. (This is what has been explored by some physicists).

In this case we get the following results when we launch projectiles (notice that here we had to use the integration by parts trick to integrate ln(t)). As the velocity function is undefined when t = 0, we define v and theta in this case as the velocity and launch angle when t = 1.

Now we use the same trick as earlier to find when the gradient is 0:

Substituting this back into the parametric equations gives:

The ratio v/a will therefore have the greatest effect on the maximum points.
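These results can be checked numerically with a Python sketch. It assumes the vertical acceleration is a/t with a negative, which (using the integration by parts mentioned above, and taking v and theta at t = 1) gives v_{y} = vsin(theta) + aln(t) and y = vtsin(theta) + at(ln(t) - 1):

```python
import math

def y_inverse(v, theta, a, t):
    # Height assuming vertical acceleration a/t (a < 0), integrated
    # using the antiderivative t*ln(t) - t of ln(t):
    return v * t * math.sin(theta) + a * t * (math.log(t) - 1)

v, a, theta = 40, -2000, math.radians(50)

# vy = v*sin(theta) + a*ln(t) = 0 gives t* = exp(-v*sin(theta)/a),
# and substituting back, the maximum height simplifies to y* = -a*t*
t_star = math.exp(-v * math.sin(theta) / a)
y_star = -a * t_star

assert abs(y_inverse(v, theta, a, t_star) - y_star) < 1e-6
# Confirm t* really is a maximum
for t in (t_star - 1e-3, t_star + 1e-3):
    assert y_inverse(v, theta, a, t) < y_star
```

With v/a close to zero, t* stays close to 1 for every launch angle, which is why the maximum points then trace out something close to a circle of radius v.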

**v/a ratio negative and close to zero:**

v = 40, a = -2000, v/a = -0.02

This gives us close to a circle, radius v, centred at (0, -a).

v = 1, a = -10, v/a = -0.1

Here we can also see that the boundary condition for the maximum horizontal distance thrown is given by x = v(e).

**v/a ratio negative and large:**

v = 40, a = -2, v/a = -20.

We can see that we get an egg shape back – but this time with a flatter bulge at the top and the point at the bottom. Also notice how quickly the scale of the shape has increased.

**v/a ratio n/a (i.e. a = 0)**

Here there is no gravitational force, and so projectiles travel in linear motion – with no maximum.

**Envelope of projectiles for the inverse relationship**

This is just included for completeness, don’t worry if you don’t follow the maths behind this bit!

Therefore when we plot the parametric equations for x and y in terms of theta we get the envelope of projectile motion when we are in a universe where gravity varies inversely to time. The following graph is generated when we take v = 300 and a = -10. The red line is the envelope of projectiles.

**A generalized power relationship**

Lastly, let’s look at what happens when we have a general power relationship, i.e. gravity is related to at^{n}. Again for matters of slight convenience I’ll look at the similar relationship -0.5(n+1)(n+2)at^{n}.

This gives (following the same method as above):

As we vary n we will find the plot of the maximum points. Let’s take the velocity as 4 and a as 2. Then we get the following:

When n = 0:

When n = 1:

When n = 2:

When n = 10:

We can see the general elliptical shape remains at the top, but we have a flattening at the bottom of the curve.
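These maximum points can be reproduced with a Python sketch, assuming the acceleration -0.5(n+1)(n+2)at^{n} integrates twice to y = vtsin(theta) - 0.5at^{n+2}:

```python
import math

def t_max(v, theta, a, n):
    # vy = v*sin(theta) - 0.5*(n+2)*a*t^(n+1) = 0 at the maximum point
    return (2 * v * math.sin(theta) / ((n + 2) * a)) ** (1 / (n + 1))

def max_point(v, theta, a, n):
    t = t_max(v, theta, a, n)
    x = v * t * math.cos(theta)
    y = v * t * math.sin(theta) - 0.5 * a * t**(n + 2)
    return x, y

v, a, theta = 4, 2, math.radians(70)
for n in (0, 1, 2, 10):
    x, y = max_point(v, theta, a, n)
    print(n, round(x, 3), round(y, 3))
```

Note that n = 0 recovers constant gravity (the acceleration reduces to -a), which is why that plot matches the elliptical shape from before.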

**When n approaches infinity:**

We get this beautiful result when we let n tend towards infinity – now we will have all the maximum points bounded on a circle (with the radius the same as the value chosen as the initial velocity). In the graph above we have a radius of 4 as the initial velocity is 4. Notice too we have projectiles traveling in straight lines – and then seemingly “bouncing” off the boundary!

If we want to understand this, note that there is only going to be a very short window (t less than 1) when the particle can travel upwards – when t is between 0 and 1 the effect of gravity is effectively 0, and so the particle would travel in a straight line (i.e. if the initial velocity is 5 m/s it will travel 5 meters in this first second). Then as soon as t = 1, the gravity becomes crushingly heavy and the particle falls effectively vertically down.
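This limiting circle can be confirmed numerically. Using the assumed form y = vtsin(theta) - 0.5at^{n+2} from the convenience definition earlier, the maximum point works out as (vt*cos(theta), vt*sin(theta)(n+1)/(n+2)) with t* = (2vsin(theta)/((n+2)a))^{1/(n+1)}, and t* tends to 1 as n grows:

```python
import math

v, a, n = 4, 2, 10_000  # a very large n approximates the limit
for theta in [math.radians(d) for d in (10, 30, 45, 60, 80)]:
    # t* where vy = 0, assuming y = v*t*sin(theta) - 0.5*a*t^(n+2)
    t_star = (2 * v * math.sin(theta) / ((n + 2) * a)) ** (1 / (n + 1))
    x_star = v * t_star * math.cos(theta)
    y_star = v * t_star * math.sin(theta) * (n + 1) / (n + 2)
    # Every maximum point lies (approximately) on the circle of radius v
    assert abs(math.hypot(x_star, y_star) - v) < 0.01
```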

**Projectile Motion III: Varying gravity**

We can also do some interesting things if we vary the gravitational pull when we look at projectile motion. The following graphs are all plotted in parametric form.

Here t is the parameter, v is the initial velocity which we will keep constant, theta is the angle of launch which we will vary, and g is the gravitational constant which on Earth we will take as 9.81 m/s^{2}.

**Earth**

Say we take a projectile and launch it with a velocity of 10 m/s. When we vary the angle of launch we get the following graphs:

On the y axis we have the vertical height, and on the x axis the horizontal distance. Therefore we can see that the maximum height that we achieve is around 5m and the maximum horizontal distance is around 10m.
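Those two figures follow directly from the standard equations: the maximum height is v^{2}sin^{2}(theta)/2g (greatest at theta = 90 degrees) and the horizontal range is v^{2}sin(2theta)/g (greatest at theta = 45 degrees). A quick check:

```python
v, g = 10, 9.81
max_height = v**2 / (2 * g)  # vertical launch, theta = 90 degrees
max_range = v**2 / g         # theta = 45 degrees, where sin(2*theta) = 1
print(round(max_height, 2), round(max_range, 2))  # 5.1 10.19
```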

**Other planets and astronomical objects**

We have the following values for the gravitational pull of various objects:

Enceladus (Moon of Saturn): 0.111 m/s^{2}, The Moon: 1.62 m/s^{2}, Jupiter: 24.8 m/s^{2}, The Sun: 274 m/s^{2}, Black hole surface gravity: 7×10^{12} m/s^{2}.

So for each one let’s see what would happen if we launched a projectile with a velocity of 10 m/s. Note that the mass of the projectile is not relevant (though it would require more force to achieve the required velocity).

**Enceladus:**

**The Moon:**

**Jupiter:**

**The Sun:**

**Black hole surface gravity:**

This causes some issues graphically! I’ll use the equations derived in the last post to find the coordinates of the maximum point for a given launch angle theta:

Here we have v = 10 and g = 7×10^{12}m/s^{2}. For example if we take our launch angle (theta) as 45 degrees this will give the coordinates of the maximum point at:

(7.14×10^{-12}, 3.57×10^{-12}).

**Summary:**

We can see how dramatically different life would be on each surface! Whilst on Earth you may be able to throw to a height of around 5m with a launch velocity of 10 m/s, on Enceladus you would achieve an incredible 450m. If you were on the surface of the Sun then probably the least of your worries would be how high to throw an object; nevertheless you’d be struggling to throw it 20cm high. And as for the gravity at the surface of a black hole, you wouldn’t get anywhere close to throwing it a nanometer high (1 billionth of a meter).
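These summary figures all come from the maximum height formula v^{2}/2g; a quick Python check across the surfaces listed above:

```python
# Maximum throw height v^2/(2g) for a 10 m/s launch on each surface
surface_gravity = {
    "Enceladus": 0.111,
    "Moon": 1.62,
    "Earth": 9.81,
    "Jupiter": 24.8,
    "Sun": 274,
    "Black hole surface": 7e12,
}
v = 10
for name, g in surface_gravity.items():
    print(f"{name}: {v**2 / (2 * g):.3g} m")
```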


**Projectile Motion Investigation II**

Another example for investigating projectile motion has been provided by fellow IB teacher Ferenc Beleznay. Here we fix the velocity, vary the angle, and then plot the maximum points of the parabolas. He has created a Geogebra app to show this (shown above). The locus of these maximum points then forms an ellipse.

We can see that the maximum points of the projectiles all go through the dotted elliptical line. So let’s see if we can derive this equation.

Let’s start with the equations for projectile motion, usually given in parametric form:

Here v is the initial velocity which we will keep constant, theta is the angle of launch which we will vary, and g is the gravitational constant which we will take as 9.81.

We can plot these curves parametrically, and for each given value of theta (the angle of launch) we will create a projectile motion graph. If we plot lots of these graphs for different thetas together we get something like this:

We now want to see if the maximum points are in any sort of pattern. In order to find the maximum point we want to find when the gradient dy/dx is 0. It’s going to be easier to keep things in parametric form, and use partial differentiation. We have:

Therefore we find the partial derivatives of both x and y with respect to t. (This simply means we can treat theta as a constant).

We can then say that:

We then find when this has a gradient of 0:

We can then substitute this value of t back into the original parametric equations for x:

and also for y:

We now have the parametric equations in terms of theta for the locus of points of the maximum points. For example, with g= 9.81 and v =1 we have the following parametric equations:
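Assuming the standard parametric equations x = vtcos(theta), y = vtsin(theta) - 0.5gt^{2}, the maximum points above are x = v^{2}sin(theta)cos(theta)/g and y = v^{2}sin^{2}(theta)/2g. The double angle formulae rewrite these as x = Asin(2theta) and y = B(1 - cos(2theta)) with A = v^{2}/2g and B = v^{2}/4g – an ellipse centred at (0, B) with semi-axes A and B. A quick numerical check:

```python
import math

v, g = 1, 9.81
A = v**2 / (2 * g)  # horizontal semi-axis
B = v**2 / (4 * g)  # vertical semi-axis (and height of the centre)
for theta in [math.radians(d) for d in range(5, 180, 10)]:
    x = v**2 * math.sin(theta) * math.cos(theta) / g
    y = v**2 * math.sin(theta)**2 / (2 * g)
    # (x/A)^2 + ((y-B)/B)^2 = sin^2(2*theta) + cos^2(2*theta) = 1
    assert abs((x / A)**2 + ((y - B) / B)**2 - 1) < 1e-9
```

This also answers the Cartesian-form challenge at the end of the post: the ellipse is (x/A)^{2} + ((y-B)/B)^{2} = 1.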

This generates an ellipse (dotted line), which shows the maximum points generated by the parametric equations below (as we vary theta):

And here is the graph:

We can vary the velocity to create a new ellipse. For example the ellipse generated when v =4 creates the following graph:

So, there we go, we have shown that different ellipses will be created by different velocities. If you feel like a challenge, see if you can algebraically manipulate the parametric equations for the ellipse into the Cartesian form!
