
**3D Printing with Desmos: Stewie Griffin**

Using Desmos or Geogebra to design a picture or pattern is quite a nice exploration topic – but here’s an idea to make your investigation stand out from the crowd – how about converting your image to a 3D printed design?

**Step 1**

Create an image on Desmos or Geogebra. Remove the axes and grid pattern. This image is a pre-drawn image already available on Desmos here.

**Step 2**

Take a screen capture image of your picture (jpeg, gif, png). We need to convert this to a SVG file. You can convert these for free at sites like picsvg.

**Step 3**

Lastly we need to use a 3D editing site. You can sign up to a site like Tinkercad for free.

**Step 4**

Making our 3D model. We import our SVG file and we get the image above. We can then resize this to whatever dimensions we wish – and also add 3D depth.

Lastly I would then save this file and send it to a 3D printer. You can see the finished file below:

So, if we printed this we’d get something like this:

**3D printing the Eiffel Tower**

Let’s use another Desmos artwork. The Eiffel Tower above, drawn by Jerry Yang from the USA, was a finalist in Desmos’s annual art competition.

This is then converted to the SVG file above.

And this is the result on Tinkercad when I add some depth and change the colour scheme. Let’s see what that would look like printed:

Pretty good- we’ve created a cheap tourist souvenir in about 5 minutes!

**Mathematical art**

I thought I’d have a go at making my own mathematical art. I started with using some polar coordinates to create this nice pattern:

Which then creates the following 3D shape:

This topic has a lot of scope for exploration and links with art, design technology and engineering. Thanks to our ever-resourceful ICT whizz at school, Jon, for assistance, and also thanks for this excellent method, which was posted by Ryan on Thingiverse. You can also explore huge numbers of ready-made 3D templates on the site.

**Time dependent gravity and cosmology!**

In our universe we have a gravitational constant – i.e gravity is not dependent on time. If gravity changed with respect to time then the gravitational force exerted by the Sun on Earth would lessen (or increase) over time with all other factors remaining the same.

Interestingly, time-dependent gravity was first explored by Dirac, and some physicists have tried to incorporate time-dependent gravity into cosmological models. As yet we have no proof that gravity is not constant, but let’s imagine a universe where it is dependent on time.

**Inversely time dependent gravity**

The standard models for cosmology use G, where G is the gravitational constant. This fixes the gravitational force as a constant. However if gravity is inversely proportional to time we could have a relationship such as:
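For instance (a hedged reconstruction – the original formula is not shown here):

```latex
G(t) = \frac{a}{t}
```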

Where a is a constant. Let’s look at a very simple model, where we have a piecewise function as below:

This would create the graph at the top of the page. This is one (very simplistic) way of explaining the Big Bang. In the first few moments after t = 0, gravity would be negative and thus repulsive [and close to infinitely strong], which could explain the initial incredible universal expansion before “regular” attractive gravity kicked in (after t = 1). The gravitational constant has only been measured to 4 significant figures:

G = 6.674 x 10^{-11}m^{3}kg^{-1}s^{-2}.

Therefore if there is a very small variation over time it is *possible* that we simply haven’t the accuracy to test this yet.

**Universal acceleration with a time dependent gravitational force**

Warning: this section is going to touch on some seriously complicated maths – not for the faint-hearted! We’re going to explore whether having a gravitational force which decreases over time still allows us to have an accelerating expansion of the universe.

We can start with the following equation:

To work through an example:

This would show that when t = 1 the universe had an expansion scale factor of 2. Now, based on current data measured by astronomers we have evidence that the universe is both expanding and accelerating in its expansion. If the universal scale factor is accelerating in expansion that requires that we have:

**Modelling our universe**

We’re going to need 4 equations to model what happens when gravity is time dependent rather than just a constant.

**Equation 1**

This equation models a relationship between pressure and density in our model universe. We assume that our universe is homogeneous (i.e. the same throughout).

**Equation 2**

This is one of the Friedmann equations governing the expansion of space. We will take c = 1 [i.e. we choose units in which the speed of light is 1].

**Equation 3**

This is another one of the Friedmann equations governing the expansion of space. The original equation has P/(c squared) – but we simplify again by taking c = 1.

**Equation 4**

This is our time dependent version of gravity.

**Finding alpha**

We can separate variables to solve equation (3).

**Substitution**

We can use this result, along with the equations (1) and (4) to substitute into equation (2).

**Our result**

Now, remember that if the second differential of r is positive then the universal expansion rate is accelerating. If Lambda is negative then we will have the second differential of r positive. However, all our constants G_0, a, B, t, r are greater than 0. Therefore in order for Lambda to be negative we need:

What this shows is that even in a universe where gravity is time dependent (and decreasing), we would still be able to have an accelerating universe like we see today. The only factor that determines whether the universal expansion is accelerating is the value of gamma, not our gravity function.

This means that a time dependent gravity function can still give us a result consistent with our experimental measurements of the universe.

**A specific case**

Solving the equation for the second differential of r is extremely difficult, so let’s look at a very simple case where we choose some constants to make life as easy as possible:

Substituting these into our equation (2) gives us:

We can then solve this to give:

So, we have arrived at our final equation. This would give us the universal expansion scale factor at time t, for a universe in which gravity follows the equation G(t) = 1/t.

For this universe we can then see that when t = 5 for example, we would have a universal expansion scale factor of 28.5.

So, there we go – very complicated maths, way beyond IB level, so don’t worry if you didn’t follow that. And that’s just a simplified introduction to some of the maths in cosmology! You can read more about time-dependent gravity here (also not for the faint-hearted!)

**The Martingale system**

The Martingale system was first used in French gambling halls in the 1700s and remains in use today in some trading strategies. I’ll look at some of the mathematical ideas behind this and why it has remained popular over several centuries despite having a long term expected return of zero.

**The scenario**

You go to a fair ground and play a simple heads-or-tails game. The probability of heads is 1/2 and tails is also 1/2. You place a stake of counters on heads. If you guess correctly you win that number of counters. If you lose, you double your stake of counters and then the coin is tossed again. Every time you lose you double up your stake of counters and stop when you finally win.
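To make the doubling rule concrete, here is a minimal Python sketch (the function name and the 'H'/'T' toss encoding are my own):

```python
def martingale_profit(tosses):
    """Net profit (in counters) from one Martingale round, given a toss
    sequence: 'H' wins the current stake, 'T' loses it and we double up."""
    stake, lost = 1, 0
    for toss in tosses:
        if toss == 'H':
            return stake - lost   # the win wipes out all earlier losses
        lost += stake
        stake *= 2                # double up after every loss
    return -lost                  # round ended before a head appeared

# however long the losing streak, the first head leaves exactly +1:
print(martingale_profit('H'), martingale_profit('TTTH'), martingale_profit('TTTTTTH'))  # 1 1 1
```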

**Infinitely deep pockets model:**

You can see that in the example above we always have a 0.5 chance of getting heads on the first go, which gives a profit of 1 counter. But we also have a 0.5 chance of a profit of 1 counter as long as we keep doubling up our stake, and as long as we do indeed eventually throw heads. In the example here you can see that the string of losing throws doesn’t matter [when we win is arbitrary – we could win on the 2nd, 3rd, 4th etc throw]. By doubling up, when you do finally win you wipe out your cumulative losses and end up with a 1 counter profit.

This leads to something of a paradoxical situation: despite only having a 1/2 chance of guessing heads, we end up with an expected value of 1 counter profit for every 1 counter that we *initially* stake in this system.

So what’s happening? This will always work but it requires that you have access to infinitely deep pockets (to keep your infinite number of counters) and also the assumption that if you keep throwing long enough you will indeed finally get a head (i.e you don’t throw an infinite number of tails!)

**Finite pockets model:**

Real life intrudes on the infinite pockets model – because in reality there will be a limit to how many counters you have which means you will need to bail out after a given number of tosses. Even if the probability of this string of tails is very small, the losses if it does occur will be catastrophic – and so the expected value for this system is still 0.

**Finite pockets model capped at 4 tosses:**

In the example above we only have a 1/16 chance of losing – but when we do we lose 15 counters. This gives an expected value of:
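The expected value can be checked directly with exact fractions; a quick sketch:

```python
from fractions import Fraction

p_lose = Fraction(1, 2) ** 4   # four tails in a row: 1/16
loss = 2**4 - 1                # stakes 1 + 2 + 4 + 8 = 15 counters lost
p_win = 1 - p_lose             # 15/16 chance of a 1 counter profit

E = p_win * 1 + p_lose * (-loss)
print(E)  # 0
```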

**Finite pockets model capped at n tosses:**

If we start with a 1 counter stake then we can represent the pattern we can see above for E(X) as follows:

Here we use the fact that the losses from n throws are the sum of the first (n-1) powers of 2. We can then notice that both of these are geometric series, and use the relevant formula to give:

Therefore the expected value for the finite pockets model is indeed always still 0.
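The same cancellation happens for any cap n; a short check of the algebra:

```python
from fractions import Fraction

def expected_value(n):
    """E(X) for the Martingale capped at n tosses with a 1 counter base stake."""
    p_lose = Fraction(1, 2) ** n   # probability of n tails in a row
    loss = 2**n - 1                # geometric series 1 + 2 + ... + 2^(n-1)
    return (1 - p_lose) * 1 + p_lose * (-loss)

print(all(expected_value(n) == 0 for n in range(1, 21)))  # True
```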

**So why does this system remain popular?**

So, given that the real world version of this has an expected value of 0, why has it retained popularity over the past few centuries? Well, the system will on average return constant linear growth – up until a catastrophic loss. Let’s say you have 100,000 counters and stake 1 counter initially. You can afford a total of 16 consecutive losses. The probability of this is only:

but when you do lose, you’ll lose a total of:
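As a quick check of the two figures above:

```python
p = 0.5 ** 16        # probability of 16 tails in a row
loss = 2**16 - 1     # total counters lost: 1 + 2 + ... + 2^15

print(p)     # 1.52587890625e-05, i.e. about 0.0015%
print(loss)  # 65535
```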

So, the system creates a model that mimics linear growth, but really the small risk of catastrophic loss means that the system still has E(X) = 0. In the short term you would expect to see the following very simple linear relationship for profit:

With 100,000 counters and a base trading stake of 1 counter, if you made 1000 initial 1 counter trades a day you would expect a return of 1000 counters a day (i.e 1% return on your total counters per day). However the longer you continue this strategy the more likely you are to see a run of 16 tails – and see all your counters wiped out.

**Computer model**

I wrote a short Python code to give an idea as to what is happening. Here I started 9 people off with 1000 counters each. They have a loss limit of 10 consecutive losses. They made starting stakes of 1 counter each time, and then I recorded how long before they made a loss of 10 tosses in a row.

For anyone interested in the code here it is:
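The original listing is shown as an image; the following is my own sketch of the experiment described (random runs will give different numbers from the results below):

```python
import random

def trades_until_wipeout(loss_limit=10):
    """Count initial 1 counter trades until the first run of
    `loss_limit` consecutive losing tosses."""
    trades = 0
    while True:
        trades += 1
        # a whole Martingale round only fails if every doubled-up toss loses
        if all(random.random() < 0.5 for _ in range(loss_limit)):
            return trades

random.seed(1)
for player in range(9):
    n = trades_until_wipeout()
    # account value: 1000 to start, plus the counters banked, minus the 1023 just lost
    print(n, ",", 1000 + n - 1023)
```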

The program returned the following results. The first number is the number of starting trades until they tossed 10 tails in a row. The second number was their new account value (given that they had started with 1000 counters, every previous trade had increased their account by 1 counter and that they had then just lost 1023 counters).

1338, 1315

1159, 1136

243, 220

1676, 1653

432, 409

1023, 1000

976, 953

990, 967

60, 37

This was then plotted on Desmos. The red line is the trajectory their accounts were following before their loss. The horizontal dotted line is at y = 1000 which represents the initial account value. As you can see 6 people are now on or below their initial starting account value. You can also see that all these new account values are themselves on a line parallel to the red line but translated vertically down.

From this very simple simulation, we can see that on average a person was left with 884 counters after hitting 10 tails – i.e. below their initial starting account value. Running this again with 99 players gave an average of 869.

**999 players**

I ran this again with 999 players – counting what their account value would be after their first loss. All players started with 1000 counters. The results were:

31 players bankrupt: 3%

385 players left with less than half their account value (less than 500): 39%

600 players with less than their original account value (less than 1000): 60%

51 players at least tripled their account (more than 3000): 5%

The top player ended up with 6903 counters after their first loss.

The average account this time was above starting value (1044.68). You can see clearly that the median is below 1000 – but that a small number of very lucky players at the top end skewed the mean above 1000.

**Second iteration**

I then ran the simulation again – with players continuing with their current stake. This would have been slightly off because my model allowed players who were bankrupt from the first round to carry on [in effect being loaned 1 counter to start again]. Nevertheless it now gave:

264 players bankrupt: 26%

453 players left with less than half their account value (less than 500): 45%

573 players with less than their original account value (less than 1000): 57%

95 players at least tripled their account (more than 3000): 10%

The top player ended up with 9583 counters after their second loss.

We can see a dramatic rise in bankruptcies – now over a quarter of all players. This would suggest the long term trend is towards a majority of players being bankrupted, though the lucky few at the top end may be able to escape this fate.

This carries on our exploration of projectile motion – this time we will explore what happens if gravity is not fixed, but is instead a function of time. (This idea was suggested and worked through by fellow IB teachers Daniel Hwang and Ferenc Beleznay.) In our universe we have a gravitational constant – i.e. gravity is not dependent on time. If gravity changed with respect to time then the gravitational force exerted by the Sun on Earth would lessen (or increase) over time with all other factors remaining the same.

Interestingly, time-dependent gravity was first explored by Dirac, and some physicists have tried to incorporate time-dependent gravity into cosmological models. As yet we have no proof that gravity is not constant, but let’s imagine a universe where it is dependent on time.

**Projectile motion when gravity is time dependent**

We can start off with the standard parametric equations for projectile motion. Here v is the initial velocity, theta is the angle of launch, t is the time parameter and g is the gravitational constant (9.81 on Earth). We can see that the value for the vertical acceleration is the negative of the gravitational constant. So the question to explore is, what if the gravitational constant was time dependent? Another way to think about this is that gravity varies with respect to time.
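The standard parametric equations referred to above are:

```latex
x = vt\cos\theta, \qquad y = vt\sin\theta - \tfrac{1}{2}gt^{2}
```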

**Linear relationship**

If we have the simplest time dependent relationship we can say that:

where **a is a constant**. If a is greater than 0 then gravity linearly increases as time increases; if a is less than 0 then gravity linearly decreases as time increases. For matters of slight convenience I’ll define gravity (or the vertical acceleration) as -3at. The following can then be arrived at by integration:
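With the vertical acceleration defined as -3at, integrating twice gives (my reconstruction of the working):

```latex
\begin{aligned}
\ddot{y} &= -3at \\
\dot{y} &= v\sin\theta - \tfrac{3}{2}at^{2} \\
y &= vt\sin\theta - \tfrac{1}{2}at^{3}, \qquad x = vt\cos\theta
\end{aligned}
```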

This will produce the following graph when we fix v = 10, a = 2 and vary theta:

Now we can use the same method as in our Projectile Motion Investigation II to explore whether these maximum points lie in a curve. (You might wish to read that post first for a step by step approach to the method).

Therefore we can substitute back into our original parametric equations for x and y to get:

We can plot this with theta as a parameter. If we fix v = 4 and a =2 we get the following graph:

Compare this to the graph from Projectile Motion Investigation II, where we did this with gravity constant (and with v fixed as 10):

The Projectile Motion Investigation II formed a perfect ellipse, but this time it’s more of an egg-shaped elliptical curve – with a flat base. But it’s interesting to see that even with time dependent gravity we still have a similar relationship to before!

**Inverse relationship**

Let’s also look at what would happen if gravity was inversely related to time. (This is what has been explored by some physicists).

In this case we get the following results when we launch projectiles (Notice here we had to use the integration by parts trick to integrate ln(t)). As the velocity function doesn’t exist when t = 0, we can define v and theta in this case as the velocity and theta value when t = 1.

Now we use the same trick as earlier to find when the gradient is 0:

Substituting this back into the parametric equations gives:

The ratio v/a will therefore have the greatest effect on the maximum points.

**v/a ratio negative and close to zero:**

v = 40, a = -2000, v/a = -0.02

This gives us close to a circle, radius v, centred at (0,a).

v = 1, a = -10, v/a = -0.1

Here we can also see that the boundary condition for the maximum horizontal distance thrown is given by x = v(e).

**v/a ratio negative and large:**

v = 40, a = -2, v/a = -20.

We can see that we get an egg shape back – but this time with a flatter bulge at the top and the point at the bottom. Also notice how quickly the scale of the shape has increased.

**v/a ratio n/a (i.e a = 0)**

Here there is no gravitational force, and so projectiles travel in linear motion – with no maximum.

**Envelope of projectiles for the inverse relationship**

This is just included for completeness, don’t worry if you don’t follow the maths behind this bit!

Therefore when we plot the parametric equations for x and y in terms of theta we get the envelope of projectile motion when we are in a universe where gravity varies inversely to time. The following graph is generated when we take v = 300 and a = -10. The red line is the envelope of projectiles.

**A generalized power relationship**

Lastly, let’s look at what happens when we have a general power relationship, i.e. gravity is related to at^{n}. Again for matters of slight convenience I’ll look at the similar relationship -0.5(n+1)(n+2)at^{n}.

This gives (following the same method as above):
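Following the same integrations as in the linear case, the convenient factor collapses neatly (my reconstruction of the working):

```latex
\ddot{y} = -\tfrac{1}{2}(n+1)(n+2)at^{n}
\quad\Rightarrow\quad
y = vt\sin\theta - \tfrac{1}{2}at^{n+2}, \qquad x = vt\cos\theta
```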

As we vary n we will find the plot of the maximum points. Let’s take the velocity as 4 and a as 2. Then we get the following:

When n = 0:

When n = 1:

When n = 2:

When n = 10:

We can see the general elliptical shape remains at the top, but we have a flattening at the bottom of the curve.

**When n approaches infinity:**

We get this beautiful result when we let n tend towards infinity – now we will have all the maximum points bounded on a circle (with the radius the same as the value chosen as the initial velocity). In the graph above we have a radius of 4 as the initial velocity is 4. Notice too we have projectiles travelling in straight lines – and then seemingly “bouncing” off the boundary!

If we want to understand this, note that there is only a very short window (t less than 1) when the particle can travel upwards. When t is between 0 and 1 the effect of gravity is effectively 0, and so the particle travels in a straight line (i.e. if the initial velocity is 5 m/s it will travel 5 metres). Then as soon as t = 1, the gravity becomes crushingly heavy and the particle falls effectively vertically down.

**Using Maths to model the spread of Coronavirus (COVID-19)**

This coronavirus is the latest virus to warrant global fears over a disease pandemic. Throughout history we have seen pandemic diseases such as the Black Death in Middle Ages Europe and the Spanish Flu at the beginning of the 20th century. More recently we have seen HIV responsible for millions of deaths. In the last few years there have been scares over bird flu and SARS – yet neither fully developed into a major global health problem. So, how contagious is COVID-19, and how can we use mathematics to predict its spread?

Modelling disease outbreaks with real accuracy is an incredibly important job for mathematicians, and all countries employ medical statisticians for this job. Understanding how diseases spread and how fast they can spread through populations is essential to developing effective medical strategies to minimise deaths. If you want to save lives maybe you should become a mathematician rather than a doctor!

Currently scientists know relatively little about the new virus – but they do know that it’s the same coronavirus family as SARS and MERS which can both cause serious respiratory problems. Scientists are particularly interested in trying to discover how infectious the virus is, how long a person remains contagious, and whether people can be contagious before they show any symptoms.

**In the case of COVID-19 we have the following early estimated values:** [From a paper published by medical statisticians in the UK on January 24]

**R_0 between 3.6 and 4.** This is defined as how many people an infectious person will pass on their infection to in a totally susceptible population. The higher the R_0 value, the more quickly an infection will spread. By comparison seasonal flu has an R_0 value of around 2.8.

**Total number infected** by January 21: prediction interval 9,217–14,245. Of these an estimated 3,050–4,017 currently with the virus and the others recovered (or died). This is based on an estimation that only around 5% of cases have been diagnosed. By February 4th they predict 132,751–273,649 will be infected.

**Transmission rate β** estimated at 1.07. β represents the transmission rate per day – so on average an infected person will infect another 1.07 people a day.

**Infectious period** estimated at 3.6 days. We can therefore calculate μ (the per capita recovery rate) by μ = 1/(3.6). This tells us how quickly people will be removed from the population (either recovered and become immune or died)

**SIR Model**

The basic model is based on the SIR model. The SIR model looks at how much of the population is susceptible to infection (S), how many of these go on to become infectious (I), and how many of these are removed (R) from the population being considered (i.e they either recover and thus won’t catch the virus again, or die).

The Guardian datablog have an excellent graphic to show the contagiousness relative to deadliness of different diseases [click to enlarge, or follow the link]. We can see that seasonal flu has an R_0 value of around 2.8 and a fatality rate of around 0.1%, whereas measles has an R_0 value of around 15 and a fatality rate of around 0.3%. This means that measles is much more contagious than seasonal flu.

You can notice that we have nothing in the top right hand corner (very deadly and very contagious). This is just as well as that could be enough to seriously dent the human population. Most diseases we worry about fall into 2 categories – contagious and not very deadly or not very contagious and deadly.

The equations above represent a SIR (susceptible, infectious, removed) model which can be used to model the spread of diseases like flu.

dS/dt represents the rate of change of those who are susceptible to the illness with respect to time. dI/dt represents the rate of change of those who are infected with respect to time. dR/dt represents the rate of change of those who have been removed with respect to time (either recovered or died).

For example, if dI/dt is high then the number of people becoming infected is rapidly increasing. When dI/dt is zero then there is no change in the numbers of people becoming infected (number of infections remain steady). When dI/dt is negative then the numbers of people becoming infected is decreasing.

**Modelling for COVID-19**

N is the total population. Let’s take the population of Wuhan as 11 million.

μ is the per capita recovery rate (calculated by μ = 1/(duration of illness)). We have μ = 1/3.6 = 5/18.

β is the transmission rate, approximately 1.07.

Therefore our 3 equations for rates of change become:

dS/dt = -1.07 S I /11,000,000

dI/dt = 1.07 S I /11,000,000 - 5/18 I

dR/dt = 5/18 I

Unfortunately these equations are very difficult to solve – but luckily we can use a computer program or spreadsheet to plot what happens. We need to assign starting values for S, I and R – the numbers of people susceptible, infectious and removed. With the following values for January 21: S = 11,000,000, I = 3500, R = 8200, β = 1.07, μ = 5/18, I designed the following Excel spreadsheet (instructions on what formula to use here):
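The same day-by-day stepping that the spreadsheet performs can be sketched in Python (my own simple Euler scheme with a step of one day, not the original file):

```python
# Euler-step the three SIR equations one day at a time
N = 11_000_000                     # population of Wuhan
beta, mu = 1.07, 5 / 18            # transmission rate, per capita recovery rate
S, I, R = 11_000_000, 3500, 8200   # starting values for January 21

for day in range(14):
    dS = -beta * S * I / N
    dR = mu * I
    dI = -dS - dR
    S, I, R = S + dS, I + dI, R + dR

# after two weeks the model has several million infected,
# the same order of magnitude as the 3.9 million quoted in the post
print(round(I))
```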

This gives a prediction that around 3.9 million people will be infected within 2 weeks! We can see that the SIR model that we have used is quite simplistic (and significantly different to the expert prediction of around 200,000 infected).

So, we can try and make things more realistic by adding some real life considerations. The current value of β (the transmission rate) is 1.07, i.e an infected person will infect another 1.07 people each day. We can significantly reduce this if we expect that infected people are quarantined effectively so that they do not interact with other members of the public, and indeed if people who are not sick avoid going outside. So, if we take β as (say) 0.6 instead we get the following table:

Here we can see that this change to β has had a dramatic effect to our model. Now we are predicting around 129,000 infected after 14 days – which is much more in line with the estimate in the paper above.

As we are seeing exponential growth in the spread, small changes to the parameters will have very large effects. There are more sophisticated SIR models which can then be used to better understand the spread of a disease. Nevertheless we can see clearly from the spreadsheet the interplay between susceptible, infected and recovered which is the foundation for understanding the spread of viruses like COVID-19.

[Edited in March to use the newly designated name COVID-19]

**Finding the volume of a rugby ball (prolate spheroid)**

With the rugby union World Cup currently underway I thought I’d try and work out the volume of a rugby ball using some calculus. This method works similarly for American football and Australian rules football. The approach is to consider the rugby ball as an ellipse rotated 360 degrees around the x axis to create a volume of revolution. We can find the equation of an ellipse centered at (0,0) by simply looking at the x and y intercepts. An ellipse with y-intercept (0,b) and x intercept (a,0) will have equation:
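That equation is:

```latex
\frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} = 1
```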

Therefore our rugby ball, with a horizontal “radius” (vertex) of 14.2cm and a vertical “radius” (co-vertex) of 8.67cm, will have equation:

We can see that when we plot this ellipse we get a curve which very closely resembles our rugby ball shape:

Therefore we can now find the volume of revolution by using the following formula:

But we can simplify matters by starting the rotation at x = 0 to find half the volume, before doubling our answer. Therefore:

Rearranging our equation of the ellipse formula we get:

Therefore we have the following integration:

Therefore our rugby ball has a volume of around 4.5 litres. We can compare this with the volume of a football (soccer ball) – which has a radius of around 10.5cm, therefore a volume of around 4800 cubic centimeters.
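The integral evaluates to the closed form (4/3)πab²; as a quick numerical check (a sketch, not the post’s working):

```python
from math import pi

a, b = 14.2, 8.67   # semi-axes of the ellipse in cm

# volume of revolution: V = 2 * integral from 0 to a of pi*b^2*(1 - x^2/a^2) dx
#                         = (4/3) * pi * a * b^2
V = (4 / 3) * pi * a * b**2
print(round(V))   # 4471 cubic cm, i.e. roughly 4.5 litres

# a football of radius 10.5 cm for comparison
print(round((4 / 3) * pi * 10.5**3))   # 4849 cubic cm
```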

We can find the general volume of any rugby ball (mathematically defined as a prolate spheroid) by the following generalization:

We can see that this is very closely related to the formula for the volume of a sphere, which makes sense as the prolate spheroid behaves like a sphere deformed across its axes. Our prolate spheroid has “radii” b, b and a – therefore r cubed in the sphere formula becomes b squared a.

**Prolate spheroids in nature**

The image above [wiki image NASA] is of the Crab Nebula – a distant Supernova remnant around 6500 light years away. The shape of Crab Nebula is described as a prolate spheroid.

**Soap Bubbles and Catenoids**

Soap bubbles form such that they create a shape with the minimum surface area for the given constraints. For a fixed volume the minimum surface area is a sphere, which is why soap bubbles will form spheres where possible. We can also investigate what happens when a soap film is formed between 2 parallel circular lines like in the picture below: [Credit Wikimedia Commons, Blinking spirit]

In this case the shape formed is a catenoid – which provides the minimum surface area for a 3D shape connecting the two circles. The catenoid can be defined in terms of parametric equations:

Where cosh() is the hyperbolic cosine function which can be defined as:

For our parametric equation, t and u are parameters which we vary, and c is a constant that we can change to create different catenoids. We can use Geogebra to plot different catenoids. Below is the code which will plot parametric curves when c = 2 and t varies between -20 pi and 20 pi.

We then need to create a slider for u, and turn on the trace button – and for every given value of u (between 0 and 2 pi) it will plot a curve. When we trace through all the values of u it will create a 3D shape – our catenoid.
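The Geogebra construction itself is shown as an image; the same surface can be sampled directly in Python (standard catenoid parametrization, names my own):

```python
from math import cosh, cos, sin, pi

def catenoid_point(t, u, c):
    """Point on the catenoid: x = c*cosh(t/c)*cos(u), y = c*cosh(t/c)*sin(u), z = t."""
    r = c * cosh(t / c)
    return (r * cos(u), r * sin(u), t)

c = 2
points = [catenoid_point(t, u, c)
          for t in (-2.0, -1.0, 0.0, 1.0, 2.0)
          for u in (0, pi / 2, pi, 3 * pi / 2)]

# each horizontal slice z = t is a circle of radius c*cosh(t/c)
x, y, z = points[0]
print(abs((x * x + y * y) ** 0.5 - c * cosh(z / c)) < 1e-9)  # True
```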

**Individual curve (catenary)**

**Catenoid when c = 0.1**

**Catenoid when c = 0.5**

**Catenoid when c = 1**

**Catenoid when c = 2**

**Wormholes**

For those of you who know your science fiction, the catenoids above may look similar to a wormhole. That’s because the catenoid is a solution to the hypothesized mathematics of wormholes. These can be thought of as a “bridge” either through curved space-time to another part of the universe (potentially therefore allowing for faster than light travel) or a bridge connecting 2 distinct universes.

Above is the Morris-Thorne bridge wormhole [Credit The Image of a Wormhole].

**Further exploration:**

This is a topic with lots of interesting areas to explore – the individual curves (catenaries) look similar to, but are distinct from, parabolas. These curves appear in bridge building and in many other objects with free-hanging cables. Proving that catenoids form shapes with minimum surface areas requires some quite complicated undergraduate maths (variational calculus), but it would be interesting to explore some other features of catenoids, or indeed to explore why the sphere is a minimum surface area for a given volume.

If you want to explore further you can generate your own Catenoids with the Geogebra animation I’ve made here.

**The Van Eck Sequence**

This is a nice sequence as discussed in the Numberphile video above. There are only 2 rules:

- If you have not seen the last number in the sequence before, add a 0 to the sequence.
- If you have seen the last number before, count how many steps have passed since you last saw it – that count is the next number in the sequence.

You start with a 0.

0

You have never seen a 0 before, so the next number is 0.

00

You have seen a 0 before, and it was 1 step ago, so the next number is 1.

001

You have never seen a 1 before, so the next number is 0.

0010

You have seen a 0 before, it was 2 steps ago, so the next number is 2.

00102.

etc.

I can run a quick Python program (adapted from the entry in the Online Encyclopedia of Integer Sequences here) to find the first 100 terms.

```
A181391 = [0, 0]

for n in range(1, 10**2):
    # search backwards for the previous occurrence of the latest term
    for m in range(n - 1, -1, -1):
        if A181391[m] == A181391[n]:
            A181391.append(n - m)   # gap since we last saw it
            break
    else:
        A181391.append(0)           # never seen before

print(A181391)
```

This returns:

[0, 0, 1, 0, 2, 0, 2, 2, 1, 6, 0, 5, 0, 2, 6, 5, 4, 0, 5, 3, 0, 3, 2, 9, 0, 4, 9, 3, 6, 14, 0, 6, 3, 5, 15, 0, 5, 3, 5, 2, 17, 0, 6, 11, 0, 3, 8, 0, 3, 3, 1, 42, 0, 5, 15, 20, 0, 4, 32, 0, 3, 11, 18, 0, 4, 7, 0, 3, 7, 3, 2, 31, 0, 6, 31, 3, 6, 3, 2, 8, 33, 0, 9, 56, 0, 3, 8, 7, 19, 0, 5, 37, 0, 3, 8, 8, 1, 46, 0, 6, 23]

I then assigned each term an x coordinate value, i.e.:

(0, 0), (1, 0), (2, 1), (3, 0), (4, 2), (5, 0), (6, 2), (7, 2), (8, 1), (9, 6), (10, 0), (11, 5), (12, 0), (13, 2), (14, 6), (15, 5), (16, 4), (17, 0), (18, 5), (19, 3), (20, 0), etc.

This means that you can then plot the sequence as a line graph, with the y values corresponding to the sequence terms. As you can see, every time we hit a new peak the following value is 0, leading to the peaks and troughs seen below:
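The peak-then-zero pattern is not a coincidence: a value that sets a new record cannot have appeared earlier (all earlier terms were smaller), so by the first rule the next term must be 0. A quick sketch checking this over the first 1000 terms:

```python
# generate the first terms of the Van Eck sequence (same method as above)
seq = [0, 0]
for n in range(1, 1000):
    for m in range(n - 1, -1, -1):
        if seq[m] == seq[n]:
            seq.append(n - m)
            break
    else:
        seq.append(0)

# a new record value has never appeared before, so the next term must be 0
record = -1
for i in range(len(seq) - 1):
    if seq[i] > record:
        record = seq[i]
        assert seq[i + 1] == 0
print("every new peak is followed by a 0")
```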

Let’s extend the sequence to the first 1000 terms:

We can see that the line y = x provides an upper bound for this data:

In fact this bound always holds: each new term records how many steps back the previous term last appeared, and that gap can never exceed the term's own position in the sequence. What is not known is whether every number eventually appears in the sequence.

**Length of steps before new numbers appear.**

We can also investigate how long we have to wait to see each number for the first time by running the following Python code:

```
A181391 = [0, 0]
seen = set()
for n in range(1, 10**3):
    for m in range(n-1, -1, -1):
        if A181391[m] == A181391[n]:
            A181391.append(n-m)
            break
    else:
        A181391.append(0)
    # print the position at which each number from 1 to 49 first appears
    if 1 <= A181391[n] < 50 and A181391[n] not in seen:
        seen.add(A181391[n])
        print(A181391[n], ",", n+1)
```

This returns the following data:

(1, 3), (2, 5), (6, 10), (5, 12), (4, 17), (3, 20), (9, 24), (14, 30), (15, 35), (17, 41), (11, 44), (8, 47), (42, 52), (20, 56), (32, 59), (18, 63), (7, 66), (31, 72), (33, 81), (19, 89), etc.

The first coordinate tells us the number we are interested in, and the second tells us how far into the sequence we have to go before it appears. So (1, 3) means that we have to wait until the 3rd term of the sequence to see the number 1 for the first time.

Plotting this for numbers 1-50 on a graph returns the following:

So we can see, for example, that we wait 66 terms to first see a 7, and 173 terms to first see a 12. There seems to be a general trend that as the numbers get larger we have to wait longer to see them. Testing this with a linear regression we can see a weak to moderate positive correlation:
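As a rough check on this trend, we can compute the Pearson correlation coefficient by hand for the twenty pairs listed above (a sketch using only the standard library):

```python
import math

# (number, position of its first appearance) pairs from the output above
pairs = [(1, 3), (2, 5), (6, 10), (5, 12), (4, 17), (3, 20), (9, 24),
         (14, 30), (15, 35), (17, 41), (11, 44), (8, 47), (42, 52),
         (20, 56), (32, 59), (18, 63), (7, 66), (31, 72), (33, 81), (19, 89)]

xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]
n = len(pairs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Pearson r = covariance / (sd_x * sd_y)
cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
sd_x = math.sqrt(sum((x - mean_x)**2 for x in xs))
sd_y = math.sqrt(sum((y - mean_y)**2 for y in ys))
r = cov / (sd_x * sd_y)
print(r)  # positive, consistent with the upward trend in the graph
```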

Checking for the numbers up to 300 we get the following:

For example this shows that we have to wait 9700 terms until we see the number 254 for the first time. Testing this with a linear regression we find a weaker positive correlation than before.

So, a nice and quick investigation using a combination of sequences, coding, graphing and regression, with lots of areas this could be developed further.

Computers can brute force a lot of simple mathematical problems, so I thought I’d try and write some code to solve some of them. In nearly all these cases there’s probably a more elegant way of coding the problem – but these all do the job! You can run all of these with a Python editor such as Repl.it. Just copy and paste the below code and see what happens.

1) **Happy Numbers.**

Happy numbers are defined by the rule that you start with any positive integer, square each of its digits, then add the squares together. Now do the same with the new number. Happy numbers eventually reach 1 under this process; numbers that never reach 1 are called unhappy numbers.

As an example, say we start with the number 23. Next we do 2²+3² = 13. Now, 1²+3² = 10. Now 1²+0² = 1. 23 is therefore a happy number.

```
k = int(input("type a 2 digit number "))
a = int(k % 10)
c = int(k // 100)
b = int(k // 10 - 10*c)
print(a**2 + b**2 + c**2)
for k in range(1, 20):
    k = a**2 + b**2 + c**2
    a = int(k % 10)
    c = int(k // 100)
    b = int(k // 10 - 10*c)
    print(a**2 + b**2 + c**2)
```
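The digit-extraction approach above only handles intermediate values up to 3 digits. A more general sketch works for any number of digits and detects the loop that unhappy numbers fall into:

```python
def is_happy(n):
    # iterate the digit-square map, tracking values seen to detect a loop
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d)**2 for d in str(n))
    return n == 1

print([n for n in range(1, 50) if is_happy(n)])
# [1, 7, 10, 13, 19, 23, 28, 31, 32, 44, 49]
```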

2) **Sum of 3 cubes**

Most (though not all) numbers can be written as the sum of 3 cubes. For example:

1^{3} + 2^{3} + 2^{3} = 17. Therefore 17 can be written as the sum of 3 cubes.

This program allows you to see all the combinations possible when using the integers -10 to 10 and trying to make all the numbers up to 29.

```
for k in range(1, 30):
    for a in range(-10, 11):
        for b in range(-10, 11):
            for c in range(-10, 11):
                if a**3 + b**3 + c**3 == k:
                    print(k, a, b, c)
```

3) **Narcissistic Numbers**

A 3 digit narcissistic number is one where the sum of the cubes of its digits equals the original number. This program finds all 3 digit narcissistic numbers.

```
for a in range(1, 10):
    for b in range(0, 10):
        for c in range(0, 10):
            if a**3 + b**3 + c**3 == 100*a + 10*b + c:
                print(100*a + 10*b + c)
```

4) **Pythagorean triples**

Pythagorean triples are integer solutions to Pythagoras’ Theorem. For example:

3^{2} + 4^{2} = 5^{2} is an integer solution to Pythagoras’ Theorem.

This code allows you to find all integer solutions to Pythagoras’ Theorem for the numbers in the range you specify.

```
k = 100
for a in range(1, k):
    for b in range(1, k):
        for c in range(1, 2*k):
            if a**2 + b**2 == c**2:
                print(a, b, c)
```
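The triple loop above also prints duplicates and scaled-up copies of smaller triples. Euclid's formula generates triples directly: for m > n > 0, (m² − n², 2mn, m² + n²) is always a Pythagorean triple, and it is primitive (no common factor) when gcd(m, n) = 1 and m, n have opposite parity. A sketch:

```python
from math import gcd

# Euclid's formula: (m^2 - n^2, 2mn, m^2 + n^2) for m > n > 0;
# the triple is primitive when gcd(m, n) == 1 and m, n have opposite parity
triples = []
for m in range(2, 6):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:
            triples.append((m*m - n*n, 2*m*n, m*m + n*n))
print(triples)
```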

5) **Perfect Numbers**

Perfect numbers are numbers whose proper factors (factors excluding the number itself) add to the number. This is easier to see with an example.

6 is a perfect number because its proper factors are 1,2,3 and 1+2+3 = 6

8 is not a perfect number because its proper factors are 1,2,4 and 1+2+4 = 7

Perfect numbers have been known about for about 2000 years – however they are exceptionally rare. The first 4 perfect numbers are 6, 28, 496, 8128. These were all known to the Greeks. The next perfect number wasn’t discovered until around 1500 years later – and not surprisingly as it’s 33,550,336.

The code below will find all the perfect numbers less than 10,000.

```
for n in range(1, 10000):
    divisors = []
    for i in range(1, n):
        if n % i == 0:
            divisors.append(i)
    if sum(divisors) == n:
        print(n)
```
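The search above finds 6, 28, 496 and 8128, but brute force won't reach 33,550,336. The Euclid–Euler theorem gives a shortcut: every even perfect number has the form 2^(p−1)(2^p − 1), where 2^p − 1 is prime (a Mersenne prime). A quick sketch:

```python
def is_prime(n):
    # simple trial division, fine for these small Mersenne candidates
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

# Euclid's construction: if 2^p - 1 is prime, 2^(p-1) * (2^p - 1) is perfect
perfect = [2**(p - 1) * (2**p - 1) for p in range(2, 14) if is_prime(2**p - 1)]
print(perfect)  # [6, 28, 496, 8128, 33550336]
```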

**Friendly Numbers**

Friendly numbers are numbers which share the same ratio σ(a)/a with at least one other number. Here σ(a) is the divisor function: the sum of all the factors of a. For example σ(7) = 1 + 7 = 8 and σ(10) = 1 + 2 + 5 + 10 = 18.

Friendly numbers therefore satisfy:

σ(a)/a = σ(b)/b

As an example:

σ(6) / 6 = (1+2+3+6) / 6 = 2,

σ(28) / 28 = (1+2+4+7+14+28) / 28 = 2

σ(496)/496 = (1+2+4+8+16+31+62+124+248+496)/496 = 2

Therefore 6, 28 and 496 are friendly numbers, because they all share the same ratio σ(a)/a = 2.

This code will help find some Friendly numbers (though these are very difficult to find, as we need to check against every other integer until we find a relationship).

The code below will find some Friendly numbers less than 200, and their friendly pair less than 5000:

```
for n in range(1, 5000):
    list1 = []
    for i in range(1, n+1):
        if n % i == 0:
            list1.append(i)
    result1 = sum(list1)
    for m in range(1, 200):
        list2 = []
        for j in range(1, m+1):
            if m % j == 0:
                list2.append(j)
        result2 = sum(list2)
        if result2/m == result1/n:
            if n != m:
                print(n, m)
```
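One caution with the code above: it compares σ(a)/a using floating-point division, which can give false results for larger numbers. A sketch using exact fractions instead (the `abundancy` helper is my own naming, not a standard function):

```python
from fractions import Fraction

def abundancy(n):
    # sigma(n)/n as an exact fraction, avoiding floating-point equality tests
    sigma = sum(i for i in range(1, n + 1) if n % i == 0)
    return Fraction(sigma, n)

# 6 and 28 are friendly: both have abundancy exactly 2
print(abundancy(6), abundancy(28))
```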

**Hailstone numbers**

Hailstone numbers are created by the following rules:

- if n is even: divide by 2
- if n is odd: multiply by 3 and add 1

We can then generate a sequence from any starting number. For example, starting with 10:

10, 5, 16, 8, 4, 2, 1, 4, 2, 1…

we can see that this sequence loops into an infinitely repeating 4,2,1 sequence. Trying another number, say 58:

58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1…

and we see the same loop of 4,2,1.

The question is, does every number end in this loop? Well, we don’t know. Every number mathematicians have checked does indeed lead to this loop, but that is not a proof. Perhaps there is a counter-example that we just haven’t found yet.

Run the code below, and by changing the value of n you can see how quickly the number enters the 4,2,1 loop.

```
n = 300
for k in range(1, 40):
    if n % 2 == 0:
        n = n // 2
        print(n)
    else:
        n = 3*n + 1
        print(n)
```
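To compare how quickly different starting values reach 1, here is a sketch of a step-counting function (it assumes, as the conjecture suggests, that the loop is always reached – otherwise it would run forever; `hailstone_steps` is my own name for the helper):

```python
def hailstone_steps(n):
    # count the steps taken for n to first reach 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3*n + 1
        steps += 1
    return steps

print(hailstone_steps(10))  # 6 steps: 10, 5, 16, 8, 4, 2, 1
print(hailstone_steps(27))  # a surprisingly long path for a small number
```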

**Generating the Golden ratio**

The Golden ratio can be approximated by dividing any 2 successive terms of the Fibonacci sequence. As we divide ever larger successive terms we get a better approximation for the Golden ratio. This code returns successive terms of the Fibonacci sequence and the corresponding approximation for the Golden ratio.

```
a = 0
b = 1
print(a)
print(b)
for k in range(1, 30):
    a = a + b
    b = a + b
    print(a, b, b/a)
```

**Partial sums**

We can use programs to see whether infinite sums converge. For example with the sequence 1/n, adding the terms gives 1/1 + 1/2 + 1/3 + 1/4 … and this series (surprisingly) diverges. The code below shows that the sum of the sequence 1/n^{2} converges to a number (π^{2}/6).

```
terms = []
for n in range(1, 100):
    terms.append(1/n**2)
    print(sum(terms))
```
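We can also compare the partial sums against the known limit π²/6 directly (a quick sketch using the standard library):

```python
import math

# partial sum of 1/n^2 over a million terms, compared with pi^2 / 6
total = sum(1/n**2 for n in range(1, 10**6))
print(total)
print(math.pi**2 / 6)  # the two values agree to several decimal places
```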

**Returning to 6174**

This is a nice number trick. Take any 4 digit number (not all four digits the same), rearrange the digits to make the largest and the smallest numbers possible, and subtract the smallest from the largest. Then repeat with the result. For example, with the number 6785 the largest number we can make is 8765 and the smallest is 5678, so we do 8765 – 5678 = 3087. Carrying on with the same method, we will always eventually arrive at the number 6174!

```
k = int(input("type a 4 digit number "))
for n in range(1, 10):
    a = int(k % 10)
    d = int(k // 1000)
    c = int(k // 100 - 10*d)
    b = int(k // 10 - 10*c - 100*d)
    digits = sorted([a, b, c, d])
    a, b, c, d = digits
    k = 1000*d + 100*c + 10*b + a - (1000*a + 100*b + 10*c + d)
    print(k)
```
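The digit juggling can also be wrapped into a single function using string sorting (a sketch; `kaprekar_step` is my own name for the helper, and as above it assumes the four digits are not all identical):

```python
def kaprekar_step(k):
    # one step of the routine: largest digit arrangement minus smallest,
    # padding with leading zeros to keep four digits
    digits = sorted(f"{k:04d}")
    smallest = int("".join(digits))
    largest = int("".join(reversed(digits)))
    return largest - smallest

k = 6785
while k != 6174:
    k = kaprekar_step(k)
    print(k)  # 3087, 8352, 6174
```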

**Maximising the volume of a cuboid**

If we take a square sheet of side length n, cut squares of side x from each corner, and fold up the sides to make an open box, what value of x gives the maximum volume? This code looks at initial sheets from 10×10 up to 90×90 and finds, for each, the value of x which gives the maximum volume.

```
def compute(a):
    list1 = []
    k = 6
    z = int(0.5*a*10**k)
    for x in range(1, z):
        side = 10*a - 2*x/10**(k-1)
        list1.append(side * side * (x/10**(k-1)))
    print("length of original side is ", 10*a)
    y = max(list1)
    print("maximum volume is ", y)
    q = list1.index(y)
    print("length of square removed from corner is ", (q+1)/10**(k-1))

for a in range(1, 10):
    compute(a)
```
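The brute-force answer can be cross-checked with calculus. For V(x) = (n − 2x)²·x, differentiating gives dV/dx = (n − 2x)(n − 6x), which is zero (for 0 < x < n/2) at x = n/6 – so the optimal cut is one sixth of the side length. A quick sketch:

```python
# calculus check: V(x) = (n - 2x)^2 * x has dV/dx = (n - 2x)(n - 6x),
# so the maximum volume occurs at x = n/6
for n in range(10, 100, 10):
    x = n / 6
    volume = (n - 2*x)**2 * x
    print(n, round(x, 5), round(volume, 2))
```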