
**Modelling more Chaos**

This post was inspired by Rachel Thomas’ Nrich article on the same topic. I’ll carry on the investigation suggested in the article. We’re going to explore chaotic behavior – where small changes to initial conditions lead to widely different outcomes. Chaotic behavior is what makes modelling (say) weather patterns so complex.

**f(x) = sin(x)**

This time let’s do the same with f(x) = sin(x).

**Starting value of x = 0.2**

**Starting value of x = 0.2001**

**Both graphs superimposed**

This time the graphs do not show any chaotic behavior over the first 40 iterations – the small difference in initial conditions has made a negligible difference to the output. Even after 200 iterations we get the two values x = 0.104488151 and x = 0.104502319 – still in very close agreement.
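We can check this numerically. A minimal Python sketch of the iteration (the helper name iterate is just introduced here for convenience):

```python
import math

def iterate(f, x0, n):
    """Return the list [x0, f(x0), f(f(x0)), ...] with n iterations."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = iterate(math.sin, 0.2, 200)
b = iterate(math.sin, 0.2001, 200)
print(a[-1], b[-1])        # both orbits have crept towards 0 together
print(abs(a[-1] - b[-1]))  # the gap is still tiny - no chaos here
```

Because |cos(x)| < 1 away from x = 0, each application of sin actually shrinks the gap between nearby orbits, which is why no divergence appears.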

**f(x) = tan(x)**

Now this time with f(x) = tan(x).

**Starting value of x = 0.2**

**Starting value of x = 0.2001**

**Both graphs superimposed**

This time both graphs remained largely the same up until around the 38th data point – with large divergence after that. Let’s see what would happen over the next 50 iterations:

Therefore we can see that tan(x) is much more susceptible to small changes in initial conditions than sin(x). This makes sense when we consider the graphs of tan(x) and sin(x). Sin(x) remains bounded between -1 and 1, whereas tan(x) is unbounded, with asymptotic behaviour as we approach pi/2 – so iterates that land near an asymptote get flung far apart.
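The same experiment for tan(x), sketched in Python (orbit is again just a local helper):

```python
import math

def orbit(f, x0, n):
    """Return the list [x0, f(x0), f(f(x0)), ...] with n iterations."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = orbit(math.tan, 0.2, 100)
b = orbit(math.tan, 0.2001, 100)

gaps = [abs(p - q) for p, q in zip(a, b)]
print(gaps[10])        # still tiny: the orbits agree closely early on
print(max(gaps[40:]))  # large: the orbits have completely decorrelated
```

Since tan(x) > x on (0, pi/2), both orbits climb towards pi/2, where the derivative sec²(x) blows up – each step near the asymptote multiplies the gap enormously, producing the divergence seen around the 38th data point.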

**Modelling Chaos**


Let’s start as in the article with the function:

**f(x) = 4x(1-x)**

We can then start an iterative process where we choose an initial value, calculate f(x), and then feed this answer back in to calculate a new value, and so on. For example, when I choose x = 0.2, f(0.2) = 0.64. I then use this value to find a new value: f(0.64) = 0.9216. I used a spreadsheet to plot 40 iterations for the starting values of x = 0.2 and x = 0.2001. This generated the following spreadsheet (cut to show the first 10 terms):

I then imported this table into Desmos to map how the change in the starting value from 0.2 to 0.2001 affected the resultant graph.
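The spreadsheet calculation is easy to reproduce in code – a minimal Python sketch:

```python
def f(x):
    return 4 * x * (1 - x)

def orbit(x0, n=40):
    """Return the list [x0, f(x0), f(f(x0)), ...] with n iterations."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = orbit(0.2)      # starts 0.2, 0.64, 0.9216, ... as above
b = orbit(0.2001)
for n in range(10):
    print(n, a[n], b[n])
```

The two columns agree closely for the first few terms and then separate – the defining signature of sensitive dependence on initial conditions.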

**Starting value of x = 0.2**

**Starting value of x = 0.2001**

**Both graphs superimposed**

We can see that for the first 10 terms the graphs are virtually the same – but then we get a wild divergence, before the graphs seem to synchronize more closely again. One thing we notice is that the data is bounded between 0 and 1. Can we prove why this is?

If we start with a value of x such that:

0 < x < 1,

then when we plot f(x) = 4x – 4x^{2} we can see that the graph has a maximum of f(1/2) = 1 at x = 1/2, and equals 0 at both x = 0 and x = 1:


Therefore any starting value of x between 0 and 1 will also return a new value bounded between 0 and 1. Starting values with x < 0 or x > 1 will tend to negative infinity, because the -4x^{2} term grows much more rapidly than the 4x term.
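Both claims are easy to check numerically – a short sketch:

```python
def f(x):
    return 4 * x * (1 - x)

# f maps [0, 1] into itself: the maximum is f(1/2) = 1, the minimum is 0
assert f(0.5) == 1.0
assert all(0 <= f(i / 1000) <= 1 for i in range(1001))

# a starting value outside [0, 1] runs off to minus infinity
x = 1.1
for _ in range(10):
    x = f(x)
print(x)  # already an astronomically large negative number
```

Once an iterate goes negative, each step roughly squares its magnitude (x → -4x² for large |x|), so the divergence is extremely fast.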

**f(x) = ax(1-x)**

Let’s now explore what happens as we change the value of a whilst keeping our initial starting values of x = 0.2 and x = 0.2001.

**a = 0.8**

Both graphs are superimposed here, but they are identical at the scale we are using. We can see that both values are attracted to 0 (we can say that 0 is an **attractor** for our system).

**a = 1.2**

Again both graphs are superimposed and identical at the scale we are using. We can see that both values are attracted to 1/6 (we can say that 1/6 is an **attractor** for our system).

In general, for f(x) = ax(1-x) with starting values -1 ≤ x ≤ 1, the attractors are given by x = 0 and x = 1 – 1/a, but it depends on both a and the starting conditions as to whether we actually end up being attracted to either point.
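We can verify the two cases above numerically – a quick sketch:

```python
def iterate(a, x0, n=200):
    """Apply f(x) = ax(1-x) to x0 n times and return the result."""
    x = x0
    for _ in range(n):
        x = a * x * (1 - x)
    return x

# a = 0.8: both starting values are attracted to 0
print(iterate(0.8, 0.2), iterate(0.8, 0.2001))

# a = 1.2: both starting values are attracted to 1 - 1/1.2 = 1/6
print(iterate(1.2, 0.2), iterate(1.2, 0.2001))
```

In both cases the derivative f'(x) = a(1-2x) has magnitude 0.8 at the relevant fixed point, which is less than 1 – exactly the condition for nearby orbits to be pulled in.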

**f(x) = 0.8x(1-x)**

So, let’s look at f(x) = 0.8x(1-x) for different starting values -1 ≤ x ≤ 1. Our candidate attractors are given by x = 0 and x = 1 – 1/0.8 = -0.25.

When our initial value is x = 0 we remain at the point x = 0.

When our initial value is x = -0.25 we remain at the point x = -0.25.

When our initial value is x < -0.25 we tend to negative infinity.

When our initial value is -0.25 < x ≤ 1 we tend towards x = 0.

**Starting value of x = -0.249999:**

Therefore we can say that x = 0 is a **stable attractor**, initial values close to x = 0 will still tend to 0.

However x = -0.25 is an unstable **fixed point** rather than a stable attractor, as:

x = -0.250001 will tend to negative infinity very rapidly,

x = -0.25 stays at x = -0.25,

x = -0.249999 will tend towards 0.

Therefore there is a stable equilibrium at x = 0 and an unstable equilibrium at x = -0.25.
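A short script confirms all three behaviours (the -1e6 cutoff is just an arbitrary threshold for detecting divergence):

```python
def f(x):
    return 0.8 * x * (1 - x)

def follow(x0, n=400):
    """Iterate f from x0; report -inf if the orbit diverges downwards."""
    x = x0
    for _ in range(n):
        x = f(x)
        if x < -1e6:          # diverging - stop before float overflow
            return float("-inf")
    return x

print(follow(-0.25))       # stays at the fixed point
print(follow(-0.249999))   # nudged inwards: falls to the attractor 0
print(follow(-0.250001))   # nudged outwards: escapes to minus infinity
```

Since |f'(-0.25)| = 0.8 × 1.5 = 1.2 > 1, any tiny deviation from -0.25 grows by roughly 20% per iteration until the orbit either enters the basin of 0 or runs away – which is precisely the instability described above.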

**Langton’s Ant**

This is another fascinating branch of mathematics – which uses computing to illustrate complexity (and order) in nature. Langton’s Ant shows how very simple initial rules (i.e. a deterministic system) can have very unexpected consequences. Langton’s Ant follows two simple rules:

1) At a white square, turn 90° right, flip the color of the square, and move forward one unit.

2) At a black square, turn 90° left, flip the color of the square, and move forward one unit.

The ant exists on an infinite grid – and is able to travel N, S, E or W. You might expect the pattern generated either to appear completely random, or to settle into a fixed repeating pattern. What actually happens is that you get a chaotic pattern for around 10,000 iterations – and then all of a sudden a diagonal “highway” emerges, which then continues forever. In other words there is emergent behavior – order from chaos. What is even more remarkable is that you can populate the initial starting grid with any number of black squares – and (in every starting configuration tested so far) you still end up with the same emergent pattern of an infinitely repeating diagonal highway.
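The two rules above take only a few lines to simulate – a minimal sparse-grid sketch in Python:

```python
def langtons_ant(steps):
    """Simulate Langton's Ant on an (effectively) infinite grid.

    black holds the coordinates of black squares; all other squares are white.
    Returns the final set of black squares and the ant's position.
    """
    black = set()
    x, y = 0, 0
    dx, dy = 0, 1                     # facing north
    for _ in range(steps):
        if (x, y) in black:           # black square: turn left, flip to white
            dx, dy = -dy, dx
            black.remove((x, y))
        else:                         # white square: turn right, flip to black
            dx, dy = dy, -dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black, (x, y)

black, pos = langtons_ant(11000)      # past the ~10,000-step chaotic phase
print(len(black), pos)
```

Plotting black after a few hundred steps shows the chaotic phase; pushing past roughly 10,000 steps reveals the diagonal highway.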

See a Java applet demonstration (this uses a finite screen where exiting one side returns you elsewhere on the grid – so this will ultimately lead to disruption of the highway pattern).

Such cellular automata are a way of using computational power to try to replicate the natural world – The Game of Life is another well-known automaton which starts off with very simple rules – designed to replicate (crudely) bacterial population growth. Small changes to the initial starting conditions result in wildly different outcomes – and once again you see patterns emerging from apparently random behavior. Such automata can themselves be used as “computers” to calculate the solutions to problems. One day could we design a computer program that replicates life itself? Could that then be said to be alive?
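For reference, the Game of Life rules are: a live cell with two or three live neighbours survives, a dead cell with exactly three live neighbours becomes live, and every other cell dies or stays dead. A minimal sparse-grid sketch:

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Game of Life on a sparse set of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # a cell is live next generation with 3 neighbours, or 2 if already live
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# a "blinker": three cells in a row oscillate with period 2
blinker = {(-1, 0), (0, 0), (1, 0)}
print(life_step(blinker))             # flips to a vertical line of three
print(life_step(life_step(blinker)))  # back to the original
```

Removing or adding a single cell to a pattern like this typically changes its entire future – the same sensitivity to initial conditions seen throughout this post.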