You are currently browsing the tag archive for the ‘average distance’ tag.

**Finding the average distance in a polygon**

Over the previous couple of posts I’ve looked at the average distance in squares, rectangles and equilateral triangles. The logical extension to this is to consider a regular polygon with sides 1. Above is pictured a regular pentagon with sides 1 enclosed in a 2 by 2 square. The points N and O represent 2 randomly chosen points which we find the distance between. On average what is the distance between these randomly chosen points N and O?

**Starting with a hexagon**

It’s a little easier to start with a hexagon as we get some nicer coordinate points. So, our first challenge is to find the coordinates of a regular hexagon with sides 1. Luckily we can use the complex roots of unity to do this. We start by finding the 6th roots of unity and then converting these to coordinates in an Argand diagram:
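As a quick sketch of this step (variable names here are my own), the 6th roots of unity can be computed directly in Python:

```python
import cmath

# The 6th roots of unity are e^(2*pi*i*k/6) for k = 0, 1, ..., 5.
# For a hexagon these double as the vertices of a regular hexagon with
# sides 1, since a hexagon's circumradius equals its side length.
vertices = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]

for z in vertices:
    print(round(z.real, 4), round(z.imag, 4))
```

This gives the vertex coordinates (±1, 0) and (±1/2, ±√3/2).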

This then allows us to plot the following:

We can then work out the inequalities which define the inside of the hexagon when we generate points within the 2×2 square centred at (0,0). This gives:
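One possible set of inequalities (a sketch based on the vertex coordinates (±1, 0) and (±1/2, ±√3/2)) is:

$$|y| < \frac{\sqrt{3}}{2} \quad\text{and}\quad |y| < \sqrt{3}\,(1-|x|)$$

The first inequality comes from the two horizontal edges and the second from the four slanted edges.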

We can then run the following code to find the average distance:
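A minimal Monte Carlo sketch along these lines (function names are my own, and the hexagon inequalities are written in the |y| form):

```python
import math
import random

SQRT3 = math.sqrt(3)

def in_hexagon(x, y):
    # Inside the side-1 regular hexagon centred at (0,0) with vertices
    # at the 6th roots of unity: two horizontal edges at y = ±√3/2 and
    # four slanted edges |y| = √3(1 - |x|).
    return abs(y) < SQRT3 / 2 and abs(y) < SQRT3 * (1 - abs(x))

def average_distance_hexagon(trials=100_000, seed=1):
    # Rejection sampling: generate points in the 2x2 square centred at
    # (0,0) and keep a pair only when both points land in the hexagon.
    random.seed(seed)
    total, accepted = 0.0, 0
    while accepted < trials:
        a, c = random.uniform(-1, 1), random.uniform(-1, 1)
        b, d = random.uniform(-1, 1), random.uniform(-1, 1)
        if in_hexagon(a, c) and in_hexagon(b, d):
            total += math.hypot(a - b, c - d)
            accepted += 1
    return total / trials

print(average_distance_hexagon())
```

Increasing the number of trials improves the accuracy at the cost of runtime.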

This gives the following result:

We can check this result as the exact value is:

which is 0.8262589495. So we can see we are accurate here to 3 sf.

**Pentagon**

For the pentagon we can find the coordinates by finding the 5th roots of unity:

We then need to scale all the coordinate points by a constant factor, because for a regular pentagon with sides 1 the distance from the centre to each vertex is not 1 (as it is for the roots of unity). We can find the distance from the centre to a vertex of the pentagon with the following trigonometry:
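Since each side of a regular n-gon subtends an angle of 2π/n at the centre, half a side and the circumradius R form a right-angled triangle, giving:

$$1 = 2R\sin\left(\frac{\pi}{5}\right) \quad\Rightarrow\quad R = \frac{1}{2\sin(\pi/5)} \approx 0.8507$$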

So, when we scale all coordinate points by this factor we get:

And we can then do the same method as before and run the following Python code:
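A sketch of this in Python (the circumradius R = 1/(2 sin(π/5)) and the point-in-polygon test are my own choices; a cross-product test against each edge works for any convex polygon):

```python
import math
import random

# Vertices: 5th roots of unity scaled so the side length is 1.
R = 1 / (2 * math.sin(math.pi / 5))
VERTS = [(R * math.cos(2 * math.pi * k / 5),
          R * math.sin(2 * math.pi * k / 5)) for k in range(5)]

def in_pentagon(x, y):
    # The point must lie on the left of every directed edge when the
    # vertices are listed anti-clockwise (cross product >= 0).
    for i in range(5):
        x1, y1 = VERTS[i]
        x2, y2 = VERTS[(i + 1) % 5]
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

def average_distance_pentagon(trials=100_000, seed=1):
    # Rejection sampling from the 2x2 square centred at (0,0).
    random.seed(seed)
    total, accepted = 0.0, 0
    while accepted < trials:
        a, c = random.uniform(-1, 1), random.uniform(-1, 1)
        b, d = random.uniform(-1, 1), random.uniform(-1, 1)
        if in_pentagon(a, c) and in_pentagon(b, d):
            total += math.hypot(a - b, c - d)
            accepted += 1
    return total / trials

print(average_distance_pentagon())
```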

This gives:

**n-sided polygon**

We can now consider an n-sided polygon with sides 1. Let’s start with the values we’ve found for an equilateral triangle (0.364), a square (0.522), a pentagon (0.697) and a hexagon (0.826).

When we plot these they appear to follow a linear relationship:

average distance = 0.14n

We can check that this is correct by considering the fact that an n-sided polygon will approximate a circle when n gets large. So an n-sided polygon with sides of length 1 can be approximated by a circle with circumference n. This allows us to work out the radius.

We can then substitute this into the equation for the average distance of 2 points in a circle.
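Explicitly, a circle with circumference n has radius r = n/(2π), and the average distance between 2 random points in a circle of radius r is the standard result 128r/(45π). Putting these together:

$$\bar{d} = \frac{128}{45\pi}\cdot\frac{n}{2\pi} = \frac{64n}{45\pi^{2}} \approx 0.144101n$$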

So we would expect the average distance between 2 points in a regular polygon with sides 1 to approach the following (as n gets large):

average distance = 0.144101239n

And we’ve finished! Everything cross-checks and works nicely. We’ve been able to use a mixture of complex numbers, geometry, coding and trigonometry to achieve this result.

**Finding the average distance in an equilateral triangle**

In the previous post I looked at the average distance between 2 points in a rectangle. In this post I will investigate the average distance between 2 randomly chosen points in an equilateral triangle.

**Drawing a sketch.**

The first step is to start with an equilateral triangle with sides 1. This is shown above. I sketched this using Geogebra – and used some basic Pythagoras to work out the coordinates of point C.

I can then draw a square of sides 1 around this triangle as shown above. I’m then going to run a Python program to randomly select points and then work out the distance between them – but I need to make sure that the 2 points chosen are both inside this triangle. For this I need to work out the equations of the lines AC and CB.

Using basic coordinate geometry we can see that the line AC has equation y = √3x. We want the inequality y < √3x so that we are on the correct side of this line.

The line BC has equation y = -√3x + √3. Therefore the triangle must also satisfy the inequality y < -√3x + √3.

I can then run the following code on Python, which finds the average distance between points (a,c) and (b,d) both within the unit square but also subject to the 2 inequality constraints above.
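A minimal sketch of such a program (function names are my own; a pair of points is kept only when both satisfy the two inequalities):

```python
import math
import random

SQRT3 = math.sqrt(3)

def average_distance_triangle(trials=100_000, seed=1):
    # Points (a, c) and (b, d) are drawn in the unit square and kept
    # only when both satisfy y < √3·x and y < -√3·x + √3, i.e. both lie
    # inside the equilateral triangle A(0,0), B(1,0), C(0.5, √3/2).
    random.seed(seed)
    total, accepted = 0.0, 0
    while accepted < trials:
        a, c = random.random(), random.random()
        b, d = random.random(), random.random()
        if (c < SQRT3 * a and c < -SQRT3 * a + SQRT3
                and d < SQRT3 * b and d < -SQRT3 * b + SQRT3):
            total += math.hypot(a - b, c - d)
            accepted += 1
    return total / trials

print(average_distance_triangle())
```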

When this is run it performs 999,999 trials and then finds the average distance. This returns the following value:

So we can see that the average distance is just over a third of a unit.

**Finding the average distance of an equilateral triangle of length n.**

We can then draw the sketch above to find the equations of lines AC and CB for an equilateral triangle with sides of length n. This leads to the following inequalities:

y < √3x

y < -√3x + √3n

So we can then modify the code as follows:
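A sketch of the modified version (again with my own function names):

```python
import math
import random

SQRT3 = math.sqrt(3)

def average_distance_triangle_n(n, trials=20_000, seed=1):
    # Same rejection-sampling idea scaled up: sample from the n x n
    # square and keep points with y < √3·x and y < -√3·x + √3·n.
    random.seed(seed)
    total, accepted = 0.0, 0
    while accepted < trials:
        a, c = random.uniform(0, n), random.uniform(0, n)
        b, d = random.uniform(0, n), random.uniform(0, n)
        if (c < SQRT3 * a and c < SQRT3 * (n - a)
                and d < SQRT3 * b and d < SQRT3 * (n - b)):
            total += math.hypot(a - b, c - d)
            accepted += 1
    return total / trials

for n in range(1, 11):
    print(n, round(average_distance_triangle_n(n), 3))
```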

This then returns the average distances for equilateral triangles of sizes 1 to 10.

And when we plot this on Desmos we can see that there is a linear relationship:

The regression line has gradient 0.36 (2sf) so we can hypothesise that for an equilateral triangle of size *n*, the average distance between 2 points is approximately 0.36*n*.

**Checking the maths**

I then checked the actual equation for the average distance between 2 points in an equilateral triangle of sides n:
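A commonly quoted closed form for this (worth verifying independently) is:

$$\bar{d} = \frac{4+3\ln 3}{20}\,n \approx 0.3648n$$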

This gives us:

So we can see that we were accurate to 2 significant figures. So this is a nice mixture of geometry, graphing and computational power to provide a result which would be otherwise extremely difficult to calculate.

If you are a teacher then please also visit my new site: intermathematics.com for over 2000 pdf pages of resources for teaching IB maths!

**Finding the average distance between 2 points on a hypercube**

This is the natural extension from this previous post which looked at the average distance of 2 randomly chosen points in a square – this time let’s explore the average distance in n dimensions. I’m going to investigate what dimension of hypercube is required to have an average distance of more than one, and also what happens to the average distance as n approaches infinity.

**Monte Carlo method**

The Monte Carlo method is a very powerful technique which utilizes computational power. Basically we use the fact that the average of a very large number of trials will serve as an approximation to an exact result. In this case I will run a Python program 10 million times – each time it will select 2 coordinate points and then work out the distance between them. It will then find the average of these 10 million trials. The code above generates 2 coordinates in 3 dimensional space inside a unit cube. We can modify this for n-dimensional space by remembering that Pythagoras still works in higher dimensions.
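A sketch of this kind of program, generalised to any number of dimensions (function names are my own):

```python
import math
import random

def average_distance_hypercube(dims, trials=100_000, seed=1):
    # Two uniform random points in the unit hypercube; Pythagoras
    # extends to n dimensions as the square root of the sum of the
    # squared coordinate differences.
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        p = [random.random() for _ in range(dims)]
        q = [random.random() for _ in range(dims)]
        total += math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))
    return total / trials

for dims in (1, 2, 3, 7):
    print(dims, round(average_distance_hypercube(dims), 4))
```

In one dimension the exact answer is 1/3, which makes a handy sanity check for the code.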

**Results**

Running this code helps to generate the above results. This answers our first question – we need a 7-dimensional unit hypercube before the average distance between two randomly chosen points is greater than 1. We can also see that the difference between the average distances is reducing – but it’s not clear if this will approach a limit or if it will continue growing to infinity. So let’s do some more trials.

**Further trials**

This takes us up to a 22-dimensional hypercube. At this point it’s probably useful to plot a graph to see the trend.

**Reciprocal model**

This reciprocal model is of the form:
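The fitted coefficients aren’t reproduced here, but a reciprocal model that levels off takes the general shape (with a and b as placeholder constants):

$$d(n) \approx a - \frac{b}{n}, \qquad a, b > 0$$

which approaches the limit a as n approaches infinity.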

We can see that this is a pretty good fit (R squared 0.9994). If this model is accurate then this would suggest that the average distance approaches a limit as n approaches infinity.

**Polynomial model**

This polynomial model is of the form:
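Again with a and b as placeholder constants, this “polynomial” (power-law) model takes the general shape:

$$d(n) \approx a\,n^{b}, \qquad b > 0$$

which grows without bound as n approaches infinity.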

We can see that this is also a very good fit (R squared 0.9997). If this model is accurate then as b is greater than 0, this would suggest that the average distance approaches infinity as n approaches infinity.

**Reflection**

Quite annoyingly we have 2 models which both fit the data very accurately – but predict completely different results! Logically we could probably say that we would expect the average distance to approach infinity as n approaches infinity – and we could possibly justify this by the fact that the polynomial model is a slightly better fit. Given the similarity between the 2 models, it’s probably time to find out the actual results for this.

**Average n-dimensional distance bounds**

Not surprisingly the mathematics required to work this out is exceptionally difficult – and ends up with integrals which cannot be solved analytically and instead require numerical methods. The Monte Carlo method with very large numbers of trials is a reasonably good approach to approximating this answer. There is however a very useful lower and upper bound for the average distance in n-dimensional space given by:
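One commonly quoted pair of bounds is (a sketch: the upper bound follows from Jensen’s inequality, since the mean squared distance in the unit n-cube is exactly n/6):

$$\frac{\sqrt{n}}{3} \;\le\; \bar{d}(n) \;\le\; \sqrt{\frac{n}{6}}$$

As a quick consistency check, the lower bound equals exactly 2 at n = 36, and the upper bound is √(50/6) ≈ 2.89 at n = 50.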

This shows immediately that the average distance will approach infinity as n grows large – as the lower bound will grow to infinity. Quite pleasingly we can see that the polynomial model we derived is similar to the lower bound. We can plot both upper and lower bound along with our polynomial model to see how these all compare. We have lower bound (green), polynomial model (black) and upper bound (green):

We can see that our polynomial model very closely follows the upper bound in our domain. As we extend the domain this polynomial approximation remains above the lower bound and tracks the upper bound before gradually growing less accurate. When n is 50 our model predicts a distance of 2.94, whereas the upper bound is 2.88. This is quite a nice result – we have used the Monte Carlo method to derive a polynomial approximation to the average distance in n-dimensional hypercubes, and it both closely follows the upper bound over a reasonable domain and is of a very similar form to the lower bound. We can use this lower bound to see that a 36-dimensional hypercube (and higher) would be guaranteed to have an average distance of more than 2.

**Conclusion**

This was a nice example of the power of the Monte Carlo method in these kinds of problems – we were able to use it quite successfully to get a polynomial approximation which turned out to be reasonably accurate. We could have significantly improved this accuracy by running 100 million (or 1 billion etc) trials each time – though this would have probably required a more powerful computer!

**Essential Resources for IB Teachers**

If you are a **teacher** then please also visit my new site. This has been designed specifically for teachers of mathematics at international schools. The content now includes over **2000 pages of pdf content** for the entire SL and HL Analysis syllabus and also the SL Applications syllabus. Some of the content includes:

- **Original pdf worksheets** (with full worked solutions) designed to cover all the syllabus topics. These make great homework sheets or in class worksheets – and are each designed to last between 40 minutes and 1 hour.
- **Original Paper 3 investigations** (with full worked solutions) to develop investigative techniques and support both the exploration and the Paper 3 examination.
- Over 150 pages of **Coursework Guides** to introduce students to the essentials behind getting an excellent mark on their exploration coursework.
- A large number of **enrichment activities** such as treasure hunts, quizzes, investigations, Desmos explorations, Python coding and more – to engage IB learners in the course.

There is also a lot more. I think this could save teachers 200+ hours of preparation time in delivering an IB maths course – so it should be well worth exploring!

**Essential Resources for both IB teachers and IB students**

1) Exploration Guides and Paper 3 Resources

I’ve put together a **168 page** Super Exploration Guide to talk students and teachers through all aspects of producing an excellent coursework submission. Students always make the same mistakes when doing their coursework – get the inside track from an IB moderator! I have also made **Paper 3 packs** for HL Analysis and also Applications students to help prepare for their Paper 3 exams. The Exploration Guides can be downloaded here and the Paper 3 Questions can be downloaded here.