
[Figure: bar chart comparing GPT-4 and ChatGPT exam performance across a range of standardised tests]

GPT-4 vs ChatGPT. The beginning of an intelligence revolution?

The above graph (image source) is one of the most remarkable bar charts you will ever see: it compares the capabilities of GPT-4, OpenAI's new large language model, with those of its previous iteration, ChatGPT.  As we can see, GPT-4 is now able to score in the top 20% of test takers across a staggering range of subjects.  This on its own is amazing, but the really incredible part is that the green sections represent improvements since ChatGPT, and ChatGPT was only released 3½ months earlier.

GPT-4 is now able to pass nearly all AP subjects, would pass the bar exam to qualify as a lawyer, and is even making headway on Olympiad-style mathematics papers.  You can see that ChatGPT had already mastered many of the humanities subjects, and that GPT-4 has now begun to master the sciences, maths, economics and law.

We can see an example of the mathematical improvement in GPT-4 below, taken from a recently released research paper.  Both models were asked a reasonably challenging integral problem:

[Image: the integral problem posed to both models]

GPT-4's response:

[Images: GPT-4's full worked solution to the integral]

GPT-4 is correct, and its solution is excellently explained, whereas the ChatGPT response (viewable in the paper) was completely wrong.  The point is not just that GPT-4 can now do maths like this (after all, so can Wolfram Alpha), but that the large language model training method allows it to do complicated maths as well as everything else.  The research paper this appears in is entitled "Sparks of Artificial General Intelligence", because this appears to be the beginning of the holy grail of AI research: a model which has intelligence across multiple domains, and which as such begins to approach human levels of intelligence across multiple measures.
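For readers who want to check this kind of answer for themselves, a computer algebra system makes it easy.  The sketch below uses Python's SymPy library with a purely illustrative integrand (the paper's actual integral is not reproduced here) to show how a model's claimed antiderivative can be verified by differentiating it back.

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative integrand only - it stands in for whatever problem the models were given
f = x * sp.exp(x) * sp.sin(x)

# Ask SymPy for an antiderivative
F = sp.integrate(f, x)
print(F)

# Verify the answer: differentiating the antiderivative should recover the integrand
print(sp.simplify(sp.diff(F, x) - f))  # prints 0 if F is correct
```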

An intelligence explosion?

Nick Bostrom’s Superintelligence (published in 2014) discusses the ideas behind the development of intelligent systems, and in it he argues that we could see explosive growth, perhaps even over days or weeks, once a system reaches a critical level of intelligence and begins to drive its own further development.  Let’s look at the maths behind this.  We start by modelling the rate of growth of intelligence over time:

\[ \frac{dI}{dt} = D \times R \]

where I is the intelligence of the system, D is the optimisation power applied to improving it, and R is the system's responsiveness to that optimisation.

Optimisation power is a measure of how much resource is being allocated to improving the intelligence of the AI.  The resources driving this improvement will come from the company working on the project (in this case OpenAI) and also from global research into AI, in the form of published peer-reviewed papers on neural networks and related topics.  However, there is also the potential for the AI itself to work on improving its own intelligence.  We can therefore say that the optimisation power is given by:

\[ D = D_{\text{company}} + D_{\text{world}} + D_{\text{AI}} \]

Whilst the AI system is still undeveloped and unable to contribute meaningfully to its own intelligence improvements, we will have:

\[ D_{\text{AI}} \approx 0 \]

If we assume that the company provides a constant investment in optimising its AI, and similarly that there is a constant investment worldwide, then we can treat the optimisation power as a constant:

\[ D = D_{\text{company}} + D_{\text{world}} = \text{constant} \]

Responsiveness to optimisation describes how easily a system can be improved upon.  For example, a highly responsive system can be improved with minimal resource power, whereas a system which shows very little improvement despite a large investment of resource power has low responsiveness.

If we also assume that the responsiveness to optimisation, R, remains constant over some timeframe, then we can write:

\[ \frac{dI}{dt} = DR \]

We can then integrate this by separating the variables:

\[ \int dI = \int DR \, dt \quad \Rightarrow \quad I(t) = DRt + c \]

This means that the intelligence of the system grows in a linear fashion over time.

However, when the AI system reaches a certain threshold of intelligence, it will become the main resource driving its own intelligence improvements, contributing far more than the company or the world.  At this point we can say:

\[ D \approx D_{\text{AI}} = I \]

In other words, the optimisation power is now a function of the AI’s current level of intelligence.  This creates a completely different growth trajectory:

\[ \frac{dI}{dt} = IR \]

Which we again solve as follows:

\[ \int \frac{1}{I} \, dI = \int R \, dt \quad \Rightarrow \quad \ln I = Rt + c \quad \Rightarrow \quad I(t) = Ae^{Rt} \]

This means that the intelligence of the system now grows exponentially over time.
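Since R is treated as constant, exponential growth here has a fixed doubling time that depends only on R.  Under that assumption:

\[ I(t+T) = 2I(t) \quad \Leftrightarrow \quad e^{RT} = 2 \quad \Leftrightarrow \quad T = \frac{\ln 2}{R} \]

So if, purely for illustration, R happened to make ln 2 / R equal to one week, the system's intelligence would double every week.  Nothing in the model itself pins down the value of R.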

What does this mean in practice in terms of AI development?

[Graph: intelligence of the AI system plotted against time, showing a linear phase (red), an exponential phase (purple) beginning at point A, and a levelling-off towards maximum capacity after point B]

We can see above an example of how we might expect such intelligence development to look.  The first section (red) is marked by linear growth: altering R or D changes the gradient of the line, but the growth is not explosive.

At point A the AI system gains sufficient intelligence to become the main driving force behind its own future intelligence gains.  Note that this does not mean it has to be above the level of human intelligence when this happens (though it may be), simply that in the narrow task of improving intelligence it is now superior to the combined efforts of the company and the world’s researchers.

So, at point A the exponential growth phase begins (purple), in this diagram taking the AI system explosively past human intelligence levels.  Then, at some unspecified point in the future (B on the diagram), the exponential growth ends and the AI approaches the maximum intelligence capacity of the system.
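The sketch below simulates this trajectory numerically.  It is only a toy model: the parameter values are invented, and the levelling-off after B is represented with a logistic-style cap, which is just one possible way of modelling a maximum capacity.

```python
# A rough simulation of the three-phase growth model described above.
# All parameter values are illustrative assumptions, not estimates.
D = 1.0         # constant external optimisation power (company + world)
R = 0.05        # responsiveness to optimisation
A = 10.0        # intelligence at which the AI takes over its own improvement
I_MAX = 1000.0  # assumed maximum intelligence capacity of the system
I = 1.0         # starting intelligence
dt = 0.1        # time step

history = [I]
for _ in range(3000):
    if I < A:
        dI_dt = D * R                    # linear phase: dI/dt = DR
    else:
        dI_dt = R * I * (1 - I / I_MAX)  # self-improvement phase, capped at I_MAX
    I += dI_dt * dt
    history.append(I)

# The curve is linear up to A, roughly exponential just after A,
# and levels off as it approaches I_MAX (the region beyond B in the diagram).
print(f"final intelligence: {history[-1]:.1f}")
```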

So it is possible that there will be an intelligence explosion once an AI system gets close to human levels of intelligence, and based on current trends this may well be within reach in the next 5 years.  So hold on tight, things could get very interesting!


Are You Living in a Computer Simulation?

This idea might be familiar to fans of The Matrix – and at first glance may seem somewhat unbelievable.  However, Oxford University Professor Nick Bostrom makes an interesting case using both conditional probability and logic as to why it’s more likely than you might think.

The summary of Bostrom’s Computer Simulation argument is the following:

At least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

The full paper where he makes his argument is available as a pdf here and is well worth a read.  Alternatively, Bostrom makes the same case in a detailed video interview.

Taking the argument step by step: firstly, when Bostrom talks about a “posthuman” stage he is referring to an advanced civilisation with significantly greater technological capabilities than we have at present.  Such a civilisation would have the ability to run a computer simulation so accurate that it would be indistinguishable from “real life”.

This is a twist on the traditional “Brain in a Vat” thought experiment, much loved by philosophers when trying to argue about whether we can be sure that anything exists outside our own subjective experience.

Based on the same logic, we have no way of genuinely knowing whether we are really “here” or whether we are nothing but a computer model designed to give the impression that we really exist.  Interestingly, the possibility that our individual life, the world around us and indeed everything we know about the universe may be false means that we can never truly claim to have knowledge of anything.

I think that most optimists would agree that civilisation has the potential to develop into a “posthuman” phase of advanced technology.  Indeed, you only need to look at the phenomenal growth in computer power (see Moore’s Law) to have confidence that, should we stick around long enough, we will eventually have the computational power needed to run such simulations.
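As a rough illustration of that growth (using the classic Moore's Law doubling period of roughly two years, which is an observation about past trends rather than a guarantee): over a century, available computing power would grow by a factor of about

\[ 2^{100/2} = 2^{50} \approx 1.1 \times 10^{15}. \]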

So if we optimistically accept that humans will reach a “posthuman” stage, then it’s even easier to accept the second proposition: that if an advanced civilisation is able to run such simulations, it will do so.  After all, human nature is such that we tend to do things simply because we can, and in any case running ancestor simulations would potentially be very useful for real-world modelling.

If we accept both of these premises, then this leads to the argument that we are almost certainly living in a computer simulation.  Why?  Well, an advanced civilisation with the computational power to run ancestor simulations would likely run a very large number of them, and if there is only one real world, then any given experience of a world is far more likely to be one of these simulations than the single real history.
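To make the counting explicit (assuming, purely for illustration, that each simulated history contains roughly as many observers as the real one): if a posthuman civilisation runs N ancestor simulations alongside the one real history, then

\[ P(\text{simulated}) = \frac{N}{N+1}, \qquad \text{e.g. } N = 10^{6} \;\Rightarrow\; P \approx 0.999999. \]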

As a ToK topic this is a fantastic introduction to epistemological questions about the limits of knowledge and questions of existence, and is a really good example of the power of logic and mathematics to reveal possibilities about the world outside our usual bounds of thinking.

If you enjoyed this post you might also like:

Imagining the 4th Dimension – How mathematics can help us explore the notion that there may be more than 3 spatial dimensions.

Is Maths Invented or Discovered? – A discussion about some of the basic philosophical questions that arise in mathematics.
