Hautlieu Maths student Charlie Yau explains how Prof. Nira Chamberlain is working to stop artificial intelligence from getting out of control. A link to the talk “How Will AI Change the World?” can be found at the bottom of the article.
On Tuesday 2nd July 2024, Hautlieu welcomed the entertaining and esteemed mathematical modeller Prof. Nira Chamberlain to explain how mathematics can be used to stop an AI apocalypse.
Mathematical modelling involves taking problems from the real world, describing and solving them using mathematics, and then applying the solutions back in the real world.
In recent history there has been an explosion in AI capability, ranging from Deep Blue’s 1997 victory over then-World Chess Champion Garry Kasparov to the generative language model GPT, which took the world by storm in 2022. In his talk, Prof. Chamberlain demonstrated how the Gambler’s Ruin problem can be used to simulate and model the probability of an AI takeover in a business environment.
An AI takeover is defined here as the point at which AI significantly exceeds human capabilities. In the simulations where no precautions were taken against the AI, takeover occurred 80% of the time!
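The article doesn’t reproduce Prof. Chamberlain’s actual model, but a minimal Monte Carlo sketch of the Gambler’s Ruin framing gives a feel for it. All the numbers below are illustrative assumptions, not the parameters from the talk: with a fair coin and the AI side starting with 80 of the 100 resource units, the textbook Gambler’s Ruin result says the AI wins everything with probability 80/100, which happens to match the 80% figure quoted.

```python
import random

def gamblers_ruin(ai_capital=80, human_capital=20, p_ai_wins=0.5):
    """One Gambler's Ruin game between an AI business and a human
    business: each round, one unit of resource moves to whichever
    side wins the round. Returns True if the AI ends up holding
    everything (an 'AI takeover'). All parameters are illustrative."""
    while ai_capital > 0 and human_capital > 0:
        if random.random() < p_ai_wins:
            ai_capital, human_capital = ai_capital + 1, human_capital - 1
        else:
            ai_capital, human_capital = ai_capital - 1, human_capital + 1
    return human_capital == 0

# Estimate P(AI takeover) by repeating the game many times.
runs = 10_000
estimate = sum(gamblers_ruin() for _ in range(runs)) / runs
print(f"Estimated P(AI takeover): {estimate:.2f}")  # ~0.80 with these inputs
```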
Clearly this is less than ideal, so Prof. Chamberlain set out to determine how to minimise the probability of an AI takeover. To combat the AI businesses, he devised an AI algorithm that used a Robin Hood-esque method of taking resources from AI businesses and giving them to human businesses; or, to put it succinctly, an anti-AI AI. When implemented, the probability of AI takeover dropped from 80% to 3%, a massive improvement!
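Again, the talk’s actual intervention isn’t spelled out here, but one hedged way to sketch the “anti-AI AI” is to bolt a redistribution step onto the same game: after each round, with some probability (the hypothetical tax_rate below), one unit flows from the AI business back to the human business.

```python
import random

def gamblers_ruin_with_robin_hood(ai_capital=80, human_capital=20,
                                  tax_rate=0.1):
    """Fair-coin Gambler's Ruin plus an 'anti-AI AI': after each
    round, with probability tax_rate, one unit is redistributed from
    the AI business back to the human business. All parameters are
    illustrative assumptions, not the model from the talk."""
    while ai_capital > 0 and human_capital > 0:
        if random.random() < 0.5:  # AI business wins this round
            ai_capital, human_capital = ai_capital + 1, human_capital - 1
        else:                      # human business wins this round
            ai_capital, human_capital = ai_capital - 1, human_capital + 1
        # Robin Hood step: occasionally claw a unit back from the AI,
        # provided the game is still running.
        if ai_capital > 0 and human_capital > 0 and random.random() < tax_rate:
            ai_capital, human_capital = ai_capital - 1, human_capital + 1
    return human_capital == 0      # True = AI takeover

runs = 10_000
estimate = sum(gamblers_ruin_with_robin_hood() for _ in range(runs)) / runs
print(f"Estimated P(takeover) with intervention: {estimate:.3f}")
```

With these made-up numbers the walk now drifts against the AI side, and the estimated takeover probability collapses from 80% to a few percent, the same order of magnitude as the 3% figure from the talk.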
So, while the prospect of an AI apocalypse might be unsettling, you can be sure that there are always some very clever people working on preventing one. And finally, in Prof. Chamberlain’s own words, a thought experiment for all the current and future AI scientists out there: would you build a high-performance car with no brakes?
What do you think? Is AI something which will benefit humanity, or is it a grave existential threat?