It’s not just doomsayers or “Terminator” fans who think there’s a chance technology will take over the world and bring about the end of humanity. The University of Cambridge is opening a centre that will study artificial intelligence as a potential threat to humans.
Some of you might remember “Terminator”, with its striking, apocalyptic vision of a world in which robots take over and seek to end all human life on Earth. Since the first “Terminator” came out, technology has evolved to such an extent that smartphones run much of our lives, Facebook and Twitter shape our social lives, and our cars, like most of our homes, are controlled by computers. With that in mind, suggesting that artificial intelligence could one day take over the world no longer makes you sound like a crazy person.
Cambridge University has decided to put that idea to the test by founding a centre for “terminator studies”. Some of the world’s top scientists will try to solve this puzzle before the worst unfolds, and thus save humanity from extinction — though without all the action of “Terminator”, and certainly without time travel (because we can’t do that, yet).
Cambridge’s artificial intelligence centre is co-launched by Lord Rees, a renowned astronomer and cosmologist who wrote a book in 2003 arguing that humanity’s destructiveness could wipe it out by 2100. The scientists behind the Centre for the Study of Existential Risk warn that an ultra-intelligent machine is no longer confined to science fiction books and Terminator movies.
The Centre for the Study of Existential Risk will address the technology that “might pose ‘extinction-level’ risks to our species”, “from developments in bio and nanotechnology to extreme climate change and even artificial intelligence”.
“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous,” warns Huw Price, Bertrand Russell Professor of Philosophy and one of the founders of Cambridge’s artificial intelligence study centre.
“I don’t mean that we can predict this with certainty — no one is presently in a position to do that — but that’s the point. With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies,” he added.