Is the AI apocalypse actually coming? What life could look like if robots take over!!!

The year is 2050. The location is London — but not as we know it. GodBot, a robot so intelligent it can out-smart any human, is in charge of the United Kingdom — the entire planet, in fact — and just announced its latest plan to reverse global temperature rises: an international zero-child, zero-reproduction policy, which will see all human females systematically destroyed and replaced with carbon-neutral sex robots.

This chilling scenario is, of course, entirely fictional – though if naysayers are to be believed, it could become a reality within a few decades if we humans don’t act now. Last night, dozens of AI experts — including the heads of ChatGPT creator OpenAI and Google DeepMind — warned that AI could lead to the extinction of humanity and that mitigating its risk should be as much of a global priority as pandemics and nuclear war.

The statement, published on the website of the Centre for AI Safety, is the latest in a series of almost hourly warnings over recent months about the “existential threat” machines pose to humanity, with everyone from historian Yuval Noah Harari to some of the creators of AI itself speaking out about the problems humanity may face, from AI being weaponised to humans becoming dependent on it.

The so-called ‘godfather’ of AI, Dr Geoffrey Hinton, whose work on neural networks and deep learning has paved the way for modern AI, recently quit his job at Google so that he could warn humanity about the dangers of continuing to probe into this technological Pandora’s box. He went as far as to say he regrets some of his work and warned of the potentially “catastrophic” effects the tech could have if governments don’t step in and regulate. “Right now, [robots are] not more intelligent than us, as far as I can tell. But I think they soon may be,” he said on announcing his resignation from Google.

According to one recent survey, half of the AI researchers polled believe there is at least a 10 per cent chance of AI causing human extinction, with many warning that robots could be capable of human-like goals such as attaining high political office, starting new religions or even playing God. Google’s boss Sundar Pichai admits the thought keeps him awake at night. ChatGPT’s creator Sam Altman says he’s a “little bit scared” of the technology. DeepAI founder Kevin Baragona has likened the relationship between humans and AI to “a war between chimps and humans”. And Stuart Russell — one of the world’s leading AI pioneers who has advised Downing Street and the White House — has even likened the recent AI boom to what would happen if the world was to detect an alien civilisation.

“We’ve got the Europeans calling for an emergency global summit. You’ve got China basically banning large language models. We’ve got the White House calling in all the [technology industry] CEOs for an emergency meeting. I mean, it’s sort of what you would imagine might happen if we really did detect an alien civilisation,” he said earlier this month. “The stakes couldn’t be higher: if we don’t control our own civilisation, we have no say in whether we continue to exist.”

Russell and more than 1,000 academics and tech moguls including Elon Musk and Apple co-founder Steve Wozniak have now signed an open letter sounding the alarm on this “out-of-control” AI race, calling for an immediate six-month pause in the development of AI technology before it’s too late. So what actually is the worst-case scenario if governments don’t step in — could the doomsday situation above really become a reality? Is the pace of change really so fast that we could see anything like this in our lifetime? And is all of this just dangerous scaremongering — or could machines actually become so powerful and intelligent that they kill humans off altogether?

Theoretically yes, if you ask most experts and even AI chatbots themselves. Ask the AI image generator Craiyon to draw what the last selfie taken on Earth might look like and it produces nightmarish scenes of zombies and burning cities, while ChatGPT suggests that “a powerful AI system might decide that it no longer needs human oversight or intervention, and begins to act independently. This could lead to a rapid and widespread takeover of all digital systems, including military and industrial infrastructure”.

Such predictions are undoubtedly terrifying and theoretically possible, but that is not to say they are likely to happen — and certainly not in the short term. Hinton is right to worry about the rate of progress, but his warnings about robots wanting more power are “interesting and useful speculation, but remain part of sci-fi,” says Dr Kate Devlin, a reader in Artificial Intelligence & Society at King’s College London.

Yes, AI technology is already mind-bogglingly smart, and yes, the pace of change in recent years has been dizzying. But most academics agree that clickbait headlines about a Terminator-style future, where rampant, self-replicating robot overlords take over the world, are unhelpful scaremongering in the short term: before those scenarios could ever happen, machines would need to become conscious and understand what they’re doing.

Quite how likely the creation of a machine smarter than humans — a prospect known as superintelligence — really is depends on who you speak to. It could, most agree, be apocalyptically bad, and you certainly won’t find many techsperts who aren’t terrified by the prospect of humans becoming subservient to the robots we’ve created. But most also agree that just as terrifying in the meantime are the shorter-term, more pressing dangers of already-existing AI technology falling into the wrong hands — as it already has, by virtue of bots like ChatGPT being made available to everybody.

Meanwhile the Centre for AI Safety offers four potential disaster scenarios in this week’s statement: that AI is used to build chemical weapons; that AI-generated misinformation destabilises society; that AI’s power becomes concentrated in an increasingly small number of hands, enabling “oppressive censorship”; or enfeeblement, where humans become dependent on AI “similar to the scenario portrayed in the film WALL-E”.
