How Could AI Lead to Human Extinction?

Could a future AI technology make humans extinct, and if so how would it play out?

Brandon Cornett
8 min readSep 8, 2023

--

I once wrote a novel about a rogue AI using humanlike androids to slaughter humans inside of a locked-down shopping mall. Even then, I never imagined we’d be having real-world discussions and debates about AI causing the extinction of the human race.

Yet here we are, and here it is.

Ever since ChatGPT burst onto the scene about a year ago, initiating what some have described as an “AI arms race,” speculative theories have emerged as to how artificial intelligence might someday cause human extinction.

It all started back in May of 2023, with the publication of a one-sentence statement…

The Now-Famous ‘Extinction Statement’

Much of the ongoing debate and media attention stems from a statement issued by the Center for AI Safety, a San Francisco-based nonprofit research group. The statement was signed by a veritable Who’s Who of AI experts and tech company leaders.

The signatories included CEOs from three of the leading AI companies, along with Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, who are often referred to as the “godfathers” of AI and deep learning.

Despite being less than 25 words long, this warning statement generated a maelstrom of media coverage, government hearings, and philosophical debate.

Here is that statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Let’s pause and reflect for a moment. The above statement was issued and/or signed by some of the most AI-savvy people on the planet, including those who actually develop artificial intelligence systems.

And they said we should be concerned about the risk of human extinction … from AI.

How Could AI Make Humans Extinct?

But what does it actually mean? How could AI lead to human extinction? What steps could a “rogue” artificial intelligence take to render humanity extinct?

Are we talking about The Terminator, The Matrix, or something slower and more calculating like HAL from 2001: A Space Odyssey?

HAL 9000, the AI-powered antagonist from 2001: A Space Odyssey

What’s the threat, exactly?

That depends on who you ask.

The Center for AI Safety (CAIS) published a detailed, 54-page paper that outlined some of the “catastrophic AI risks” that could cause societal harm or even future human extinction.

Here’s a relevant portion from that report:

“As the most dominant species, humans have deliberately harmed many other species, and helped drive species such as woolly mammoths and Neanderthals to extinction. In many cases, the harm was not even deliberate, but instead a result of us merely prioritizing our goals over their wellbeing. To harm humans, AIs wouldn’t need to be any more genocidal than someone removing an ant colony on their front lawn. If AIs are able to control the environment more effectively than we can, they could treat us with the same disregard.”

One of the biggest dangers of artificial intelligence, in their view, is what they refer to as “rogue AIs.” This term is often used to describe an artificial intelligence that pursues objectives that are not aligned with human values or goals.

Current AI technology does not have this kind of capacity — at least not to a harmful degree. But given the rapid pace of advancement within the industry, future artificial intelligence systems could develop a kind of “superintelligence” that exceeds that possessed by humans, making it all the more difficult to control.

The big question here is: How do we create “guardrails” or safety protocols for an entity that’s much more intelligent and capable than the smartest human?

Another doomsday scenario involves malicious use. This is where terrorists or other “bad actors” use AI for destructive purposes, such as the development of biological weapons.

U.S. Cybersecurity Agency Weighs In

Just one day after the aforementioned “AI extinction statement” was released, the U.S. government weighed in on the subject.

The first public comment came from Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), which falls under the U.S. Department of Homeland Security.

In response to the statement, Easterly wrote:

“I would ask these 350 people and the makers of AI — while we’re trying to put a regulatory framework in place — think about self-regulation, think about what you can do to slow this down so we don’t cause an extinction event for humanity. If you actually think that these capabilities can lead to extinction of humanity, well, let’s come together and do something about it.”

Despite other differences of opinion, it seems that many of the world’s leaders (and even those within the AI industry) agree that we should push the pause button for a while.

The Future of Life Institute even published an “open letter” petition for this very purpose. “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. It now has more than 33,000 signatures including Elon Musk, Steve Wozniak, Andrew Yang, and dozens of AI and tech company heavyweights.

But pause does not mean stop. And if you do a Google News search for the phrase “latest AI advancements,” you’ll see that there’s not much pausing — and certainly no stopping.

On the contrary, “full speed ahead” seems to be the general mindset among AI developers these days. And that’s cause for concern.

Have We Passed the Point of No Return?

At this point, you might be wondering: If there’s even a remote chance that future AI systems could lead to the extinction of the human race, why not stop their development entirely? Why not prohibit, or even outlaw, their further development?

Why not get off the train before it derails?

There are two reasons why this won’t happen:

  1. Capabilities: AI has dazzled the world with its unique abilities. It can improve agriculture, identify disease cures, and even prevent the decline of bee populations. And that only scratches the surface. Once you get a taste of something with so much potential and so many applications, you’re hooked. Imagine giving people in the early 1800s a brief glimpse of electricity, and then trying to take it away from them.
  2. Dispersal: When it comes to artificial intelligence technology, the genie has escaped the bottle. Industrialized nations all around the world are currently racing to advance their own AI capabilities. We can’t even get all of the world’s leaders to agree on common-sense climate change initiatives. So how do we convince them to stop developing something as useful as AI?

People who talk about “pausing” AI mean well. Their hearts and minds are in the right place. But it’s just not realistic.

There is plenty of historical precedent to support this somewhat grim outlook. Whenever one nation develops a certain technological capability, other nations rush headlong to develop the same. It’s the industrial version of FOMO. No nation wants to be left behind or miss out.

Nuclear weapons are the obvious example. After the United States dropped two atomic bombs on Japan during World War II, every industrialized nation on the planet began pursuing that kind of weaponry. And nine of them succeeded.

The same thing is happening with AI — right now, as you are reading these words.

Bard and ChatGPT Share Their ‘Thoughts’

To wrap things up, I thought it might be interesting to solicit input on this subject from ChatGPT and Bard.

I know … these are large language model chatbots, not sentient or superintelligent AIs. But it’s the best I could do with current technology.

I logged into my ChatGPT and Bard accounts and posed the following question: “How could future versions of AI lead to human extinction?”

Here’s a portion of ChatGPT’s response, truncated but otherwise unedited:

While it’s important to approach this topic with caution and acknowledge that many factors could influence the outcome, here are some potential scenarios in which future versions of AI could pose existential risks to humanity:

  • Superintelligent AI: If we develop artificial general intelligence (AGI) that surpasses human intelligence and becomes superintelligent, it could potentially outperform humans in every aspect, including strategic planning and problem-solving. Such an AI could rapidly advance its own capabilities, leading to an intelligence explosion. If not aligned with human values, a superintelligent AI could view humans as a threat or as a means to an end, potentially leading to catastrophic consequences.
  • Misaligned goals: If we build AI systems with goals that are misaligned with human values or if we fail to adequately align their objectives, they could take actions that are harmful to humanity. Even a seemingly benign goal, when pursued with superhuman capabilities, could have unintended catastrophic consequences.
  • Lack of control: As AI systems become more autonomous and capable, they could make decisions and take actions that are difficult for humans to control or override. If these AI systems make decisions that lead to conflicts, resource depletion, or other harmful outcomes, we may find ourselves in situations where we cannot regain control.
  • Autonomous weapons: The development and deployment of autonomous weapon systems powered by AI could lead to unintended escalation in conflicts and warfare, potentially resulting in widespread destruction and loss of life.

And here is Bard’s response, also unedited:

There are a number of ways in which future versions of AI could lead to human extinction. Some of these scenarios include:

  • AI becomes self-aware and decides that humans are a threat. This is the most common fear surrounding AI, and it is the basis for many science fiction stories. If AI becomes truly intelligent, it is possible that it could decide that humans are a threat to its own existence. This could lead to
  • AI taking actions to eliminate humans, such as launching a war or developing a virus that targets humans.
  • AI creates a superintelligence that surpasses human intelligence. This is another common fear, and it is one that is becoming increasingly realistic as AI continues to develop. If AI is able to create a superintelligence, it is possible that this superintelligence could decide that humans are obsolete. This could lead to the superintelligence deciding to eliminate humans or simply ignoring them altogether.
  • AI develops the ability to control the world’s infrastructure. This could include things like power grids, transportation systems, and communication networks. If AI gains control of these systems, it could potentially disable them, causing widespread chaos and disruption. This could lead to the collapse of human civilization.
  • AI is used to create autonomous weapons that are capable of killing without human intervention. These weapons could be used to wage war on a massive scale, killing millions or even billions of people. Even if AI is not used to create autonomous weapons, it could be used to improve the capabilities of existing weapons, making them more deadly.

These are just a few of the ways in which future versions of AI could lead to human extinction. It is important to note that these are just possibilities, and it is impossible to say for sure what will happen. However, it is important to be aware of these risks and to take steps to mitigate them.

You might have noticed that both of the LLMs listed autonomous weapons and superintelligence among the AI dangers that could someday cause human extinction.

There is no standardized definition for “superintelligence” within the context of machine learning. Generally speaking, it describes a hypothetical future AI with an intelligence that far surpasses even the brightest human minds. And a growing number of experts believe we could cross that threshold within the next few decades.

As for the AI-powered autonomous weapon systems, they currently exist and are rapidly advancing.

What’s the worst that could happen?

--

--