
Artificial Intelligence May Kill Us All in 30 Years

U.S. News & World Report | 1/11/2015 | Jeff Nesbit

Artificial intelligence, conceptual computer artwork. © Andrzej Wojcicki/Science Pho

The human race could vanish in the blink of an eye within our lifetimes. Or we could just as plausibly see our species become immortal by the middle of the 21st century. That's the promise, and the threat, posed by the accelerating pace of artificial intelligence research. It may be an either/or proposition.

Most people have heard about Ray Kurzweil's immensely hopeful view, captured in his books and lectures about the "singularity." In that view, AI progresses in helpful leaps, benefiting humankind at nearly every step of the way.

A good example of the early benefits of AI that nearly everyone uses now is Google Maps. The next stage of AI will be a supercomputer that recreates human intelligence. And the final evolution is artificial super-intelligence (ASI) that learns so quickly that it literally "soars" past ordinary human intelligence and solves every problem confronting mankind.

But there is a dark, threatening side to the AI story, and it is only now being discussed publicly. Physicist Stephen Hawking has said that the development of ASI "could spell the end of the human race."

Microsoft co-founder Bill Gates says he doesn't "understand why some people are not concerned" that an artificial super-intelligence by mid-century might save (or destroy) human civilization.

Billionaire entrepreneur Elon Musk fears that we are "summoning the demon" in our race to create an artificial super-intelligence.

What all of them agree on is that we may very well approach a "tripwire" sometime in the next 30 years, where a powerful supercomputer finally replicates the human brain and mind, and crosses over nearly instantly into super-intelligence.

And then what happens next is anyone's guess.

"While most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to…species immortality," writes Tim Urban, the author of the popular "Wait But Why" blog.

Right now, drones use AI to navigate very complicated landscapes in order to deliver bombs in battlefield conditions. But they're still piloted remotely by human beings.

Should a stealth bomber be developed that can fly itself (not unlike Google's self-driving cars, which likewise use AI) and then decide where to drop bombs in battlefield conditions without human input, it would put AI, not humans, in control.

Kurzweil believes we will hit this tripwire by 2045. Most of his scientific colleagues believe it is inevitable that we will hit it at some point in the 21st century. Many of them are fearful of what happens when we cross it.

But why are they all so afraid of ASI? It's a good question – one that hasn't truly been explored all that much beyond a few boardrooms.

Much of what the public knows about the potential risks posed by AI applications comes either from science fiction movies such as "The Terminator," or from fears surrounding autonomous weapons capable of selecting targets without human control. These are very real fears.

The truth is that AI is poised to do significant, irreparable harm right now, not just at some point in the future through the creation of a non-human super-intelligence, scientists have warned. AI combined with autonomous weapons could launch an era of indiscriminate killing the likes of which civilization has never seen before.

There have been two revolutions in warfare. With each revolution, humankind made a quantum leap in the ability to kill exponentially more people on the battlefield from a distance. We are on the cusp of the third revolution, engineered by AI. This one, though, may erase its inventor.

For centuries, if you wanted to kill someone, you had to do it at close range. Gunpowder gave us the ability to fire projectiles at enemies from a distance, and changed the concept of war for good. Soldiers could kill their enemy without seeing the result at close range.

Nuclear weapons created the second revolution in warfare. While few nuclear weapons have been put to use, their invention taught us that we could create very large weapons, launch them from an even greater distance, and kill many people on the battlefield all at once. War hasn't been the same since then.

But it is the third revolution in warfare – autonomous weapons that can largely think for themselves and target enemies on the battlefield without human intervention – that we should all be worried about. Once such weapons are created, there may be no turning back.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group," Musk, Hawking and others wrote in an open letter in July 2015. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

World leaders, to date, have ignored the scientists on the threats AI poses to our very existence, much as they've ignored (or moved slowly on) other existential threats like climate change and nuclear proliferation.

But AI is different. Once a super-intelligent, big-data-crunching AI machine learns how to think and learn for itself, it may decide that carbon life forms are the obvious target in any threat scenario. At that point, it won't care what world leaders think.
