
Weaponized AI is coming. Are algorithmic forever wars our future?

The Guardian · 10/12/2018 · Ben Tarnoff

Last month marked the 17th anniversary of 9/11. With it came a new milestone: we’ve been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where we’re officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of 2018.

The wars of 9/11 continue, with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

On Friday, the defense department closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (Jedi). Jedi is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as $10bn over 10 years, which is why big tech companies are fighting hard to win it. (Not Google, however, where a pressure campaign by workers forced management to drop out of the running.)

At first glance, Jedi might look like just another IT modernization project. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With some 3.4 million users and 4 million devices, the defense department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.

But the real force driving Jedi is the desire to weaponize AI – what the defense department has begun calling “algorithmic warfare”. By pooling the military’s data into a modern cloud platform, and using the machine-learning services that such platforms provide to analyze that data, Jedi will help the Pentagon realize its AI ambitions.

The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7bn. And in September, the Defense Advanced Research Projects Agency (Darpa), the Pentagon’s storied R&D wing, announced it would be investing up to $2bn over the next five years into AI weapons research.


So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons – Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario, and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.

But AI has already begun rewiring warfare, even if it hasn’t (yet) taken the form of literal Terminators. There are less cinematic but equally scary ways to weaponize AI. You don’t need algorithms pulling the trigger for algorithms to play an extremely dangerous role.

To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself isn’t particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.

The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy. But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?

This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries – a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of Marine legend Smedley Butler, the forever war is one of the longest cons yet.

But the vagueness of the enemy also creates certain challenges. It’s one thing to look at a map of North Vietnam and pick places to bomb. It’s quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI – or, more precisely, machine learning – comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.

The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.

The US military operates Predator drones against the Taliban from Kandahar, southern Afghanistan. The pilotless aircraft, flown remotely from Las Vegas, carry an infrared camera, a still camera and two missiles. © Veronique de Viguerie/Getty Images

“We have analysts looking at full-motion video, staring at screens 6, 7, 8, 9, 10, 11 hours at a time,” says the project director, Lt Gen Jack Shanahan. Maven’s software automates that work, then relays its discoveries to a human. So far, it’s been a big success: the software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software on to the drones themselves, so they can locate targets in real time.
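It is worth being concrete about what software of this kind does. The sketch below, in Python, shows the general shape of the technique: an off-the-shelf object detector is run over each video frame, and anything it labels as a person or vehicle above a confidence threshold is flagged for a human analyst. The model, classes of interest and threshold here are illustrative assumptions for the sake of the example, not details of Maven's actual system.

```python
# Minimal sketch of frame-by-frame object detection for human review.
# Illustrative only: a generic pretrained detector stands in for whatever
# classified models and training data Maven actually uses.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Load an off-the-shelf detector trained on the public COCO dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

CLASSES_OF_INTEREST = {1: "person", 3: "car", 8: "truck"}  # subset of COCO labels
THRESHOLD = 0.8  # confidence cutoff (assumed)

def flag_frame(frame_rgb):
    """Return detections in one video frame worth surfacing to an analyst."""
    with torch.no_grad():
        output = model([to_tensor(frame_rgb)])[0]
    hits = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        name = CLASSES_OF_INTEREST.get(int(label))
        if name and score >= THRESHOLD:
            # Record what was seen, how confident the model is, and where.
            hits.append((name, float(score), box.tolist()))
    return hits
```

Whatever Maven's actual models look like, the basic pipeline – detector, confidence threshold, queue of flagged detections for a human to review – is the standard pattern, and it is the part the Pentagon wants to push onto the drones themselves.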

Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven. Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: sharper knives can help the military save lives.

In the case of weaponized AI, however, the knives in question aren’t particularly sharp. There is no shortage of horror stories of what happens when human judgment is outsourced to faulty or prejudiced algorithms – algorithms that can’t recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?

But the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. In 2017 alone, US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these don’t suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage”. Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.


Further, the line between civilian and combatant is highly porous in the era of the forever war. A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action”, even if they weren’t one of the targets. The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures” – which can be as vague as being a military-aged male in a particular area.

The problem isn’t the quality of the tools, in other words, but the institution wielding them. And AI will only make that institution more brutal. The forever war demands that the US see enemies everywhere. AI promises to find those enemies faster – even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a (classified) machine-learning model associates with hostile activity. Call it death by big data.

AI also has the potential to make the forever war more permanent, by giving some of the country’s largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military. But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires like Jeff Bezos a powerful incentive to ensure the forever war lasts forever. Enemies will be found. Money will be made.

