By SYDNEY J. FREEDBERG JR.
M1 Abrams tank

Russell, however, has bigger things to worry about — or rather, much, much smaller things.
Soldier with handheld quadcopter

So what Russell really worries about is not robotic tanks — though he’d definitely prefer a world without them — but what happens when the technology is developed and the precedent is set.
“Given the cost of a new M1A2 around $9 million…there are far cheaper ways to flatten a city and/or kill all of its inhabitants,” Russell told me. “The problem with full autonomy is that it creates cheap, scalable weapons of mass destruction.”
It’s already possible to build assassin drones by combining off-the-shelf quadcopters, small amounts of homebrewed explosive, and the kind of facial-recognition technology Facebook uses to tag other people’s bad pictures of you.
“My UAV colleagues tell me they could build a weapon that could go into a building, find an individual, and kill them as a class project,” Russell said. “Skydio plus self-driving cars plus AlphaStar more or less covers it.” (Skydio is a drone you can buy on Amazon; AlphaStar is a DeepMind AI that beats humans at complex strategy games like StarCraft.) In fact, he said, Switzerland’s defense department, DDPS, “made some to see if they would work — and they do.”
Not only would they work, they’ve already been tried. ISIS has already used mini-drones as “flying IEDs,” and someone attempted to assassinate Venezuelan president Nicolás Maduro with a pair of exploding drones.
A quadcopter that slipped through security to land on the White House lawn
Small Drones, Big Kills
Now what happens when you scale this up? Russell and fellow activists actually produced a video, Slaughterbots, in which swarms of mini-drones attack, among other groups, every member of Congress from a particular party. But that’s still thinking small.
Remember, once you’ve written the software, you can make infinite copies; lone cranks can make explosives; and mini-drones are getting cheaper by the day. Remember also that the Chinese government has personal information on some 22.1 million federal employees, contractors, and their family members from the Office of Personnel Management breach two years ago. Now imagine one out of every thousand shipping containers imported from China is actually full of mini-drones programmed to go to those addresses and explode in the face of the first person to leave the house. Imagine they do this the day before China invades Taiwan. How effectively would the US government react?
A quadcopter drone destroyed by Rafael’s “Drone Dome” laser system.

Such a tactic might only work once, much like hijacking airliners with box cutters on 9/11. “Small drones are vulnerable to jamming, to high-powered microwaves, to other drones that might intercept them, to nets,” said Paul Scharre, an Army Ranger turned think tank analyst. “Bullets work pretty well… I have a buddy who shot a drone out of the sky back in Iraq in 2005.” (Unfortunately, the drone was American.) At least some object-recognition algorithms can be tricked by carefully applied reflective tape.
“People are working on countermeasures today,” Scharre told me, “and the bigger the threat becomes, the more people have an incentive to invest in countermeasures.”
But how do you stop tiny drones from becoming a big threat in the first place? While technology to build a “working prototype” already exists, Russell told me, the barrier is mass production.
No national spy agency or international monitoring regime can find and stop everyone trying to make small numbers of drones. But, Russell argues fervently, a treaty banning “lethal autonomous weapons systems” would prevent countries and companies from openly producing swarms of them, and a robust inspection mechanism — perhaps modeled on the Organisation for the Prohibition of Chemical Weapons — could detect covert attempts at mass production.
Without a ban, Russell said, legal mass production could make lethal swarms as easy to obtain as, say, assault rifles — except, of course, one person can’t aim and fire thousands of rifles at once. Thousands of drones? Sure.
So don’t fear robots who rebel against their human masters. Fear robots in the hands of the wrong human.
Would a ban on lethal AI actually work? Would the United States actually want it to work? That’s the question we’ll address in the fourth and final story in this series, out Monday.