Pentagon is worrying about 'Terminator' coming true. Seriously.
Washington
The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn't too far-fetched, one of the nation's top military officers said this week.
Nor is that kind of autonomy the stuff of the distant future. "We're a decade or so away from that capability," said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.
With such a sci-fi prospect looming, top military thinkers and ethicists are beginning to consider the practical consequences. But the more they do, the more it's clear that there is considerable disagreement about just how much freedom to give machines to make their own decisions.
"We have to be very careful that we don't design [autonomous] systems in a way that we can create a situation where those systems actually absolve humans of the decision" about whether or not to use force, General Selva said. "We could get dangerously close to that line, and we owe it to ourselves and to the people we serve to keep that a very bright line."
At the same time, "The notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here," he added in remarks at the Center for Strategic and International Studies Monday. "It's not terribly refined, not terribly good. But it's here."
This leaves top Pentagon officials to confront what they call the "Terminator conundrum," and how to handle it.
The 'Russia' question
The argument begins with an assertion: Adversaries such as Russia and China are going to build these fast-moving, fully autonomous killing systems, so perhaps the Pentagon should design them, too – not to use them, top officials are quick to add, but to know how they work and how to counter them.
After all, they say, policymakers may need options, and it's the job of the Pentagon to give them these options.
To this end, in a highly anticipated report released in August, the Pentagon's Defense Science Board urged the department to "accelerate its exploitation of autonomy" in order to "remain ahead of adversaries who also will exploit its operational benefits."
This leaves opponents of autonomous drone systems wary. Many nongovernmental organizations have called for bans on developing killing machines that leave humans out of the loop.
But what is "meaningful human control"?
That concept hasn't been well defined, says Paul Scharre, director of the 20YY Future of Warfare Initiative at the Center for a New American Security.
What is clear is that humans can't be involved solely in a "push button way," Mr. Scharre says. Rather, the humans supervising these systems must be "cognitively engaged."
Scharre points to two fratricides in 2003, when malfunctioning Patriot missile systems shot down a US Navy F/A-18 fighter jet and a British Tornado over Iraq.
"One of the problems with the fratricides was that people weren't exercising judgment. They were trusting in an automated system, and people weren't monitoring it."
Humans are slow
But the Pentagon's debates get murkier from there. If an adversary were to develop an effective fully automated system, it would likely react much faster than a US system that requires human checks and balances. In that scenario, the human checks could cost lives.
Frank Kendall, the undersecretary of Defense for Acquisition, Technology, and Logistics – essentially, the Pentagon's top weapons buyer – has signaled that he differs from Selva. If people always have oversight of autonomous weapons, he says, that could put the US at a disadvantage.
"Even in a more conventional conflict, we're quite careful about not killing innocent civilians," Mr. Kendall noted at the Army Innovation Summit last month. "I don't expect our adversaries to all behave that way, and the advantage you have if you don't worry about that as much is you make decisions more quickly."
After all, many weapons systems could be easily and effectively automated, including tanks that could sense incoming rounds and take out the source of them, he argued.
"It would take nothing to automate firing back, nothing," Kendall told the audience, according to the online publication Breaking Defense. "Others are going to do it. They are not going to be as constrained as we are, and we're going to have a fundamental disadvantage if we don't."
But global bans on autonomous weapons are also problematic, Selva said. "It's likely there will be violators."
"In spite of the fact that we don't approve of chemical or biological weapons, we know there are entities, both state and nonstate, that continue to pursue that capability," he said.
Moreover, he noted that these questions put him, as a military man, in a difficult position. "My job as a military leader is to witness unspeakable violence on an enemy.... Our job is to defeat the enemy."
The astonishing 'Go' experiment
Removing humans from that decision to inflict violence could prove highly unpredictable in good and bad ways. That was brought out with dramatic effect at an event earlier this year pitting AlphaGo, a machine developed by Google's DeepMind, against a top-level player of the complex game of Go.
The machine, which humans had trained to learn, made a move that astonished Go commentators. "The move that the computer made was so unexpected and counterintuitive that it blew commentators away," notes Scharre. "At first they thought it was a fake, and then it set in, the brilliance of the move."
The point, Scharre says, is that "no matter how much testing is done, we'll always see surprises when machines are placed into real-world environments."
It's particularly true in competitive environments – epitomized by war – where adversaries will try to hack, trick, or manipulate the system, he adds.
"Sometimes the surprises are good," Scharre says, such as the Go move that was calculated by experts to be a "brilliant, beautiful" move that only 1 in 10,000 humans would have made.
In other cases, the surprises are unwelcome, highlighting the unexpected and tragic flaws that led to the Patriot air defense fratricides.
"Better testing and evaluation is good, but it can only take us so far. At a certain point, we will either have to decide to keep a human in the loop, even if only as a fail-safe," he adds, "or we will have to accept the risks that come with deploying these systems."
Weighing the benefits
"In some cases, the benefits of autonomy may outweigh the risks," Scharre says.
Speed of decisionmaking is one example. Innovations such as self-driving cars also mark a "tremendous opportunity in the coming years to save tens of thousands of lives," Scharre says.
The question, experts add, is whether these potential lives saved will outweigh the potential – though likely fewer – lives lost if automated systems go awry.
For his part, Kendall imagines much bigger changes in weapons automation.
"We still send human beings carrying rifles down trails to find the enemy. We still do that. Why?" he wondered aloud. "I don't think we have to do that anymore, but it is an enormous change of mind-set."
"Autonomy is coming," he added. "It's coming at an exponential rate."