
As AI joins battlefield, Pentagon seeks ethicist

Charles Dharapak/AP/File
The Pentagon in Washington announced it is seeking to hire an AI ethicist. As artificial intelligence and machine learning permeate military affairs, these technologies are beginning to play a more direct role in taking lives.

When the chief of the Pentagon’s new Joint Artificial Intelligence Center briefed reporters recently, he made a point of emphasizing the eminently practical – even potentially boring – applications of machine learning to the business of war.

There’s the “predictive maintenance” that AI can bring to Black Hawk helicopters, for example, and “intelligent business automation” likely to lead to exciting boosts in “efficiencies for back office functions,” Lt. Gen. Jack Shanahan said. There are humanitarian pluses, too: AI will help the Defense Department better manage disaster relief.

But for 2020, the JAIC’s “biggest project,” General Shanahan announced, will be what the center has dubbed “AI for maneuver and fires.” In lulling U.S. military parlance, that includes targeting America’s enemies with “accelerated sensor-to-shooter timelines” and “autonomous and swarming systems” of drones – reminders that war does, after all, often involve killing people.

Why We Wrote This

Artificial intelligence is making inroads in the U.S. military, transforming everything from helicopter maintenance to logistics to recruiting. But what happens when AI gets involved in war's grimmest task: taking lives?

When he was asked halfway through the press conference whether there should be “some sort of limitation” on the application of AI for military purposes, General Shanahan perhaps recognized that this was a fitting occasion to mention that the JAIC will also be hiring an AI ethicist to join its team. “We’re thinking deeply about the safe and lawful use of AI,” he said.

As artificial intelligence and machine learning permeate military affairs, these technologies are beginning to play a more direct role in taking lives. The Pentagon’s decision to hire an AI ethicist reflects an acknowledgment that bringing intelligent machines onto the battlefield will raise some very hard questions.

“In every single engagement that I personally participate in with the public,” said General Shanahan, “people want to talk about ethics – which is appropriate.”

A shifting landscape

Hiring an ethicist was not his first impulse, General Shanahan acknowledged. “We wouldn’t have thought about this a year ago, I’ll be honest with you. But it’s at the forefront of my thinking now.”

He wasn’t developing killer robots, after all. “There’s a tendency, a proclivity to jump to a killer robot discussion when you talk AI,” he said. But the landscape has changed. At the time, “these questions [of ethics] really did not rise to the surface every day, because it was really still humans looking at object detection, classification, and tracking. There were no weapons involved in that.”

Given the killing potentially involved in the “AI for maneuver and fires” project, however, “I have never spent the amount of time I’m spending now thinking about things like the ethical employment of artificial intelligence. We do take it very seriously,” he said. “It’s core to what we do in the DOD in any weapon system.”

Pentagon leaders repeatedly emphasize they are committed to keeping “humans in the loop” in any AI mission that involves shooting America’s enemies. Even so, AI technology “is different enough that people are nervous about how far it can go,” General Shanahan said.

While the Pentagon is already bound by international laws of warfare, a JAIC ethicist will confront the thorny issues around “How do we use AI in a way that ensures we continue to act ethically?” says Paul Scharre, director of the technology and national security program at the Center for a New American Security.

It will be the job of the ethicist to ask the tough questions of a military figuring out, as General Shanahan puts it, “what it takes to weave AI into the very fabric of DOD.”

Overseas competition

Doing so will involve mediating some seemingly disparate goals: While most U.S. officials agree that it is important to develop the military鈥檚 AI capabilities with an eye toward safeguarding human and civil rights, these same leaders also tend to be voraciously competitive when it comes to protecting U.S. national security from high-tech adversaries who may not abide by the same ethical standards.

General Shanahan alluded to this tension as a bit of a sore spot: “At its core, we are in a contest for the character of the international order in the digital age.” This character should reflect the values of “free and democratic” societies, he said. “I don’t see China or Russia placing the same kind of emphasis in these areas.”

This gives China “an advantage over the U.S. in speed of adoption [of AI technology],” General Shanahan argued, “because they don’t have the same restrictions – at least nothing that I’ve seen shows that they have those restrictions – that we put on every company, the DOD included, in terms of privacy and civil liberties,” he added. “And what I don’t want to see is a future where our potential adversaries have a fully AI-enabled force – and we do not.”

Having an ethicist might help mediate some of these tensions, depending on how much power they have, says Patrick Lin, a philosophy professor specializing in AI and ethics at California Polytechnic State University in San Luis Obispo. “Say the DOD is super-interested in rolling out facial recognition or targeting ID, but the ethicist raises a red flag and says, ‘No way.’ What happens? Is this person a DOD insider or an outsider? Is this an employee who has to worry about keeping a job, or a contractor who would serve a two-year term then go back to a university?”

In other words, “Will it be an advisory role, or will this person have a veto?” The latter seems unlikely, Professor Lin says. “It’s a lot of power for one person, and ignores the political realities. Even if the JAIC agrees with the AI ethicist that we shouldn’t roll out this [particular AI technology], we’re still governed by temporary political leaders who may have their own agenda. It could be that the president says, ‘Well, do it anyway.’”

An ethics of war

Ethicists will grapple with “Is it OK to create and deploy weapons that can be used in ethically acceptable ways by well-trained and lawyered-up U.S. forces, even if they are likely to be used unethically by many parties around the world?” says Stuart Russell, professor of computer science and a specialist in AI and its relation to humanity at the University of California, Berkeley.

To date, and “to its credit, DOD has imposed very strong internal constraints against the principal ethical pitfalls it faces: developing and deploying lethal autonomous weapons,” Professor Russell adds. Indeed, Pentagon officials argue that beyond the fact that it does not plan to develop “killer robots” that act without human input, AI can decrease the chances of civilian casualties by making the killing of dangerous enemies more precise.

Yet even that accuracy, which some could argue is an unmitigated good in warfare, has the potential to raise some troubling ethical questions, too, Professor Lin says. “You could argue that it’s not clear how a robot would be different from, say, a really accurate gun,” and that a 90% lethality rate is a “big improvement” on human sharpshooters.

The U.S. military experienced a similar precision of fire during the first Gulf War, on what became known as the “highway of death,” which ran from Kuwait to Iraq. Routed and hemmed in by U.S. forces, the retreating Iraqi vehicles – and the people inside them – were hammered by American gunships, the proverbial “shooting fish in a barrel,” Professor Lin says. “You could say, ‘No problem. They’re enemy combatants; it’s fair game.’” But it was “so easy that the optics of it looked super bad and the operation stopped.”

“This starts us down the road to the idea of fair play – it’s not just a hangover from chivalry days. If you fight your enemy with honor and provide some possibility for mercy, it ensures the possibility for reconciliation.” In other words, “we have ethics of war,” Professor Lin says, “in order to lay the groundwork for a lasting peace.”
