Musk, Hawking, Chomsky: Why they want a ban on killer robots.

Leading researchers in robotics and artificial intelligence signed an open letter, published Monday, calling for a preemptive ban on autonomous offensive weapons.

By Jessica Mendoza, Staff writer

A global arms race for killer robots? Bad idea.

That's according to more than 1,000 leading artificial intelligence (AI) and robotics researchers, who have together signed an open letter, published Monday, from the nonprofit Future of Life Institute.

The letter calls for a ban on autonomous offensive weapons as a means of preventing just such an arms race, and represents the latest word in the global conversation about the risks and benefits of AI weaponry.

Proponents of robotic weapons, such as the Pentagon, say that such technology could increase drone precision, keep troops out of harm's way, and reduce emotional and irrational decisionmaking on the battlefield, the Monitor's Pete Spotts reported last month.

Critics, however, warn that taking humans out of the equation could lead to human rights violations and complications under international laws governing combat, Mr. Spotts wrote.

The current letter leans toward the latter view:

"We therefore believe that a military AI arms race would not be beneficial for humanity," the letter states.

Among the signatories are renowned physicist Stephen Hawking, Tesla Motors Chief Executive Officer Elon Musk, cognitive scientist Noam Chomsky, and Apple co-founder Steve Wozniak, as well as top AI and robotics experts from the Massachusetts Institute of Technology, Harvard University, Microsoft, and Google.

Dr. Hawking in particular summoned images of the Terminator wreaking havoc on humans when he told the BBC in a 2014 interview, "The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Others are less dire in their pronouncements.

"We're not anti-robotics and not even anti-autonomy," Stephen Goose, one of the signatories and director of arms-control activities at Human Rights Watch, told the Monitor. "We just say that you have to draw a line when you no longer have meaningful human control over the key combat decisions of targeting and attacking."

The problem is defining "meaningful human control," an idea that is "intuitively appealing even if the concept is not precisely defined," according to the United Nations Institute for Disarmament Research.

To further complicate the issue, others point out that a preemptive ban, such as the one the open letter advocates, could close the door on developing AI technology that could save lives.

"It sounds counterintuitive, but technology clearly can do better than human beings in many cases," Ronald Arkin, an associate dean at the Georgia Institute of Technology in Atlanta whose research focuses on robotics and interactive computing, told the Monitor. "If we are willing to turn over some of our decisionmaking to these machines, as we have been in the past, we may actually get better outcomes."

One thing most experts do agree on is that further debate is critical to determining the future of AI in warfare.

"Further discussion and dialogue is needed on autonomy and human control in weapon systems to better understand these issues and what principles should guide the development of future weapon systems that might incorporate increased autonomy," wrote Michael Horowitz and Paul Scharre, both from the Center for a New American Security.