The case for banning killer robots

stances rather than an outright ban and stigmatization of the weapon systems. Do not make decisions based on unfounded fears: remove pathos and hype and focus on the real technical, legal, ethical, and moral implications. In the future, autonomous robots may be able to outperform humans from an ethical perspective under battlefield conditions, for numerous reasons:

˲ Their ability to act conservatively: they do not need to protect themselves in cases of low certainty of target identification.
˲ The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.
˲ They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.
˲ They can avoid the human psychological problem of "scenario fulfillment," a factor that contributed to the downing of an Iranian airliner by the USS Vincennes in 1988.
˲ They can integrate more information from more sources far faster than a human possibly could in real time before responding with lethal force.
˲ When working in a team of combined human soldiers and autonomous systems, they have the potential to independently and objectively monitor ethical behavior on the battlefield by all parties and to report any infractions observed.

LAWS should not be considered an end-all military solution; far from it. Their use must be restricted to narrowly limited circumstances. Current thinking recommends:

˲ Specialized missions only, where bounded morality applies [1], for example room clearing, countersniper operations, or perimeter protection in the DMZ.
˲ High-intensity interstate warfare,