NPS Challenges Students to Consider the Ethics of Unmanned Systems

Dr. Heather M. Roff, left, and freelance journalist Joshua Foust, right, passionately argue over the ethical use of lethal autonomous robots during a debate on unmanned combat systems at the Naval Postgraduate School, Sept. 24.

A recent, often impassioned, debate at the Naval Postgraduate School (NPS) on the ethics of robotic combat systems offered the public a window into NPS’ attempts to challenge its student body to explore both sides of cutting-edge, defense-focused problems.

NPS Department of Defense Analysis Assistant Professor Bradley J. Strawser moderated a debate between Visiting Associate Professor Heather M. Roff of the University of Denver and freelance journalist Joshua Foust.

The debaters sought to answer the question, “Does the future of unmanned and autonomous weapons pose greater potential ethical dangers or potential ethical rewards?”

Roff, whose writings have been critical of unmanned combat systems, is the author of “Killing in War: Responsibility, Liability and Lethal Autonomous Robots,” which was featured in the “Routledge Handbook of Ethics and War.”

Foust’s work has appeared in The Atlantic, The New York Times and Foreign Policy, among other publications, and he is a frequent guest on BBC World News. He is also the author of the article “A Liberal Case for Drones.” He takes a more favorable view of unmanned systems and has argued that, under limited conditions, they are an ethical, even preferable, alternative to boots on the ground.

Strawser is himself an authority on the ethics of unmanned systems. He came to NPS last year after working with Oxford University’s Institute for Ethics, Law and Armed Conflict. His work, “Killing by Remote Control: The Ethics of an Unmanned Military,” explores the potential ethical pitfalls and gains that unmanned systems pose.

While the debaters were cordial and shared some common ground, they were passionate about their respective positions.

Foust argued in favor of the development of unmanned autonomous systems, contending that “machines are quick, better at processing large amounts of data instantly,” and are therefore superior in some respects to human service members. Machines are imperfect, he acknowledged, but they can be more effective than their human counterparts.

“Humans are deeply flawed moral actors in war,” Foust said. “Machines respond to criteria and input, they lack emotional choices” and the presumed negative effects of those emotions.

“If we can make a machine that is capable of decision making as well as a human being, why shouldn’t we?” asked Foust. “The idea behind autonomous weapons is that if you can somehow minimize some of those bad decisions that humans make in warfare, then you have a net good.”

He further argued that many believe, “almost religiously,” in the reliability of human agency to make lethal decisions during combat despite examples to the contrary.

“When we are talking about autonomous machines and humans, we are not talking about humans as perfect or even necessarily moral actors. That’s an important aspect of this discussion, because people make really, really bad decisions,” Foust said.

“Machines respond to specific criteria and input, they do not have the same kind of emotional failings,” said Foust. “Emotions cut both ways, they consist of both compassion and hatred, mercy and vengeance. If you can take out the desire to kill someone that just killed someone, you have the potential to make combat better. You can make it less deadly, more precise and less detrimental to civilian populations.”

Roff countered that it is precisely the inability of unmanned autonomous systems to draw on human emotion that makes unmanned systems generally, and Lethal Autonomous Robots (LARs) specifically, a poor combat option.

“We are focusing on the vices, but we should be looking at the virtues. What about when a soldier shows empathy or mercy? Taking the emotion out of combat is not necessarily a good thing,” said Roff. “You can’t mimic human judgment.”

Freelance journalist Joshua Foust, left, and University of Denver Visiting Associate Professor Heather M. Roff, right, joined NPS Department of Defense Analysis Assistant Professor Bradley J. Strawser for a unique debate on the ethics of unmanned combat systems, Sept. 24.

A central theme throughout the debate was the issue of liability. Debaters exchanged verbal volleys over the ability to hold operators, commanders and even the manufacturers of unmanned systems liable in the event that a system performs contrary to its intended purpose.

“What would the machine be guilty of, algorithmic manslaughter?” quipped Roff.

Roff further contended that while a human chain of command can be held accountable for its actions, “LARs break the chain of command. The machine itself becomes the commander, and that is what we are morally opposed to.”

Still, Foust contended that it would be in the interest of the U.S. and foreign militaries to do the due diligence to ensure that any autonomous system deployed actually works as well as or better than a man or a woman on the ground, thereby reducing the need to focus on liability.

“No military commander would want to deploy a system that is underperforming, unpredictable and leads to uncertain results. It doesn’t happen. Ultimately, militaries are rational actors, and if something cannot match human performance then there is very little reason to go through the expense of developing a system that will not be used,” Foust said.

Roff wasn’t buying it.

“I am skeptical about the reality of a government taking the time to do due diligence to make a machine that works,” said Roff. She pointed to several historical examples to prove her point. In her estimation, the pushing through of flawed systems like the original M16 assault rifle is indicative of a pattern of governments rushing to get weapons onto the battlefield despite their inefficacy.

The debate was largely future-focused, but Foust was quick to point out that there are already combat systems deployed in Afghanistan making lethal decisions without human input.

“When we look at defensive weapons, they fire largely without human input. Counter-mortar systems identify mortars that have been fired, determine where they were fired from, and fire upon and kill people faster than a human being could possibly react. We do not look down upon this because they [counter-mortar systems] are defensive, but they are used daily and kill people daily. So in a way, the cat is already out of the bag,” said Foust.

“We tell ourselves that because we use these weapons in a limited set of circumstances that it is okay, but that in another set of circumstances it’s not okay, and I think that is a little bit too simplistic,” he continued.

Roff countered by making a distinction between automated and autonomous systems. Much of her argument focused on whether machines can autonomously make complex moral judgments in accordance with International Human Rights Law (IHRL) and International Humanitarian Law (IHL), and on whom society would hold liable when these bodies of law are violated.

“If we grant that LARs will never have full moral agency, then we have two conclusions. The first is that LARs cannot actually violate a right and thus cannot uphold a right. That is because violating a right is an intentional wronging of someone … your toaster cannot behave morally, it cannot do you wrong,” said Roff.

Given the complexity and nuances of IHL, Roff made the case that human judgment, flawed though it may be, is superior to what could be accomplished by a machine. In order for a LAR to be effective in combat, it would have to “be able to take all the context dependent and nuanced facets of International Human Rights Law and convert them to a series of ones and zeroes.”
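Her “ones and zeroes” point can be made concrete. The following sketch is purely hypothetical (every name, field and threshold is invented for illustration, and it describes no real targeting system), but it shows how principles such as distinction and proportionality flatten into boolean flags and a numeric cutoff once they must be expressed in code, which is the loss of context-dependent judgment Roff was describing.

```python
# Hypothetical sketch of "converting IHL to ones and zeroes."
# All names, fields and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Contact:
    """A sensed entity, reduced to the few fields a machine can score."""
    carrying_weapon: bool          # crude stand-in for the principle of distinction
    direct_participant: bool       # "directly participating in hostilities"
    expected_civilian_harm: float  # stand-in for proportionality (0.0 to 1.0)
    military_advantage: float      # expected military advantage (0.0 to 1.0)

def may_engage(c: Contact, proportionality_threshold: float = 0.5) -> bool:
    """A brittle boolean reduction of distinction and proportionality.

    Each rule replaces a judgment that, in practice, turns on context a
    sensor cannot capture: surrender, duress, mistaken identity, mercy.
    """
    if not (c.carrying_weapon and c.direct_participant):
        return False  # "distinction," reduced to two bits
    # "proportionality," reduced to a ratio of two estimated numbers
    return c.expected_civilian_harm <= c.military_advantage * proportionality_threshold

# A farmer walking home with a rifle may set the same two bits as a combatant:
print(may_engage(Contact(True, True, 0.1, 0.9)))   # True  (engage)
print(may_engage(Contact(True, False, 0.1, 0.9)))  # False (hold fire)
```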

At times the debate spilled over into areas outside the purview of military operations.

“These are not just military questions, they have moral implications that go beyond military use like bio-ethics, medical ethics, autonomous cars,” said Strawser.

The debate was attended by NPS students as part of a Warfare Innovation Workshop co-sponsored by the Navy Warfare Development Command’s Chair of Warfare Innovation and NPS’ Consortium for Robotics and Unmanned Systems (CRUSER), reflecting NPS’ continuing commitment to providing an intellectually diverse academic environment to naval officers, Department of Defense civilians and allied partners.
