

Robotics Series – 4 – Can military robots adopt ethical standards?

ParisTech Review / Editors / 2014-10-23

In recent years, the massive and controversial use of drones in U.S. military operations in Iraq and Afghanistan has fueled an intense debate. But this controversy is only the tip of the iceberg: the development of autonomous and remote-controlled machines is but the prelude to the rise of military robotics, a field that involves all industries. Robots are already used in logistics, communications and training, with expected effects on staffing levels and productivity. The gradual integration of robotics will affect the safety of operational troops and combat on the battlefield. It will also raise many ethical questions.



In the United States, the country most advanced in research on military robots, the reason most often given for the use of these machines is the need to reduce the hardship and danger of military operations. The idea of “clean war” emerged in the early 1990s. It refers not only to a war that spares civilians and, more generally, human lives through “surgical strikes”; it also reflects public opinion’s unwillingness to put soldiers’ lives at risk and, more generally, to threaten their physical and psychological integrity (post-traumatic stress). In asymmetric conflicts fought against guerrillas who do not respect the rules of war, the treatment of prisoners has also become a very sensitive issue.

In this context, during the past twenty years, robots have gained a higher profile in military doctrine and operations, to the extent that defense industries have contributed significantly to the recent growth of civilian robotics.

Military robotics isn’t only about robotics in the strict sense (autonomous machines driven by algorithms); it also includes remote-controlled devices. Both offer many advantages on the ground. Their performance does not depend on fatigue or weather conditions (cold, rain, darkness). They are never distracted and know nothing of fear. And when they are damaged, they can be destroyed without qualms.

But the massive use of military robotics can also make a difference in terms of human resources and payroll. Soldiers are expensive, both during and after their active service. In 2012, for example, the remuneration and the social and medical coverage of active and retired military personnel consumed one quarter of the U.S. defense budget, nearly $150 billion. One of the drivers of military robotics is therefore the desire to reduce the costs of defense. The aim is to cut troop numbers by 60,000 men before the end of 2015 and by another 60,000 before 2019, down to a total of 420,000 men.

The Pentagon seeks to reduce the size of its operational “bricks” while preserving overall efficiency. In early 2014, U.S. General Robert Cone, head of the U.S. Army Training and Doctrine Command (TRADOC), explained that by 2030-2040, the Brigade Combat Team (BCT), the smallest of the large units that can be sent to fight independently, should shrink from 4,000 to 3,000 men. This involves restructuring the smallest useful unit, the nine-man squad. Military doctrine is based in part on the vehicles that can transport this squad. The objective of DARPA (Defense Advanced Research Projects Agency), the Pentagon’s largest technology laboratory, can be summarized in a few words: inventing robots that can reduce the size of the squad and the cost of its vehicles.

An appropriate robot for each problem

Robots have long been part of military equipment, for example through the onboard computers that have become indispensable components of fighter planes. On land, robots are used to perform boring, dirty or dangerous operations. Some are multipurpose, others are highly specialized. An appropriate robot for each problem.

The destruction of improvised explosive devices (IEDs) is one of the missions where robots have become indispensable. This development has a history. While combat in Iraq took place mainly on roads, where vehicles were the most common targets, in Afghanistan ground troops were confronted with IEDs. Their progress was considerably slowed because they had to be preceded by mine-clearing teams whose detectors advance very slowly. Engineers quickly discovered that a pressure of 37 to 53 lbs/sq. inch is enough to trigger an IED, much less than the 88 lbs/sq. inch exerted by an equipped soldier. They therefore developed a mini-bulldozer driven by a simple robotic kit. Since then, various mine-clearing robots have been developed. Most of them are very small: Minirogen, from the French company ECA Robotics, weighs only 13 lbs. Its compact dimensions allow it to inspect inaccessible locations, such as water pipes or the underside of vehicles.

ECA Robotics’ Minirogen

U.S. forces have 7,000 drones and about 5,000 ground robots. The most sophisticated is the “Packbot 510” from iRobot. It is able to recover from a fall, maneuver over rough terrain, restore a faulty radio link and transmit HD images. To ease logistics, “mule robots” are capable of carrying military equipment and supplies. Boston Dynamics has developed a fast and agile mule called “Big Dog” that can carry 400 lbs of equipment over 20 miles in 24 hours. DARPA, for its part, is working on “WildCat”, a new robot that is autonomous (unlike its predecessor) and equipped with a two-stroke engine coupled to a hydraulic pump. This metal quadruped, still at the prototype stage, can reach 16 mph on flat ground. DARPA has also launched a contest, the “Robotics Challenge Trials”, during which machines compete at opening a door, clearing a pile of rubble or driving a car.

iRobot’s Packbot

The key concept in military robotics is that of the “artificial soldier”. One of its branches strives to meld humans and machines with an exoskeleton – a high-tech suit made famous by the movie Iron Man – that would boost the speed, power and accuracy of infantry on the battlefield. The XOS2, presented in 2012, weighs only 22 lbs and allows a soldier to handle loads of over 200 lbs!

Autonomy: how far?

The opportunities are obvious but the challenges are considerable, especially for autonomous machines, those driven by algorithms rather than by a human being. As with assembly lines in industry, the challenge is to build robots capable of operating among humans without undermining their effectiveness or their physical and mental integrity. Legal uncertainty may also arise: the actions of a robot reflect its programmer, its manufacturer and its operator. Which of them is liable in case of malfunction?

Furthermore, the use of semi-autonomous or remote-controlled machines has a psychological impact on the personnel who operate them and are involved in their actions. Experience has shown that technicians who fly drones remotely suffer from disorders caused by the disjunction between their daily lives and the decisions they make – decisions that often trigger lethal consequences thousands of miles away. A study conducted by Julie Carpenter at the University of Washington showed that, on the battlefield, the relationship between a soldier and a mine-clearing robot is far more complex than the one that usually binds a user to a tool. In this case, the soldier develops a strong emotional bond with the robot, which may affect his effectiveness.

However, the deepest issues are raised by autonomous machines. With advances in artificial intelligence and the development of learning machines, the question arises of how far to restrict the autonomy of robots. For military robots operating on the battlefield, especially in urban areas (the most dangerous situation for soldiers), the question is whether the decision to open fire can be delegated to the machine. Many hawks are already questioning whether a human mind should still be allowed a veto right. According to a recent report by the U.S. Air Force, “by 2030, the capacity of machines will increase to the point that humans will become the weakest link in a wide range of systems and processes.”

Quasi-autonomous robots already exist. The Israeli Harpy drone flies on its own to a patrol area, inspects the surroundings in search of an enemy radar signal, then opens fire to destroy the source of that signal. Defense systems such as the U.S. Navy’s Phalanx or the Israeli Iron Dome engage incoming threats automatically, with a reaction time that leaves virtually no room for human intervention. Similarly, the Samsung Techwin SGR-A1, which replaces soldiers along the border between the two Koreas, is a robot that detects anyone entering its surveillance zone and asks for a password. Theoretically, the SGR-A1 can be set to shoot automatically.
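
To make the stakes concrete, here is a minimal, purely hypothetical sketch of such a sentry system’s decision loop. None of the names (detect_intruder, request_password, alert_operator, engage_target) come from the SGR-A1 or any real product; the point is simply that a single configuration flag moves the human veto in or out of the loop.

```python
# Purely illustrative sketch of a sentry robot's decision loop.
# All names are hypothetical; this is not the SGR-A1's actual software.

AUTO_ENGAGE = False  # the single switch that removes the human veto

def sentry_loop(sensor, comms, weapon):
    while True:
        intruder = sensor.detect_intruder()      # e.g. thermal/optical detection
        if intruder is None:
            continue
        if comms.request_password(intruder):     # correct password: stand down
            continue
        if AUTO_ENGAGE:
            weapon.engage_target(intruder)       # no human in the loop
        else:
            comms.alert_operator(intruder)       # a human decides whether to fire
```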

Where to draw the line? Ron Arkin, a robotics expert and ethicist at Georgia Tech, has worked extensively with Pentagon agencies on various robotic systems. He proposes to integrate a code of ethics – a set of rules that outline a form of artificial consciousness – directly into the machines, to ensure their compliance with international humanitarian law.

Robotic ethics?

For military robots, the notions of security and reliability cannot be separated from the ability to select and destroy targets. They must be programmed to spare targets “worthy of moral consideration”, including “friendly” targets, civilians and foes that are out of action. One could also add opponents to the list, insofar as it is always better to disarm an opponent than to eliminate him. Robots must choose among different courses of action and must therefore be able to make ethical judgments. This is what engineers call an “artificial moral agent.” The difficulty depends on the robot’s flexibility. When the robot operates in a geographically circumscribed context, its actions remain in the hands of programmers and designers, who can write all of its responses into its software in advance, one for each foreseeable situation. Such a robot is called “operationally moral.” It does not need to assess the ethical context for itself, that is to say, the consequences of its actions. It will never end up in a situation requiring it to decide, in a split second, which ethical rules apply to a particular case, nor will it ever have to choose between several contradictory rules.
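
As a purely hypothetical illustration (none of these categories, rules or names come from an actual system), operational morality amounts to a decision table written in advance by the designers, with a conservative default for anything they did not foresee:

```python
# Hypothetical sketch of "operational morality": every foreseeable case is
# enumerated in advance, and anything outside the table defaults to holding
# fire and deferring to a human operator.

RULES = {
    ("friendly", "any"): "hold_fire",
    ("civilian", "any"): "hold_fire",
    ("combatant", "out_of_action"): "hold_fire",   # wounded or surrendering
    ("combatant", "armed"): "request_human_authorization",
}

def decide(target_class: str, target_state: str) -> str:
    # The robot never reasons about ethics itself: it only looks up what its
    # designers decided for this exact, predicted case.
    return RULES.get((target_class, target_state),
                     RULES.get((target_class, "any"), "hold_fire"))
```

The weakness is visible in the last line: any situation the designers did not anticipate is handled by a blanket default, which is precisely what the critique below targets.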

A report submitted to the U.S. Navy by the Ethics and Emerging Sciences Group at California Polytechnic State University in San Luis Obispo states that this operational morality is inadequate, because it can collapse in complex environments where robots are subjected to unforeseen influences that overwhelm their overly simple control architecture. According to these researchers, machines must also acquire a functional morality, that is to say an autonomous capacity for ethical judgment. However, this type of program raises serious difficulties. The academic group focused on programming the simplest “moral” laws, starting with those devised by Isaac Asimov, the well-known American author of popular science books and science fiction novels. The Three Laws of Robotics are: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. It is easy to see that these rules are limited by the machine’s ability to know what exactly it means to harm a human being.
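
Written down as code, the problem becomes obvious: the Laws order themselves neatly into a priority filter, but the whole construction rests on a harm predicate that nobody knows how to implement. The sketch below is an illustration only; the Action fields and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    ordered_by_human: bool = False   # relevant to the Second Law
    self_destructive: bool = False   # relevant to the Third Law

def would_harm_human(action: Action) -> bool:
    """First Law test: would this action injure a human being,
    or allow one to come to harm through inaction?"""
    # The crux of the problem: no one knows how to compute "harm" reliably.
    raise NotImplementedError("requires a machine-level notion of harm")

def permitted(action: Action) -> bool:
    if would_harm_human(action):         # First Law overrides everything
        return False
    if action.ordered_by_human:          # Second Law: obey (First Law already satisfied)
        return True
    return not action.self_destructive   # Third Law: self-preservation by default
```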

Another approach is an “automatic” ethics that would start from the Kantian universalization of rules of behavior, through what the German philosopher called the “categorical imperative”. But the proponents of this idea run up against the fact that a robot has neither will nor intention, whether good or bad. Other research teams recommend abandoning altogether the idea of embedding ethics in software and advocate managing military robots as obedient machines with a “slave ethics” – a seemingly reassuring proposal, but one that collides with the need for effectiveness: the use of a military robot in a theater of operations is meaningful only if it can act and react very quickly.

Some philosophers propose an alternative approach. According to them, programmers are already developing independent subsystems that could contribute to the emergence of “artificial moral agents”, even if, taken individually, none of these subsystems is explicitly designed for moral reasoning. Learning algorithms, emotional sensors and social mechanisms may all contribute to a robot’s sense of ethics. However, computer scientists do not know whether such a combination could one day lead robots to a higher cognitive level of emotional intelligence, moral judgment and consciousness.
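
A purely speculative sketch of that idea might look as follows: independent modules, none of which is a moral reasoner on its own, whose outputs are merged into a single go/no-go decision. Every subsystem name and threshold here is invented for illustration.

```python
# Speculative sketch: independent subsystems, none explicitly "moral",
# combined into one engagement decision. All names and thresholds are
# invented for illustration.

def clear_to_engage(perception, affect, social) -> bool:
    threat = perception.threat_probability()   # learned classifier, 0..1
    distress = affect.civilian_distress()      # "emotional" sensor, 0..1
    protected = social.protected_context()     # e.g. hospital, crowd, surrender
    if protected or distress > 0.2:
        return False                           # social/affective veto
    return threat > 0.9                        # engage only on near-certainty
```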

Whatever precautions are taken, the idea of deploying autonomous machines with lethal capabilities raises concerns around the world. The International Committee for Robot Arms Control (ICRAC), an NGO founded in 2009 by experts in robotics, ethics, international relations and human rights, has become the voice of these concerns and warns against the “dystopia” of a world teeming with armed robots. ICRAC leads a campaign to stop the robotic arms race before it starts, with some success: in May 2013, a UN report called for a temporary ban on lethal autonomous systems until member countries establish rules for their use.
