As military robots gain traction, ethical-use guidelines emerge

Robots can help the military – on land, at sea, in the sky, and in space – and the U.S. Department of Defense is slowly establishing some guidelines for their ethical use.

From massive drone swarms by air or on land to “killer robots” or satellite robots wielding lasers and other weapons, the term “military robots” can sound somewhat alarming. But the reality is that most of the robots used by the U.S. military today are for relatively mundane but highly important tasks.

“Robotics are already being used for logistics and transportation,” says Brad Curran, an aerospace and defense industry analyst for Frost & Sullivan. “Most trucks and trains will become autonomous, essentially robots, directed from a central headquarters and run 24-7 within their own lanes on the highway, which is expected to make the roads a lot safer.”

On battlefields in Afghanistan and Iraq, many U.S. casualties occurred during attacks on convoys. “Fuel, water, and ammunition needed to be transported across large distances – countries the size of Texas,” Curran adds. “There were hostile people and there were many places to be ambushed. Using autonomous vehicles – self-driving trucks – to haul that stuff would be a huge benefit in those situations. It’s too heavy and expensive to go by air, so self-driving convoys of supply trucks would be ideal.”

If you’re out on patrol with robots ahead of and behind you (think Boston Dynamics’ four-legged robot Spot) that can be programmed to follow your route, they can use their acoustic, electro-optical/infrared, and magnetic sensors and video cameras to search for mines or an enemy that may be setting up an ambush.

“The same thing goes within an urban environment for unmanned aerial vehicles (UAVs),” Curran says. “There are so many different windows, corners, and underground areas. But you can fly a UAV over the top and it can send down live video and talk to a robot on the ground and you. Tie in artificial intelligence (AI) and those other Internet of Things (IoT) battlefield sensors, big data analysis, and anomaly change detection, and it can be overlaid on an up-to-date map and all of a sudden a corporal on patrol has an extremely good situational awareness of the battlefield. Then, in the back, in trace of your patrol, you can have another robot carrying your extra water, ammo, or batteries.”
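The overlay step Curran describes – fusing detections from multiple battlefield sensors onto one shared map, with agreement among sensors raising confidence – can be sketched in a few lines. This is purely illustrative; the sensor names and grid cells below are invented, not drawn from any fielded system:

```python
from collections import defaultdict

def overlay_detections(detections):
    """Fuse detections from multiple sensors onto one grid map.

    Each detection is (sensor_id, grid_cell, label). A cell reported
    by two or more independent sensors is flagged as higher confidence.
    """
    cell_map = defaultdict(set)
    for sensor_id, cell, label in detections:
        cell_map[cell].add((sensor_id, label))
    return {cell: ("confirmed" if len(hits) >= 2 else "unconfirmed")
            for cell, hits in cell_map.items()}

# Hypothetical feeds: UAV video and an acoustic sensor agree on one cell.
feeds = [
    ("uav_video", (4, 7), "vehicle"),
    ("acoustic",  (4, 7), "engine"),
    ("magnetic",  (2, 3), "metal"),
]
picture = overlay_detections(feeds)
```

The corroborated cell would be marked “confirmed” on the patrol’s map, while the single-sensor contact stays “unconfirmed” – a simplified stand-in for the anomaly-detection and change-detection analytics the quote mentions.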

Military robots on the ground will require a lot of batteries or some form of energy harvesting to recharge themselves or each other remotely.

As we gear up to go to Mars, the entire robotics arena will benefit from this work: “When we went to the moon we developed all kinds of cool technologies that we have now because of it,” Curran points out. “We’re about to have all kinds of cool things we haven’t even thought of yet – everything from materials to laser communications to exotic propulsion – because we want to go to Mars and be able to survive. None of this is in isolation. If you’re going to build robots in the future it’ll take dozens of technologies working together.”

Robot use by the military is growing

UAVs – also known colloquially as drones – might be the first type of military robot that comes to mind, and “they’re still extremely popular, but the focus has shifted to improving their sen­sors, collaboration, intelligence analysis, and distribution, more so than the platform itself,” Curran says.

At sea, unmanned undersea vehicles and autonomous surface vehicles have been around for a while, and he now expects to see exponential growth in surface and subsurface robot use in the Navy, similar to the explosive growth of UAVs during the past 10 to 15 years.

Progress for military robots on the ground has been slower, primarily because of difficulties with the terrain and navigation. Another big hurdle is that they’re noisy. “Ground troops want the help of ground-based robotic sensors that can be out on the perimeter to improve security, and that can follow and trace, and carry the heavy items like fuel, ammunition, water,” Curran says. “Most are electric and use tracks, which are inherently noisy. That noise is going to draw attention that can get you killed, so finding ways to reduce noise is important and being actively worked on.”


Figure 1 | Complex streamlines generated by the flapping wings of a mosquito in flight: Mosquitoes flap their wings not just to stay aloft but also to generate sound and to point that buzz in the direction of a potential mate, according to Johns Hopkins University researchers. This knowledge may help create quieter drones. Credit: Rajat Mittal/JHU.

Noise is also a problem for UAVs: On this front, researchers at Johns Hopkins University are exploring the sounds of mosquitoes mating, which they say could potentially lead to quieter drones in the sky. (Figure 1.)

Rajat Mittal, a mechanical engineering professor, and Jung-Hee Seo, an assistant research professor in the univer­sity’s Whiting School of Engineering, are studying the aerodynamics and acoustics of mosquito mating rituals via computer modeling. The researchers say that understanding the strategies and adaptations used by insects such as mosquitoes to control their aeroacoustic noise could provide insights into the development of quieter rotors for drones and other bioinspired micro-aerial vehicles.

COTS component use

Most military robots use commercial off-the-shelf (COTS) components today, according to Curran, although some are custom-built. Many military labs and innovative firms like Boston Dynamics are working on some downright cool robots.

Northrop Grumman, for example, recently made progress toward successful heterogeneous unmanned vehicle (UxV) swarming with the test of Rapid Integration Swarm Ecosystem (RISE) during a DARPA field experiment. (Figure 2.)


Figure 2 | Pictured are low-cost ground vehicles included in a recent DARPA field experiment to test swarming. Image courtesy Northrop Grumman.

This experiment leveraged the command, control, and collaboration capabilities of RISE within a mock city environment at Fort Benning in Georgia. The test was part of the OFFensive Swarm-Enabled Tactics (OFFSET) program, with a goal of providing dismounted soldiers with as many as 250 small UxVs.

Swarm technologies “are vital to get expanded situational awareness in a complex environment like the one in this test,” says Vern Boyle, vice president of emerging capabilities for Northrop Grumman. “Applications of autonomous robotics and human-machine teaming for swarming enhance a warfighter’s capacity and speed for information gathering and processing under a variety of conditions.”

RISE uses the Robot Operating System (ROS), an open architecture that enables the use of small, low-cost COTS air and ground vehicles. These vehicles rely on a human-swarm teaming approach that enables swarm commanders to define a high-level mission plan, monitor the mission, and make decisions based on system-sourced information. RISE also allows third-party developers to easily interact with existing platforms, sensors, and effectors through the use of standard ROS interfaces.
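The open-architecture point rests on ROS’s publish/subscribe model: nodes exchange messages over named topics without knowing about each other, which is what lets third-party platforms and sensors plug in. A minimal plain-Python sketch of that pattern (topic names and message contents here are invented for illustration, not taken from RISE or the ROS API):

```python
# Minimal publish/subscribe sketch of the message-bus pattern that an
# open swarm architecture builds on. Publishers and subscribers are
# decoupled: neither needs to know the other exists.
class TopicBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a message to all current subscribers of the topic."""
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
received = []
# A third-party monitoring node subscribes to a (hypothetical) status topic...
bus.subscribe("/swarm/status", received.append)
# ...and any vehicle can publish to it without knowing who is listening.
bus.publish("/swarm/status", {"vehicle": "ugv_12", "battery": 0.81})
```

In ROS itself the bus, message types, and discovery are provided by the middleware; the design benefit is the same – new sensors or effectors join by speaking the topic interface rather than by modifying existing code.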

Ethics concerns

While the use of sensors, AI, and big data for anomaly change detection can help the military make better decisions, bringing automation or AI into military operations introduces serious ethics considerations. International rules regulating the use of AI and autonomous weapons don’t exist yet, but the fundamental principles of the Law of War provide a general guide. It’s an area of concern because China is determined to become the global leader in AI by 2030, and Russia is also focusing heavily on AI.

Earlier this year, the U.S. Department of Defense (DoD) tasked the Defense Innovation Board (DIB) with proposing AI ethics principles for the design, development, and deployment of AI for both combat and noncombat purposes.

In its report, “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” the DIB defines AI as: “A variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task.”

DIB stresses that AI is not the same thing as autonomy: while some autonomous systems may use AI within their software architectures, this isn’t always the case. An example it cites is that DoD Directive 3000.09 addresses autonomy in weapons systems but addresses neither AI as such nor AI capabilities outside weapons systems.

DIB came up with the following five AI ethics principles for DoD systems. “Use of AI systems must be:

  1. Responsible: Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems.
  2. Equitable: DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to persons.
  3. Traceable: DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
  5. Governable: DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.”

Once robots are armed with weapons, “we need to ensure a person is in the loop to make decisions,” Curran points out.
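The human-in-the-loop requirement Curran describes can be expressed as a simple gate: the system may nominate an action, but a lethal step proceeds only on explicit operator authorization. A hedged sketch, with all names and messages invented for illustration:

```python
# Human-in-the-loop gate: an armed system can recommend an engagement,
# but the action only proceeds when a human operator explicitly approves.
def engage(target, operator_approval):
    """Return the action taken; lethal action requires human sign-off."""
    if not operator_approval:
        # Default is always to hold fire and wait for a person.
        return f"hold: awaiting operator decision on {target}"
    return f"engage authorized by operator: {target}"

print(engage("contact-7", operator_approval=False))
```

The design choice worth noting is the default: absent an affirmative human decision, the system holds, which aligns with the DIB’s “Responsible” and “Governable” principles above.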

Humanoid robots on the way?

Are we likely to see humanoid (human-like in form) military robots any time soon? It’s an area still under development, but progress is already being made, most notably by Boston Dynamics.

“Humanoid robots are yet another ethics issue, and our near-peer adversaries might not be as concerned about the morals and ethics involved,” Curran notes.

Securing such a network is also a concern. “Adversaries are going to do their best to hack into that network,” he says. “So there’s a security resiliency and survivability aspect to all of this. On the UAV side, they’re way ahead with swarming tactics and ensuring security. But the Navy is going to be putting out as many sensors as possible, which is important because our adversaries are starting to develop hypersonic missiles, meaning that our ships are in real danger. So robotic systems out there that could target and launch missiles would be helpful. Again, it all circles right back to ethics.”

With hypersonic weapons now entering the mix, we’ll need robotic satellites to detect and potentially shoot them down. “Hypersonics means we’ll need AI and big data, and probably lasers too,” Curran says. “An enormous amount of money and engineering will be required to deal with them.”