Military Embedded Systems

Human/AI collaborations improve via Army Research Laboratory program

News

January 15, 2018

Mariana Iriarte

Technology Editor


ORLANDO, Fla. U.S. Army Research Laboratory (ARL) scientists have completed the Autonomy Research Pilot Initiative (ARPI), developing ways to improve collaboration between humans and artificially intelligent agents.

Under ARPI, ARL senior research psychologist Dr. Jessie Chen and her colleagues developed the Situation awareness-based Agent Transparency (SAT) model and measured its effect on human-agent team performance in a series of human factors studies.

The SAT model defines what information an agent must convey to its human collaborator for the human to maintain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with basic information about its current state, goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints and affordances it considers when planning its actions. At the third SAT level, the agent provides the operator with its projection of future states, predicted consequences, likelihood of success or failure, and any uncertainty associated with those projections.
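
As an illustration of the model's structure, the three levels can be read as progressively richer information payloads flowing from agent to operator. The Python sketch below is a minimal, hypothetical data model of those payloads; the class and field names are assumptions for clarity, not ARL's implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model of the three SAT levels; names are illustrative,
# not taken from the ARL studies.

@dataclass
class SATLevel1:
    """Level 1: the agent's current state, goals, intentions, and plans."""
    current_state: str
    goals: list[str]
    intentions: list[str]
    plan: list[str]

@dataclass
class SATLevel2:
    """Level 2: the agent's reasoning, plus the constraints and
    affordances it weighed while planning."""
    reasoning: str
    constraints: list[str]
    affordances: list[str]

@dataclass
class SATLevel3:
    """Level 3: projected future states, predicted consequences,
    likelihood of success, and uncertainty around those projections."""
    projected_states: list[str]
    predicted_consequences: list[str]
    success_likelihood: float   # e.g., 0.8 means an 80% chance of success
    uncertainty: float          # e.g., width of a confidence interval

@dataclass
class TransparencyReport:
    """What the agent shares grows with the transparency level in effect:
    level 2 adds reasoning, level 3 adds projections."""
    level1: SATLevel1
    level2: Optional[SATLevel2] = None  # populated at SAT level 2 and above
    level3: Optional[SATLevel3] = None  # populated at SAT level 3
```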

The SAT model addresses the six barriers to human trust in autonomous systems identified in 2016 by the U.S. Defense Science Board, with 'low observability, predictability, directability and auditability' as well as 'low mutual understanding of common goals' among the key issues.

In one ARPI project, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL's experimental effort examined how agent transparency levels based on the SAT model affected human operators' decision making in military scenarios.

The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human's decision making and thus overall human-agent team performance. More specifically, researchers said the human's trust in the agent was significantly better calibrated (accepting the agent's plan when it was correct and rejecting it when it was incorrect) when the agent operated at a higher level of transparency.
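
To make the notion of calibration concrete, here is a minimal sketch of one way such a score could be computed, assuming a simple per-trial agreement metric; this is a hypothetical illustration, not the measure reported in the ARL experiments.

```python
def trust_calibration_score(accepted: list[bool], correct: list[bool]) -> float:
    """Fraction of trials on which the operator's decision matched plan
    quality: accepting a correct plan or rejecting an incorrect one both
    count as well-calibrated trust.

    accepted[i] -- True if the operator accepted the agent's plan on trial i
    correct[i]  -- True if that plan was in fact correct

    A simple agreement rate, used purely for illustration.
    """
    matched = sum(a == c for a, c in zip(accepted, correct))
    return matched / len(accepted)

# Four trials: the operator accepts the three correct plans and rejects
# the one incorrect plan, so trust is perfectly calibrated.
print(trust_calibration_score([True, True, True, False],
                              [True, True, True, False]))  # -> 1.0
```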

The other agent-transparency project Chen and her colleagues performed under ARPI was Autonomous Squad Member (ASM), on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts and communicates with an infantry squad.

The ASM's user interface features an 'at a glance' transparency module, where user-tested iconographic representations of the agent's plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. Photo: Dr. Jessie Y. Chen

Under the ASM program, Chen's group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM's user interface features a transparency module where user-tested iconographic representations of the agent's plans, motivator, and projected outcomes are used to promote transparent interaction with the agent.
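
As a toy illustration of what such a module condenses, the sketch below assembles a one-line summary from the agent's plan, motivator, and projected outcome. The bracketed text labels merely stand in for the ASM's actual user-tested icons, and all names here are hypothetical.

```python
# Hypothetical "at a glance" summary line built from the agent's plan,
# motivator, and projected outcome; an illustrative stand-in for the
# ASM interface's iconographic representations.

def render_at_a_glance(plan: str, motivator: str, outcome: str,
                       success_likelihood: float) -> str:
    """Condense the agent's plan, motivator, and projected outcome into
    a single line an operator can scan quickly."""
    return (f"[PLAN] {plan} | [WHY] {motivator} | "
            f"[OUTCOME] {outcome} ({success_likelihood:.0%} likely)")

print(render_at_a_glance("clear route B", "minimize squad exposure",
                         "reach rally point by 1400", 0.85))
# [PLAN] clear route B | [WHY] minimize squad exposure |
# [OUTCOME] reach rally point by 1400 (85% likely)
```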

A series of human factors studies on the ASM's user interface investigated the effects of agent transparency on the human teammate's situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project's findings, demonstrated the positive effects of agent transparency on the human's task performance without increasing perceived workload. Research participants also reported that they perceived the ASM as more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

"Bidirectional transparency, although conceptually straightforward—human and agent being mutually transparent about their reasoning process—can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent's planning and performance—just as agent transparency can support the human's situation awareness and task performance, which we have demonstrated in our studies," Chen hypothesized.

The challenge is to design user interfaces, which can include visual, auditory, and other modalities, that support bidirectional transparency dynamically and in real time without overwhelming the human with information.

This work was supported by the U.S. Department of Defense Autonomy Research Pilot Initiative with Dr. DH Kim as Program Manager.
