Team investigates artificial intelligence, machine learning in DARPA project

CAMBRIDGE, Mass. -- A multidisciplinary team led by Charles River Analytics has received a contract to participate in a Defense Advanced Research Projects Agency (DARPA) program on Explainable Artificial Intelligence (XAI). The four-year broad agency announcement contract is valued at close to $8 million.

In DARPA's XAI program, the agency is funding multiple teams to make artificial intelligence more explainable through the interdependent development of machine learning techniques and human-computer interaction designs.

According to information from DARPA, although continued advances in artificial intelligence (AI) promise to produce autonomous systems that will perceive, learn, decide, and act on their own, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is investigating the concept that XAI -- especially explainable machine learning -- will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
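To make the idea of an "explainable" decision concrete, consider a toy sketch (not DARPA's or CAMEL's actual approach, and all names and weights below are hypothetical): for a simple linear classifier, a prediction can be decomposed into additive per-feature contributions, giving a human-readable reason for the output -- exactly the kind of transparency that deep, opaque models currently lack.

```python
def explain_linear(weights, bias, features):
    """Return a linear model's score and each feature's additive contribution.

    Because the score is just bias + sum(w_i * x_i), every feature's share of
    the decision can be reported directly -- a built-in explanation.
    """
    contributions = {name: w * features[name] for name, w in weights.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical example: a classifier flagging a vehicle as a possible threat.
weights = {"speed": 0.8, "heading_toward_base": 1.5, "transponder_on": -2.0}
features = {"speed": 1.0, "heading_toward_base": 1.0, "transponder_on": 0.0}

score, contributions = explain_linear(weights, bias=-0.5, features=features)

# The largest positive contribution identifies the main reason for the
# decision, which can be shown to the human operator.
top_reason = max(contributions, key=contributions.get)
```

Modern deep networks offer no such decomposition out of the box; programs like XAI aim to recover comparable justifications for far more complex models.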

Charles River’s team for the Causal Models to Explain Learning effort -- known as CAMEL -- includes members from Brown University (Providence, Rhode Island), the University of Massachusetts at Amherst, and Roth Cognitive Engineering (Stanford, California).

Dr. Brian Ruttenberg, senior scientist at Charles River Analytics, says of the XAI program: "Using the tools and techniques developed under CAMEL, we hope to crack the opaqueness of these effective [machine learning] systems and give the user the means to understand why a machine learning system reached a particular conclusion. This will ultimately provide the justification, rationale, and verification to use these sophisticated machine learning systems in complex and adverse environments.”
