Artificial intelligence can help warfighters on many fronts
Bringing artificial intelligence (AI) or "smarts" to the battlefield is already changing the way battles are fought; this technology will become increasingly critical in the future.
AI makes it possible for machines to learn from experiences, adapt to new data, and perform human-like tasks. Deep learning and natural-language processing techniques are helping to train computers to accomplish specific tasks by processing large amounts of data and recognizing patterns within it. The technology is increasingly seen as helpful to the warfighter.
As Vladimir Putin starkly summed it up last fall during a speech to students: “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader within this sphere will become the ruler of the world.”
The U.S. Department of Defense (DoD) Project Maven initiative is on target to bring computer algorithms and AI to war zones by the end of 2018 to help military analysts extract objects of interest from the massive amounts of video and still imagery that government platforms collect. (Figure 1.)
Project Maven uses computer vision, which is a combination of machine and deep learning (neural networks), to autonomously find objects of interest. This field is, as you can imagine, not without controversy: Google is one of the contractors for Project Maven and, although the company says it’s doing object recognition “for non-offensive uses only,” many of its employees are questioning this and asking the company to commit publicly to not assisting with warfare technology, according to a letter from Google employees to management published by the New York Times.
One of the reasons for concern within this realm is that AI carries with it myriad ethical and moral issues – think autonomous offensive weapons – and so far there are no agreed-upon rules or standards for nonmilitary or military uses of AI.
What are AI algorithms?
An algorithm is a specific set of software instructions that performs a specific function – essentially the brains of AI.
“If you don’t have the algorithms correct when you’re looking for signals, measuring for signals, or trying to pull out salient features, then you basically have a paperweight,” says John Hogan, director of the FAST Labs Sensor Processing and Exploitation Product Line for BAE Systems. “Our machine learning and AI is based on how the brain works, and we actually have cognitive neuroscientists working on it. They copy the way the brain works in these algorithms.”
BAE Systems doesn’t merely go to the web and download an off-the-shelf cognitive neural-network package, but rather builds its own theories and algorithms based on how the brain works and the types of signals it’s working with.
How are neural networks different in terms of amount of code involved than other algorithms? “The lines of code really depend on the complexity of the problem,” Hogan explains. “You may have an algorithm that adds two numbers, and it’s one line of code. But an algorithm that searches for patterns inside data could be thousands of lines.”
While there’s “no rule of thumb for an algorithm,” Hogan says that server farms make the work easier because “you don’t have to worry so much about writing resource-constrained code to try to pack everything into a small space in computer memory.
“But one challenge is that you usually measure on one platform and send it to a processing center, and then you get an answer sent back. The problem is that signals are so fleeting that you need to have that processing on the platform as an embedded system or you’ll completely miss it.”
Full spectrum of AI
BAE Systems plans to apply AI and machine learning across the entire enterprise in all of its systems and services, according to Hogan. To this end, the company established a Machine Learning Center of Excellence in 2016. “We’re the lead for the R&D there,” he says, “and we work on both fundamental machine learning and theory. It’s not just applied machine learning; we’re actually developing new theory on machine learning as well as applied research on specific problems.”
What sorts of machine-learning techniques is BAE Systems using? “The full spectrum,” Hogan says. This includes “deep learning, which most people have heard of, one-shot learning, transfer learning, and sequence learning. One-shot learning is a method to use if you only see an object once and need to recognize it again. And an example of transfer learning is, say you have an object like a tank that’s a tracked vehicle, and then you see another tracked vehicle but it doesn’t quite look like a tank … you can use the things you learned on a tank to help it learn this new object. For sequence learning, an adversary does things in a certain order and we have measurable phenomena like frequency or images. So we can learn the sequence that the adversary uses and use it to predict future events. This is called ‘predictive analytics.’ If we can learn a sequence of events and measure that these events are actually occurring, then we can predict a point in the future when another event will occur.”
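The sequence-learning idea Hogan describes – learn the order in which an adversary's measurable events occur, then predict the next one – can be sketched with a toy first-order model that simply counts event transitions. The class name and the event labels below are hypothetical illustrations, not BAE Systems' actual algorithms:

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Toy first-order sequence learner: counts which event follows
    which, then predicts the most likely next event. Illustrative of
    the 'predictive analytics' idea only."""

    def __init__(self):
        # transitions[a][b] = number of times event b followed event a
        self.transitions = defaultdict(Counter)

    def learn(self, events):
        for prev, nxt in zip(events, events[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, event):
        counts = self.transitions.get(event)
        if not counts:
            return None  # never seen this event before
        return counts.most_common(1)[0][0]

# Hypothetical observed adversary pattern: radar sweep, comms burst, launch
p = SequencePredictor()
p.learn(["radar", "comms", "launch", "radar", "comms", "launch", "radar"])
```

A real system would measure phenomena such as frequency or imagery rather than clean symbolic labels, but the principle – learn the transition statistics, then forecast the next event – is the same.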
The researchers can also build up “patterns of life,” such as what traffic looks like during certain times of the day, but this can also be done on radio signals and traffic – or anything else you can measure. “A lot of the problems we see come from what we call discovery,” Hogan says. “Someone will come to us and say ‘I’ve collected all of these signals, and I think there’s something happening there but I don’t know what.’ Our tools are agnostic to the kind of data that you can measure, so they can be used across a multitude of data. Machine-learning analytics – based on the collection of data – will cluster the data and automatically search for patterns. That’s considered to be an ‘unlabeled pattern’ within the data, and we can work with an analyst who maybe studies certain problems over time and they can say ‘I know what this pattern is,’ and label it so that the next time we see this pattern we can put a label to it. We do this across many military applications.”
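At its simplest, the "unlabeled pattern" discovery Hogan describes is unsupervised clustering: group the measurements first, let an analyst attach labels afterward. A minimal 1-D k-means sketch (illustrative only – real analytics would use far richer features than a single frequency value) shows the idea on a handful of unlabeled signal measurements:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: partitions unlabeled measurements into k
    clusters. A stand-in for the clustering step in machine-learning
    analytics, not any specific BAE Systems tool."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # pick k initial centers from the data
    for _ in range(iters):
        # Assign each value to its nearest center
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        # Recompute each center as the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical signal frequencies (GHz) with no labels attached
values = [2.40, 2.41, 2.39, 5.79, 5.81, 5.80]
centers, groups = kmeans_1d(values)
```

The two recovered cluster centers are the unlabeled patterns; an analyst who recognizes them (say, as two distinct emitter types) can attach labels so future measurements are tagged automatically.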
Flipping the cost curve
How can all of this change the future of warfare? “In the past, when a system needed multiple functionalities, you’d simply add more hardware and software to increase its capability,” Hogan says. “Now, we want the ability to flip the cost curve. If we send a fifth-generation aircraft in and adversaries send a $1 million missile to shoot it down, it’s very expensive for the U.S. If we send in a low-cost unmanned vehicle and they fire a $1 million missile at it, we’ve flipped the cost curve because it’s more expensive for the adversary to remove that asset than it is for us to send another one.”
Whenever you add elements to increase capability, you add size, weight, and cost to the system. “This doesn’t align with the cost curve,” Hogan points out. “To change how systems are made today, we want one that will respond to changes in very dynamic environments. So we have both software and hardware architectures that will be software defined – meaning that depending on data measured from the environment I can automatically reconfigure my system to have a different functionality. We’re working on a system now that will automatically reconfigure and optimize itself based on the updated data environment. We call this machine-learning-driven execution (MLDE). If a system is measuring a part of the radio spectrum and a new signal pops up, it would automatically reconfigure and optimize itself to measure this new signal.”
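Hogan's radio-spectrum example can be sketched as a simple self-reconfiguring receiver: when a signal appears outside the currently configured band, the system widens its own configuration to cover it. Everything here (class name, band logic) is a hypothetical illustration of the software-defined idea, not BAE Systems' MLDE implementation:

```python
class SoftwareDefinedReceiver:
    """Hypothetical sketch of a self-reconfiguring system: a new
    out-of-band signal triggers an automatic configuration change."""

    def __init__(self, band=(2.0, 3.0)):
        self.band = band  # GHz range currently being measured

    def observe(self, freq_ghz):
        lo, hi = self.band
        if lo <= freq_ghz <= hi:
            return "measured"
        # New signal outside the configured band: reconfigure to cover it
        self.band = (min(lo, freq_ghz), max(hi, freq_ghz))
        return "reconfigured"

rx = SoftwareDefinedReceiver()
status1 = rx.observe(2.4)  # falls inside the configured band
status2 = rx.observe(5.8)  # outside: the receiver widens its own band
```

A fielded system would reconfigure hardware resources (filters, samplers, processing chains) rather than a pair of numbers, but the control flow – measure, detect a change in the environment, reconfigure automatically – is the point of the sketch.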
AI can help soldiers learn faster during warfare
One of the many ways AI can assist soldiers during battle is by helping them learn or process information at a much faster rate. A new AI technology developed by the U.S. Army Research Laboratory enables soldiers to learn 13 times faster than conventional methods, researchers say. It helps them decipher clues or hints of information more quickly so that they can rapidly deploy solutions, whether the situation is recognizing threats in the form of vehicle-borne improvised explosive devices (IEDs) or spotting potential danger zones in aerial war-zone images.
“The past decade has seen a reinvention of AI and an explosion in AI-driven data science research,” says Rajgopal Kannan, a researcher for Computing Architectures Branch, CISD, Army Research Lab-West (Los Angeles), as well as an adjunct research professor in the Ming-Shieh Department of Electrical Engineering at the University of Southern California. “So ARL decided to focus on AIML [AI/machine learning] as the foundational technology that can provide maximum tactical benefits to the modern soldier and help maintain our competitive advantage.”
AI and machine learning have helped “drive many major results improving inferencing and learning within complex systems and environments – the domain of the soldier,” Kannan adds. “AI and machine-learning algorithms can really exploit the massive amounts of data – historical and real-time – available to soldiers.”
Because computation is moving to the system edge, where the data is gathered, and because AI is going to drive fundamental advances in future soldier capability, Kannan’s focus is developing algorithms that map AI techniques onto cheap, low-power hardware and make them run faster – essentially speeding up existing AI methods.
This is “a very intellectually stimulating area,” Kannan notes, “because existing AI techniques need to be carefully mapped to exploit subtle features of available hardware.”
To develop their new AI technology, Kannan and colleagues implemented a stochastic gradient descent for collaborative filtering (CF), which is a well-known machine-learning technique, on a state-of-the-art low-power field-programmable gate array (FPGA) platform. “The meat of the work consisted of analyzing the hardware bottlenecks – pain points – and using that to develop a set of mathematical algorithms for carefully mapping each phase of collaborative filtering onto the hardware to reduce the communication and computation complexity,” he explains. “The end result was hardware acceleration or speedup of the basic AI algorithm on the low-power FPGA system when compared to traditional multicore or GPU hardware. The low-power nature of our FPGA system also makes it more useful in adaptive, lightweight tactical computing systems.”
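The underlying technique the ARL team accelerated – stochastic gradient descent for matrix-factorization collaborative filtering – is standard and can be sketched in a few lines. This toy pure-Python version (illustrative only; the hyperparameters and data are invented, and it bears no resemblance to the FPGA implementation's optimized dataflow) learns low-rank user and item factors from a handful of ratings:

```python
import random

def train_cf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
             epochs=200, seed=1):
    """SGD for matrix-factorization collaborative filtering: learn
    user factors P and item factors Q so that P[u] . Q[i] approximates
    rating r for each observed (u, i, r) triple."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):  # gradient step with L2 regularization
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Invented example data: (user, item, rating) triples
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0), (2, 1, 5.0)]
P, Q = train_cf(ratings, n_users=3, n_items=2)
```

The per-rating inner loop is exactly the kind of fine-grained, communication-heavy computation whose "pain points" the ARL work mapped onto FPGA hardware.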
The researchers got a big surprise in the form of the speedup numbers – approximately 13 times faster when compared to multicore and GPU implementations, despite the inherent advantages of those platforms – which implied something fundamental about the new approach: “Our techniques also turned out to be generalizable to other machine-learning problems,” Kannan says. “And, interestingly, we can adapt our FPGA implementation to native multicore and GPU implementations to speed up their state-of-the-art performance as well.” (Figure 2.)
What kinds of applications can this type of AI be applied to in the military and battlefield arena? “This work is an implementation of a general, popular machine-learning technique that’s used in many pattern matching, inferencing, and decision and recommendation systems, such as recognizing vehicle-borne improvised explosive devices (VBIEDs), detecting threats in war zones, and recommending actions,” Kannan says. “More importantly, these systems can be used at the tactical edge – right where the soldier collects the data, where it’s needed most.”
The ARL researchers’ work is now being expanded toward other Army applications, because “there are a lot of machine-learning algorithms used by the Army, and we can accelerate them and make them more usable in tactical environments,” Kannan continues. “In the long run, I’d like to develop innovative and practical techniques for the use and application of AI to solve some large-scale problems of use to the Army – tactical and adaptive computing at the edge in resource-constrained environments. The end result should be lightweight computing and reasoning platforms that can provide maximum benefit – through adaptive learning and computation – to soldiers within actual operating environments.”