Military Embedded Systems

Fitting AI under the SWaP umbrella


June 11, 2019

Mariana Iriarte

Technology Editor

Military Embedded Systems

Reduced size, weight, and power (SWaP) requirements are also impacting artificial intelligence (AI) and machine learning (ML) designs.

“Translating the commercial and consumer AI accelerator hardware technologies into the deployable, rugged, size, weight, and power (SWaP)-constrained systems mil/aero platforms often need” is a challenge, says David Tetley, principal software engineer, Abaco Systems (Huntsville, Alabama).

Reduced SWaP does not pair naturally with cloud-based solutions, which can appear to offer an endless supply of virtual resources.

Unfortunately, the reality is that “Many military systems are SWaP-constrained, which challenges the assumptions of today’s AI/ML solutions,” explains Ray Petty, vice president, Aerospace & Defense at Wind River (Alameda, California). “Today’s solutions are typically developed in cloud or enterprise environments with assumptions about available connectivity, like bandwidth and latency, and available compute resources like scalability, memory, and CPU flexibility.”

Another challenge is resource management, as ML systems can be quite resource-intensive.

Thankfully, “We see that most of the deep-learning ML systems that are getting a lot of coverage recently are very resource-intensive when being trained, but are not especially intensive during runtime,” says Avi Pfeffer, chief scientist at Charles River Analytics. “This means that they are a good option for reduced-SWaP systems if they are a good fit for the problem; that is, there is sufficient data to train them with.”

However, when designing an ML/AI-intensive system for a platform that is traditionally SWaP-constrained – like an aircraft – space becomes an issue.

“We have worked in applying AI to building secure systems, such as a system to detect malware running in the avionics system of an aircraft that we built for the Air Force Research Lab,” says Terry Patten, principal scientist at Charles River Analytics (Cambridge, Massachusetts). “We found that one of the best sources of information about system health, and one of the best sources of mitigation resources in real-time attacks, is redundancy, which is problematic for reduced-SWaP systems.”

Ruggedization can also be tricky when managing SWaP constraints.

“Size is perhaps one of the most challenging constraints, because large devices are harder to ruggedize than small ones,” says Devon Yablonski, principal product manager, Mercury Systems (Andover, Massachusetts). “Most embedded parts tested for high vibration and provided in BGA form are lower-power parts that sacrifice many of the features of their ‘full-sized’ counterparts, which are the features desired by these new ML and AI applications. For instance, the high-bandwidth memory (HBM) found on many processors today cannot handle high temperatures well and thus is not found on embedded chips very often. High-speed access to memory is critically important to ML and AI applications.”

The problem is that users can’t “deploy the high-power, server-class CPUs and GPUs widely used commercially for AI in SWaP-constrained systems, so we need to use mobile-class devices [because of their lower power consumption and lower heat dissipation design],” Tetley explains. “Ideally, these devices include specific AI-accelerating features that give excellent IOPs [inference operations per second] per watt. An example of this is the Deep Learning Accelerator on the NVIDIA Turing class of GPU and on NVIDIA Xavier SoCs [systems-on-chip]. Also, you need to ensure that your deployed neural network (or ‘inference engine’) is highly optimized by using tools such as NVIDIA’s TensorRT. This ensures the software is ideally optimized for the target embedded hardware architecture and therefore runs as efficiently as possible.”
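The TensorRT step Tetley mentions can be sketched as follows (an illustrative example assuming the TensorRT 8-era Python API and a hypothetical model.onnx file exported from training, not a specific Abaco or NVIDIA recipe): a trained network is rebuilt as a serialized engine tuned for the target device, with reduced precision enabled where the hardware supports it.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    # Parse the trained network exported to ONNX.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")

    # Let TensorRT optimize the graph for the target; reduced precision is a
    # typical lever for inference-operations-per-watt on embedded GPUs and SoCs.
    config = builder.create_builder_config()
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)   # deployable engine, loaded at runtime on the target

build_engine("model.onnx", "model.plan")   # hypothetical file names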

Often SWaP requirements are just not compatible with many ML models and solutions.

It’s also important to note, Petty adds, that “SWaP constraints of many military systems are not compatible with these assumptions and require a refactoring of the AI/ML solutions. The refactoring, however, is consistent with the adaptation of AI/ML solutions for other markets such as industrial controls, medical, and automotive, and solutions can be leveraged across domains.”

Of course, in the commercial domain, the reigning monarchs in the AI/ML field “are the Facebooks, Googles and Amazons of the world,” Yablonski says. “They have limited requirement for minimizing size, weight, and power other than the fact that they can build more efficient data centers if any of those can be reduced. The chip vendors like Intel, NVIDIA, AMD, and others cater to these markets. The volume in defense is much lower, despite the importance of the pursuit. It is challenging to get the high-performance chips and products from the commercial space in a flavor that is packaged for survivability in our market, as well as considering the power limitations (often 100-200 watts per computer) and size constraints.”