The path to smarter, autonomous radar and EW platforms
Data flowing from radar and electronic warfare (EW) systems to the analyst's screen will determine the course of action in any given mission. Bearing in mind that decisions need to be made, at times in seconds, it's critical for radar and EW systems to quickly sift through that data and turn it into actionable intelligence. To achieve this goal, the defense industry is using artificial intelligence (AI), machine learning (ML), and deep learning (DL) techniques to program these systems and make them into smarter, more autonomous tools.
The journey starts at the design table, as the environment continues to drive toward a more intelligent and connected battlefield. Graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and general-purpose computing graphics processing units (GPGPUs) are only part of the equation to program smarter radar and EW systems; sensors also play a big role in capturing data. The catch? Military users want all of this functionality in ever-smaller, lighter systems.
“In general, the demands from military customers are interconnected sensors and communications that are fast, robust, and hard to detect, and jammers that can be adaptive to the unknown threat,” says Peter Thompson, Director, Business Development – Technology, at Abaco Systems (Boston, Massachusetts).
It’s that unknown threat that keeps designers and engineers up at night, pushing the defense industry to innovate with relatively new techniques such as AI, machine learning, and DL. “The advantage of AI is that the algorithms can adapt to changing environments and scenarios. AI can also replace human operators in systems where human involvement is required for target recognition,” Thompson adds.
Instead of humans analyzing the data, the idea is to move to intelligent, artificial means of analyzing it. Neural networks, now commonly discussed under the term “deep learning,” essentially amount to a smart computer that can make decisions and think more like a human, according to an MIT news article titled “Explained: Neural networks” (available at news.mit.edu/2017/explained-neural-networks-deep-learning-0414).
“Neural networks can be used in these systems for clutter rejection, target detection, classification, and tracking,” Thompson explains.
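To make the idea concrete, here is a minimal, purely illustrative sketch of the target-detection use Thompson mentions: a tiny fixed-weight neural network that scores a radar return’s feature vector as “target” versus “clutter.” The feature names and all weights are hypothetical (a real system would learn them from training data), but the structure — weighted sums passed through nonlinear activations — is the core of any neural-network classifier.

```python
import math

# Illustrative sketch only: a tiny fixed-weight neural network scoring a
# radar return's feature vector -- e.g., normalized amplitude, Doppler
# shift, and range-rate -- as "target" vs. "clutter". Weights are
# hypothetical, not trained.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out, b_out):
    # Hidden layer: each unit is a weighted sum of the features
    # squashed through a sigmoid activation.
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    # Output unit: probability-like score that the return is a target.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

W_HIDDEN = [[2.0, 1.5, -0.5],   # unit sensitive to amplitude + Doppler
            [-1.0, 2.5, 1.0]]   # unit sensitive to Doppler + range-rate
W_OUT = [1.8, 1.2]
B_OUT = -2.5

strong_return = [0.9, 0.8, 0.1]  # high amplitude, clear Doppler shift
clutter_like = [0.1, 0.0, 0.0]   # weak, essentially stationary return

print(forward(strong_return, W_HIDDEN, W_OUT, B_OUT))  # > 0.5
print(forward(clutter_like, W_HIDDEN, W_OUT, B_OUT))   # < 0.5
```

The same forward pass scales up to the clutter-rejection and classification tasks Thompson lists; what changes in practice is the number of layers and, critically, that the weights are learned rather than hand-set.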
“The community is looking for a better way to get actionable intelligence,” says Rodger Hosking, vice president and cofounder of Pentek. “There is so much information being gathered right now by the current technology that it is virtually impossible for the human mind to sift through it in real time. Information is different from knowledge, as knowledge is something you can act upon. So the buzz this year is about how to automate the evaluation of information using new strategies like artificial intelligence and learning algorithms that can help boost the speed and accuracy of decision-making abilities of the humans in the loop.”
The trends continue upward with the use of these techniques because “these algorithms are very sophisticated in that in order to be able to pick out these targets, these machines have to think more like a human,” says Marc Couture, product management and systems application engineering management at Curtiss-Wright Defense Solutions (Littleton, Massachusetts).
The adaptive intelligent battlefield: The challenges ahead
As the years roll by, radar and EW solutions will leverage more of the techniques being used to identify targets and improve the decision-making process. Additionally, integrating these methods “will lead to smarter, more autonomous radar and EW platforms,” Thompson remarks.
Furthermore, a more widely distributed/intelligent sensor network will require even more emphasis on cybersecurity, Thompson adds: “If the enemy can break into a single node and disable a function across a network, it would represent a major vulnerability. As such, cybersecurity will continue to be a major part of systems that include radar and EW signal processing.”
It’s not only security concerns, but the processing demands and packaging as well, says Denis Smetana, senior product manager, FPGA products, for Curtiss-Wright Defense Solutions: “How do we manage the power and the thermal heat that is being generated? In order for us to fully utilize the FPGA and GPGPU capacity that is now available, we need to move to more exotic cooling techniques such as air-flow-through (AFT), or even liquid-flow-through (LFT).”
Hosking’s view lines up with this, as he states, “Packaging and thermal management becomes increasingly difficult since component density is always increasing. New materials and better EDA modeling tools are helping. As complexity increases at every system level, fully functional subsystems become more attractive to systems integrators. Also, high-level software tools and APIs help by abstracting the details.”
Abaco’s Thompson says, “In the field of deployable AI-based solutions, the challenges will be twofold. The first challenge is developing rugged processing systems powerful enough to host the compute-intensive neural network-based algorithms.”
The other question can only be answered as these systems are used in the theater: “With such a connected and intelligent system, a major challenge will be proving the effectiveness of the techniques against an adaptive and unknown enemy,” Thompson says. “If our systems become so smart that we can’t prove they work coherently, this will pose a critical challenge to military operators to trust the effectiveness of these new digital weapons.”
More intelligent radar and EW tools have software challenges as well. “We are facing several challenges right now. The first is the creation of the algorithm (the intelligence). Creating a large enough data set, formatting, and tagging the data are just a few of the challenges and requirements of training the algorithm. The computational power and time required to train the algorithm complicates the challenge,” says Tammy Carter, senior product manager for OpenHPEC products for Curtiss-Wright Defense Solutions (Ashburn, Virginia).
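The data-preparation work Carter describes — collecting, formatting, and tagging records before any training can begin — can be sketched in a few lines. The record fields and labels below are hypothetical; in practice the labels come from trained analysts reviewing raw sensor captures, and untagged records cannot be used for supervised training.

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical raw sensor captures: each record carries a feature vector
# and a label assigned by a human analyst ("target" / "clutter" / None
# when the analyst has not reviewed it yet).
raw_records = [
    {"id": i,
     "features": [random.random() for _ in range(3)],
     "label": random.choice(["target", "clutter", None])}
    for i in range(100)
]

def prepare(records, train_fraction=0.8):
    """Keep only analyst-tagged records, then split into training and
    evaluation sets -- the minimum needed before training an algorithm."""
    tagged = [(r["features"], r["label"]) for r in records
              if r["label"] is not None]
    random.shuffle(tagged)
    cut = int(len(tagged) * train_fraction)
    return tagged[:cut], tagged[cut:]

train_set, eval_set = prepare(raw_records)
print(len(train_set), len(eval_set))
```

Even this toy version shows where the effort goes: most of the pipeline is curation and tagging, not the learning algorithm itself, which matches Carter’s point that building a large enough, well-labeled data set is the first bottleneck.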
In addition, “One of the bottlenecks that we see in FPGA is with the memory. Many algorithms, including deep learning, require large amounts of data to be stored while being processed, which requires both a large capacity of memory as well as a high-throughput memory interface.”
Programming with AI, ML, and DL
Programming algorithms to quickly respond to threats and think more like humans is part of the challenge of designing intelligent systems; it’s also about which type of hardware to use to ensure a successful mission for the warfighter. “Some would argue that the real battle is going to come down to FPGA versus a more general-purpose type of processing – either a true CPU approach, such as with Xeon, or with a GPU-based processing approach,” says Mark Littlefield, head vertical product manager, defense, at Kontron (Laguna Niguel, California).
“It’s sort of a three-way balancing act where developers need to select the best approach between the ease of development, the gigaflops per watt, and the longevity of supply,” he adds. “FPGAs and CPUs each have their pluses and minuses against these three important units of measure.”
When mixing GPUs, FPGAs, and the concept of AI, says Thompson, “the use of neural networks for signal processing is not new, but the practicality of deploying these techniques in SWaP [size, weight, and power]-constrained systems in the battlefield is only just becoming real. New generations of processing hardware, with GPUs at the forefront, are now enabling AI to be applied to replace or enhance traditional signal-processing techniques in radar and EW.”
As Smetana explains, “GPUs have always been easier than FPGAs to program, which has historically given FPGAs a bad reputation due to the fact that you need highly trained engineers to be able to efficiently use them. However, as FPGA and GPGPU vendors fight for control of the data center market, FPGA vendors are working hard to develop the tools to break this paradigm, which will ultimately benefit the defense industry. The result is the software and tools that are really needed to efficiently map machine-learning algorithms into FPGAs.”
Smetana further analyzes the pros and cons of FPGAs: “One of the advantages of FPGAs is that they are reconfigurable. Therefore, they are well-suited for environments where the user needs the system to adapt to the current situation. For deep-learning applications, FPGAs are more power efficient than GPGPUs and have lower latency. So over time, I expect to see both FPGAs and GPGPUs used for deep learning, with GPGPUs heavily on the training side, and an edge to FPGAs on the deployment side.”
Regarding FPGAs and reduced SWaP, many in the industry are looking forward to Xilinx’s release of its RF system-on-a-chip (SoC) solution, which will have more ADCs and DACs built in, says Noah Donaldson, vice president of product development for Annapolis Micro Systems (Annapolis, Maryland). This will enable SWaP reductions: the functionality and performance of previous systems is not only increased but delivered in a smaller footprint, since one board can now do the work of multiple boards, he explains. RF SoC technology will also enable more functionality in military radar and electronic warfare solutions, Donaldson adds.
For reduced-SWaP applications, Annapolis offers an FPGA board called the WILDSTAR UltraKVP ZPB DRAM for 3U OpenVPX (WB3XB0). These FPGA boards include one Xilinx Kintex UltraScale XCKU115 or Virtex UltraScale+ XCVU5P/XCVU7P/XCVU9P FPGA with 64 high-speed serial connections performing at up to 32.75 Gbps, plus two 80-bit DDR4 DRAM interfaces clocked at up to 1200 MHz. An on-board quad-core ARM CPU running at up to 1.3 GHz handles local application requirements; it is accessible over backplane PCIe or Ethernet and provides dedicated AXI interfaces to all FPGAs.
“We are seeing FPGAs playing a pretty big role, but we are also seeing more general processing such as the Intel Xeon,” Littlefield says. “The current generation, the Intel Xeon D, is actually quite a potent processor capable of handling machine learning and artificial intelligence kinds of problems. I think the radar and EW developers are taking a really pragmatic approach by letting the embedded computing industry, which is influenced by everything from finance to autonomous vehicles, drive the base technologies for machine learning and AI. They can then lift and utilize those technologies when they are available.”
The GPU versus FPGA argument continues as GPGPUs are added to the equation, with the salient piece the evolution of leveraging commercial solutions for military applications. “One of the things that we’ve seen, a big trend, has been the introduction of GPGPUs into electronic warfare,” Couture says. “GPGPUs have historically been used for gaming systems and for rendering video displays.”
It’s critical to point out, however, that, “While the FPGA is important, it’s only one piece of the puzzle,” Thompson remarks (Figure 1). “To enable adaptive and machine-learning algorithms, designs must work with the latest CPU [central processing unit] and GPU technology. In truth, it’s a ‘use the right tool for the job’ argument.”
The goal: Being able to program “very different types of processors, and get them to talk to each other and also being able to use these new paradigms, like deep learning and machine learning,” Couture says.
Data and the role of sensors on the intelligent battlefield
Ultimately, high-performing operation of intelligent systems comes down to the data and the data analysts. “Long before the algorithms can teach themselves, they must be taught by the analysts, and before those analysts can train the machines they must be trained to think like the machines,” Carter says.
This continues to be a problem, however, as there are not enough humans or analysts to sift through the huge amounts of generated data. For radar and EW applications, the data is everything. “One shorter-term off-spin will be the need for more real-time data storage on platforms to gather ‘raw’, real-world sensor data to facilitate neural network training,” Thompson says.
The quest for the truly adaptive battlefield will keep pushing sensors to the limit to gather actionable intelligence. “Beyond the three-year horizon, a major challenge will be the expansion in the number of sensors and jammers on the battlefield,” Thompson says. “It won’t be one high-value platform, but many smaller systems all playing a role on the adaptive battlefield.”
“The continued trend is for an exponential increase in sensor data,” Couture says. “There is far too much of it and far too few analysts associated with electronic warfare and ISR sensors. This includes the data gathered from electro-optic infrared imagery data, the higher-resolution cameras, all of the RF microwave tuners, etc.”
Processing requirements are becoming more demanding, not just for the increasingly large waterfall of sensor data, but for sensor fusion where you’re basically cross-correlating phased-array radar matrices and mapping that over RF emitter data and electro-optic imagery, for instance, Couture continues.
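One primitive behind the sensor fusion Couture describes is cross-correlation: finding the time offset that best aligns two sensor streams before their detections can be combined. The sketch below is a brute-force, pure-Python illustration under assumed toy data; deployed systems would run FFT-based correlation on the FPGA/GPU hardware discussed above.

```python
# Illustrative sketch: brute-force cross-correlation to estimate the time
# offset (in samples) between two sensor streams -- e.g., aligning a radar
# detection channel against an RF emitter capture before fusing them.

def best_alignment(a, b, max_lag):
    """Return the lag of b relative to a with the highest correlation
    score, searched over [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlation score at this lag: sum of overlapping products.
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Stream b carries the same pulse as stream a, delayed by 3 samples.
a = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]
print(best_alignment(a, b, max_lag=5))  # 3
```

The brute-force search here is O(n × max_lag); the “waterfall” data rates Couture mentions are exactly why real systems push this computation into FFT-based routines on FPGAs or GPGPUs rather than general-purpose loops.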
Smetana points to the data center market as another driver: the work going on there “is maybe not necessarily related to military, but from a business-case standpoint, that’s driving a lot of the software and tools that are really needed to efficiently map machine-learning-type algorithms into FPGAs.”
The warfighter no longer has the luxury to sift through data for actionable intelligence, while “adaptive countermeasures can dynamically jam or modify signals to evade or confuse the enemy,” Hosking adds. “These capabilities continually become more refined and precise as new technology evolves.”
Automating some of these processes is the answer to the massive amount of data coming in through each mission. “Automatic classification of signals helps identify and segregate targets faster and more accurately than human operators,” Hosking says.