New electronic warfare architectures based on tight coupling of FPGA and CPU processing

Electronic Warfare (EW) system designers are taking advantage of the performance leaps in commercial technology, driven by high-volume commercial markets such as telecommunications and cloud computing. Components such as FPGAs combine speed, high connectivity, and low power consumption for signal-processing-intensive EW platforms such as fighter aircraft and Unmanned Aerial Vehicles (UAVs).

The tremendous research and development push in recent years by the major designers has resulted in new technologies that revolutionize system architectures. These new FPGAs and converters deliver the high sampling speed, high data rate connectivity, low power consumption, and parallel data processing power required by sophisticated EW platforms such as the French Dassault Rafale and UAVs on persistent missions.

FPGAs have thus become the best possible interface to sensors: Their capacity to perform powerful parallel algorithms makes them ideal for building all the filters and transforms that are essential to system performance. In EW systems that must seek out ever-fainter features, FPGAs enable the implementation of polyphase filterbanks. This reduces spectral leakage in the frequency response of the Discrete Fourier Transform (DFT) more efficiently than the classical windowing functions approach, and with less computational complexity.
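To illustrate the technique, here is a minimal NumPy sketch of a polyphase channelizer; the prototype-filter length, window choice, and channel count below are illustrative assumptions, not parameters from the article:

```python
import numpy as np

def polyphase_channelizer(x, M, taps_per_branch=8):
    """Split x into M frequency channels with a polyphase filterbank.

    Compared with a plain windowed DFT, the longer prototype filter
    sharpens each channel's frequency response, reducing spectral
    leakage at lower computational cost per channel.
    """
    P = taps_per_branch
    # Prototype lowpass filter: sinc with cutoff ~1/(2M), Hamming-windowed.
    h = np.sinc(np.arange(M * P) / M - P / 2) * np.hamming(M * P)
    h /= h.sum()
    # Slide an M*P-sample window over x in hops of M samples.
    n_frames = (len(x) - M * P) // M + 1
    out = np.empty((n_frames, M), dtype=complex)
    for i in range(n_frames):
        frame = x[i * M:i * M + M * P] * h
        # Fold the P filter branches together, then one M-point FFT.
        out[i] = np.fft.fft(frame.reshape(P, M).sum(axis=0))
    return out
```

A tone centered on channel k of M lands almost entirely in output bin k, with far lower sidelobes in neighboring bins than a single windowed FFT of length M would give.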

Another important feature of modern FPGAs is their lower processing power consumption compared to CPUs or GPUs. When working on integers, FPGAs tend to be as much as 10 times more efficient in terms of Giga Operations (GOPs) per watt than CPUs or GPUs.1 This ratio will only grow with each new generation of FPGA.

These FPGAs also feature enhanced connectivity. By implementing sophisticated designs, their transceivers have become extremely fast; in the case of Xilinx’s GTZ technology, reaching 28 Gbps per lane, which enables high-speed chip-to-chip connections. Xilinx’s highest-end FPGAs can have an aggregate bidirectional transceiver bandwidth of as much as 2.7 Tbps. In order to benefit from these high-speed transceivers, strong multichannel Direct Memory Access (DMA) engines are implemented in the FPGAs and managed by software drivers running on the CPUs of the system. These DMA engines – delivered together with the FPGA modules – have become very fast tools for moving data and samples between memories.

A/D converter designs are seeing similar leaps in capability. A single direct RF-sampling A/D converter can replace an entire Intermediate Frequency (IF) sampling or Zero Intermediate Frequency (ZIF) sampling radio signal path subsystem of mixers, LO synthesizers, amplifiers, filters, and ADCs. Such a configuration can drastically reduce the bill-of-materials cost and design time, while shrinking board size, weight, and power. In addition, their performance over a large range of input frequencies, up to and beyond 2.7 GHz, enables maximum frequency, bandwidth, and overall system programmability and flexibility in many applications, including EW, radar, and Signals Intelligence (SIGINT). The JESD204B interface standard – ratified several years ago – allows moving from high pin count/low-speed parallel interfaces to low pin count/high-speed serial interfaces.
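The lane budgeting behind that parallel-to-serial move is easy to sketch. In the snippet below, only the 8b/10b encoding overhead and the 12.5 Gbps per-lane ceiling come from the JESD204B specification; the sample rate and sample width in the usage example are illustrative assumptions:

```python
import math

JESD204B_MAX_LANE_RATE = 12.5e9  # bits/s per lane, JESD204B ceiling

def jesd204b_lanes(sample_rate, bits_per_sample=16, n_converters=1):
    """Minimum lane count and resulting per-lane line rate for a
    JESD204B link.

    JESD204B uses 8b/10b encoding, so every payload bit costs 1.25
    bits on the wire. bits_per_sample is the padded per-sample field
    width N' (commonly 16 for a 12- or 14-bit converter).
    """
    payload = sample_rate * bits_per_sample * n_converters  # bits/s
    line = payload * 10 / 8                                 # after 8b/10b
    lanes = math.ceil(line / JESD204B_MAX_LANE_RATE)
    return lanes, line / lanes
```

For example, a single hypothetical 3 GSPS converter with 16-bit sample fields produces 60 Gbps on the wire, so five lanes at 12 Gbps each would carry it; the same payload over a parallel LVDS bus would need dozens of pins.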

By combining the new technologies available in these converters and FPGAs, EW system architects can dramatically improve their data sample processing. The FPGA Mezzanine Card (FMC) standard, VITA 57, promoted by the VITA group and the FMC Marketing Alliance, enables high data throughput and very low latency between an A/D or D/A converter FMC and the FPGA, simplifies the design, and delivers the all-important ability to cost-efficiently retarget an FPGA carrier card design: Retargeting now requires simply swapping out the FMC module and adjusting the FPGA firmware. Moreover, thanks to the now well-established OpenVPX standard, all of these new technologies can be implemented on cards that ensure the integrity of high-speed signals on system backplanes.

The main point remaining to be solved: How to connect all these FPGA modules to the wideband network that sensors require to supply data to the users? Special middleware provides easy communication between FPGA modules, CPU modules, and the external networks of an EW system.

Only high-end CPU modules are able to run the high-level communication protocols and middleware that are necessary to get WAN connectivity. Consequently, it becomes essential to find ways to get the FPGA and CPU modules to communicate seamlessly.

The leading embedded CPU technology is Intel’s 3rd- and 4th-generation Core i7, built on 32 nm and 22 nm process technologies. These CPUs contain integrated PCI Express (PCIe) Gen 2/Gen 3 interfaces, allowing a per-lane maximum theoretical bandwidth as fast as 7.88 Gbps thanks to the reduced overhead of Gen 3’s 128b/130b encoding. This configuration leads to a throughput of 31.5 Gbps on a PCIe x4 link.
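Those figures fall straight out of the encoding arithmetic; a quick sketch to verify them:

```python
def pcie_gen3_throughput(lanes):
    """Effective PCIe Gen 3 bandwidth in bits/s.

    Gen 3 signals at 8 GT/s per lane and uses 128b/130b encoding,
    so only 128 of every 130 transferred bits are payload.
    """
    per_lane = 8e9 * 128 / 130  # ~7.88 Gbps usable per lane
    return per_lane * lanes
```

One lane yields about 7.88 Gbps and an x4 link about 31.5 Gbps, matching the numbers above; Gen 2's 8b/10b encoding, by contrast, loses a full 20 percent of the raw rate to coding overhead.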

The best way to get FPGA modules and CPU modules to communicate appears to be the proven PCIe protocol in its latest iteration. Figure 1 shows an example of a platform with communication between OpenVPX FPGA modules and Intel dual-/quad-core dual processor cards.

Figure 1: Pictured is a signal processing platform architecture.

So how should seamless, high-speed communication between the FPGA modules and the CPU modules be organized? A good approach is a communication middleware package such as Interface Concept’s Multiware. Its full set of software components provides a high-level abstraction, offering the designer services such as virtual Ethernet over PCIe, shared memory, message synchronization with DMA-powered transfers, and data transfer between FPGA modules and CPU modules or between different CPU modules.

This Multiware runs on the different CPU modules and provides multiple kernel and user space services adapted to all user designs, as shown in Figure 2:

  • Virtual Ethernet to use Multiware as a network device
  • I/O device to use Multiware as a character device
  • Sysfs Linux virtual file-system to use Multiware through Sysfs directory entries and to create and access a shared memory or a frame pool
  • Kernel functions to use Multiware from kernel modules

Figure 2: Multiware architecture.
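A virtue of the virtual Ethernet service is that application code needs nothing middleware-specific: A plain TCP transfer like the sketch below would ride over the PCIe fabric unchanged, provided the peer's address is the one assigned to the virtual interface (the host and port here are hypothetical):

```python
import socket

def send_block(host, port, payload):
    """Push a block of sample data to a peer CPU module.

    Because the PCIe link is exposed as an ordinary network device,
    standard sockets work as-is; 'host' would be whatever IP address
    the virtual Ethernet interface carries on the peer module.
    """
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
    return len(payload)
```

The same code runs unmodified over copper Ethernet during bench testing and over the backplane PCIe fabric in the deployed system, which is precisely the portability argument for this service.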

These services are based on a core layer with a set of sub-modules:

  • Aperture manager, a Multiware component that allocates the system memory and configures Non-Transparent Bridging (NTB) to make the local aperture accessible from other remote PCIe domains
  • IDC, a Multiware component that allows communication with the different Root Complex (RC) domains
  • Transfer layer in charge of executing the data copy with several kinds of copy engine
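At its core, the aperture manager's job reduces to address translation: An address inside the locally allocated aperture must be rewritten into the address a remote Root Complex uses to reach it through the NTB. The sketch below is generic NTB arithmetic with illustrative base addresses, not any particular chipset's register map:

```python
def ntb_translate(local_addr, aperture_base, aperture_size, peer_base):
    """Translate a local shared-aperture address into the address a
    remote PCIe domain must use to reach it through the NTB.

    aperture_base is where the shared window sits in the local domain;
    peer_base is where the NTB's BAR exposes that same window to the
    remote domain. Both values here are illustrative.
    """
    offset = local_addr - aperture_base
    if not 0 <= offset < aperture_size:
        raise ValueError("address outside the shared aperture")
    return peer_base + offset
```

Guarding the offset against the aperture size matters in practice: A stray pointer translated without that check would let one PCIe domain scribble over arbitrary memory in another.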

In addition to running this sophisticated communication middleware, the Intel CPU modules can manage classical functions such as storage through Serial ATA links or graphic displays through HDMI links.

All of these advantages are combined on OpenVPX-based devices that leverage FPGAs. OpenVPX products, such as the IC-INT-VPX6a dual Intel Core i7 CPU module from Interface Concept, are used in EW, radar, and other military signal processing applications. The device includes two independent PCIe switches and one Kintex-7 FPGA to connect 32 PCIe lanes on the backplane. This board can be combined with a dual FPGA card, like the company’s IC-FEP-VPX6b, which carries two XC7VX690T Virtex-7 FPGAs.

Thierry Wastiaux is Senior Vice President of Sales at Interface Concept, a European manufacturer of electronic embedded systems for defense, aero, telecom, and industrial markets. He has 25 years of experience in the embedded systems and telecom market, having held positions in operations, business development, and executive management. Prior to Interface Concept, he was responsible for the operations of the Mobile Communication Group and the Transmission Business Unit at Alcatel-Lucent. He holds an M.Sc. from France’s Ecole Polytechnique. Readers may contact him at

Interface Concept 215-956-1200

1 Alan George, Herman Lam, and Greg Stitt (National Science Foundation), Computing in Science & Engineering (IEEE), Jan/Feb 2011.