3U OpenVPX plus 40-gig Ethernet - best of both worlds
To solve tough problems like synthetic aperture radar, sensor fusion, and target-recognition processing, the military wants and needs performance. That requirement means getting the fastest throughput in the smallest package with the lowest power penalty. The hunger for performance is even greater for autonomous platforms – from aircraft to ground vehicles – that require high-bandwidth processing to “think” for themselves and act on their own, as well as to perform basic sensor and mission processing, self-protection, and communications and navigation functions.
Small-form-factor 3U OpenVPX boards were invented for data-hungry and size, weight, and power (SWaP)-limited applications. OpenVPX, a highly flexible standard – and the successor to VME – can run almost any high-speed data-transfer technology on its data plane: InfiniBand, PCI Express, Serial RapidIO, or Ethernet.
OpenVPX boards come in many flavors, but multiple single-board computers (SBCs), each using multicore processors, would be appropriate for computationally intensive sensor processing tasks. A combination of such SBCs and massively parallel graphical processing unit (GPU) cards could also tackle these jobs.
The multicore boards, in turn, could be interconnected using a 40-gigabits-per-second Ethernet switch for maximum throughput, accommodating the interboard bandwidth required to keep up with front-end data collectors. The OpenVPX switch module 3U slot profile (SLT3-SWH-8F) is a good fit for a 40-gigabit Ethernet system: a 40GBASE-KR4 backplane connection would allow eight 3U payload boards to be interconnected via one switch.
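The arithmetic behind that topology is worth making explicit. A minimal sketch, assuming the eight-payload-slot star topology of the SLT3-SWH-8F profile described above and a non-blocking switch (the variable names are illustrative, not drawn from any vendor documentation):

```python
# Back-of-the-envelope bandwidth for a hypothetical 3U OpenVPX system:
# eight payload boards, each with one 40GBASE-KR4 backplane link to a
# single central switch (a star topology).

PAYLOAD_SLOTS = 8      # payload boards served by one switch slot
LINK_RATE_GBPS = 40    # 40 Gb Ethernet data rate per backplane link

# Total bandwidth into the switch across all payload links.
aggregate_gbps = PAYLOAD_SLOTS * LINK_RATE_GBPS

# With a non-blocking switch, any pair of boards can exchange data
# at the full per-link rate, regardless of other traffic.
per_pair_gbps = LINK_RATE_GBPS

print(f"Aggregate backplane bandwidth: {aggregate_gbps} Gb/s")
print(f"Board-to-board bandwidth: {per_pair_gbps} Gb/s")
```

Real-world figures would be somewhat lower once Ethernet framing and protocol overhead are accounted for, but the sketch shows why a single switch slot can feed an entire eight-board sensor-processing chain.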
Back to the future: Ethernet
These days engineers are increasingly turning to Ethernet for high-speed board-to-board communications across the backplane. Designers of high-performance embedded computing (HPEC) systems prefer Ethernet to PCIe because it is a fast, low-latency network technology that is well understood and easier to use than PCIe.
Although Ethernet is a relatively ancient computer technology, it has kept up with the times – recently hitting 40 gigabits per second, or four times the maximum Ethernet data rate found on military platforms today. The Ethernet programming model also has stayed relatively stable through the years, and the switching mechanism is easier to accommodate than with PCIe. What’s more, profiles developed and controlled through groups like the IEEE [Institute of Electrical and Electronics Engineers] keep the standard aligned with the needs of its most demanding defense and aerospace users.
Anyone building a multinode system knows that it’s easier and more cost-effective for the boards to talk Ethernet to each other than to speak PCIe. While PCIe works well for systems with one or two cards and peripherals, Ethernet is preferable for larger systems.
Ethernet is a lingua franca – or common language – that operating systems natively understand. With PCIe, in contrast, an extra software layer is required to allow multiple processors to talk to each other. Since each board vendor has its proprietary version of this code, PCIe-based backplanes are perceived as less open and more vendor-dependent.
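The “lingua franca” point can be seen in code. The sketch below has two threads stand in for two processor boards exchanging a message over standard TCP sockets; loopback and the message contents are illustrative stand-ins, since on a real OpenVPX system the address would be a backplane Ethernet interface. The point is that every mainstream operating system ships this API natively, with no vendor-specific transport layer of the kind PCIe backplanes require:

```python
import socket
import threading

# One "node" accepts a connection and acknowledges whatever it receives.
def node_b(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack: " + data)

# Stand up the receiving node on a loopback address (illustrative only).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=node_b, args=(server,))
t.start()

# The other "node" connects and sends a (hypothetical) sensor frame.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"sensor frame 42")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # -> "ack: sensor frame 42"
```

The same code runs unmodified on any OS with a TCP/IP stack, which is precisely why multinode Ethernet systems avoid the proprietary glue layers that PCIe-based interconnects depend on.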
Ethernet has a huge installed base in homes and offices as well as on military platforms. It is well-defined, reliable, ubiquitous, and affordable – all music to military ears. Moreover, because it is simpler and easier to implement than other data-transfer technologies, solutions can be put together faster and at lower cost, another selling point for the technology.
An example of possible foundational elements for a 40-gigabits-per-second Ethernet-based 3U OpenVPX system: a pair of ruggedized, air/conduction-cooled products with security features from Abaco Systems – the SBC367D SBC, built around an Intel Xeon D-1500-series processor with up to 16 cores, and the SWE440 Ethernet switch, supporting as many as eight 40-gigabits-per-second ports while drawing 40 watts or less. (Figure 1.)
High-density, low-SWaP computing is key to solving the military’s embedded-processing challenges. Lower latency means data can be distributed among processing nodes faster, which raises the volume of data that can be processed per unit of time. More data, delivered faster, in turn means more reliable and effective results, more effective sensors and other systems, and shorter decision cycles based on that information.
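The decision-cycle argument can be quantified with a simple wire-time calculation. This is illustrative only: the 64 MiB frame size is an assumption, not a measurement of any specific sensor, and the figures ignore protocol overhead and switch latency:

```python
# Ideal time to move one sensor frame between nodes at the 10 GbE rate
# common on platforms today versus 40 GbE. Frame size is hypothetical.

FRAME_BYTES = 64 * 1024 * 1024   # assumed 64 MiB radar frame

def transfer_ms(rate_gbps: float, payload_bytes: int) -> float:
    """Wire time in milliseconds, ignoring protocol and switch overhead."""
    return payload_bytes * 8 / (rate_gbps * 1e9) * 1e3

for rate in (10, 40):
    print(f"{rate} GbE: {transfer_ms(rate, FRAME_BYTES):.1f} ms per frame")
```

Under these assumptions the frame moves in roughly 13 ms at 40 GbE versus roughly 54 ms at 10 GbE – the kind of per-hop savings that, multiplied across a processing pipeline, shortens the overall sensor-to-decision cycle.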
High-speed Ethernet is a good fit for low-SWaP, highly compute-intensive applications in which massive amounts of data are processed per node and moved between nodes. In this context, multiple 40-gigabits-per-second processing modules coupled with a 40-gigabits-per-second switch could be the best of both worlds.