Military Embedded Systems

HPEC Vanguard: New tools for complex algorithms

Blog

August 28, 2012

Eran Strod

Curtiss-Wright

Arthur C. Clarke famously observed that “Any sufficiently advanced technology is indistinguishable from magic.” In the radar and signal intelligence world, corps of Ph.D.s regularly develop mathematically complex algorithms that require significantly more processing power than can be deployed in a contemporary embedded COTS system. These algorithms must sit gathering dust in a drawer until Moore’s Law has driven performance densities high enough to supply the compute power that class of algorithm demands. And once any given class of applications becomes possible, more demanding algorithms follow in its wake. This way of understanding our market highlights the fact that embedded military and aerospace system designers typically confront two types of applications.

First, there are those for which the algorithm’s complexity stays relatively stable or fixed over time. Let’s presume, for example, that a particular algorithm requires a 19” wide chassis full of processing boards. With the cadence of Moore’s Law increasing the processing power available in a single slot every couple of years, the number of boards needed to satisfy a fixed set of requirements (algorithms) will be cut roughly in half with the advent of each new generation of processing elements. This steady pace of evolution will continue until all that is needed is a single SBC with one processor, or less.

Sonar is a good example of this phenomenon. Twenty years ago, computers with 10-20 modules were employed to perform sonar algorithms. For this class of application, Moore’s Law, with its expected doubling of processing power every 18 months, means that an HPEC system that originally requires 16 6U VPX boards will over time be satisfied with 8, then 4. After a number of years that system may shrink down to 3U boards, and finally require only a PC to handle the needed processing. Sonar algorithms have essentially remained the same over time, so the hardware required today is minimal in comparison to what we would now consider an HPEC system.
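To make the halving arithmetic concrete, here is a minimal sketch in Python. The 16-board starting point and the 18-month doubling cadence come straight from the paragraph above; treating each generation as a clean 2x is an idealization, since real board counts are also set by I/O, cooling, and packaging limits rather than raw compute alone.

    # Orville's Corollary for a fixed-requirement application, assuming an
    # idealized 18-month Moore's Law doubling and the 16-board starting
    # point cited above.
    GENERATION_YEARS = 1.5   # one Moore's Law doubling period
    boards = 16              # initial 6U VPX board count (from the text)
    years = 0.0
    while boards > 1:
        boards //= 2         # each generation halves the boards needed
        years += GENERATION_YEARS
        print(f"after {years:3.1f} years: {boards:2d} board(s)")

Under these idealized assumptions the 16-board system collapses to a single board in four generations, about six years, matching the 16-to-8-to-4 progression described above.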

This leads me to postulate the following extension of Moore’s Law, which I believe aptly describes small form factor computing:

Orville’s Corollary: "Any application whose performance requirements remain fixed over time will undergo continuous size, weight, power and/or cost reductions."

Unfortunately, some misunderstand this process to mean that today’s large HPEC systems are all destined, over time, to migrate to small and cheap hardware. This misconception reveals a critical lack of familiarity with the HPEC market.

See also: HPEC Vanguard: blogging on advances in HPEC technology in the embedded COTS defense & aerospace market

The key to understanding Orville’s Corollary is that it describes those cases in which application requirements are fixed. But in HPEC there is commonly another class of application in which requirements expand to fill the available processing. In fact, HPEC applications target a particular chassis configuration and fill that footprint with as much compute horsepower as is practical given contemporary chip densities and packaging technologies. If Moore’s Law delivers more processing in the same space, then application developers deploy ever more demanding applications (remember the dusty drawer above?).

True HPEC applications never shrink. In this way they bring to mind the old saw about software always expanding to the limits of available memory. For this class of applications, which includes numerous multi-function radars, SIGINT, and image processing problems, the compute requirements will always expand to fill the available system slots. It is this ceaseless thirst for more processing power that makes HPEC such a vibrant market niche.

It’s not unusual for our customers to ask “How many FLOPS can you support?” Today the answer is somewhere around 200 GFLOPS on a single Intel CPU, but here’s where Moore’s Law aids the HPEC system designer: two years from now that figure will likely double, and so on, with compute power increasing by an order of magnitude every 4-5 generations. Using commercial HPC open standards and leading-edge processor silicon, HPEC integrators are now better able to keep up with the increasing processing demands of the algorithm developers. It’s a given: as processor microarchitectures shrink, algorithms will get more complex.
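As a back-of-the-envelope illustration, the sketch below projects that growth in Python. The 200 GFLOPS starting point and the roughly two-year cadence come from the text; the clean 2x per generation and the 2012 baseline year are assumptions.

    # Idealized GFLOPS projection: ~200 GFLOPS per CPU today (from the
    # text), doubling every ~2-year generation (assumed to be a clean 2x).
    gflops = 200.0
    for generation in range(1, 6):        # five ~2-year generations
        gflops *= 2.0                     # idealized Moore's Law doubling
        year = 2012 + 2 * generation      # assumes a 2012 baseline
        print(f"gen {generation} (~{year}): {gflops:,.0f} GFLOPS per CPU")

At a clean doubling, a tenfold gain arrives just past the third generation; at a more realistic 1.7x or so per step (1.7^4 ≈ 8.4, 1.7^5 ≈ 14.2), it lands in the 4-5 generation window cited above.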

This leads me to posit Wilbur’s Corollary: “Any application whose performance requirements expand over time will seek to fill any given size, weight, power, and cost (SWaP/C) envelope with as much processing as is made available by state-of-the-art technology.”

Orville’s and Wilbur’s Corollaries can help managers and designers understand the difference between small form factor (SFF) and high-performance embedded computing (HPEC). That difference has less to do with what your system architecture looks like today than with where it will be several computing generations in the future.

The good news is that thanks to HPEC architectures, embedded COTS system designers now have a new set of tools and design strategies to help them keep pace and harness the next class of advanced algorithms. Getting back to Arthur C. Clarke, HPEC is the set of technologies that enables designers to move applications from the magic column to the new-technology column.

 
