NPN240 6U OpenVPX CUDA-capable multiprocessor

GE Intelligent Platforms, Inc.

As with most new (and thus still somewhat experimental) technologies, you never know where they will turn up, and that's what keeps things interesting. Case in point: We hear about military "supercomputing" from time to time, but this is probably the first time we've heard it paired with OpenVPX (VITA 65). The embodiment: GE Intelligent Platforms' NPN240 6U OpenVPX CUDA-capable multiprocessor. Armed with two 96-core NVIDIA CUDA-enabled GPUs, the rugged OpenVPX and VPX-REDI (VITA 48) platform delivers up to 750 GFLOPS peak per card slot. To scale the supercomputing further, several NPN240s can link with one or more hosts to form multi-node CUDA GPU clusters delivering thousands of GFLOPS.

That computational density comes in handy in SWaP-constrained applications including sonar, radar, video and graphics imaging, and sensor/signal processing. Compatible with any OpenVPX host SBC, the NPN240 also features parallel GPGPU (General-Purpose computing on a Graphics Processing Unit) processing, touted by the company as a more cost-effective alternative to FPGA-based technologies. Other notables include each GPU node's 16-lane PCI Express Gen 2 system-backplane interface and local DDR3 SDRAM, along with the NPN240's harsh-environment-savvy conduction-, spray-, and air-cooling options.
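For a sense of what that CUDA-based GPGPU model looks like to a developer, here is a minimal sketch of a data-parallel kernel that conditions a buffer of samples on the GPU. It assumes nothing beyond the stock CUDA runtime; the kernel name, gain/offset parameters, and buffer size are invented for illustration and are not part of GE's software.

    // Generic illustration of the GPGPU model the NPN240 exposes through CUDA:
    // a data-parallel kernel stands in for the fixed-function pipeline an
    // FPGA design would otherwise implement. Nothing here is NPN240-specific.
    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical signal-conditioning step: scale and offset each sample in place.
    __global__ void scale_samples(float *samples, int n, float gain, float offset)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            samples[i] = samples[i] * gain + offset;
    }

    int main()
    {
        const int n = 1 << 20;                 // 1M samples (arbitrary size)
        float *d_samples = nullptr;
        cudaMalloc(&d_samples, n * sizeof(float));
        cudaMemset(d_samples, 0, n * sizeof(float));

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_samples<<<blocks, threads>>>(d_samples, n, 2.0f, 0.5f);
        cudaDeviceSynchronize();

        printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(d_samples);
        return 0;
    }

The appeal over an FPGA flow, as the company pitches it, is that this is ordinary C/C++-style code compiled with nvcc rather than a hardware design.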

FEATURES

  • Two 96-core NVIDIA CUDA-enabled GPUs
  • Rugged OpenVPX (VITA 65) and VPX-REDI (VITA 48) platform
  • Up to 750 GFLOPS peak per card slot
  • Several NPN240s can link with one or more hosts to form multi-node CUDA GPU clusters delivering thousands of GFLOPS (see the device-enumeration sketch after this list)
  • Computational density suited to SWaP-constrained applications, including sonar, radar, video and graphics imaging, and sensor/signal processing
  • Compatible with any OpenVPX host SBC
  • Parallel GPGPU (General Purpose computing on a Graphics Processing Unit) processing
  • Each GPU node has a 16-lane PCI Express gen2 system-backplane interface
  • Local DDR3 SDRAM
  • Conduction-, spray-, and air-cooling options
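Because each GPU node reaches the host over the 16-lane PCIe Gen 2 backplane interface, a host SBC should see the NPN240's GPUs as ordinary CUDA devices. The sketch below, again using only the standard CUDA runtime API, shows the kind of device enumeration and per-device dispatch a multi-node cluster implies; the placeholder kernel and printed fields are illustrative, not GE software.

    // Hedged sketch: a host SBC discovers its visible GPU nodes and pushes
    // one (trivial) kernel to each. Each NPN240 contributes two CUDA devices.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void noop() {}              // placeholder for a real workload slice

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);        // every GPU node enumerates as a CUDA device
        printf("CUDA devices visible to this host: %d\n", count);

        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("device %d: %s, %d multiprocessors\n",
                   dev, prop.name, prop.multiProcessorCount);

            cudaSetDevice(dev);            // bind subsequent calls to this node
            noop<<<1, 1>>>();
        }
        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);
            cudaDeviceSynchronize();       // wait for every node to finish
        }
        return 0;
    }

Built with nvcc and run on a host carrying several boards, this would list every visible GPU node and dispatch work to each in turn.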
