Military Embedded Systems

Open standards for digital video drive common infrastructure


February 21, 2011

Steve Edwards

Curtiss-Wright

Gaining video sensor data interoperability through open standards and a common infrastructure network will allow COTS vendors to keep up with sensor growth on military platforms that support critical missions and save lives.

The complexity of deployed embedded military systems increases as more video sensors are added to ground and airborne platforms, delivering ever-increasing amounts of video data to be processed, viewed, and archived. The variety of video sensors used in military radar and signal processing applications continues to multiply, and many of these sensors have incompatible input requirements. One key area of complexity is the cabling and configuration needed to distribute multispectral data within these sensor-to-display and sensor-to-recorder embedded systems.

Today’s military platforms often carry dozens of sensors, making the real-time distribution of video data a real challenge. The good news is that open standards efforts are underway and gaining traction, such as the new DEF STAN 00-82 standard in the UK. This trend is helping to drive industry use of standard media, such as 10 GbE, to transport data, which will reduce system complexity and promote system interoperability.

The open standards approach

A standard approach to digital video connectivity will help military suppliers provide compatible subsystems that support a common infrastructure. Emerging standards for digital video interoperability support this approach. One example, as mentioned, is a well-advanced, recently published UK standard, DEF STAN 00-82, which establishes protocols for digital video streaming. This standard is mandated by the UK Ministry of Defence’s (MoD’s) Generic Vehicle Architecture standard (DEF STAN 23-09), which is overseen by a standards consortium. Curtiss-Wright Controls Embedded Computing (CWCEC) recently demonstrated implementations of digital video streaming and recording over a 10 GbE fabric using DEF STAN 00-82.

A typical video distribution system architecture comprises the front-end interfaces to the sensors, which accept data from legacy analog sources (RGB or composite video) and newer interfaces such as HD-SDI and 3G-SDI. It can also accept streamed video from network-compatible sensors that place data directly onto a digital network, such as GigE Vision devices or systems built on top of the Real-time Transport Protocol (RTP), as invoked by DEF STAN 00-82. Multiple video streams are then sent through a rugged Ethernet switch and multiplexed to create the 10 GbE fabric of the video network within the platform.
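
To illustrate what a network-compatible sensor stream looks like at the packet level, the sketch below parses the fixed RTP header defined in RFC 3550, the transport layer that DEF STAN 00-82-style streaming builds on. This is a minimal illustration only; the port number and socket handling are assumptions, not part of any standard discussed here.

    # Minimal sketch: parse the 12-byte fixed RTP header (RFC 3550) from a
    # received UDP payload, the kind of packet a DEF STAN 00-82-style video
    # stream rides on. Port and socket details are illustrative assumptions.
    import socket
    import struct

    RTP_PORT = 5004  # illustrative port only

    def parse_rtp_header(packet: bytes) -> dict:
        """Unpack the fixed RTP header fields."""
        if len(packet) < 12:
            raise ValueError("packet too short for an RTP header")
        b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
        return {
            "version": b0 >> 6,
            "payload_type": b1 & 0x7F,   # identifies the video encoding
            "marker": (b1 >> 7) & 0x1,   # often flags the end of a video frame
            "sequence": seq,             # detects packet loss and reordering
            "timestamp": timestamp,      # media clock for synchronizing feeds
            "ssrc": ssrc,                # identifies the sending sensor
        }

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", RTP_PORT))
    data, _ = sock.recvfrom(65535)
    print(parse_rtp_header(data))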

The platform’s video processing and data storage equipment, such as CWCEC’s Sentric2 digital video recorder, connects to the 10 GbE fabric to extract multiple video feeds from the multiplexed stream (Figure 1). The appropriate image processing can then be performed on the video data, such as multispectral image fusion, image registration and stabilization, image enhancement, and image stitching that combines multiple sensor views to provide an outside view of the platform.

 

Figure 1: The Sentric2 High-Definition Video Recorder System captures, compresses, and stores multiple channels of high-resolution analog or digital video, composite TV, network video, and audio in digital formats.
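
To make the processing step concrete, here is a minimal sketch of the simplest kind of multispectral fusion: a per-pixel weighted blend of two frames that have already been registered to the same geometry. Deployed systems use far more sophisticated, often multi-scale, algorithms; the frame shapes, weight, and sensor roles here are assumptions for illustration only.

    # Illustrative sketch of basic multispectral fusion: blend two registered
    # grayscale frames (e.g., visible and IR) into a single fused frame.
    import numpy as np

    def fuse_frames(visible: np.ndarray, infrared: np.ndarray,
                    ir_weight: float = 0.4) -> np.ndarray:
        """Weighted per-pixel blend of two frames of identical shape."""
        if visible.shape != infrared.shape:
            raise ValueError("frames must be registered to the same geometry")
        fused = ((1.0 - ir_weight) * visible.astype(np.float32)
                 + ir_weight * infrared.astype(np.float32))
        return np.clip(fused, 0, 255).astype(np.uint8)

    # Synthetic 480x640 frames standing in for real sensor output
    vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    print(fuse_frames(vis, ir).shape)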


Supporting heterogeneous protocols

By using an open standard for transporting multispectral data over a single network, the system designer also eliminates the need to translate between different video protocols. The result is a unified transport medium and a unified vocabulary for controlling the various video sources and the format of the video streams. Protocol conversion can be costly for high-bandwidth video data, so the best approach is for the sensor to speak the same language as the equipment receiving the video signal. There are two aspects to a video protocol: data and control. It is especially important that equipment wanting to subscribe to video streams from specific sensors be able to speak a control language those sensors understand, and that the control language be sufficiently generic that multiple manufacturers can conform to the standard with their own specialized equipment.
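
The sketch below shows what such a generic control exchange might look like: a recorder or display subscribes to a named sensor's stream and requests a format. This is a hypothetical vocabulary, not the DEF STAN 00-82 control protocol; the message fields, port, addresses, and sensor name are assumptions for illustration.

    # Hypothetical control exchange: a subscriber asks a sensor for its video
    # stream in a requested format. NOT the DEF STAN 00-82 control protocol;
    # all fields, the port, and the multicast address are assumptions.
    import json
    import socket

    CONTROL_PORT = 6000  # assumed control port

    def subscribe(sensor_host: str, sensor_id: str,
                  resolution: str, frame_rate: int) -> dict:
        """Send a subscribe request and return the sensor's reply."""
        request = {
            "command": "subscribe",
            "sensor_id": sensor_id,            # which sensor's stream we want
            "resolution": resolution,          # requested output format
            "frame_rate": frame_rate,
            "destination": "239.1.1.10:5004",  # where to send the RTP stream
        }
        with socket.create_connection((sensor_host, CONTROL_PORT), timeout=2.0) as conn:
            conn.sendall(json.dumps(request).encode() + b"\n")
            reply = conn.makefile().readline()
        return json.loads(reply)

    # A recorder might issue: subscribe("10.0.0.21", "turret_ir_cam", "1920x1080", 30)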

Eliminating dedicated cabling

Typically, an individual cable connects each video sensor to the data-concentration point. The common infrastructure approach eliminates the need to run multiple cables through confined spaces, supporting transmission of HD sensor video, simultaneous digital video, and recording metadata over a single standard network. It also enables multiple video streams to be concentrated at one or more processing units for advanced image-fusion techniques.

Keeping up with sensor proliferation

Attaining video sensor data interoperability with open standards and a common infrastructure network will help COTS vendors keep up with the growth of sensors on military platforms. The real-time battlefield data these sensors provide saves lives and supports critical missions. Reducing cabling and configuration complexity and providing a high-bandwidth common architecture for distributing video sensor data will enable onboard processors to take full advantage of the growing amounts of video sensor data being made available to today’s warfighter.

To learn more, e-mail Steve at [email protected].

 
