Key considerations in radar test
Makers of traditional radar and electronic warfare (EW) test and measurement equipment are adapting to meet new requirements, introducing modular instruments and additional modeling and simulation during different test phases. Modeling and simulation will also reduce the need for expensive full-system testing and aid in identifying and solving problems earlier in the testing process to reduce schedule risk.
The operating environment and requirements for military radars – and particularly EW – are changing rapidly, driven in part by the proliferation of radar architectures, including active electronically scanned array (AESA), bistatic, and passive radar. Each of these architectures supports a seemingly infinite number of software-defined techniques – as in cognitive radar and low probability of intercept (LPI) radar – which expands the range of testing that test systems must cover.
Another trend is platform miniaturization, which is driving the consolidation of radio frequency (RF) systems. Future radars, EW receivers, and communications will likely share the same sensor platform and be tested as a unit. Autonomy similar to that seen in the commercial-vehicle market will also drastically increase the amount of testing required across multi-sensor and multiplatform systems as applications demand higher levels of safety and reliability.
Radar modeling and target simulation is the only type of test that can be applied throughout the design process. The increased complexity of radar systems makes flexible radar modeling and simulation during development critical to decreasing the cost of expensive full-system testing, finding and resolving design problems earlier in the process, and reducing schedule risk.
New test considerations
Addressing test challenges early in the test design process means understanding the initial component- and system-level test considerations for such innovations in the radar and EW industry as hypersonic weapons, multistatic sensors and drones, networked electronic order of battle, and cognitive radar or predictive EW.
For systems that combine data from a group of sensors and make software-driven adjustments based on that data, two component tests are critical: a waveform-variance test for antennas and a signal-integrity test for system inputs and outputs (I/O). Because the antennas are multipurpose, the test must account for waveform variance and verify that both isolation and directivity are high. The mix of sensors and the data they generate makes the system I/O complex; signal-integrity testing must verify sustained high data throughput and support for customizable system I/O. At the system level, the heavy software suite and integration require further testing with a series of multifunction simulations to ensure the software can handle potential errors or unexpected inputs.
For their part, hypersonic weapons systems and reacting platforms need dependable low-latency systems to adapt quickly enough to the environment. As a result, radar and EW systems have higher range requirements, which means that their antenna systems at the component level must feature more elements per antenna for the radar to conduct more precise beam steering with phase and amplitude control. The system level requires low-latency testing, specifically quick update rates for simulations, to ensure that the system can keep up with the hypersonic speeds and decision-making of the weapons or antiweapon system.
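To make the low-latency requirement concrete, here is a rough back-of-the-envelope sketch of the scenario update rate a simulator needs to keep pace with a hypersonic target. All numbers are illustrative assumptions, not program requirements:

```python
# Rough sketch: how fast must a target simulator update its scenario
# so that a hypersonic target's simulated position stays accurate?
# All numbers are illustrative assumptions, not requirements.

def required_update_rate_hz(target_speed_m_s: float,
                            max_position_error_m: float) -> float:
    """Update rate needed so the simulated target moves no more than
    max_position_error_m between consecutive simulation updates."""
    return target_speed_m_s / max_position_error_m

MACH_1_M_S = 343.0          # speed of sound at sea level (approx.)
speed = 5 * MACH_1_M_S      # a Mach-5 target: roughly 1,715 m/s

# To keep the simulated position within 1 m between updates:
rate = required_update_rate_hz(speed, 1.0)
print(f"Target speed: {speed:.0f} m/s")
print(f"Required scenario update rate: {rate:.0f} Hz")
```

Even this simplistic model shows kilohertz-class update rates; tighter position tolerances or closing geometries push the requirement higher still.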
The requirement to know more, earlier, about smaller radar targets or a target environment has led to greater demand for systems that combine multistatic (geographically diverse) sensors with unmanned aerial systems (UASs), which must work in tandem. At the component level, these connected systems drive the need for wider-band linear components that may need to be tested for nontraditional impairments.
For elements on phased-array antennas, high gain and directivity ensure that each element delivers higher performance over a smaller area, while the full set of elements provides the correct coverage for the overall phased-array antenna. High directivity and tighter beams enable the radar to find smaller and more distant targets. Testing these radar systems for robustness and accuracy means balancing channel counts against high-density, detailed EW simulation.
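The link between element count and beam tightness can be sketched with a textbook uniform-linear-array model. Half-wavelength spacing and the element counts below are illustrative assumptions, not parameters of any particular radar:

```python
import numpy as np

# Sketch: the half-power beamwidth of a uniform linear array narrows as
# the element count grows, illustrating why more elements per antenna
# give tighter beams. Half-wavelength spacing assumed (textbook model).

def array_factor_db(n_elements: int, theta: np.ndarray) -> np.ndarray:
    """Normalized array factor (dB) of an N-element uniform linear
    array with half-wavelength spacing, steered to broadside."""
    psi = np.pi * np.sin(theta)            # inter-element phase shift
    num = np.sin(n_elements * psi / 2)
    den = n_elements * np.sin(psi / 2)
    # Handle the 0/0 limit at broadside, where the array factor is 1
    den_safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    af = np.where(np.abs(den) < 1e-12, 1.0, num / den_safe)
    return 20 * np.log10(np.abs(af) + 1e-12)

def half_power_beamwidth_deg(n_elements: int) -> float:
    """Width (degrees) of the main lobe between its -3 dB points."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
    af = array_factor_db(n_elements, theta)
    above = theta[af >= -3.0]   # sidelobes sit below -13 dB, so this
    return np.degrees(above.max() - above.min())  # is the main lobe

for n in (8, 16, 64):
    print(f"{n:3d} elements -> ~{half_power_beamwidth_deg(n):.1f} deg beamwidth")
```

Doubling the element count roughly halves the beamwidth, which is why long-range, fine-resolution radars push toward more elements per antenna.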
The connected world and big data trends have also inspired a networked electronic order of battle, which is a series of new types of sensors and devices working together to identify, locate, and classify other groups’ movements, capabilities, and hierarchy. With the wide array of sensors used, testing at the component and the system level requires more complex I/O analysis. Such systems also need intricate simulators that can provide higher fidelity and handle more complex threat scenarios.
As these enhanced systems generate more data at a higher rate, cognitive radar or predictive EW systems are needed that make decisions and organize data faster than humans. For these systems, component and subsystem test program sets involve a wider range of frequencies and bandwidths than other systems. Also, traditional parametric testing is likely not enough to fully understand system performance, which means that modeling and simulation testing must be done early in the test process. At a system level, open-loop simulators are no longer a viable option; test assets need to more accurately emulate targets and environments instead of relying on traditional threat databases that do not assess all the capabilities of a cognitive radar system.
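The open-loop versus closed-loop distinction can be sketched minimally: rather than replaying a prescripted echo, a closed-loop simulator first measures what the radar actually transmitted and derives its response from that. The FFT-peak tone estimate, sample rate, and frequencies below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of open-loop vs. closed-loop target simulation.
# An open-loop simulator plays back a prescripted echo; a closed-loop
# simulator measures the radar's actual transmission and derives the
# echo from it. Sample rate and frequencies are illustrative.

FS = 1e6  # simulator sample rate (Hz), assumed

def estimate_tone_hz(samples: np.ndarray) -> float:
    """Estimate the dominant frequency of a received pulse (FFT peak)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / FS)
    return freqs[np.argmax(spectrum)]

def closed_loop_echo(samples: np.ndarray, doppler_hz: float) -> np.ndarray:
    """Synthesize an echo at the *measured* frequency plus a Doppler
    shift -- tracking whatever the radar actually transmitted."""
    f_echo = estimate_tone_hz(samples) + doppler_hz
    t = np.arange(len(samples)) / FS
    return np.cos(2 * np.pi * f_echo * t)

# The radar transmits a frequency the script didn't anticipate (100 kHz);
# a prescripted open-loop echo would be wrong, but this echo adapts.
t = np.arange(1024) / FS
tx = np.cos(2 * np.pi * 100e3 * t)
print(f"Measured TX frequency: {estimate_tone_hz(tx) / 1e3:.1f} kHz")
```

A real closed-loop simulator would estimate far more than one tone (modulation, pulse timing, agility patterns), but the structural point is the same: the response is computed from the measured transmission, not from a fixed database.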
Traditional test instrumentation considerations
Four traditional test approaches are used for radar system integration and test: delay lines, commercial off-the-shelf (COTS) FPGA-enabled instrumentation or RF systems on chip (RFSoCs), COTS radar target generators, and turnkey test and measurement solutions. Each of these test methods presents its own strengths and weaknesses.
Delay lines are robust and cost-effective solutions that are easier to buy and develop and that meet very low-latency requirements. However, they are very limited in capability and only work for simple system functionality testing. They don’t offer electronic counter-countermeasure (ECCM) techniques and simulations of real-world environments or scenarios seen by modern radars, like clutter and interference.
COTS FPGA-enabled instrumentation or RFSoCs feature low capital cost, low-latency capabilities, and customizable design. These do, however, require substantial initial nonrecurring engineering (NRE) costs, can be difficult to maintain, and involve a lot of pre-test firmware and software work.
COTS radar target generator systems have a lower NRE cost investment because of their higher-level software starting point and ability to be tailored earlier in the process to specific application needs. There are some drawbacks, however: These COTS radar target generators typically cost more, require support to upgrade and maintain, and lack flexibility because a larger part of their functionality is already defined.
Closed or turnkey test and measurement solutions are delivered as complete systems, which yields excellent dynamic range, well-calibrated and well-understood support based on a core COTS model, and the ability to be leveraged across multiple programs quickly. However, turnkey solutions are limited to vendor-defined functionality and are difficult to configure for unique system needs. They also introduce higher latency because they are not optimized for a specific test, are typically not phase-coherent, and are often prescripted or open-loop systems.
Test instrumentation trends
The industry trends affecting new radar and EW technology are also driving new instrumentation trends like industry convergence, software-defined platforms, test system maintainability, and test system architectures.
As the technologies and testing for industries like automotive, 5G, and defense converge, test instrumentation must expand frequency coverage and offer larger operating bandwidths with higher channel counts. Test and measurement vendors are investing more in software platforms to run their instruments as customers quickly choose the flexibility, test speed, and reliability of software over previous manual test systems. Economies of scale are driving down test instrumentation solution cost while creating more capable test instrumentation.
The industry agrees that boxed instruments for test need to last 8 to 12 years, with firmware updates required at 18- to 24-month intervals, and hardware upgrades every 18 to 36 months. Makers of these boxed instruments are emulating personal mobile devices by building in touchscreens, and are creating “super boxes,” or collections of boxed instruments, for larger test coverage from single systems.
Modular instruments are seeing the most growth in the industry with an increase in radio front ends, multiprocessor architectures, and reporting and storage needs. Modular hardware and software platforms enable users to adapt test systems for a wide variety of needs, from faster design and reduced schedule risk to compliance with future system requirements and added flexibility with FPGA and RF hardware in the same device.
Multipurpose modular measurement instruments also offer improved measurement IP, better components, advances in signal processing, and better software accessibility and architectures. In addition, modular test instrumentation has led to more compact test systems, so that more than one function can fit into a smaller, PXI-based modular instrument or system.
National Instruments www.ni.com