Cognitive technologies tackle security, robotics, and data analysis in drive to deliver smart military systems
Cutting-edge cognitive technologies will be the enabling factor in getting autonomous systems off the ground and into the military theater of operations. Specifically, artificial intelligence (AI), machine learning (ML), and deep learning (DL) techniques will be key to developing autonomous systems that deliver on the promise of taking some of the data and operational workload off the warfighter.
Artificial intelligence (AI), machine learning (ML), and deep learning (DL) all hold much promise in the quest to develop and use smart military systems. These technologies are still in their infancy, however, which means that engineers and designers face constraints and technical challenges that will need to be addressed as threats become more complex.
Many “smart” programs are still in the development stage, “where they are evaluating the performance, processors, and software to enable ML and AI in our space,” says Devon Yablonski, principal product manager, Mercury Systems (Andover, Massachusetts). “A challenge involves acquiring the data to train the algorithms and models to provide high confidence results. While security (both physical and cyber) is a known concern, it is not going to get as much focus until applications and systems get closer to a deployment maturity.
“This is a problem, which we have seen in the other application spaces such as radar and EO/IR [electro-optic/infrared], because security and safety need to be built into the hardware designs from the start,” Yablonski continues. “If it is not a strong consideration at the start of development, it is conceivable – if not probable – that a processor, network interfaces, computers, and software are designed in that have no ability to support the level of security or safety standards that are required. The result is either a complete redesign late in the development phase or an expensive effort to now get the hardware and software configuration certified; sometimes this is not possible in hindsight.”
ML, AI, and DL technologies are poised to help secure military systems but must be considered during the development phase of the design process. Basically, “The first challenge is that systems are just getting more and more complex,” says Terry Patten, principal scientist at Charles River Analytics (Cambridge, Massachusetts). “As the complexity of systems increases, the effective attack surface that adversaries can use to attack also increases in size and complexity, making it harder to create systems that are not vulnerable in some way or other.”
Yablonski says, “There are a number of emerging concerns for security with ML/AI systems; the issues surprisingly remain the same whether considering securing an ML/AI application or using ML/AI in the security system itself.”
Because systems are becoming more complex and this is an ongoing challenge in the defense industry, engineers “need to develop systems that are better able to recognize and respond to attacks,” says Avi Pfeffer, chief scientist at Charles River Analytics. “In this case, some of the ways that AI/ML might help are in predicting attacks before they happen (for example, could you learn enough about the typical life cycle and evolution of malware to predict what malware will look like before it exists?), identifying attacks as they happen (or, could you learn enough about what malicious code looks like to identify a zero-day attack when it first appears?), and responding to ongoing attacks (that is, cyberattacks often happen faster than human defenders can respond – can we create AI/ML systems that assist in the defense?).”
AI/ML systems that can quickly detect and adapt to attacks will be integral to using smart systems. “This means that the threat being detected or its effect changes quickly and, as such, applications will have to quickly change, and the systems will need to be dynamic to handle application changes without major, expensive technical refreshes,” Yablonski says.
Unfortunately, the issue of rapid threat evolution is still being explored: “While ML/AI systems excel at the classification tasks they are trained for, they are not readily adaptable to new situations without retraining, and in the worst cases, without adapting the structure of the network that is trained,” Yablonski says. “Presently, curation of data for training is performed human-in-the-loop or ‘supervised,’ which introduces a delay between data acquisition and deployment of updated ML classifiers.”
In addition, “Cybersecurity can be challenging because the success of AI/ML solutions is dependent on the quality of the algorithms and training,” asserts Ray Petty, vice president, Aerospace & Defense, at Wind River (Alameda, California). “Training of the AI/ML systems, especially, can be difficult in a military environment because it typically requires the identification of ‘normal’ behavior and ‘off-normal’ behavior which can be very challenging in military systems with a mix of equipment from multiple generations.”
Even though the technology needs maturation, there are case studies that prove to be successful. For example, “ML has been used to learn models of normal, nonmalicious behavior (e.g., benign software, ordinary network traffic, typical operating-system usage) and it can then detect threats by looking for anomalies that do not look like the learned normal patterns of behavior,” Pfeffer explains. “More recently, ML methods are being used to predict future threats, though this is more of an interesting research area than a proven capability at the moment.”
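The “learn normal, flag deviations” approach Pfeffer describes can be illustrated with a deliberately minimal sketch: a statistical baseline standing in for a real ML model. The packet-size data and 3-sigma threshold below are illustrative only.

```python
import statistics

def fit_baseline(samples):
    """Learn a 'normal' profile from benign observations (e.g., packet sizes)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag observations more than k standard deviations from the learned mean."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

# Train on benign traffic only, then score new observations.
benign_packet_sizes = [512, 498, 520, 505, 510, 515, 500, 508]
baseline = fit_baseline(benign_packet_sizes)

print(is_anomalous(511, baseline))   # typical packet -> False
print(is_anomalous(9000, baseline))  # far outside the learned profile -> True
```

A fielded detector would model many features at once with a trained classifier, but the pattern is the same: the profile of “normal” is learned from data, never hand-coded.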
Although detecting future threats is not yet a fielded capability, as mentioned before, “We can expect to see it used in the military for detecting threats,” says David Tetley, principal software engineer, Abaco Systems (Huntsville, Alabama). “AI can be used to identify network intrusions by using AI algorithms to examine network communication in terms of traffic patterns and packet content. It can also be used to identify malware and viruses by recognizing malicious files so they can be preemptively isolated from the system.”
Utilizing AI for offensive measures in cybersecurity makes sense in cases where “you want to find the bad actors and their comms before they affect your systems or in parallel with them already having an impact on your systems,” says Chris A. Ciufo, chief technology officer at General Micro Systems (Rancho Cucamonga, California). “For example, they might be disrupting your RF communications by jamming or they might be using low-tech cellular services to infiltrate or communicate your position to their buddies and planning an attack.”
Detecting future cyber intrusions will prove to be an asset. Moving forward, “the key trends will likely be in line with how AI is being used for cybersecurity in the commercial world,” Tetley says. “That will see, for example, the use of biometric logins to military systems – retina scans and so on.”
AI and ML introduce new dimensions to security beyond the first step of logging in to a system, Yablonski says. “AI and ML are predominantly employed in applications that perform identification/classification tasks. As such, a natural inclination is to employ them for Behavioral Intrusion Detection: that is, can the application distinguish between ‘typical operation’ and ‘anomalous operation’ of the host system itself that might be indicative of a cyber-compromise?”
“Mercury believes that practical solutions to this challenge begin with hardware security – to establish a trusted computing base impervious to the actions of a cyber adversary,” Yablonski asserts. “That is, any fielded military system should be able to power-cycle into a known-good configuration, irrespective of the number of successful cyberattacks that were launched against it during operation. This view is consistent with secured boot, cryptographic signature checks, and other system-security engineering mechanisms common in defense applications today. Hardware must provide the initial root of trust and must be guaranteed to be secure before cybersecurity techniques can be effective.”
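Yablonski’s point about secured boot and cryptographic signature checks can be sketched in miniature. Real secure boot verifies asymmetric signatures anchored in a hardware root of trust; this hypothetical example uses a keyed hash (HMAC) and an invented key name simply to show the verify-before-execute pattern.

```python
import hashlib
import hmac

# Hypothetical device key (illustrative only; actual secure boot uses
# asymmetric signatures verified by a hardware root of trust).
ROOT_OF_TRUST_KEY = b"fused-device-key"

def sign_image(image: bytes) -> bytes:
    """Factory step: produce a MAC over the known-good boot image."""
    return hmac.new(ROOT_OF_TRUST_KEY, image, hashlib.sha256).digest()

def verify_before_boot(image: bytes, signature: bytes) -> bool:
    """Boot-time step: refuse to execute an image whose signature fails."""
    expected = hmac.new(ROOT_OF_TRUST_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

good_image = b"\x7fELF trusted-bootloader"
sig = sign_image(good_image)
print(verify_before_boot(good_image, sig))            # True: boot proceeds
print(verify_before_boot(good_image + b"\x00", sig))  # False: tampered image rejected
```

The key property matches the article’s “power-cycle into a known-good configuration”: no matter what ran before, only an image that passes the check is ever executed.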
However, using AI/ML/DL technologies will enable the military to take an offensive stance, such as protecting tactical networks. This strategy “requires ML/AI-capable systems able to process massive amounts of data extremely fast,” Ciufo says. “Until recently, any data from the battlefield had to be collected and sent to a data center possibly halfway around the world, taking hours to analyze and provide any actionable intelligence. That’s all changing, with new servers and small-form-factor systems that offer vast amounts of parallel processing (called ‘coprocessors’ or algorithm processors). These coprocessors are either GPUs – from Nvidia or AMD – or FPGAs from Altera or Xilinx. These are power- and data-hungry processors intended for a land-based data center that supplement the system’s main CPU(s). They require massive memories and fat pipes to move data into the system and around the system between the coprocessors.”
Therein lies the dilemma or, better stated, the elephant in the room: How does the warfighter sift through all that data in a timely manner to gather actionable intelligence?
It’s true that AI technology can benefit a range of military systems, but real constraints remain. Foremost among them is the “Big Data” problem: the technology is still not advanced enough to handle enormous amounts of real-time data.
Regardless of the current limitations, it’s the expected potential of what AI offers that drives the defense industry to innovate. As Ciufo puts it: “Hands down, the military application that benefits the most from the use of machine learning and AI is C4ISR [command, control, communications, computers, intelligence, surveillance and reconnaissance]. ML and AI will help C4ISR search through petabytes of sensor data that today’s already-deployed systems can collect in order to identify and predict patterns, threats, images, anomalies, and to basically turn all that data into ‘actionable intelligence,’ so that proper actions can be taken. Such data analysis and interpretation can all be done in minutes now instead of hours.” (Figure 1.)
It’s paramount for users to be able to grab that actionable intelligence as quickly as possible. For defense applications, “‘recognition’ can be considered in multiple contexts,” Yablonski says, “from concrete questions such as ‘Is that our aircraft approaching?’ and ‘Does that speck in the sky move like a bird or like a jet?’ to the more abstract ‘Does that network traffic look normal?’ or ‘Does the spectrum today appear to be the spectrum we’ve been seeing?’ ML and AI hold the promise to build simulated brains tailored to process more concurrent senses – sensors – than a human brain has bandwidth to support.”
What these different techniques – AI, ML, DL – deliver includes increased levels of automation in cybersecurity and military applications overall, Petty says. “Automation in AI/ML/DL training, automation in AI/ML/DL decision-making, increased integration, and automation of AI/ML/DL in the decision-making loop result in much faster time to results and more efficient use of staff resources.”
Robotics: also smart systems
So let’s talk about automation. Another major area that AI, ML, and DL technologies are benefiting is “autonomous systems and vehicles,” Tetley says. “AI enables superior computer ‘perception’ of an environment via AI-based segmentation, object detection, and recognition algorithms, along with sensor fusion. It can also facilitate smart path planning so vehicles can get from A to B in the safest and most efficient way, based on perception/locality sensors and HD [high-definition] maps, while continually improving and optimizing their planning via machine learning.”
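The smart path planning Tetley describes is commonly built on graph search such as A*. Below is a minimal sketch over a toy 2-D occupancy grid; the grid, unit step costs, and Manhattan heuristic are illustrative, not any vendor’s implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* search over a 2-D occupancy grid: 0 = free cell, 1 = obstacle.
    Returns a lowest-cost path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (estimate, cost-so-far, cell, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # obstacle row forces a detour
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked cells
```

A real vehicle planner layers in continuous dynamics, HD-map costs, and learned risk estimates, but the core search-with-heuristic structure is the same.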
Petty agrees, saying, “Robotics is also experiencing significant success with AI/ML as evidenced by the explosion of autonomous vehicles using AI/ML as part of their overall autonomy package.”
With a more detailed look, “robotics/autonomous vehicles might use even more AI/ML, since there are a lot of functions that are done without a human controller, even if the vehicle is controlled by a human operator at the highest level,” Pfeffer explains.
In the robotics/autonomous vehicle arena, Ciufo says, “This means a lone armored vehicle equipped with machine learning at the edge can analyze network traffic, for example, and locate – and ultimately predict – patterns of the perpetrator’s communications and/or attempts to infiltrate your network and communications.”
One prominent example of this technology is Project Maven – the Pentagon program that drew on Google’s AI expertise – “which processes drone imagery for object detection,” Yablonski states. “The results are still not precise enough to enable a significant amount of autonomous action; however, it is good enough to help a human filter a large complex environment and determine what is and is not an actual threat or otherwise interesting object. That still leaves a human in the loop, which implies still a significant time delay to an action. In many cases, the data may be too stale by the time the human operator can act on it, which is why the Holy Grail is to have the computer detect the issue and take care of it altogether. Over time, especially with safety-certified, secure hardware and software, the ML and AI can do more of the work.”
Because of projects like Maven, robotics is seeing increased use of ML and AI techniques, which can help keep the warfighter out of harm’s way. By using this cutting-edge tech, “ML and AI [can] address recognition tasks that are difficult to specify in programming-language assertions,” Yablonski says. “As an example, the task of recognizing a traffic sign within an image is one a brain does with exceptional ease, irrespective of the orientation or scale of the sign; however, specifying the rules for how to process the pixels of the image to arrive at the same conclusion is intractable. The use of ML/AI in these applications is to effectively synthesize a brain simulator tailored to that specific task.”
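Yablonski’s point – that the recognition rules are synthesized from examples rather than hand-written – can be shown in its simplest possible form: a nearest-neighbor classifier. The four-“pixel” images and labels below are toy stand-ins for real sensor frames.

```python
def classify(image, training_set):
    """1-nearest-neighbor: label a new image by its closest training example.
    No pixel-processing rules are ever written down; the decision
    boundary emerges entirely from the labeled data."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(((lbl, distance(image, example))
                    for lbl, example in training_set),
                   key=lambda pair: pair[1])
    return label

# Toy 4-pixel "images" (illustrative data, not a real sign dataset).
training_set = [
    ("stop_sign",  (0.9, 0.1, 0.1, 0.9)),
    ("yield_sign", (0.1, 0.9, 0.9, 0.1)),
]

print(classify((0.8, 0.2, 0.2, 0.8), training_set))  # "stop_sign"
```

A deployed system would use a deep network over millions of pixels, but the contrast with hand-coded rules is the same: change the training data and the “rules” change with it, with no code rewrite.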
“More specific applications are the use of AI-based machine vision for target detection and tracking in EO/IR surveillance systems and weapons platforms,” Tetley adds. For example, he cites the Abaco Systems Obox, “a self-contained platform specifically designed for the development of AI-based/machine vision-based autonomous applications. This too includes NVIDIA GPUs as well as up to two Intel Xeon D 12-core processors/Xeon E four-core processors to deliver up to 14 teraflops of performance – the level of performance required for fully autonomous vehicles.” (Figure 2.)
The key moment for AI will be when these autonomous systems can detect and adapt to the information in front of them. “‘Fuzzy logic’ is a term often applied to how AI works – which perhaps makes applications like Degraded Visual Environment particularly relevant: enabling a vehicle or its occupants to ‘see’ even when normal vision is obscured by sand, cloud, water, and so on,” Tetley adds.
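The fuzzy logic Tetley mentions can be illustrated with membership functions, which grade a raw sensor reading against overlapping categories instead of forcing one hard threshold. The visibility ranges below are invented purely for illustration.

```python
def triangular(x, a, b, c):
    """Triangular membership function: degree (0..1) to which x belongs to a
    fuzzy set that peaks at b and falls to zero at a and c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def visibility_memberships(meters):
    """Map a visibility reading to graded memberships in overlapping sets,
    rather than a single hard label (ranges are illustrative)."""
    return {
        "obscured": triangular(meters, -1, 0, 60),
        "degraded": triangular(meters, 40, 100, 200),
        "clear":    triangular(meters, 150, 400, 1000),
    }

# A 50-meter reading is partly "obscured" AND partly "degraded" at once --
# the graded overlap that distinguishes fuzzy logic from crisp thresholds.
print(visibility_memberships(50))
```

A Degraded Visual Environment system would fuse many such graded inputs (dust, cloud, sensor confidence) through fuzzy rules before committing to an action.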
Unfortunately, the “defense application space is prime for encountering new threats revealed for the first time during combat operations,” Yablonski says. “Mercury believes this will generate a push toward unsupervised learning so as to adapt quickly to new threats. However, when learning is performed with data collected from the field, it offers one’s adversaries a new attack surface: how might they attempt to bias the in-conflict training? It may be interesting to see if supervised ML systems will be employed to manage the training process for unsupervised ML systems.”