New technology helps multicore meet safety-critical standards
As I mentioned in my previous blog post, though, the very thing that makes multicore attractive—parallel processing—makes it harder to test, particularly when you’re trying to meet DO-178 standards. Traditional instrumentation methods have come up short for multicore testing because of their memory and run-time overhead and the way they collect data.
This is where solutions using ultra-light instrumentation create a breakthrough. For the first time, multicore developers have an efficient, cost-effective way to verify their software against a safety-critical standard. Let me explain how it works.
Ultra-light instrumentation reduces memory footprint requirements
Traditional instrumentation combines precompiled and run-time processes, inserting probe points on every line of code.
With ultra-light instrumentation, static analysis of the code under test determines the best locations for instrumentation points. This ultra-light instrumentation, coupled with a new, highly optimized test harness framework, significantly reduces the memory footprint required to perform system-level testing and coverage analysis.
With this approach, it’s now possible to use test automation and hardware stubbing on target systems with well under 1 KB of RAM/ROM. The approach also uses a highly optimized data collection technique that integrates all platform test results and coverage dependencies into a single data structure, one that accounts for concurrency constraints by design.
To prevent concurrency issues at run time, this technique eliminates calls into the operating system and into library functions that manage memory or locking. As a result, on resource-limited target platforms, the test environment mirrors the speed and behavior of the final application execution.
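To make this concrete, here is a minimal sketch of a probe built under those constraints—not any vendor’s actual implementation, and all names are illustrative. A single statically allocated structure holds both test results and coverage bits, and every update uses a C11 lock-free atomic, so the probe never calls into the OS, allocates memory, or takes a lock:

```c
#include <stdatomic.h>
#include <stdint.h>

#define NUM_POINTS 256  /* instrumentation points chosen by static analysis */

/* One statically allocated structure holds test results and coverage
 * together: no malloc, no file I/O, no OS calls at run time. */
struct cov_record {
    _Atomic uint32_t tests_run;
    _Atomic uint32_t tests_failed;
    _Atomic uint32_t bits[(NUM_POINTS + 31) / 32];  /* one bit per point */
};

static struct cov_record rec;

/* Mark an instrumentation point as hit. atomic_fetch_or is lock-free
 * on typical targets, so concurrent cores never block in the probe. */
static inline void cov_mark(unsigned id)
{
    atomic_fetch_or_explicit(&rec.bits[id / 32u],
                             (uint32_t)1u << (id % 32u),
                             memory_order_relaxed);
}

/* Record a test outcome without locks. */
static inline void cov_result(int passed)
{
    atomic_fetch_add_explicit(&rec.tests_run, 1u, memory_order_relaxed);
    if (!passed)
        atomic_fetch_add_explicit(&rec.tests_failed, 1u,
                                  memory_order_relaxed);
}
```

Because nothing here can block, fault, or allocate, the instrumented build behaves like the uninstrumented one apart from a few extra memory operations.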
Rather than piecing together multiple component-level tests, system-level testing can be accomplished in fewer passes, saving a significant amount of testing time.
New “bests” in verification technology
Two new “bests” in verification technology help multicore systems achieve DO-178 compliance.
- The coverage data structure now uses every bit. One bit per decision point makes the instrumentation as light as possible and minimizes the memory footprint.
- The inline structure manipulation is done at compile time, resulting in anywhere between one and three instructions per probe point. Compare this to traditional approaches, which can insert 10-20 instructions per probe point, and the difference is stark.
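To illustrate why the per-probe cost can be this small, here is a hypothetical one-bit-per-decision-point probe written as a macro (the names are illustrative, not a vendor API). When the probe ID is a compile-time constant, the word index and bit mask fold into immediates, so on most architectures the probe reduces to a single read-modify-write of one memory word:

```c
#include <stdint.h>

/* One bit per decision point: 64 decision points fit in 8 bytes. */
static uint8_t cov[8];

/* With a constant id, (id) >> 3 and the mask are computed at compile
 * time, leaving only an OR into one byte of the coverage array. */
#define PROBE(id)  (cov[(id) >> 3] |= (uint8_t)(1u << ((id) & 7u)))

/* Example function instrumented on both branches of its decision. */
int abs_val(int x)
{
    if (x < 0) {
        PROBE(0);
        return -x;
    }
    PROBE(1);
    return x;
}
```

By contrast, a traditional probe that calls out to a logging routine must save registers, make the call, and often touch shared run-time state, which can account for the 10-20 instructions per probe point mentioned above.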
In user validation, these approaches together produce an overall overhead of 1-10 percent in terms of executable size and execution time, a significant reduction compared with other mechanisms.
Minimizing the memory and performance overheads of both system test frameworks and code coverage instrumentation does two things:
- Developers can instrument applications on resource-constrained platforms, such as multicore platforms.
- They can run tests once and capture data for the entire application.
This change helps reduce or eliminate test duplication, which increases productivity—especially important with the tight development schedules inherent in the industry’s drive to reduce size, weight, and power (SWaP).
As the industry turns more and more to multicore solutions, it’s clear that traditional testing methods aren’t sufficient. Ultra-light instrumentation, which offers a thorough, yet cost-effective and efficient way to meet safety-critical standards, fills this gap.