Code Quality: Improving testing for military-grade applications
Once upon a time, long, long ago, the only kind of dynamic software test was a system functional test. The battle against unreliable software was exclusively fought with complete system test in which an application’s worth was proven by reference to a set of requirements, a set of test data, and expected results.
While this still provides a vital component of the validation and verification process, most test data sets exercise only particular parts of the code. Unfortunately, it is the unusual code path, one called into play only when something out of the ordinary happens, that can lead to catastrophic results in the field. An example might be the test of a divisor to ensure that it is not zero prior to its use in a calculation. It should never happen, but what if it does, and the check is flawed?
To guard against this kind of possibility, it's good to introduce unit and integration testing as well. Unit testing involves writing a wrapper "harness" around a function or procedure, passing data to it, and ensuring that the output generated is in line with design requirements. Integration testing builds on that success by employing a similar approach, but allowing the functions to make calls to other functions in the call tree, thus proving that the units work together as expected.
Unit and integration tests can fill in the gaps left behind by system test and exercise constructs to protect against those unexpected occurrences, such as “divide by zero”. Alternatively, we can exercise the whole system from the “bottom up”, first by proving that the smallest functional components have been fully exercised, and then showing that they work together.
Either way, although we now have the means to exercise all of the code, how do we know that we have done so? Good test tools provide structural coverage metrics that quantify how much of the code has been exercised during testing. The use of standards such as DO-178 has proven that this approach reduces the risk of failure. Consequently, it has become the norm for most embedded military standards.
While such standards do not demand that you use tools to generate this information, demonstrating coverage manually is so time consuming (and so much more prone to error) that most companies see tools as a way to significantly reduce development costs. Test tools create coverage data using proven instrumentation mechanisms: function calls inserted into the code that record which paths were executed. Building an equivalent mechanism in-house can take an effort comparable to that of the application code itself.
Third-party tools also provide a measure of independence, giving evidence that tests are comprehensive using a mechanism written by an organization with no vested interest in the outcome.
So that’s the end of the story, is it? Using these tools and techniques, you can slay the dragon and prove that all statements are functionally correct and have been exercised.
Well, maybe. It depends on the implications of failure. The more critical the application, the more demanding the standards. Does the amount of coverage data you have generated reflect the criticality of the project? Has the code been exercised on the target, or the host?
More on all of this later. We'll walk through the various aspects of how to build code quality so that, regardless of what circumstances demand, you'll know how to use test tools to slay the dragon and rescue that pile of gold.