'Legacy' is not a four-letter word

Although people often have a negative attitude toward so-called “legacy” code, the widespread use of legacy software is a mark of success. It not only reduces effort, but also increases reliability. Modern programming languages that promote reuse of existing code strengthen this tradition.

I am often asked whether any new applications are being started from scratch in Ada. There are indeed some such examples, a notable one being iFacts[1], a land-based component of the new air traffic control system for the United Kingdom’s National Air Traffic Services (NATS). This application is being developed in Ada, using the SPARK dialect and rigorous formal methods. However, that project is an exception. Most new large applications are not started from scratch, in Ada or any other language for that matter. Instead, they make varying use of so-called “legacy” components, allowing reuse of code from earlier systems.

The word “legacy” is perhaps ill-chosen. For many, it seems to denote rusty old junk code, and it is clear that many programmers would prefer to invent (and very often reinvent) by writing new code from scratch. On the contrary, legacy code at its best represents the effective deployment of reusable components, a goal that has always been regarded as highly desirable. Back in the days of the original development of Ada in the late ‘70s and early ‘80s, code reuse was regarded as a critically important goal; the Ada design reflects this, for instance, in its strong insistence on separating specification from implementation.

There are many reasons why it is generally preferable to reuse code where possible rather than write it from scratch. First, and most obvious, development of complex programs is an increasingly costly proposition, and anything that can reduce costs is highly desirable. Perhaps even more importantly, tried-and-true code that has been effectively deployed is likely to be reliable. Our discussion will address the issues involved in producing highly reliable code, including writing portable code and ensuring that it is fit-for-purpose in its new home.

Paths to reliable code

There are two rather different paths to achieving reliable code in new applications. The first path depends on the use of powerful techniques to prevent defects from being introduced in the first place. The iFacts system exemplifies this approach. It is written almost entirely in SPARK (Figure 1) and uses mathematical proof techniques to ensure freedom from such defects as unanticipated overflow.

Figure 1

Typically, tools are used to derive proof conditions (for example, that overflow is impossible for a particular arithmetic computation); next, automated proof engines attempt to prove these conditions with varying degrees of manual intervention. The use of such techniques will become more prominent as more applications face stringent safety and security requirements. The highest level of security in the Common Criteria[2], EAL7, actually requires fully formal techniques (Figure 2). The recently released Tokeneer system[3], a demonstration project from the NSA engineered using SPARK, provides another good example of this formal approach to ensuring freedom from defects. Tokeneer is a program that controls access to a secure enclave using biometric data such as fingerprints. The purpose of the demonstration is to show that it is feasible (and practical) to create such an application by formally proving the necessary security properties.
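To make the notion of a proof condition concrete, here is a hand-written analogue in C (not SPARK, and not tool-generated). The two asserted conditions are the sort of thing a proof engine would try to discharge statically for every call site, so that the checks never need to execute at all; here they merely run at run time:

```c
#include <assert.h>
#include <limits.h>

/* A hand-written sketch of the proof conditions a tool might derive
   for "a + b cannot overflow". A proof engine attempts to show these
   hold on every path before the program runs; checking them with
   assert() illustrates the conditions, not the static proof. */
int add_no_overflow(int a, int b)
{
    assert(!(b > 0 && a > INT_MAX - b));  /* would exceed INT_MAX   */
    assert(!(b < 0 && a < INT_MIN - b));  /* would fall below INT_MIN */
    return a + b;
}
```

In SPARK itself, such conditions are stated against the declared ranges of the operand types and discharged by the prover, with the varying degrees of manual intervention described above.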

Figure 2

However, another path to reliability exists: long-term deployment of code in actual real-world use. Over years of such strenuous testing, large systems can effectively converge to a state of impressive reliability. Examples include the Apache server[4], the PARS airline reservation system[5], and versions of the AIX operating system[6]. The latter is an interesting data point, since the iFacts application is built on top of AIX, and there would not be much point in creating a highly reliable application if the underlying operating system itself were not trusted. This trust comes not from a formal demonstration of correctness (which is beyond the state of the art), but from NATS’ experience in deploying AIX in live air traffic control systems for more than a decade.

Even if an application is developed under a strict discipline and subjected to formal certification regimes such as DO-178B[7], additional confidence is gained from real deployment under actual operating conditions. Although no lives have been lost on commercial airlines due to software bugs, some hair-raisingly close calls have resulted from software defects. In one example, a Malaysia Airlines B777-200’s avionics software failed midflight at 38,000 feet due to a bug, but fortunately did not cause an accident[8]. So even software developed with this kind of rigor benefits from experience and real-world testing, becoming more reliable over time as such defects are corrected.

Given these considerations, it seems like a no-brainer to conclude that it is preferable to reuse components where possible, and that a high proportion of legacy code should be regarded as a good thing rather than a burden. Nevertheless, many program managers almost seem to apologize for their use of legacy code. Some quite understandable reasons exist for this. Let’s look at two separate issues surrounding code reuse: creating reusable code and choosing appropriate legacy code.

Creating reusable code

One definite issue in using legacy code is that of creating reusable code in the first place. If a component is to be effectively reused, it must be written in a highly portable manner, since it is quite likely to be needed in a different environment (a different architecture, or one utilizing a different compiler) from the original. It is possible to create portable code in any language, though some are more suitable for this task than others. For example, Ada was designed from the start for creating highly portable, reliable programs. It is critical that programmers know, understand, and follow the official definition of the language as described in the appropriate standard, such as the ISO and ANSI standards for C, C++, and Ada. Not all languages have formal standards, the most notable example being Java, but there are still defining documents that are reasonably rigorous.

Programmers can’t just write stuff that happens to work, because all too often they end up depending on undefined or implementation-dependent behavior. For example, the effect of integer overflow is undefined in C, but many compilers have implemented “wraparound” treatment, and many C programs have come to rely on this as the expected behavior. Ada is often advertised as highly portable, and it does better with this issue than many other languages. (For example, Ada allows definition of integer types with programmer-specified ranges rather than relying on whatever the hardware provides.) However, despite the fond wishes of many program managers, there is no magic here, and it is quite possible to write nonportable code. We have a lot of experience at AdaCore in guiding customers through porting code from old compilers to modern technology. In some cases, code has ported with essentially no changes at all. In other cases, programmers have unwittingly relied on implementation-dependent behavior (for example, choice of default alignments), and porting can be more difficult.
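The alignment point can be illustrated with a small C sketch (the record layout here is hypothetical). A compiler is free to insert padding between fields, so writing a struct to a file or socket byte-for-byte bakes one compiler’s layout into the data; serializing field by field into an explicitly defined wire format removes that dependence:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The compiler may insert implementation-defined padding between
   tag and value, so sizeof(struct msg) can differ across platforms
   and compilers. */
struct msg {
    uint8_t  tag;
    uint32_t value;
};

/* Portable alternative: define the wire format explicitly (here,
   5 bytes, little-endian) and serialize field by field, never by
   writing the raw struct. */
size_t msg_serialize(const struct msg *m, uint8_t buf[5])
{
    buf[0] = m->tag;
    buf[1] = (uint8_t)(m->value & 0xFF);
    buf[2] = (uint8_t)((m->value >> 8) & 0xFF);
    buf[3] = (uint8_t)((m->value >> 16) & 0xFF);
    buf[4] = (uint8_t)((m->value >> 24) & 0xFF);
    return 5;  /* wire size is fixed regardless of compiler padding */
}
```

Code written this way ports between compilers without surprises, because nothing in the data format depends on implementation-defined layout decisions.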

For effective code reuse, program components must be very well-designed and documented. From the interface specification, programmers must be able to tell exactly what the code is supposed to do and determine how it should be (re)used. Equally important, internal comments must provide sufficient detail and be consistent with the code, so that if the component needs to be modified when reused, a programmer coming to the code years after it was written (and after its original authors have long disappeared) can comprehend the design and internal organization. Designing a module for effective reuse definitely takes some skill and care and can add to the cost and time required for original code creation. Of course, in an ideal world, that extra cost is more than recovered when the code is eventually reused in some other project. But all too often, internal and external accounting policies militate against applying this sensible overall cost judgment, and instead an initial product is rushed to market in a state unsuitable for reuse.

A good example can be found in recent political events in California. You may recall that Governor Arnold Schwarzenegger did not attend the Republican convention. The official reason was that he had vowed not to leave the state until the state budget crisis was resolved. To show that he was serious, he announced that all state employees would be paid minimum wage until an agreement was achieved. Surprisingly, the State Controller declared this impossible. He blamed an ancient legacy payroll system written in COBOL, vaguely implying that the language was at fault. In fact, COBOL is a perfectly reasonable vehicle for financial programming, and it is certainly possible to write well-documented, portable, reusable COBOL code. But it is unfortunately quite possible that in this case, they were faced with legacy code in the worst sense of the word: undocumented, badly designed, inflexible, and impossible to maintain.

Choosing appropriate legacy code

The second vital factor in successfully reusing legacy code is that the code be fit-for-purpose. To gain the reliability benefits of well-used code, it is extremely important that it be used correctly in its new home. A spectacular failure to follow this critical principle can be found in the initial launch of the Ariane-5 rocket[9], where reused Ariane-4 code caused catastrophic system failure: the specifications of the new rocket did not match those of the old in a critical respect, producing an unexpected arithmetic overflow. The problem was compounded by a design flaw: the code in question was noncritical, yet the unhandled overflow led to shutdown of the critical guidance systems.

On the other hand, for an example of appropriate reuse, consider the problem of writing and maintaining payroll systems in legacy COBOL applications. Tax laws change from year to year, often radically, but writing appropriately modular components avoids the need to rewrite such systems each year; the result is well-written legacy payroll software that can be carried forward. (Apparently the State of California is unlucky not to have such a system.) Languages, systems, and management practices that encourage the creation of such components are a crucial factor in successful legacy code use.
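The modularity point can be sketched in C rather than COBOL (the bracket figures below are invented purely for illustration): the tax schedule lives in a data table, so a new tax year means supplying a new table, while the computation itself, the reusable component, never changes:

```c
#include <assert.h>

/* A progressive tax schedule expressed as data: each bracket taxes
   the income between the previous bound and 'upto' at 'rate'.
   These figures are hypothetical, for illustration only. */
struct bracket {
    double upto;  /* upper income bound of this bracket */
    double rate;  /* marginal rate applied within it    */
};

static const struct bracket year_a[] = {
    { 10000.0, 0.10 },
    { 40000.0, 0.22 },
    { 1.0e18,  0.35 }   /* effectively "no upper bound" */
};

/* The reusable component: works unchanged for any year's table, so
   tax-law changes become data changes, not code rewrites. */
double tax_due(double income, const struct bracket *b, int n)
{
    double tax = 0.0, lower = 0.0;
    for (int i = 0; i < n && income > lower; i++) {
        double upper = (b[i].upto < income) ? b[i].upto : income;
        tax += (upper - lower) * b[i].rate;
        lower = b[i].upto;
    }
    return tax;
}
```

When next year’s rates arrive, only a new table is added; `tax_due` itself is the stable, well-tested legacy component that carries forward.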

In with the old, out with the new

Despite failures in cases such as Ariane, the correct use of legacy code can bring enormous cost, reliability, and safety benefits. A recent highly visible example is the avionics system of the new Boeing 787 “Dreamliner” (Figure 3). This is an all-new plane with all-new hardware technology and materials, but the software aboard includes a significant proportion of modules reused from other Boeing airplanes. So when someone asks you, “What’s new in your latest software system?” let’s hope you can answer, “Same old, same old.” Whether it’s called “legacy code” or “reusable components,” the benefits can be huge.

Figure 3


1. “NATS Pioneers Biggest ATC Advance Since Radar,” www.nats.co.uk/article/218/62/nats_pioneers_biggest_atc_advance_since_radar.html

2. The Common Criteria Portal. Common Criteria for Security Evaluation. Vers. 3.1. Sept. 2006, www.commoncriteriaportal.org/thecc.html

3. The Tokeneer Project, www.adacore.com/home/gnatpro/tokeneer

4. The Apache Software Foundation, www.apache.org

5. Origins and Development of Transaction Processing Facility, www.blackbeard.com/tpf/tpfhist.htm

6. IBM AIX, www-03.ibm.com/systems/power/software/aix/index.html

7. RTCA SC-167/EUROCAE WG-12. RTCA/DO-178B. Software Considerations in Airborne Systems and Equipment Certification. Dec. 1992.

8. D. Evans, “Safety: Safety-proofing Software Certification,” Avionics Magazine, www.avtoday.com/av/categories/maintenance/703.html

9. J.L. Lyons, Report of the Inquiry Board into the Failure of Flight 501 of the Ariane 5 Rocket. European Space Agency Report, Paris, July 1996 (annotated), www.jacobs.com.au/_lib/pdf/Ariane%20501%20Failure.pdf


Dr. Robert Dewar is cofounder, president, and CEO of AdaCore; he has also had a distinguished career as a professor of Computer Science at the Courant Institute of New York University. He has been involved with the Ada programming language since its inception in the early 1980s and, as codirector of both the Ada-Ed and GNAT projects, led the NYU team that developed the first validated Ada compiler. Robert was one of the authors of the requirements document for the Ada 95 revision, and he served as a distinguished reviewer for both Ada 83 and Ada 95. He has coauthored compilers for SPITBOL (SNOBOL), Realia COBOL for the PC (now marketed by Computer Associates), and Alsys Ada. He is also a principal architect of AdaCore’s technology. He has written several real-time operating systems for Honeywell Inc. and frequently shares his thoughts on computers and on open-source software at conferences. He can be reached at dewar@adacore.com.