Using a hardware root of trust to decode software security
Quis custodiet ipsos custodes? – “who will guard the guardians?” – is a question as old as the Roman Empire. This question, with its underlying bearing on trust in general, is still relevant today. And it is directly applicable to computer systems.
The greatest strength of software is that it can be changed – it can be enhanced, upgraded, fixed, and modified for new needs and new situations. Simultaneously, the greatest weakness of software is that it can be changed – modified to introduce undesirable behavior.
A major goal of software security is to allow software to be changed only by authorized entities, and to prevent software from being changed by unauthorized entities. Failing this, you want to be able to detect if software has been changed. This goal, the prevention and detection of unauthorized changes, is at the core of trusted software.
The question is, how do you know you can trust a piece of software? One way is to have an entity that you trust attest that the software can be trusted. An example of this is only installing software from “known good” – or trusted – sources. You should also check the signature of the software using cryptographic techniques. For instance, a developer may use the Linux rpm package format and rpm utilities to verify the cryptographic hash of their packages before they are installed. This is a normal part of using rpm, and helps detect software packages that have been modified through malicious operations or data corruption.
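The core of such a package check is comparing a cryptographic digest of the file against a known-good value. A minimal sketch of that idea (not the rpm implementation itself, which also verifies GPG signatures) using Python's standard library:

```python
import hashlib

def verify_digest(path: str, expected_sha256: str) -> bool:
    """Compare a file's SHA-256 digest against a known-good value,
    analogous to the hash check rpm performs on package contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Illustration with a file we create ourselves
with open("demo.bin", "wb") as f:
    f.write(b"package contents")

good = hashlib.sha256(b"package contents").hexdigest()
print(verify_digest("demo.bin", good))      # True
print(verify_digest("demo.bin", "0" * 64))  # False: tampered or corrupted
```

A mismatch tells you the file was modified, whether maliciously or through data corruption, but it cannot tell you which.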
However, that’s just the start – trust and validation must occur throughout the entire software development chain. For instance, the software performing the rpm check has to be trusted. The rpm software can be checked by using system integrity utilities. But then, the system integrity utilities have to be trusted, too. That can be done inside the Linux kernel (which integrates certain system integrity checking capabilities). Then the Linux kernel has to be validated by the bootloader, which in turn can be validated through UEFI firmware and Secure Boot. The UEFI firmware can check its own integrity. It is trust all the way down!
But the UEFI firmware is software. Special software, with multiple integrity checks and special update processes, but it is still software. With sufficient resources, even UEFI firmware can be compromised. Once the system firmware is compromised, nothing else in the system can be trusted.
Looking beyond the system, public key cryptography (PKC) can provide a robust way to verify the identity of a system and perform secure operations. PKC uses a pair of keys – a public key and a private key. As long as the private key is protected, the cryptography is secure. Unfortunately, if a system is compromised the private keys are compromised as well, and the actual cryptographic operations can be suborned. Not a good thing if you need to trust the system!
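The split between signing (private key) and verification (public key) is what makes PKC useful here. A toy RSA example with deliberately tiny textbook primes illustrates the split; real systems use vetted cryptographic libraries and 2048-bit or larger keys:

```python
# Toy RSA, purely to illustrate the sign/verify split.
p, q = 61, 53
n = p * q   # 3233, public modulus
e = 17      # public exponent
d = 2753    # private exponent: e * d ≡ 1 (mod φ(n))

def sign(message: int) -> int:
    """Only the holder of the private exponent d can produce this."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Anyone with the public key (n, e) can check it."""
    return pow(signature, e, n) == message

m = 65
s = sign(m)
print(verify(m, s))       # True
print(verify(m + 1, s))   # False: signature doesn't match this message
```

The asymmetry is the whole point: verification requires no secrets, so anyone can check a signature, but only the private-key holder can create one. That is also why a stolen private key is so damaging – the thief can sign anything.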
One answer is to use immutable hardware. For example, a CPU serial number can be fixed in the hardware by blowing a set of chip level fuses during production. Unfortunately, this CPU serial number will be read and transmitted by software. If the system is compromised, the software can return any serial number.
Another answer is to use a special security processor – a dedicated computer that does nothing but a small set of security and cryptographic functions and provides an immutable Hardware Root of Trust, which includes several key attributes. In a Hardware Root of Trust, the security processor and its software and memory are self-contained and designed to resist physical attack or compromise. The software is hardcoded into the chip and can’t be modified or updated; it is truly read-only. Limited amounts of non-volatile storage are included in the security processor; this memory is mainly used to store cryptographic keys and hashes. The security processor includes a set of robust cryptographic functions, including encryption and decryption, hashing, and cryptographic key generation, all of which are performed with a well-defined API.
With Hardware Root of Trust, public key and private key pairs can be generated inside the security processor. The private key may be retained inside the processor and never exposed to the system. In this mode, the private key never exists outside of the processor and can’t be retrieved or compromised.
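A rough model of that key-handling discipline: the keypair lives inside the device, and callers see only the public key and a signing API. This is an illustrative sketch using the same toy RSA parameters as above, not a real TPM interface:

```python
class SecurityProcessor:
    """Toy model of a security processor: the private exponent is held
    internally and never exported; callers get only the public key and
    a sign() operation. (Illustrative only – a real TPM uses vetted
    algorithms in tamper-resistant hardware.)"""

    def __init__(self):
        # Fixed toy RSA parameters; a real device generates keys randomly.
        self._d = 2753             # private exponent, never leaves the object
        self.n, self.e = 3233, 17  # public key, freely exported

    def sign(self, message: int) -> int:
        return pow(message, self._d, self.n)

tpm = SecurityProcessor()
sig = tpm.sign(99)
# Verification needs only the public half (n, e):
print(pow(sig, tpm.e, tpm.n) == 99)  # True
```

Because the signing operation happens inside the processor, compromising the host operating system lets an attacker *use* the key, but not *copy* it – a meaningfully smaller blast radius than a private key sitting in a file.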
Such a security processor can be used to provide the starting point for building a trusted system by using it to attest to the integrity of the lowest levels of UEFI (Unified Extensible Firmware Interface) firmware and then building a complete and trusted software stack. This is exactly what is done when using a Trusted Platform Module (TPM) in conjunction with UEFI Secure Boot to implement Trusted Boot.
TPM is widely available in server, desktop and laptop systems. It is also available in higher-end Internet of Things (IoT) and embedded systems.
TPM is commonly implemented as a separate chip, but security processors are also being integrated directly into standard CPUs. Examples of this include ARM TrustZone and the AMD Secure Processor. Intel is adding Software Guard Extensions (SGX) that provide hardware-enforced security capabilities.
Unfortunately, these security capabilities aren’t as widely used as they should be. I encourage anyone implementing security-sensitive systems (which really should be all systems) to look into using the security features of the available Hardware Root of Trust on their systems.