Outmaneuvering potential IC sabotage
The Pentagon’s supply chain for microelectronics manufacturing has gone global – it’s no longer strictly the realm of U.S. manufacturers. This outsourcing of microchip design and fabrication increases the odds of surreptitious installation of malicious circuitry.
In the past, relatively few opportunities existed for outside vendors to access IC blueprints or circuitry, but outsourcing is providing fraudsters and malicious actors more points of access to tamper with chips. “Back doors” secretly inserted in hardware can enable attackers to alter or stealthily take over a device or system at a specific time.
To outmaneuver the “bad guys” and counter increasing sabotage, Siddharth Garg, assistant professor of electrical and computer engineering at NYU’s Tandon School of Engineering, and fellow cybersecurity researchers are working to develop a “verifiable computing” approach intended both to keep tabs on a chip’s performance and to spot signs of malware.
How does it work? The NYU team’s approach involves a chip with two modules: an embedded one that proves that its calculations are correct, and an external one to validate the first module’s proofs. The key part is a verifying processor that can be fabricated separately from the chip. (Figure 1.)
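The economics of this split hinge on verification being far cheaper than the original computation. A classic illustration of that asymmetry – not the NYU team’s actual protocol, which uses cryptographic proof systems – is Freivalds’ check for matrix multiplication: the “untrusted” module does the O(n³) work, while the “trusted” module confirms the result with O(n²) random spot checks. The function names below are illustrative.

```python
import random

def untrusted_multiply(A, B):
    """Untrusted module: does the expensive O(n^3) work of computing
    C = A x B. The result C is the claim the verifier will check."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trusted_verify(A, B, C, rounds=20):
    """Trusted module (Freivalds' check): pick a random 0/1 vector r and
    test A(Br) == Cr, which costs only O(n^2) per round. A wrong C is
    accepted with probability at most 2^-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught an incorrect product
    return True
```

Here the claimed product itself stands in for the “proof”; real verifiable-computing protocols emit a separate, compact proof object alongside the output.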
“Using an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations but also a module that presents proofs of correctness,” Garg explains.
Chip designers can then turn to a trusted foundry to build a separate, less complex module: an application-specific integrated circuit (ASIC) whose job is to validate the proofs of correctness generated by the internal module of the untrusted chip.
This arrangement provides a safety net for the chipmaker and end user, according to Garg. “Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to a customer,” he continues. “But two years down the line, it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip. Each time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”
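The scenario Garg describes – a Trojan that lies dormant through post-fabrication testing and misbehaves years later – is exactly what per-input checking catches. The toy sketch below models it with a simple modular-arithmetic residue standing in for the proof (the NYU work relies on cryptographic proof systems, not this shortcut); the chip, trigger, and verifier names are all hypothetical.

```python
MOD = (1 << 61) - 1  # a Mersenne prime; residues mod MOD act as the toy "proof"

def f(x):
    # the integer computation the chip is supposed to perform
    return x * x * x + 7 * x + 1

class TrojanedChip:
    """Untrusted chip: behaves correctly until `trigger` uses have passed,
    then a hidden Trojan starts corrupting outputs -- the failure mode
    that one-time post-fabrication testing misses."""
    def __init__(self, trigger):
        self.uses, self.trigger = 0, trigger

    def run(self, x):
        self.uses += 1
        y = f(x)
        if self.uses > self.trigger:
            y ^= 1  # dormant Trojan activates and flips a bit
        proof = y % MOD  # embedded module's (toy) proof of correctness
        return y, proof

def external_verify(x, y, proof):
    """Trusted external module: recomputes f cheaply in the residue field,
    using the fact that f(x) mod p == f(x mod p) mod p for any integer
    polynomial f, and checks it against the output and proof."""
    return y % MOD == proof == (pow(x, 3, MOD) + 7 * x + 1) % MOD
```

Because the check runs on every input, the first corrupted output fails verification immediately, no matter how long the Trojan stayed dormant.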
An added bonus is that the chip built by the untrusted foundry is smaller, faster, and more power-efficient than the trusted ASIC – sometimes by orders of magnitude. So the verifiable computing setup could potentially reduce the time, energy, and chip area needed to generate proofs.
“For certain types of computations, it can even outperform the alternative – performing the computation directly on a trusted chip,” Garg notes.
The next step for the researchers is to explore techniques to reduce the overhead that generating and verifying proofs imposes on a system as well as lower the bandwidth required between the “prover and verifier” chips. “With hardware the proof is always in the pudding, so we plan to prototype our ideas with real silicon chips,” Garg says.
It’s also worth pointing out that this isn’t Garg’s first big contribution within this field: In 2015, he discovered serious security vulnerabilities in a popular method of camouflaging circuits to prevent intellectual property (IP) theft. These camouflaged circuits could be reverse-engineered within minutes, so he developed a new approach that makes the functionality of a chip dependent on small fluctuations in the concentration of impurities with which the silicon is doped. Optical tools used to “de-layer” a chip for IP theft are unable to discern the functionality of a chip protected via this method.
Further, Garg is cited for helping to create a visionary technique of “split manufacturing” designed to foil attempts to compromise ICs during the fabrication process. Rather than giving foundries access to a complete chip blueprint, the design plan is split and divvied up between several foundries, making it more difficult for a malicious actor to alter a circuit’s functionality.
The group’s work with verifiable computing was supported by grants from the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, a Microsoft Faculty Fellowship, and a Google Faculty Research Award.
Now, to continue pursuing the promise of verifiable ASICs, Garg and colleagues Abhi Shelat of the University of Virginia, Rosaria Gennaro of the City University of New York, Mariana Raykova of Yale University, and Michael Taylor of the University of California–San Diego will share a five-year National Science Foundation grant of $3 million.