While methods for recognizing software viruses are well established, and patches can be created and distributed to fix affected programs, on the hardware side deliberately inserted defects and modifications to the underlying processors can still be made almost invisible.
Part of the problem is that modern chips are designed by dispersed teams and often cross the globe to third-party foundries for fabrication, packaging, and testing. The outsourcing of fabrication, while economically necessary, gives so-called “bad actors” the opportunity to steal intellectual property (IP), to insert a hidden “back door” function that could allow attackers to alter or take over a device or system at a specific time, or to install what are known as “Trojan horse” circuits.
For example, in a paper entitled “A2: Analog Malicious Hardware,” published at the 2016 IEEE Symposium on Security and Privacy (22-26 May 2016), Kaiyuan Yang, Matthew Hicks, Qing Dong, Todd Austin, and Dennis Sylvester of the University of Michigan showed how a fabrication-time attacker can leverage analog circuits to create a hardware attack that is small (it requires as little as one gate) and stealthy (it requires an unlikely trigger sequence before affecting a chip’s functionality).
In the open spaces of an already placed-and-routed design, the authors constructed a circuit that uses capacitors to siphon charge from nearby wires as they transition between digital values. When the capacitors fully charge, they deploy an attack that forces a victim flip-flop to a desired value. By selecting a victim flip-flop that holds the privilege bit for the processor, this attack can be weaponized into a remotely controllable privilege escalation from user mode to super-user mode after a certain sequence of instructions is executed on the processor. Experimental results showed that the attacks work, that the attacks eluded activation by a diverse set of benchmarks, and the authors suggested that the attacks could evade known defenses.
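The trigger mechanism described above can be caricatured in software. The following toy model is purely illustrative: the class, constants, and thresholds are invented for this sketch and do not come from the A2 paper. It treats the capacitor as an accumulator that gains charge on every toggle of a monitored wire and leaks charge each clock cycle, so only an attacker's unusually toggle-heavy instruction sequence can push it over the firing threshold:

```python
# Toy model (not the authors' circuit) of a charge-accumulation trigger.
# All constants are illustrative assumptions.

class AnalogTrigger:
    def __init__(self, charge_per_toggle=1.0, leak_per_cycle=0.25,
                 threshold=10.0):
        self.charge = 0.0
        self.charge_per_toggle = charge_per_toggle
        self.leak_per_cycle = leak_per_cycle
        self.threshold = threshold

    def cycle(self, wire_toggled):
        """Advance one clock cycle; return True when the trigger fires."""
        if wire_toggled:
            self.charge += self.charge_per_toggle
        self.charge = max(0.0, self.charge - self.leak_per_cycle)
        return self.charge >= self.threshold

privilege_bit = 0  # 0 = user mode, 1 = super-user mode
trig = AnalogTrigger()

# Ordinary workloads rarely toggle the victim wire: charge leaks away.
for _ in range(1000):
    trig.cycle(wire_toggled=False)
assert trig.charge == 0.0

# The attacker's unlikely instruction sequence toggles it every cycle.
fired_at = None
for cycle in range(100):
    if trig.cycle(wire_toggled=True):
        privilege_bit = 1  # victim flip-flop forced to the desired value
        fired_at = cycle
        break
print(privilege_bit, fired_at)  # → 1 13
```

The point of the leak term is the stealth property: benchmarks that merely touch the wire occasionally never accumulate enough charge, so the Trojan stays dormant under ordinary testing.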
Similarly, an attack method known as the dopant-level Trojan can convert trusted circuitry into malicious circuitry by changing the dopant ratio on the input pins of victim transistors, effectively tying the inputs of those transistors to logic level 0 or 1. Such Trojans are very difficult to detect, since no gates or wires are added or removed; detecting dopant-level Trojans often requires completely delayering the chip and comprehensively imaging it with a scanning electron microscope.
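To see why tying an input to a constant is so damaging, consider this small behavioral sketch (the gates here are hypothetical examples, not from any cited design). The netlist looks identical in both versions; only the effective logic function changes:

```python
# Toy illustration of a dopant-level Trojan: the gate's structure is
# unchanged, but one input is effectively tied to a constant, silently
# altering the logic function.

def xor_gate(a, b):
    """The gate as designed."""
    return a ^ b

def trojaned_xor_gate(a, b):
    """The same gate after fabrication-time dopant manipulation."""
    b = 1          # input pin tied to logic 1 by the attacker
    return a ^ b   # the gate now computes NOT a, ignoring b entirely

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_gate(a, b), trojaned_xor_gate(a, b))
```

Because no wire or gate is added, a layout-versus-schematic comparison or optical inspection of the metal layers reveals nothing; only the doping differs.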
Beyond that, with current tools and technology someone who has physical access to a chip can extract the detailed layout of the integrated circuit. By using advanced visual imaging techniques, reverse engineering can reveal details that are meant to be kept secret, such as a secure protocol or novel implementation that offers a competitive advantage.
Verification tools may not help. Chip testing is universal, and semiconductor design companies have been performing chip verification for at least the last two decades. However, the focus has been on catching inadvertent mistakes in a chip’s design and making sure the design meets its published specifications. But what if the modifications are targeted ones, not inadvertent errors? If a chip is maliciously modified, the changes are likely stealthy, so their effects won’t show up during testing. What’s more, it is possible to modify a chip such that the modification is activated only after the chip has been in the field for a certain amount of time.
What to do? The most natural response—achieving high assurance by controlling the manufacturing process—may not be feasible or could impose enormous penalties in price and performance.
Circuit camouflaging is one proposed defense mechanism to protect digital ICs from reverse engineering, or “de-layering,” attacks. These strategies use camouflaged gates, i.e., logic gates whose functionality cannot be precisely determined by the attacker. The assumption behind camouflaging ICs has been that because undoing each gate to discern its function is a long, technically complicated process, camouflaging even a small number of gates in a circuit would force an attacker into years and years of work to reverse engineer it. In a recent Internet Society paper entitled “Integrated Circuit (IC) Decamouflaging: Reverse Engineering Camouflaged ICs within Minutes,” however, Mohamed El Massad (University of Waterloo), Siddharth Garg (New York University), and Mahesh V. Tripunitara (University of Waterloo) refute such claims, demonstrating that on the same benchmark circuits, with camouflaged gates chosen the same way as in prior work, they could decamouflage the circuits in minutes, not years.
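The core idea of such an attack can be shown at toy scale. The real attack formulates the problem for a SAT solver and needs only a handful of carefully chosen queries to a working chip; the sketch below (an invented two-gate circuit, not a benchmark from the paper) instead brute-forces all completions and all inputs, but the principle is the same: the attacker queries a functioning chip as an oracle and discards every candidate assignment of gate functions that the oracle's outputs refute.

```python
from itertools import product

# Toy decamouflaging sketch: each camouflaged gate is one of three
# candidate functions; the attacker keeps only completions consistent
# with oracle queries against a working chip.

CANDIDATES = {
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
}

def chip(a, b, c, g1, g2):
    """Two-gate circuit: out = g2(g1(a, b), c)."""
    return CANDIDATES[g2](CANDIDATES[g1](a, b), c)

# The foundry-side truth, hidden from the attacker by camouflaging:
SECRET = ("XOR", "NAND")
oracle = lambda a, b, c: chip(a, b, c, *SECRET)

# Attacker: enumerate completions, discard any an oracle query refutes.
consistent = [
    (g1, g2)
    for g1, g2 in product(CANDIDATES, repeat=2)
    if all(chip(a, b, c, g1, g2) == oracle(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
]
print(consistent)  # → [('XOR', 'NAND')]
```

Here a few input/output observations collapse nine candidate circuits down to the single true one; at realistic scale the SAT formulation does the collapsing, which is why the attack finishes in minutes rather than years.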
In a subsequent paper (“Threshold-Dependent Camouflaged Cells to Secure Circuits Against Reverse Engineering Attacks,” published in the Proceedings of the IEEE Computer Society’s Annual Symposium on VLSI, 2016), the same authors, along with Maria I. Mera Collantes (of New York University), propose a new camouflaging technique that addresses this vulnerability. Their approach leverages the intrinsic characteristics of the material rather than the physical structure of a camouflaged gate: the functionality of the proposed camouflaged gate is determined by the threshold voltage of its transistors, that is, by whether it has been fabricated using a high-Vth or low-Vth process. As such, the technique uses standard features available in commercial CMOS dual-Vth (or multi-Vth) processes. The authors call this technique threshold-dependent (TD) camouflaging, and the cells are called TD camouflaged cells. Detailed circuit simulations of the proposed threshold-dependent camouflaged cells demonstrate that they can be used to camouflage large netlists cost-effectively and robustly.
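A behavioral caricature makes the idea concrete. In this hypothetical model (the function pairing is invented for illustration, not taken from the paper), two cells share an identical layout, and the fabrication-time threshold-voltage choice alone selects which Boolean function the cell computes:

```python
# Hypothetical behavioral model of a TD camouflaged cell: identical
# layout, function fixed at fabrication by the transistors' Vth.

def td_cell(a, b, vth):
    """vth is 'low' or 'high' -- invisible to layout-level imaging."""
    if vth == "low":
        return 1 - (a & b)   # behaves as a NAND gate
    return 1 - (a | b)       # behaves as a NOR gate

# An attacker imaging the delayered chip sees the same structure either
# way; the truth tables differ only through the fabrication-time choice.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, td_cell(a, b, "low"), td_cell(a, b, "high"))
```

Because threshold voltage is set by doping rather than by geometry, even the exhaustive delayer-and-image attack that defeats dopant-agnostic camouflaging yields no information about which function was manufactured.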
One of the co-authors of the two papers just discussed, Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, along with fellow researchers from Stanford University, The Cooper Union, and the University of Virginia, is developing a chip with both an embedded module that proves its calculations are correct and an external module that validates the first module’s proofs. The verifying processor can be fabricated separately from the chip. Garg’s configuration is an example of an approach called “verifiable computing” (VC): keeping tabs on a chip’s performance and spotting telltale signs of Trojans. The chip includes the components that perform the computation you care about, but also additional logic that provides proofs of correctness of the chip’s execution.
A paper describing this work, “Verifiable ASICs,” was presented at the 2016 IEEE Symposium on Security and Privacy. The researchers use a system in which deployed ASICs prove, each time they perform a computation, that the execution is correct, in the sense of matching the intended computation. An ASIC in this role is called a prover; its proofs are efficiently checked by a processor or another ASIC, known as a verifier, that is trusted (produced by a foundry in the same trust domain as the designer).
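The prover/verifier asymmetry that makes this economical, checking a result far more cheaply than recomputing it, can be illustrated with a classic example that is much simpler than the paper's actual protocol: Freivalds' randomized check of a claimed matrix product runs in O(n²) time, versus the O(n³) cost of redoing the multiplication.

```python
import random

# Freivalds' algorithm: a verifier's cheap probabilistic check of a
# prover's claim that C == A @ B. This is an illustration of the
# verifiable-computing idea, not the protocol from "Verifiable ASICs".

def freivalds_check(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr: three matrix-vector products, O(n^2).
        Br  = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # the prover's claimed product is wrong
    return True  # a wrong product slips through w.p. at most 2**-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # honest prover's output
bad  = [[19, 22], [43, 51]]   # maliciously altered output

print(freivalds_check(A, B, good))  # → True
print(freivalds_check(A, B, bad))   # almost surely False
```

The verifier never trusts the prover; it trusts only its own cheap check, which is exactly the relationship between the trusted verifier chip and the untrusted foundry-built ASIC.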
The authors admit that right now the cost of providing this verifiability is still high. Over the next few years, they hope to bring down this cost, so it becomes practical even for commercial semiconductor vendors.
Another stratagem being considered is split manufacturing, a means of thwarting counterfeiting by an untrusted foundry. At the foundry level, split manufacturing involves partitioning a chip’s blueprint into multiple parts and sending each piece to a different foundry; once the pieces are returned, they can be assembled in a trusted facility. This limited visibility into the chip’s blueprint not only deters IP theft, because no one foundry can see the entire blueprint, but also limits each foundry’s ability to understand what the chip does and, consequently, to modify it in a malicious manner.
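The partitioning step can be sketched in a few lines. The netlist format and partition below are hypothetical, invented for this sketch: each foundry receives only the gates in its share, with any wire crossing the cut replaced by an anonymous pin, while the true cross-partition connections stay with the trusted assembly facility.

```python
# Illustrative sketch of split manufacturing: no single foundry sees
# the full blueprint. Netlist and partition are hypothetical.

netlist = {
    "g1": {"type": "NAND", "inputs": ["a", "b"]},
    "g2": {"type": "NOR",  "inputs": ["g1", "c"]},
    "g3": {"type": "XOR",  "inputs": ["g1", "g2"]},
    "g4": {"type": "INV",  "inputs": ["g3"]},
}
partition = {"g1": 0, "g2": 0, "g3": 1, "g4": 1}  # foundry assignment

shares = {0: {}, 1: {}}
hidden_wires = []   # connections known only to the trusted facility
for gate, info in netlist.items():
    fab = partition[gate]
    visible = dict(info)
    # Replace inputs driven from the other partition with an anonymous
    # external pin; record the real connection for trusted assembly.
    visible["inputs"] = [
        src if partition.get(src, fab) == fab else "ext_pin"
        for src in info["inputs"]
    ]
    for src in info["inputs"]:
        if partition.get(src, fab) != fab:
            hidden_wires.append((src, gate))
    shares[fab][gate] = visible

print(shares[1])      # foundry 1 never learns what drives g3
print(hidden_wires)   # → [('g1', 'g3'), ('g2', 'g3')]
```

A real flow would choose the cut to maximize the attacker's ambiguity (many gates could plausibly drive each external pin), but even this sketch shows why a lone foundry can neither reconstruct the design nor target a meaningful modification.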