Why Deception is the Achilles’ Heel of IoT
April 07, 2021
The numbers are staggering. Depending on the source, the forecast number of connected IoT devices in 2021 varies from 20 to 40 billion, all producing immense volumes of data.
And as ever more intuitive and responsive devices are unleashed, the line between the digital and physical worlds blurs. Not to mention the impact all this connectedness has on security.
IoT security has been widely discussed and is often associated with topics such as “the expanding attack surface” and “zero-day vulnerabilities.” However, one underrated attack vector that is often missing from those discussions is the most basic and vital of all: deception.
Deception was known to be effective thousands of years ago, in the time of Sun Tzu, whose “The Art of War” presents it as the most powerful tool of the attacker.
Today, with the advent of IoT, it’s no different. How do you win by deception? It has a lot to do with impersonation. For example, in the military, if hostile signals pretend to come from your own drones, you are in big trouble. And if, in the near future, smart traffic lights are tampered with and send deceptive signals to autonomous cars, the outcome could be disastrous.
Take a common crime like stealing a car. In a parking lot, a criminal can attack one car. But with the IoT, a hacker can simultaneously attack 1, 10, or 1,000 cars, possibly in different cities around the world. And the bigger problem is that, with the IoT, hackers can be ANYWHERE when making these attacks; they need not be physically near the hacked devices. They can be on the other side of the ocean.
Authentication is the Key
While it might not be the first thing everyone thinks of, the fact that deception is the biggest threat to IoT security is nothing new for security experts. Well-known security author Bruce Schneier wrote a blog post in 2016 about a rare public talk by Rob Joyce, then head of the NSA’s Tailored Access Operations group. In that talk, Joyce dismissed zero-day vulnerabilities as overrated, asserting that “credential stealing is how to get into networks.”
In fact, almost all security problems are authentication problems. If you can authenticate the identity of the device on the other side of a communication, you know what is legitimate and what is not. But how do you authenticate a device, such as a drone or a car? A request to access your device might be legitimate, but it might not be, and without authentication there is no way to tell the difference.
When talking about device authentication, it helps to draw a human analogy. Customs officers identify people by their passports, which for some countries must be accompanied by a visa. To be really secure, officers verify your identity by checking your fingerprints.
For devices that connect to the cloud it is very similar. A device identifies itself to the cloud by presenting its device-unique certificate. This certificate has been registered with the cloud provider and certain permissions are linked to it – similar to a visa in a passport. However, it is not very hard to copy a certificate from one device to another. Identification is not enough; identity needs to be verified. And the best way to do this (the unclonable way) is by checking something that is unique to the device: elements in the hardware of the device that cannot be copied from one device to another.
Figure 2. Rooting identity in something that is very hard to clone
PUF as a Device Fingerprint
Intrinsic ID uses a “fingerprint” that can be found in the static random-access memory (SRAM) of every chip. This fingerprint is called an SRAM physical unclonable function (PUF). Just as human fingerprints are unclonable, this device-unique fingerprint is also unclonable. Combined with the device certificate, which serves as a passport, it forms an unclonable device identity. For every connected device – a voice-assisted device, a connected car, a drone, a watch, a thermostat, a light bulb, an insulin pump – an unclonable identity can be created from the PUF, making it very difficult to bypass authentication safeguards. With this unclonable identity, we can securely authenticate the device, protect the data’s integrity, and ensure the data’s confidentiality.
But how does an SRAM PUF work? An SRAM PUF is based on the behavior of the standard SRAM memory available in any digital chip. Every SRAM cell has its own preferred state each time the SRAM is powered, resulting from random differences in the transistor threshold voltages. Hence, whenever an SRAM is powered up, it yields a unique and random pattern of 0s and 1s. As stated before, these patterns are like chip fingerprints, because each one is unique to a particular SRAM and hence to a particular chip.
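To make this concrete, here is a minimal Python sketch that simulates the behavior described above. It is purely illustrative (not real hardware behavior or Intrinsic ID code): each simulated cell gets a fixed preferred power-up value, and a small fraction of cells flip from one readout to the next.

```python
import random

CELLS = 1024        # number of simulated SRAM cells
NOISE_RATE = 0.05   # fraction of cells that may flip on any given power-up (illustrative)

def make_chip(seed):
    """Create a simulated chip: each cell gets a fixed preferred start-up bit."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(CELLS)]

def power_up(chip):
    """Read the start-up pattern: mostly the preferred bits, plus a few noisy flips."""
    return [bit ^ (random.random() < NOISE_RATE) for bit in chip]

chip_a, chip_b = make_chip("chip-A"), make_chip("chip-B")

read_1, read_2 = power_up(chip_a), power_up(chip_a)   # two readouts of the same chip
read_other = power_up(chip_b)                         # a readout of a different chip

same_chip = sum(x != y for x, y in zip(read_1, read_2)) / CELLS
cross_chip = sum(x != y for x, y in zip(read_1, read_other)) / CELLS

print(f"Difference between readouts of the same chip: {same_chip:.0%}")   # small (noise)
print(f"Difference between two different chips:       {cross_chip:.0%}")  # around 50%
```

Readouts of the same chip differ only in a few percent of the bits, while two different chips disagree in roughly half of them, which is exactly what makes the start-up pattern usable as a fingerprint.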
However, this “response” from the SRAM PUF is a “noisy” fingerprint and turning it into a high-quality and secure cryptographic key requires further processing. By using “Fuzzy Extractor” IP, it is possible to reconstruct exactly the same cryptographic key every time and under all environmental circumstances.
Figure 3. Deriving a key from SRAM start-up behavior
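The details of a production Fuzzy Extractor are beyond this article, but the general idea can be sketched with a classic code-offset construction built on a simple repetition code. The parameters and helper functions below are illustrative assumptions, not the Intrinsic ID implementation: at enrollment, public helper data binds the noisy response to a fresh random secret; in the field, majority voting corrects the bit errors and a hash condenses the recovered secret into a key.

```python
import hashlib
import random
import secrets

REP = 11           # each secret bit is protected by an 11-bit repetition code (illustrative)
SECRET_BITS = 128  # number of secret bits bound to the PUF response

def enroll(response):
    """One-time enrollment: bind a random secret to the response via public helper data."""
    secret = [secrets.randbelow(2) for _ in range(SECRET_BITS)]
    codeword = [b for b in secret for _ in range(REP)]     # repetition encoding
    helper = [r ^ c for r, c in zip(response, codeword)]   # code-offset helper data (public)
    key = hashlib.sha256(bytes(secret)).digest()           # condense the secret into a key
    return helper, key

def reconstruct(noisy_response, helper):
    """In the field: recover the same key from a noisy readout plus the helper data."""
    noisy_codeword = [r ^ h for r, h in zip(noisy_response, helper)]
    secret = [int(sum(noisy_codeword[i * REP:(i + 1) * REP]) > REP // 2)   # majority vote
              for i in range(SECRET_BITS)]
    return hashlib.sha256(bytes(secret)).digest()

# Toy demonstration: a reference response and a noisy in-field readout of the same "chip".
reference = [random.randint(0, 1) for _ in range(SECRET_BITS * REP)]
noisy = [bit ^ (random.random() < 0.05) for bit in reference]   # ~5% of the bits flipped

helper, enrolled_key = enroll(reference)
print(reconstruct(noisy, helper) == enrolled_key)   # expected: True, same key despite the noise
```

A real implementation uses stronger error-correcting codes and carefully designed entropy extraction so the key stays stable across the full range of temperature, voltage, and aging effects.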
There is no need for either the chip vendor or the device manufacturer to inject the device’s root key. Injecting secret keys requires a trusted factory, adds cost and complexity to the manufacturing process, and limits flexibility, so avoiding key injection is a significant benefit. This method of deriving a key from the SRAM properties has great security advantages compared to traditional key storage in non-volatile memory (NVM):
- Because the key is only generated when needed and therefore never stored, it is not present when the device is not active (no key at rest). Hence, it cannot be found by an attacker who opens the device and compromises its memory contents, which significantly increases the security of the device.
- There is no need to add costly security hardware, such as a Secure Element or TPM chip, to protect secret keys and valuable data on the chip. Any sensitive content or IP encrypted with the SRAM PUF key (or a key derived from it) can be stored in unprotected memory, as it cannot be decrypted anywhere outside of the chip without the SRAM PUF key.
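As an illustration of that last point, the sketch below shows what protecting data with a PUF-derived key could look like, using authenticated encryption (AES-GCM). The key here is a placeholder generated on the spot; on a real device it would be reconstructed from the SRAM PUF at runtime and never written to non-volatile memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder: stands in for a key derived from the SRAM PUF at runtime.
puf_derived_key = AESGCM.generate_key(bit_length=256)

def protect_blob(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt and authenticate data so it can sit safely in unprotected flash."""
    nonce = os.urandom(12)                                     # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def recover_blob(key: bytes, blob: bytes) -> bytes:
    """Decrypt data previously protected with the PUF-derived key."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

secret_ip = b"device configuration or valuable firmware IP"
stored_blob = protect_blob(puf_derived_key, secret_ip)         # safe to store anywhere
assert recover_blob(puf_derived_key, stored_blob) == secret_ip
```

Because the ciphertext is useless without the key, and the key only ever exists inside the chip, the storage location itself no longer needs to be protected.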
From Key to Identity
Once a device is equipped with a root key from the SRAM PUF, additional functional keys can be derived from this root key by using a Key Derivation Function (KDF) as specified by the National Institute of Standards and Technology (NIST). Any key derived from the SRAM PUF root key automatically inherits the benefits described earlier, so it also does not require injection, is never stored (is derived only when needed), and does not require costly security hardware.
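As a small illustration, the sketch below derives purpose-specific keys from a (placeholder) root key using HKDF from Python’s cryptography package; HKDF stands in here for whichever NIST-specified KDF a real product would use, and the purpose labels are made up for the example.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Placeholder: on a real device the root key is reconstructed from the SRAM PUF,
# never stored, and never leaves the chip.
puf_root_key = b"\x00" * 32

def derive_functional_key(root_key: bytes, purpose: bytes, length: int = 32) -> bytes:
    """Derive a purpose-specific key from the PUF root key.

    The 'info' label separates key purposes, so keys for different uses never
    share key material. A fresh HKDF instance is needed for each derivation.
    """
    return HKDF(algorithm=hashes.SHA256(), length=length,
                salt=None, info=purpose).derive(root_key)

tls_key     = derive_functional_key(puf_root_key, b"device-tls")
storage_key = derive_functional_key(puf_root_key, b"local-storage-encryption")
```

Each derived key inherits the root key’s properties: it can be regenerated on demand inside the device and never has to be stored.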
Device identities are typically managed by device certificates in a Public Key Infrastructure (PKI). Using PKI, each device identity is built from a strong public-private cryptographic key pair that is unique to the chip. While the public part can be shared to establish the identity, the private part (used to authenticate the identity) must be kept secret at all times and should be bound to the device. These requirements fit perfectly with the properties of the SRAM PUF.
When a public-private key pair is derived from an SRAM PUF, it is guaranteed that this key pair is device-unique simply because the root key is device-unique. Also, the private key is protected at all times, as it is never stored and only derived when needed. Hence, deriving a public-private key pair from an SRAM PUF provides the properties that are required for use in a PKI.
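One way to realize this, sketched below under the assumption of the NIST P-256 curve (names and the placeholder root key are illustrative), is to derive a fixed scalar from the PUF root key and use it as the private key, so the same device-unique key pair reappears every time it is needed.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Group order of NIST P-256; the private scalar must lie in [1, n-1].
P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def identity_key_pair(puf_root_key: bytes):
    """Deterministically derive a device-unique P-256 key pair from the PUF root key."""
    seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"device-identity-key").derive(puf_root_key)
    # Map the 256-bit seed to a valid scalar (a production design would avoid the
    # slight bias this modular reduction introduces).
    scalar = int.from_bytes(seed, "big") % (P256_ORDER - 1) + 1
    private_key = ec.derive_private_key(scalar, ec.SECP256R1())
    return private_key, private_key.public_key()

private_key, public_key = identity_key_pair(b"\x00" * 32)   # placeholder root key
```

Because the scalar is a deterministic function of the root key, re-running the derivation after every power-up yields the same key pair, with the private half existing only transiently in memory.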
Now the public key can be shared with a Certificate Authority (CA) via a Certificate Signing Request (CSR). Based on this public key, the CA returns a certificate that is provisioned onto the device. When the device connects to the cloud, it uses this certificate to prove its identity. Based on the certificate, the cloud can verify the identity of the IoT device by running an authentication protocol that requires the device to possess its private key. The authenticity of the device can now be guaranteed, since no other party knows, or has access to, the private key. And, of course, the private key is reconstructed on the fly from the chip’s SRAM PUF.
Figure 4. Connecting a device securely to the cloud
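The enrollment step can be sketched with the same Python cryptography package. The subject fields and the freshly generated key below are illustrative stand-ins; on a real device the CSR would be signed with the PUF-derived identity key from the previous sketch.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Stand-in for the PUF-derived identity key; on a real device the private key
# is reconstructed on the fly and never stored.
private_key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "ExampleDeviceMaker"),  # illustrative
        x509.NameAttribute(NameOID.COMMON_NAME, "device-0123456789abcdef"),   # e.g. a device serial
    ]))
    .sign(private_key, hashes.SHA256())   # proves possession of the private key
)

# The PEM-encoded CSR is what goes to the Certificate Authority; the certificate
# it returns is provisioned onto the device and later presented to the cloud.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```

Only the signed request leaves the device; the private key that backs the identity never does.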
In this way, a secure identity can be built into devices, much like the secure identity established for people through documents such as passports. Instead of using a person’s biometrics as the root of trust for a passport, the “fingerprint” from the SRAM PUF is used to unequivocally tie the identity in a certificate to the hardware of a device.
The bottom line is that if we want to make the IoT successful, we have to establish trust, which requires authentication we can count on. Using the unclonable identity inherent to every device is a great place to start. If you want to learn more about PUF technology and unclonable identities, visit the website of Intrinsic ID.