Cybersecurity for connected consumer devices: A lack of standards, expertise
March 01, 2017
Connected consumer devices have been commandeered recently during multiple cyber attacks, largely because immense cost pressures have limited the use of satisfactory security technologies and development practices. However, as the large organizations targeted by these attacks experience economic loss, Bernard Vachon, Director of Embedded Software Engineering at embedded design services firm Cardinal Peak, forecasts that industry will respond with IoT security services and standards.
What is the most critical area to secure when designing a connected consumer device?
VACHON: You always start with the requirements. What do I need to accomplish? What information am I going to have to send?
When you’re deploying IoT devices at scale and they’re all over the country, network security is obviously paramount. You want to make sure at the very least you’re doing HTTPS and not sending data in the clear. If you’re sending things in the clear, people don’t even have to hack – it’s just there for them to see. If a thermostat is reporting information about temperature settings over HTTP, that’s a fairly simple way people can break into the system and figure out that users are not home because the temperature is set to 50 degrees rather than 70 degrees.
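As a concrete illustration of that point, here is a minimal sketch of a thermostat report sent over HTTPS with certificate verification enabled rather than plain HTTP. It assumes a Linux-class device with libcurl available; the endpoint URL and JSON format are hypothetical.

```c
#include <curl/curl.h>
#include <stdio.h>

/* Minimal sketch: report a temperature set point over HTTPS with
 * certificate verification enabled. The endpoint URL is hypothetical.
 * curl_global_init() is assumed to have been called once at startup. */
int report_setpoint(double degrees_f)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;

    char body[64];
    snprintf(body, sizeof(body), "{\"setpoint_f\": %.1f}", degrees_f);

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/v1/thermostat");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    /* Verify the server certificate and hostname; never ship with these off. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : -1;
}
```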
If you’re sending potentially sensitive information or want to prevent people from accessing information, you need to figure out a way to implement network encryption. Here, always go with the industry standard. If you go with the standard and someone figures out how to crack an Advanced Encryption Standard (AES) cipher, for example, you’re in the same boat as everyone else. If AES is broken, the newspaper headline will be “AES broken, some products compromised,” whereas if you went with a different cipher that wasn’t a standard the headline would be “Security compromised because of poor choice of technology.” There is some amount of safety in doing what everyone else is doing, at least in the sense that you won’t be singled out for making a mistake. Nobody is going to get fired for using AES just the same way that nobody ever got fired for using IBM technology.
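To illustrate sticking with the standard cipher, here is a minimal sketch that encrypts a report with AES-128-CBC using Mbed TLS; the library choice is an assumption, and any vetted implementation of the standard serves the same purpose. Key and IV handling is deliberately simplified for illustration.

```c
#include <mbedtls/aes.h>
#include <stddef.h>

/* Minimal sketch: encrypt a buffer with AES-128-CBC using Mbed TLS.
 * In a real product the key comes from secure storage and the IV must be
 * fresh per message. Input length must be a multiple of the 16-byte block. */
int encrypt_report(const unsigned char key[16], unsigned char iv[16],
                   const unsigned char *plain, unsigned char *cipher, size_t len)
{
    mbedtls_aes_context ctx;
    mbedtls_aes_init(&ctx);

    int ret = mbedtls_aes_setkey_enc(&ctx, key, 128);
    if (ret == 0)
        ret = mbedtls_aes_crypt_cbc(&ctx, MBEDTLS_AES_ENCRYPT, len, iv, plain, cipher);

    mbedtls_aes_free(&ctx);
    return ret; /* 0 on success */
}
```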
Now, does that mean that everybody creating cheap IoT devices is doing that? I’m not convinced that companies in China making the cheapest possible device are putting in the time and effort to make sure everything is secure. We’ve seen that recently in the cameras that have been used in distributed denial of service (DDoS) attacks.
Given the overhead of network encryption in resource-constrained devices, are technologies like TrustZone finding increased usage?
VACHON: You define what your requirements are in terms of what you’re going to need to do, and obviously network encryption is the ultimate goal there. Then you need to figure out what hardware you need to meet those requirements. A lot of small processors can do HTTPS, and certainly if you have something that can do AES acceleration you are all the better off.
Most small embedded devices for IoT products are not using TrustZone. There’s an extra level of complexity there that people only really bother with when they are making sure the device itself is not compromised. For example, if you’re dealing with a device that uses High-bandwidth Digital Content Protection (HDCP) – the encryption for video that prevents movies from being transmitted in the clear – you need to protect those keys because if somebody is able to steal them from your device there are very large potential penalties and you could be liable for many millions of dollars. That’s typically the scenario where people go the TrustZone route because just breaking into the device itself may give you access to keys that are extremely valuable.
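To illustrate the pattern described above, here is a hedged sketch of how normal-world code typically asks a trusted application running in the secure world to use a protected key without ever seeing it, using the GlobalPlatform TEE Client API (as implemented by OP-TEE, for example). The trusted-application UUID and command ID are hypothetical.

```c
#include <tee_client_api.h>
#include <string.h>

/* Hypothetical trusted application that holds the content-protection key. */
static const TEEC_UUID TA_KEYSTORE_UUID =
    { 0x12345678, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };
#define CMD_DECRYPT_WITH_DEVICE_KEY 1  /* hypothetical command ID */

/* The key never leaves the secure world; normal-world code only passes
 * buffers in and gets decrypted data back. */
int decrypt_with_device_key(void *in, size_t in_len, void *out, size_t out_len)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op;
    uint32_t origin;

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return -1;
    if (TEEC_OpenSession(&ctx, &sess, &TA_KEYSTORE_UUID,
                         TEEC_LOGIN_PUBLIC, NULL, NULL, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return -1;
    }

    memset(&op, 0, sizeof(op));
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT,
                                     TEEC_MEMREF_TEMP_OUTPUT,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = in;
    op.params[0].tmpref.size   = in_len;
    op.params[1].tmpref.buffer = out;
    op.params[1].tmpref.size   = out_len;

    TEEC_Result res = TEEC_InvokeCommand(&sess, CMD_DECRYPT_WITH_DEVICE_KEY,
                                         &op, &origin);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return (res == TEEC_SUCCESS) ? 0 : -1;
}
```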
However, hacks aren’t typically at the hardware level. When you’re building a thermostat or an IoT-connected refrigerator, is there a danger of somebody being able to root your device? Yes, potentially, and from that they may actually be able to gain information about how to get into the network of those devices, and once there they could potentially do something harmful. But the security concerns from that perspective are less than being liable for millions of dollars if someone were to steal HDCP keys, or not being able to sell your product because HDCP keys are being revoked so your Samsung TV doesn’t work how it should. There’s also probably a resolution in terms of changing passwords or using an over-the-air (OTA) update to solve the problem.
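An OTA fix like that only helps if the device can trust the update itself, so here is a minimal sketch of verifying an image’s signature against a public key baked into the firmware before applying it. Mbed TLS is assumed as the crypto library; the key and hash choices are illustrative.

```c
#include <mbedtls/pk.h>
#include <mbedtls/sha256.h>
#include <stddef.h>

/* Minimal sketch: accept an OTA image only if its signature verifies
 * against a vendor public key embedded in the firmware. */
extern const unsigned char vendor_pubkey_pem[];
extern const size_t vendor_pubkey_pem_len;  /* PEM length including the trailing '\0' */

int ota_image_is_trusted(const unsigned char *image, size_t image_len,
                         const unsigned char *sig, size_t sig_len)
{
    unsigned char hash[32];
    mbedtls_pk_context pk;
    int ok = 0;

    /* Hash the candidate image (final argument 0 selects SHA-256). */
    mbedtls_sha256(image, image_len, hash, 0);

    mbedtls_pk_init(&pk);
    if (mbedtls_pk_parse_public_key(&pk, vendor_pubkey_pem,
                                    vendor_pubkey_pem_len) == 0 &&
        mbedtls_pk_verify(&pk, MBEDTLS_MD_SHA256, hash, sizeof(hash),
                          sig, sig_len) == 0) {
        ok = 1;  /* signature valid: safe to hand the image to the updater */
    }

    mbedtls_pk_free(&pk);
    return ok;
}
```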
If you’re creating a device that’s going to sell for $10, you can’t afford all the extra development time required by technologies like TrustZone. Getting the right hardware security part is simple to resolve: you can add it to your bill of materials (BoM), calculate those costs early on, and make a very simple decision as to whether or not you need it. What is much harder to quantify is how much time you’re going to spend in software development dealing with a technology like TrustZone and with cryptographic keys when starting a new product development.
Developers like to keep everything open when they’re developing because it’s easier, and all of the things that are convenient for developers – like SSHing into a target rather than connecting to the serial console – are security risks. Therefore, as you tighten security you make it harder and less convenient to develop, so development costs go up. That’s where people are going to have a hard time developing products that have TrustZone and additional security features in them.
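One common way to keep those developer conveniences out of the field is to gate them behind a build flag so the debug shell and verbose console exist only in development images. The sketch below assumes a C firmware build; the flag name and function bodies are hypothetical stubs.

```c
#include <stdio.h>

/* Hypothetical firmware init: developer conveniences compile in only when
 * DEVELOPMENT_BUILD is defined, so release images never ship with them. */

static void network_stack_init(void) { /* bring up Wi-Fi / TCP-IP here */ }
static void app_main_loop(void)      { /* normal product behavior here */ }

#ifdef DEVELOPMENT_BUILD
static void debug_shell_start(void)      { puts("debug shell enabled"); }
static void serial_console_verbose(void) { puts("verbose console enabled"); }
#endif

int main(void)
{
    network_stack_init();

#ifdef DEVELOPMENT_BUILD
    /* Convenient while developing, a liability in the field:
     * stripped from release images by the build system. */
    debug_shell_start();
    serial_console_verbose();
#endif

    app_main_loop();
    return 0;
}
```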
In the absence of widespread expertise and development tools for secure connected devices, what solutions are available from industry?
VACHON: You’re starting to see the emergence of companies that specialize in providing IoT security solutions and sell modules that have Wi-Fi, Bluetooth, Zigbee, or whatever radio protocol you’re trying to use, as well as TrustZone and all of the security features you need. They will have spent a lot of development time on these solutions, so small startups making connected thermostats, for instance, would just buy a part, and the security company can amortize the development costs across millions of units, or at least a far greater number of units than the startup could generate. A few IoT security solution providers will emerge as the winners and will have solutions that provide a high level of security and address these issues for everyone. The reality is – whether you’re making a camera, thermostat, or refrigerator – the connectivity piece is the same and the security issues are the same, so you will have people who just solve that part of the problem: “Here’s my module that has a low-cost Cortex-M processor that you can buy for $10 or $5 or $2.”
A couple of vendors like Cypress, Marvell, and Nordic make wireless chips with a Cortex-M, and if they can throw in TrustZone, the network will be somewhat locked down. The developer will interact with the part through a couple of GPIOs, an SPI interface, or I2C with a defined application programming interface (API) and only be able to access the network through the provided services. So, if someone were to buy a thermostat, for example, and open it up and try to hack the device, all they’d really be able to access easily is the code for temperature set points because the security vendors would have done a good job securing the parts that are important. These security vendors will provide a cloud backend and libraries that you can load onto wireless modules to provide whatever security is needed.
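As a header-style sketch of what such a narrow, vendor-defined API might look like from the application side, consider the following. Every name here is hypothetical; the point is simply that the application only sees connect/publish calls, while keys, certificates, and TLS stay inside the module on the other side of the SPI or I2C link.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical API exposed by a secure connectivity module over SPI/I2C.
 * The application never touches keys or the TLS stack; it can only ask
 * the module to connect and publish. */
typedef enum { SECMOD_OK = 0, SECMOD_ERR = -1 } secmod_status_t;

secmod_status_t secmod_init(void);                    /* bring up the SPI/I2C link */
secmod_status_t secmod_connect(const char *endpoint); /* module handles TLS itself */
secmod_status_t secmod_publish(const char *topic,
                               const uint8_t *payload, size_t len);

/* Application code for a thermostat: all it really "knows" is set points. */
secmod_status_t report_setpoint(float degrees_f)
{
    uint8_t msg[8];
    size_t  len = 0;

    msg[len++] = 0x01;               /* hypothetical message type       */
    msg[len++] = (uint8_t)degrees_f; /* coarse encoding, for brevity    */

    return secmod_publish("thermostat/setpoint", msg, len);
}
```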
Given the fact that recent attacks on consumer devices often haven’t targeted the manufacturers of the devices themselves, who is going to be accountable for connected device security in the future?
VACHON: The question is going to be where the law goes. It’s reasonable to think that you’re not going to be electrocuted when using a camera, so the company that manufactures it would be liable in the event that you were. Alternatively, it’s not reasonable to think that the manufacturer is liable if the camera is used to hit somebody over the head and kill them. Is it reasonable to expect that somebody is going to try to hack your camera to cause damage to someone else? I think you can make a case for that.
In a case where your product is leveraged in an attack to, say, take Amazon down for a day, at some point Amazon is going to turn around and say, “I’m going to sue that manufacturer because they put a product out there that made me lose $X million worth of revenue.” That’s going to happen at some point, and I don’t know if there’s legislation in place to prevent that, but somebody is going to try it. If you’ve lost $100 million because of somebody’s flawed product, yes, you’re going to go after them.
Ultimately, where does the responsibility lie? It will be with the original equipment manufacturer (OEM) because you have to make your product as foolproof as you can. Where it’s headed is that the manufacturer is going to have to show that they have made a reasonable effort to prevent attacks, and therefore there will probably be an industry-standard level of security. If you meet that expectation, you’ll probably be in the clear; if you don’t, that’s where you’re going to be liable. Like with Underwriters Laboratories (UL) and Conformité Européenne (CE) marks, this standard will show that the product has undergone some level of testing that indicates that it won’t “catch fire” under normal operation and is therefore safe to use. I suspect we will get to something like that, where OEMs will have to submit their products to a set of tests that show they have taken basic precautions.
Cardinal Peak
LinkedIn: www.linkedin.com/company/cardinal-peak