Why Security Patching Does Not Fit IoT Devices
October 01, 2024
Blog
IoT devices are not actually new. They are just embedded systems mistakenly connected to the Internet. In what follows, I use the term “embedded system” instead of “IoT device” since it is more general.
Embedded system software is often referred to as firmware because it is usually stored in non-volatile memory such as flash. In what follows, I use the term firmware to refer to embedded system software and the term software to refer to apps and other programs that run on servers, desktop computers, etc.
Those who favor patching and updating firmware to remove vulnerabilities do not understand how firmware differs from software. The main difference is that firmware is multi-dimensional, whereas software is uni-dimensional. The two share the functional dimension, i.e., performing useful work, but that's it. Firmware also has size, performance, power, hardware, and other dimensions that must be handled properly to achieve a successful design. Finding the sweet spot among the many firmware dimensions is challenging.
The fundamental error that is being made with respect to security patching is to think that a uni-dimensional solution can solve a multi-dimensional problem. It cannot, and why it cannot is the subject of this paper.
Patching vs. Unpatching
Embedded software engineers are all too aware that even a small patch may upset the delicate balance achieved in device firmware. If only 6% of CVEs are actually being exploited [1], then patching a CVE improves a device only 6% of the time. Patching the other 94% of CVEs is a waste of time, both for firmware and for software. But matters are even worse for firmware. There is probably a 10 to 20% chance [2] that a patch will upset a delicate balance in the firmware and cause it to malfunction. Since there is a 6% chance that a security patch will improve the firmware and a 10-20% chance that it will harm the firmware, the security patch is not likely to be top-of-list for a responsible firmware engineer.
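To make that trade-off concrete, here is a back-of-envelope expected-value sketch in C. The probabilities are the ones cited above; the benefit and harm weights are pure assumptions for illustration and will differ for every product.

```c
#include <stdio.h>

int main(void)
{
    /* Probabilities from the discussion above; weights are assumed for illustration. */
    double p_exploit = 0.06;  /* chance the patched CVE would ever be exploited */
    double p_regress = 0.15;  /* midpoint of the 10-20% regression estimate above */
    double benefit   = 1.0;   /* assumed value of preventing one exploit */
    double harm      = 1.0;   /* assumed cost of one field malfunction */

    double expected = p_exploit * benefit - p_regress * harm;
    printf("Expected net value of applying the patch: %+.2f\n", expected);
    /* With equal weights the result is negative; the patch only pays off
     * when the exploit is judged far more costly than a malfunction. */
    return 0;
}
```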
As an example of how easily that balance can be upset, we struggled to make our profiling code more accurate. Nearly every time we made a change somewhere else in the code, one or more profiling tests would start failing. After considerable effort, we were finally able to attribute the variation in profile times to the physical position in memory of the code being measured. It turns out that modern processors are even more complex than we think they are. Just moving the code, due to an unrelated change, resulted in a ±1.5% change in performance. This is an example of the hardware dimension in play, and it is not an isolated case. For example, it is frequently necessary to put NOPs into low-level code in order to avoid critical races in the hardware.
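To illustrate the kind of workaround this leads to, here is a minimal sketch of NOP padding between back-to-back peripheral register writes. The register addresses, the peripheral behavior, and the number of NOPs are all hypothetical; in a real project they would come from an errata sheet or from hard-won experimentation.

```c
#include <stdint.h>

/* Hypothetical peripheral registers; the addresses are invented. */
#define PERIPH_CTRL   (*(volatile uint32_t *)0x40001000u)
#define PERIPH_ENABLE (*(volatile uint32_t *)0x40001004u)

static inline void settle_delay(void)
{
    /* Three NOPs were (hypothetically) found by experiment to be enough;
     * removing them makes the peripheral intermittently drop the write. */
    __asm__ volatile ("nop\n\tnop\n\tnop");
}

void periph_start(uint32_t mode)
{
    PERIPH_CTRL = mode;   /* configure the peripheral */
    settle_delay();       /* let the write settle before enabling */
    PERIPH_ENABLE = 1u;   /* enable it */
}
```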
Hardware Dimension
The hardware dimension has a major effect on firmware because firmware operates directly upon it. Modern MCUs have hundreds of millions, even billions, of transistors, and every one of them is doing something, yet MCUs are black boxes to us. Normally we find hardware problems by stumbling onto them and then developing workarounds, which mysteriously work. Firmware is built upon a shaky dimension that is getting worse as hardware complexity increases. To achieve true embedded system security, the hardware dimension must be shortened and made more transparent.
Because of hardware obscurity, it is not uncommon for firmware engineers to spend weeks or even months tracking down elusive problems that end up being caused by hardware bugs. As a consequence, there is a mantra among experienced firmware engineers: “If it works, don’t touch it!” This, of course, is contrary to the patch-and-put security mindset.
Power Dimension
Power is another important dimension for embedded systems. To up the ops (operations per second), it is necessary to increase the clock rate, and this requires more power. For processors and memories, physics dictates that electrical energy in equals heat out. Most embedded systems do not have the luxury of fans, nor of operating in air-conditioned environments. Thus, dissipating the extra heat that comes from running the firmware faster can be a problem, especially in confined and/or hot environments. Increased power also means shorter run times for battery-powered systems. So, upping the ops to handle the burden of security patches and security improvements may not be an option for some firmware.
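A rough way to see the cost of upping the ops is the usual dynamic-power approximation P ≈ C·V²·f. The sketch below plugs in assumed numbers for a small battery-powered device; the capacitance, voltage, battery capacity, and clock rates are illustrative guesses, not measurements.

```c
#include <stdio.h>

int main(void)
{
    /* All numbers below are assumed for illustration, not measurements. */
    double c_eff     = 30e-12;              /* effective switched capacitance, farads */
    double v_dd      = 3.3;                 /* supply voltage, volts */
    double battery_j = 2.0 * 3.6 * 3600.0;  /* ~2000 mAh at 3.6 V, expressed in joules */

    double freqs[] = { 48e6, 96e6 };        /* clock rates to compare, Hz */
    for (int i = 0; i < 2; i++) {
        double p_dyn = c_eff * v_dd * v_dd * freqs[i];  /* dynamic power, watts */
        double hours = battery_j / p_dyn / 3600.0;      /* runtime, hours */
        printf("%3.0f MHz: %5.1f mW dynamic power, roughly %4.0f hours of runtime\n",
               freqs[i] / 1e6, p_dyn * 1e3, hours);
    }
    /* Doubling the clock roughly doubles dynamic power and halves runtime,
     * before any required voltage increase makes it worse. */
    return 0;
}
```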
When to Put
Unlike software, firmware is usually expected to run 24/7, without interruption, and it is expected to run unsupervised. So when is a good time to update firmware? Never!! Consequently, patches may go uninstalled for long periods of time or may never be installed at all. This is counter-productive for improving firmware security. A security regimen that does not require patches would be a better fit for the embedded world. The unsupervised-operation requirement means that firmware needs to be rigorously tested, or else failures causing injury or damage may occur, which is another reason not to update. Engineers cannot be fiddling with firmware patches and, at the same time, doing the rigorous testing needed to keep firmware fires from breaking out. So, which is it to be?
The possibility of injury or damage is much higher for improperly tested firmware than it is for improperly tested software, but the patch-and-put solution does not take that into account. The probability that a firmware vulnerability will be exploited may often be lower than the probability of serious damage resulting from inadequate firmware testing after a patch is made. A standard method for deciding between these alternatives needs to be found. The current decision, in most cases, is probably to let the patches rot. Here again, a patchless solution would be a better fit for the realities that firmware developers must face.
Conclusion
The bottom line is that the patch-and-put solution, although developed over decades of software evolution, does not fit firmware well. A security solution that takes into account the multiple dimensions of firmware is needed. As things stand now, forcing patch-and-put upon the embedded community is likely to cause more harm than good, because it will push vendors to ship inadequately tested firmware in order to meet mandated security-patch quotas. Legislators and regulators should consider this before passing new laws and enforcing new regulations upon the embedded community.
Reference
[1] Chris Hughes, “Vulnerability Exploitation in the Wild”, August 2024.
[2] I don’t think anyone knows the actual number, but having made a lot of bad patches myself, I think it is in this ballpark or even higher (maybe lower for better programmers).
Ralph Moore is a graduate of Caltech. He and a partner started Micro Digital Inc. in 1975 as one of the first microprocessor design services. Now Ralph is primarily the Micro Digital RTOS innovator. His current focus is to improve the security of IoT and embedded devices through firmware partitioning. He believes that it is the most practical approach for achieving acceptable security for devices connected to networks. He can be contacted at [email protected], or visit www.smxrtos.com/securesmx to learn more.