Eliminating buffer overflow vulnerabilities on the IoT
September 12, 2018
Buffer overflows have been the most commonly exploited vulnerability in network-borne attacks over the last 30 years. This isn’t surprising given how buffers are created.
Here is an example in C:
Step 1. The programmer calls the malloc function and specifies how much buffer memory to allocate (32 bytes, for example)
Step 2. malloc returns a pointer that marks the beginning of the buffer within memory
Step 3. The programmer uses that pointer (and only that pointer) as the reference whenever they need to read from or write to the buffer (see the sketch after these steps)
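A minimal C sketch of those three steps, where the 32-byte size and the string being copied are arbitrary choices for illustration:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Step 1: allocate a 32-byte buffer on the heap. */
    char *buf = malloc(32);
    if (buf == NULL)
        return 1;

    /* Step 2: malloc returns only a pointer to the start of the buffer.
     * The 32-byte size is not carried along with that pointer. */

    /* Step 3: every later read and write goes through the pointer alone. */
    strcpy(buf, "hello");

    free(buf);
    return 0;
}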
With the pointer in hand, it's very easy for a programmer to forget the actual amount of memory that was allocated to a given buffer. The compiler tracks buffer sizes as metadata while generating code, but this metadata is typically discarded at build time to reduce the footprint of the final binary.
If data transferred within or between programs subsequently overruns the buffer size that was originally defined, that data overwrites adjacent memory. This can lead to memory access errors or crashes, as well as security vulnerabilities.
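For example, copying network input into that 32-byte buffer without checking its length writes past the end of the allocation. The following sketch is illustrative; the handler name and the idea of a 64-byte payload are hypothetical:

#include <stdlib.h>
#include <string.h>

/* Hypothetical handler: 'payload' arrives from the network. */
void handle_packet(const char *payload, size_t payload_len)
{
    char *buf = malloc(32);            /* 32 bytes allocated */
    if (buf == NULL)
        return;

    /* No length check: if payload_len is, say, 64, the last 32 bytes
     * land in adjacent heap memory (allocator metadata or another
     * object), corrupting whatever lives there. */
    memcpy(buf, payload, payload_len);

    free(buf);
}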
Buffer overflows and vulnerability exploitation
Hackers can use stack buffer overflows to replace legitimate executable code with malicious code, which allows them to exploit system resources like heap memory or the call stack itself. Control-flow hijacking, for example, takes advantage of stack buffer overflows to redirect code execution to a location other than the one that would be used in normal operation.
Once in charge of the control flow, a control-flow hijacker can modify pointers and reuse existing code, while also potentially replacing code. Command of the control flow also permits attackers to modify the pointers used in indirect calls, jumps, and function returns in ways that still present a valid-looking control-flow graph, concealing their actions from defenders.
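The classic opening for such an attack is a stack buffer filled without a bounds check: input long enough to spill past the buffer overwrites the saved return address, so execution resumes wherever the attacker chose once the function returns. A minimal sketch, in which the function name and buffer size are hypothetical:

#include <string.h>

/* Hypothetical parser: 'input' is attacker-controlled. */
void parse_name(const char *input)
{
    char name[16];

    /* strcpy stops only at the terminating NUL. Input longer than
     * 16 bytes spills past 'name' into the saved frame pointer and
     * return address on the stack, so a crafted input can choose
     * the address the CPU jumps to when parse_name returns. */
    strcpy(name, input);
}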
Such threats remain a challenge despite dynamic address space layout randomization (ASLR) mechanisms and stack canaries used to detect and prevent buffer overflows before code execution occurs.
Security: Software or silicon?
ASLR and stack canaries are software-based buffer overflow protection mechanisms that do make exploiting buffer overflows more difficult for attackers. ASLR, for instance, dynamically repositions memory areas so that a hacker effectively has to guess the address spaces of target components like base executables, libraries, and stack and heap memory. Unfortunately, recent vulnerabilities such as Spectre and Meltdown leak information from CPU branch predictors, which limits the effectiveness of ASLR, since leaked address information defeats the randomization it relies on.
Stack canaries, on the other hand, insert small integer values just before the return pointer in memory. These values are checked to ensure they have not changed before a routine uses the corresponding return pointer. Still, if hackers can read the canary, they can overwrite it along with the surrounding memory without incident, as long as they write the correct canary value back in place. In addition, while canaries protect this piece of control data from being changed, they don't protect pointers or any other data.
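Conceptually, the compiler-inserted check behaves something like this hand-written C sketch. It models what options such as GCC's -fstack-protector arrange automatically; it is not the actual emitted code, and the constant secret is purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* In real builds the reference value is a per-process random secret;
 * a constant is used here purely for illustration. */
static const unsigned long canary_secret = 0xDEADC0DEUL;

void copy_name(const char *input)
{
    unsigned long canary = canary_secret;  /* conceptually sits between the
                                              locals and the saved return
                                              address                      */
    char name[16];

    strcpy(name, input);                   /* potential overflow */

    /* Epilogue check: an overflow long enough to reach the return
     * address has also trashed the canary, so abort before returning. */
    if (canary != canary_secret) {
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}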
Of course, another challenge with software-based security solutions is that they are highly susceptible to bugs. It is estimated that 15-50 bugs exist for every 1,000 lines of code, meaning that the more software present in a solution, the greater the number of vulnerabilities.
A more robust method, one that addresses the disease rather than the symptoms of buffer overflows, is to implement security in silicon. Stack buffer overflow exploits are designed to manipulate software programs, but addressing the root cause of such attacks begins with realizing that the processor itself is not able to determine whether a given program is executing properly.
Beyond mitigating the impact of software bugs, silicon has the advantage that it can't be altered remotely. But a processor or piece of silicon IP must still be designed to recognize, at runtime, whether an instruction attempting to write to memory or peripherals is performing a legal or an illegal operation.
Dover Microsystems has developed such a technology called CoreGuard.
Silicon security at runtime
CoreGuard is a piece of silicon IP that can be integrated with RISC processor architectures to identify invalid instructions at runtime. Delivered as RTL, the solution can be optimized for various power and area requirements, or modified to support custom processor extensions.
As shown in Figure 2, the CoreGuard architecture includes a hardware interlock that controls all communication between the host processor and the rest of the system. The hardware interlock funnels these communications into a Policy Enforcer.
Separately, CoreGuard uses updateable security rules called micropolicies, which are simple governing policies written in a high-level, proprietary language. These rules are installed in a secure region of memory that is inaccessible to the operating system and application code. CoreGuard also reserves a small allocation of memory here for the application metadata usually discarded by the compiler, which it uses to generate unique identifiers for all data and instructions in the system. These components load at system boot time.
When an instruction attempts to execute at runtime, the CoreGuard Policy Execution core (or the host processor running in privileged mode) cross-references the instruction's metadata against the defined micropolicies. The hardware interlock ensures that the processor outputs only valid instructions to memory or peripherals, preventing invalid instructions from ever executing. The application is informed of the policy violation with something akin to a divide-by-zero error, and the user is notified.
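To make the idea concrete, here is a purely conceptual C model of how a buffer-bounds rule could use per-buffer metadata; the structure and function names are hypothetical and do not represent Dover's micropolicy language or hardware implementation:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical metadata record kept for every allocated buffer. */
typedef struct {
    uintptr_t base;   /* start of the buffer */
    size_t    size;   /* bytes allocated     */
} buffer_tag;

/* Conceptual rule: a store is valid only if every byte it touches
 * falls inside the buffer named by its metadata tag. */
bool heap_policy_allows_store(const buffer_tag *tag,
                              uintptr_t addr, size_t nbytes)
{
    return addr >= tag->base &&
           addr + nbytes <= tag->base + tag->size;
}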
All that is required for integration with a host processor is support for instruction trace outputs, stall inputs, a non-maskable interrupt (NMI) input, and interrupt outputs. For non-chip designers, Dover Microsystems recently announced that its CoreGuard technology is being designed into certain NXP processors.
Eliminating classes of attacks
In the case of buffer overflows, the benefits of a technology like CoreGuard are obvious. Buffer sizes captured as part of the oft-discarded compiler metadata can be incorporated to limit an attacker’s ability to manipulate the system stack from across the network. Going a step further, the same principles can be applied to control-flow hijacking in general because returns from various points in memory can be restricted before they take place.
In practice, this real-time awareness also creates a new playing field for the security industry. By identifying errors or attacks before damage occurs, users can elect to dynamically reallocate memory, switch to a separate, safer program, or log events while continuing to run the same program. How to proceed depends entirely on the needs of the application or business case.
Have we seen the end of zero-day vulnerabilities? Only time will tell, but it appears we’re on the right path.