Three degrees of freedom
July 24, 2018
Developing embedded software has never been easy, but certain matters, as in so many aspects of life, were simpler in times past. Embedded developers have always needed fine-grained control of code generation because, as I often chant, every embedded system is unique, and the priorities and requirements vary from one system to another.
It used to be broadly a choice between speed and size of code – do you want fast code or small code? – but it is no longer that simple.
Although embedded compilers typically offer sophisticated controls, optimizing code is usually a matter of selecting a bias towards speed or size. It just happens that, most of the time, the fastest way to implement an algorithm is also the biggest, and the smallest is the slowest. There are exceptions, of course, but they are not so common. The same principle applies to data, which can be packed (smallest) or unpacked (fastest). This is commonly specified by a compiler optimization switch and may be overridden for specific objects using keywords such as packed and unpacked, which are extensions to the standard C language.
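As a minimal sketch of the effect of packing, consider the two struct layouts below. Standard C has no packed or unpacked keywords, so each toolchain provides its own spelling; the GCC/Clang attribute syntax is shown here (other toolchains use #pragma pack or a __packed keyword), and the sizes in the comments assume a typical 32-bit target.

```c
#include <stdio.h>
#include <stdint.h>

/* Natural (unpacked) layout: the compiler inserts padding so that
   each member is aligned for fast access. */
struct natural {
    uint8_t  flag;   /* 1 byte, followed by 3 bytes of padding */
    uint32_t value;  /* 4 bytes, aligned */
    uint8_t  id;     /* 1 byte, followed by 3 bytes of tail padding */
};                   /* typically 12 bytes in total */

/* Packed layout: padding removed, so the footprint is smallest, but
   members may require slower unaligned accesses. */
struct packed_layout {
    uint8_t  flag;
    uint32_t value;
    uint8_t  id;
} __attribute__((packed)); /* typically 6 bytes in total */

int main(void)
{
    printf("natural layout: %zu bytes\n", sizeof(struct natural));
    printf("packed layout:  %zu bytes\n", sizeof(struct packed_layout));
    return 0;
}
```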
Hardware developers have a similar choice. They code using a hardware description language (like VHDL or Verilog) and use a synthesis tool (which is broadly analogous to a compiler) to implement the design. As with software, there are trade-offs between speed and size. However, for some time, another factor has been taken into consideration: power consumption.
Power is no longer purely a hardware issue, as software increasingly influences the power efficiency of a device. Conventional code optimization has an impact on power consumption. Small code means the device needs less memory, which reduces power consumption; fast code lets the CPU finish the job sooner, so it can spend more time in a low-power idle state. It is not an easy matter to get the balance just right. I wonder whether "optimize for power" will soon be a compiler feature.
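To make the software side of this concrete, here is a hedged sketch of the common "race to idle" pattern on an Arm Cortex-M class device: the faster the work completes, the longer the core can sit halted in a wait-for-interrupt state. The work_pending() and do_work() hooks are hypothetical names used purely for illustration, and the inline assembly assumes a GCC-style toolchain; a CMSIS-based project would typically use the __WFI() intrinsic instead.

```c
#include <stdbool.h>

/* Hypothetical application hooks; the names are illustrative only. */
extern bool work_pending(void);
extern void do_work(void);

/* WFI (wait for interrupt) halts the core until the next interrupt
   arrives, drawing far less power than a busy-wait loop would. */
static inline void cpu_sleep(void)
{
    __asm volatile ("wfi");
}

void main_loop(void)
{
    for (;;) {
        while (work_pending())
            do_work();   /* faster code finishes sooner...       */
        cpu_sleep();     /* ...so the core spends longer asleep */
    }
}
```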
The bottom line is that, instead of a two-way tension between speed and size, it now goes three ways: speed, size, and power.
Incidentally, there is an interesting analogy in another interest of mine: digital photography. In the past, you would choose a film of a particular “speed” (sensitivity), like ISO 400. To get the correct exposure for a shot, you needed to select an appropriate shutter speed and aperture. Change one of these two parameters, and you had to adjust the other to compensate. This was sometimes inconvenient when a photographer wanted control of both parameters. Now they can have that control, as the ISO may be selected individually for each shot. So, there are three degrees of freedom.