High-performance embedded systems should take advantage of GPGPU
March 13, 2017
Blog
The software applications used by embedded systems in consumer and military applications are becoming more complex, putting added computational demands on the hardware. System architects, product managers, and engineers must keep pace with the latest computing technology.
Borrowed from the gaming industry, where graphics and data processing keep pushing the limits, general-purpose computation on graphics processing units (GPGPU) is serving as the heart of a new wave of embedded systems. The high performance-to-power ratio helps these new systems meet the calculation demands of compute-intensive applications.
While trying to reuse existing software applications, we’re constantly adding new features and implementing new requirements. The code then becomes increasingly complex, the application becomes CPU “hungry,” and eventually you’re faced with:
- Complex CPU load balancing; we end up dancing on a razor’s edge to satisfy our software application’s demands
- CPU choking; operating-system response becomes extremely slow, forcing changes to the entire software architecture and leaving a fine line between acceptable response times and getting the job done.
- Upgrading and overclocking; other methods for increasing computation power in an embedded system, which can be costly (upgrading) or detrimental to component life (overclocking).
Using a GPU alongside a CPU reduces development time and “squeezes” maximum performance per watt from the computation engine. GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a central processing unit (CPU) to accelerate applications. If only a CPU is used as the main computing engine, it will eventually choke, a challenge seen all too often. But if the compute-intensive portion of an application is offloaded to the GPU, the rest of the application continues to run on the CPU.
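To make the offload pattern concrete, here is a minimal CUDA sketch, not a specific Aitech design: the kernel name, array size, and arithmetic are illustrative assumptions. The heavy work is pushed to the GPU on its own stream while the CPU carries on with the rest of the application.

// Minimal offload sketch (assumptions: kernel, sizes, and math are illustrative only).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void heavyKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Stand-in for the compute-intensive portion of the application.
        data[i] = data[i] * data[i] + 1.0f;
    }
}

int main() {
    const int n = 1 << 20;                        // 1M elements (arbitrary)
    size_t bytes = n * sizeof(float);

    float* h_data;
    cudaMallocHost((void**)&h_data, bytes);       // pinned host memory for async copies
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float* d_data;
    cudaMalloc((void**)&d_data, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Offload the heavy work to the GPU on its own stream...
    cudaMemcpyAsync(d_data, h_data, bytes, cudaMemcpyHostToDevice, stream);
    heavyKernel<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
    cudaMemcpyAsync(h_data, d_data, bytes, cudaMemcpyDeviceToHost, stream);

    // ...while the rest of the application keeps running on the CPU.
    printf("CPU continues with I/O, control logic, etc.\n");

    cudaStreamSynchronize(stream);                // wait only when the result is needed
    printf("first result: %f\n", h_data[0]);

    cudaFree(d_data);
    cudaFreeHost(h_data);
    cudaStreamDestroy(stream);
    return 0;
}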
So, how does the GPU operate faster than the CPU? The GPU has evolved into an extremely flexible and powerful processor because of:
- Programmability
- Precision (floating point)
- Performance; thousands of cores to process parallel workloads
- Increased speed, thanks to the demands from the giant gaming industry
NVIDIA explains it very well:
A simple way to understand the difference between a CPU and GPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing, while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
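A short, hedged illustration of that difference in CUDA (the frame size and the brighten operation are assumptions chosen for clarity): where a CPU core would walk through the pixels one at a time, the GPU gives every pixel its own thread and schedules thousands of them across its cores in parallel.

// Illustrative only: the 1920x1080 frame size and the brighten operation are assumed.
#include <cuda_runtime.h>

// CPU version: one core walks the pixels one after another.
// for (int p = 0; p < width * height; ++p) out[p] = min(in[p] + 10, 255);

// GPU version: every pixel gets its own thread; the hardware schedules the
// resulting thousands of threads across its cores in parallel.
__global__ void brighten(const unsigned char* in, unsigned char* out, int numPixels) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p < numPixels) out[p] = min(in[p] + 10, 255);
}

void launchBrighten(const unsigned char* d_in, unsigned char* d_out) {
    int numPixels = 1920 * 1080;                                       // ~2 million pixels
    int threadsPerBlock = 256;
    int blocks = (numPixels + threadsPerBlock - 1) / threadsPerBlock;  // ~8,100 blocks
    brighten<<<blocks, threadsPerBlock>>>(d_in, d_out, numPixels);
}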
There are many applications that can benefit from GPGPU technology. In fact, any application that relies on heavy mathematical calculation is a very good candidate for this technology. These can include:
- Image processing; enemy detection, vehicle detection, missile guidance, obstacle detection, etc.
- Radar
- Sonar
- Video encoding and decoding (NTSC/PAL to H.264)
- Data encryption/decryption
- Database queries
- Motion detection (see the code sketch after this list)
- Video stabilization
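As a hedged example of the image-processing and motion-detection use cases above, here is a minimal CUDA frame-differencing sketch; the frame resolution, threshold, and kernel name are illustrative assumptions, not a production algorithm. Each pixel of the current and previous frame is compared by its own GPU thread to build a motion mask.

// Frame differencing for motion detection (assumed VGA frames and threshold).
#include <cuda_runtime.h>
#include <cstdlib>

__global__ void frameDiff(const unsigned char* prev, const unsigned char* curr,
                          unsigned char* motionMask, int numPixels, int threshold) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p < numPixels) {
        int diff = abs((int)curr[p] - (int)prev[p]);
        motionMask[p] = (diff > threshold) ? 255 : 0;   // mark pixels that changed
    }
}

int main() {
    const int numPixels = 640 * 480;              // assumed VGA-resolution frames
    const int threshold = 25;                     // assumed sensitivity
    size_t bytes = numPixels;

    unsigned char *d_prev, *d_curr, *d_mask;
    cudaMalloc((void**)&d_prev, bytes);
    cudaMalloc((void**)&d_curr, bytes);
    cudaMalloc((void**)&d_mask, bytes);
    cudaMemset(d_prev, 0, bytes);                 // placeholder frames; a real system
    cudaMemset(d_curr, 0, bytes);                 // would copy camera data here

    int threads = 256;
    int blocks = (numPixels + threads - 1) / threads;
    frameDiff<<<blocks, threads>>>(d_prev, d_curr, d_mask, numPixels, threshold);
    cudaDeviceSynchronize();

    cudaFree(d_prev);
    cudaFree(d_curr);
    cudaFree(d_mask);
    return 0;
}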
So, we thank all those who pushed the envelope and demanded better-quality images, more data throughput, and increased processing, all in the name of gaming. GPGPUs have risen beyond that world and moved into other complex, highly sophisticated realms, and they are enabling better intelligence across many industries by reliably managing higher data throughput and balancing system processing for more efficient computing operations. Get more information on this topic in a white paper on GPGPU processing.
Dan Mor, Product Line Manager, Aitech Defense Systems: Since 2009, Mor has been working with Aitech, a leading supplier of rugged computer systems optimized for harsh environments. He currently serves as Product Line Manager of GPGPU and HPEC computers, leading the company’s GPGPU and HPEC roadmaps, and plays a key role in architecture design as well as strategy planning.