Embedded AI: Out of the Lab and into the Field
March 26, 2021
In this week’s Embedded Insiders, the hosts weigh in on the recent fire that shut down the Renesas Naka semiconductor fab, where the company manufactures automotive chips.
Later, Rich is joined again by Zane Tsai, Director of the Platform Product Center at ADLINK Technology, to discuss how logistics companies are being affected by the race to deploy AI at the edge and to share insights for developers looking to increase the efficiency and productivity of logistics automation systems.
The two are also joined by Amit Goel, NVIDIA’s Director of Product Management for embedded AI platforms, who examines the complexities of delivering reliable performance in AI-based applications like independent automation. NVIDIA is currently building a hardware platform that will bring greater compute intelligence to autonomous systems at the edge. However, a solid software framework that can streamline the development, deployment, and management of the AI applications running on these devices remains critical. The company’s Isaac robotics SDK and its DeepStream SDK for AI-based multi-sensor processing and video, audio, and image understanding are positioned to support these workloads across the engineering and operational lifecycles of AI-enabled robots.
Finally, Tiera Oliver addresses the evolution of real-world AI. How do we move from our historical lack of insight into what’s going on under the hood of complex neural networks to an era of AI explainability, where we understand how these models operate? Will we ever be able to test, verify, and validate these workloads to the point that they can be heavily relied upon in safety-critical systems? Johanna Pingel and David Willingham, deep learning product managers at MathWorks, believe we’re already on the way.