Maximize NVIDIA Technology When Deploying AI at the Edge

By Rich Nass

Executive Vice President

Embedded Computing Design

March 29, 2021

Blog

The race to develop Edge AI servers is on, and it's hotly contested.

Industry experts predict that the market for AI will take off in the next few years, and that demand will fuel the need for better and faster Edge AI processing.

But being first to market is not the only criterion OEMs are looking at. Having the horsepower to handle massive amounts of data is key, as is the I/O to move that data around the platform, particularly when vision is part of the end application, as in manufacturing, logistics, and smart cities and buildings.

The list of challenges brought on by the latest advanced AI applications includes, but is not limited to, latency, reliability, and system availability, which is why OEMs want to make use of their existing core IT infrastructure where possible. This latter point can be instrumental in the race to market.

Being able to make decisions at the Edge, rather than in the Cloud, offers huge benefits. It permits decisions to be made in real time; it offers a far higher level of security, as the data never has to leave the premises; and it can deliver cost savings by eliminating the need for a cellular network or some other expensive means of data transmission.

See It at GTC

One way to learn more about how this Edge-based technology can be deployed is to join Lanner at NVIDIA's upcoming GTC, a digital/virtual event that brings together leaders in Edge computing, AI, and related fields. GTC runs from April 12-16, kicking off with a keynote address from NVIDIA Founder and CEO Jensen Huang. Later in the event, experts from Lanner will discuss a networked approach to AI in which workloads are distributed across edge networks. The discussion will start with NVIDIA AI-accelerated customer premises equipment, move through the aggregated network edge, and end at the hyper-converged platform deployed in the centralized data center.

Specific topics to be covered by Lanner at GTC include:

  • How to use an AI-powered hyper-converged MEC server to enable intelligent transportation services

  • How to build efficient and intelligent networks with a network Edge AI platform

  • A hardware perspective on Edge AI inference and NGC-ready servers

The latest Edge servers are being designed around the NVIDIA GPU Cloud (NGC), which, as the name implies, is a GPU-accelerated cloud platform optimized for deep learning, inferencing, and related AI applications, all delivered through containers. NGC containers have minimal operating-system requirements, which simplifies (or nearly eliminates) driver installation. And because all the software is distributed through the NGC container registry, provisioning is simpler too, resulting in an agile application-development environment. Essentially, NVIDIA has created an applications hub that makes it easier to deploy the company's hardware. And, as stated earlier, time to market is a key differentiator.
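
To make that concrete, here is a minimal sketch of pulling and running an NGC container with GPU access, using the Docker SDK for Python. The image tag is a placeholder for illustration; actual NGC images are versioned and listed in the NGC catalog at ngc.nvidia.com.

```python
# Minimal sketch: pull and run an NGC container with GPU access via the
# Docker SDK for Python (pip install docker). Assumes the NVIDIA driver
# and NVIDIA Container Toolkit are already installed on the host.
import docker

client = docker.from_env()

# Pull a GPU-accelerated container from the NGC registry (nvcr.io).
# The tag below is a placeholder; check the NGC catalog for real tags.
image = "nvcr.io/nvidia/tensorrt:21.03-py3"
client.images.pull(image)

# Run it with all GPUs exposed. No host-side framework stack is needed;
# the container brings its own CUDA libraries.
output = client.containers.run(
    image,
    command="nvidia-smi",  # quick sanity check that the GPU is visible
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(output.decode())
```

Because the container carries its own software stack, the same image can be provisioned unchanged across on-premise, Cloud, and Edge machines, which is where the time-to-market benefit comes from.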

Servers Have Arrived

An example of an Edge server that takes advantage of NGC is Lanner's LEC-2290 Edge AI Appliance. NGC-Ready validation includes extensive testing across a wide range of applications for on-premise, Cloud, and Edge deployments. The LEC-2290 can run various computer-vision algorithms to power AI-based video applications such as retail video analytics, machine vision, physical security, and traffic monitoring.

Designed around NVIDIA's Tesla T4 Tensor Core GPU, the platform has two key differentiators. First, it employs purpose-built AI hardware, meaning it can handle the rigors of vision-based AI applications (including inferencing) right from the get-go. Second, it has a scalable architecture, meaning it can be upgraded over time to keep pace with fast-moving technology, aka futureproofing. In terms of hardware configuration, the LEC-2290 is equipped with an Intel Core i7 CPU, 64 GB of SODIMM memory, and TPM 2.0 and IPMI modules. PoE capability further simplifies deployment.
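
For a sense of what vision inferencing on a GPU like the T4 looks like in code, here is a minimal sketch using PyTorch and a pretrained classifier. The model, the input frame, and the file name are illustrative assumptions, not details of Lanner's or NVIDIA's software.

```python
# Minimal sketch of GPU-accelerated vision inference, assuming PyTorch,
# torchvision, and Pillow are installed. Model and input are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A pretrained classifier stands in for a production video-analytics model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval().to(device)
# On a Tensor Core GPU such as the T4, FP16 (model.half()) or INT8
# inference can raise throughput further.

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# In a real deployment this frame would come from a camera stream,
# e.g. one powered over the appliance's PoE ports.
frame = Image.open("frame.jpg")
batch = preprocess(frame).unsqueeze(0).to(device)

with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1).item())  # predicted class index
```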

The bottom line is that Lanner’s latest Edge AI solution can help OEMs leverage the spectrum of NVIDIA GPU-accelerated software that’s been made available for real-time intelligent decision making.

Richard Nass’ key responsibilities include setting the direction for all aspects of OSM’s ECD portfolio, including digital, print, and live events. Previously, Nass was the Brand Director for Design News. Prior, he led the content team for UBM’s Medical Devices Group, and all custom properties and events. Nass has been in the engineering OEM industry for more than 30 years. In prior stints, he led the Content Team at EE Times, Embedded.com, and TechOnLine. Nass holds a BSEE degree from NJIT.
