Edge Video Analysis (EVA) for Autonomous Machines with Computer Vision
February 21, 2020
Story
The number of robots used in industrial applications continues to grow, and their capabilities continue to evolve. The first stationary industrial robot, the Unimate, was installed in a General Motors factory in 1961. According to the International Federation of Robotics (IFR), the World Robotics report released on Jan. 20 shows that more than 2.4 million industrial robots are currently operating in factories around the world, and this number is expected to grow dramatically in the coming years.
Following the rapid adoption of stationary robots, the next development was the deployment of mobile robots in the form of automatic guided vehicles (AGVs). Such robots use technologies like wires, magnets, or lines painted on the floor to navigate narrowly defined areas by following strictly specified paths.
The latest generation of industrial robots is known as autonomous mobile robots (AMRs). These robots employ a fusion of sophisticated sensor systems, such as the global positioning system (GPS) and computer vision, augmented with state-of-the-art artificial intelligence (AI) and deep learning (DL) technologies. These technologies allow AMRs to perform object detection and recognition and to navigate uncontrolled environments in which the landscape may be constantly changing. Their perception and understanding of the environment mean that AMRs don’t just blindly travel from point A to point B like their AGV cousins; instead, they can detect and avoid obstacles and decide to take alternative routes depending on how they “see” the situation.
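To make the perception side concrete, here is a minimal sketch of the kind of per-frame object detection such a robot might run. It uses torchvision's off-the-shelf Faster R-CNN purely as a stand-in for whatever model a real AMR would deploy; the image file name and score threshold are illustrative assumptions, not details of any particular product.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained detector as a stand-in for a production model;
# requires torchvision >= 0.13 for the weights= argument.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_obstacles(image_path, score_threshold=0.6):
    """Return bounding boxes (x1, y1, x2, y2) worth treating as obstacles."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]  # dict with "boxes", "labels", "scores"
    return [
        box.tolist()
        for box, score in zip(prediction["boxes"], prediction["scores"])
        if score >= score_threshold
    ]

if __name__ == "__main__":
    print(detect_obstacles("camera_frame.jpg"))  # hypothetical captured frame
```

A real navigation stack would feed these boxes into path planning rather than printing them, but the detect-then-decide loop is the same.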
The Need for Edge Video Analysis (EVA)
AMRs can perform a wide variety of tasks, including routine inspections of industrial facilities, electrical substations, chemical plants, warehouses, and so on. Even better, they can do this 24/7/365.
These 21st-century robots can perform tasks like reading meters, ensuring things are where they are supposed to be, and detecting things that are out of place, such as tools left lying around or objects obstructing entrances and exits. Furthermore, these AI-enabled robots can detect potential problems like corrosion on fittings, cracks in pipes, jets of steam, and drips, leaks, and pools of liquid on the floor, at which point they can alert their human colleagues. As far as the humans are concerned, each AMR is like an additional pair of eyes that never gets tired, never gets distracted, and never rests.
Using their sophisticated vision sensors and artificial intelligence, AMRs can perform tasks like reading meters and detecting potential problems. (Image source: pixabay.com)
Speaking of their coworkers, AMRs can also ensure that everyone is wearing their hard hats, reflective vests, and other safety equipment. They can also make sure that their human companions don't inadvertently wander into dangerous or prohibited areas.
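As a hedged sketch of how such a safety check might be assembled, the logic below pairs detector outputs with people via simple box overlap. The class names ("person", "hard_hat", "vest") and the sample detections are assumptions for illustration; a real system would take them from a PPE-trained detector.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # hypothetical classes: "person", "hard_hat", "vest"
    box: tuple   # (x1, y1, x2, y2) in pixels

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def missing_ppe(detections, required=("hard_hat", "vest"), min_iou=0.05):
    """For each person, report any required PPE item not overlapping them."""
    people = [d for d in detections if d.label == "person"]
    gear = [d for d in detections if d.label in required]
    alerts = []
    for person in people:
        worn = {g.label for g in gear if iou(person.box, g.box) > min_iou}
        missing = set(required) - worn
        if missing:
            alerts.append((person.box, sorted(missing)))
    return alerts

# Example: one person wearing a hard hat but no reflective vest
sample = [
    Detection("person", (100, 50, 220, 400)),
    Detection("hard_hat", (120, 40, 200, 100)),
]
print(missing_ppe(sample))  # -> [((100, 50, 220, 400), ['vest'])]
```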
To perform their tasks, AMRs need to handle shock and vibration, be dustproof and waterproof, and survive hostile environments exhibiting high humidity, extreme temperatures, and possibly high levels of acidity or alkalinity, all with long life and low maintenance requirements.
In particular, the AMRs' AI-enabled video analysis requires a tremendous amount of computational resources. There are several reasons why it isn’t practical to perform such computation in the cloud, including unacceptable latency and the possibility of losing the connection to the Internet. The last thing anyone wants is a robot racing toward them while its decision-making takes place in a remote location over an unreliable connection.
Fortunately, advances in computing technology and AI algorithms have made it possible to perform edge video analysis (EVA): analyzing the video on location, in real time. This is done by exploiting the capabilities of today's extremely powerful microprocessor units (MPUs) combined with graphics processing units (GPUs), which boast thousands of small cores that perform AI-related tasks in a massively parallel fashion.
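A quick timing sketch shows why the analysis stays local. This is not ADLINK's code; it simply measures one on-device inference pass with a generic pretrained model and a synthetic 720p frame, giving the baseline that any cloud round trip would add network latency on top of.

```python
import time
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

frame = torch.rand(3, 720, 1280, device=device)  # synthetic stand-in frame

with torch.no_grad():
    model([frame])  # warm-up pass (kernel setup, memory allocations)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    model([frame])
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to actually finish
    elapsed_ms = (time.perf_counter() - start) * 1000

print(f"On-device inference ({device}): {elapsed_ms:.1f} ms per frame")
```

Whatever this number turns out to be on a given GPU, a cloud-hosted equivalent adds a network round trip on top of it and fails entirely when connectivity drops.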
The ADLINK Advantage
ADLINK Technology designs and manufactures a wide range of products for embedded computing, test and measurement, and automation applications. ADLINK's product line includes computer-on-modules, industrial motherboards, data acquisition modules, and complete systems. In the case of the company's EVA solutions, the capabilities of powerful Intel microprocessors are dramatically boosted by the addition of NVIDIA GPUs, and deep learning platforms based on NVIDIA’s Jetson family provide a quick start for autonomous machine development.
In the case of AMRs, ADLINK's EVA solutions, which boast more than enough computational power to control the robot in addition to analyzing its surroundings, feature ruggedized units that are ideal for use in hostile environments. These include lockable connectors that can withstand the shock and vibration prevalent in mobile robot applications. Furthermore, these systems feature state-of-the-art passive cooling technology (no fans).
With regard to the EVA systems, ADLINK's engineers have taken NVIDIA GPUs based on the Pascal and latest Turing architectures, along with the Jetson family of system-on-modules (Jetson Nano, TX2, Xavier NX, and AGX Xavier), and designed them into GPU boards and deep learning platforms with a compact form factor. MXM GPU modules and Jetson-based deep learning platforms are two examples of how equivalent processing capability can be provided in a palm-sized form factor while consuming about half the power, which allows them to employ passive cooling. Additionally, ADLINK guarantees these GPU cards a much longer commercial lifespan than conventional graphics subsystems.
ADLINK's EVA solutions support a wide range of camera types, such as USB cameras, Ethernet cameras, and GMSL II cameras, so an AMR's designers can select the optimal cameras for the target application.
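As an illustrative sketch, the snippet below shows how the three camera types might be opened with OpenCV. The RTSP URL, GStreamer pipeline, and resolution are placeholder assumptions that depend on the actual camera, drivers, and platform build, not parameters of ADLINK's products.

```python
import cv2

def open_usb_camera(index=0):
    return cv2.VideoCapture(index)                # USB (UVC) camera by index

def open_ethernet_camera(url="rtsp://192.168.1.42/stream1"):
    return cv2.VideoCapture(url)                  # IP camera over RTSP

def open_gmsl_camera(sensor_id=0):
    # GMSL cameras typically surface through a platform GStreamer source;
    # nvarguscamerasrc is the usual element on NVIDIA Jetson platforms.
    pipeline = (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        "video/x-raw(memory:NVMM), width=1280, height=720 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )
    return cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

cap = open_usb_camera()
ok, frame = cap.read()
print("captured a frame" if ok else "no camera found")
cap.release()
```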
Unlike solutions that require all of the robots to communicate via a central hub, which can lead to unacceptable latency, ADLINK uses a decentralized communications protocol that allows the robots to communicate with each other directly in real time. (The robots can also transmit video streams to edge, fog, and cloud servers for additional analysis if required.)
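ADLINK's actual wire protocol isn't described in this article, so the following is only a minimal stand-in: a hub-free publish/subscribe sketch over UDP multicast that illustrates the decentralized pattern. The group address, port, and message format are arbitrary assumptions.

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 5007  # arbitrary multicast group and port

def make_publisher():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock

def make_subscriber():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group so peer messages are delivered directly.
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return sock

# Every robot runs both roles; there is no central broker to add latency
# or act as a single point of failure. A receiving robot would call
# make_subscriber().recv(1024) in its message loop.
pub = make_publisher()
pub.sendto(b"robot-7: obstacle at bay 3", (GROUP, PORT))
```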
In the next few years, autonomous mobile robots employing edge video analysis will be deployed in myriad locations and environments to perform a vast array of tasks, all designed to make our lives better, and ADLINK will continue to be at the forefront of this exciting technology.