Embedded Systems with Artificial Intelligence
February 22, 2019
Machine vision is one of the most important technologies for automating production processes in the context of Industry 4.0. At the same time, advanced deep learning (DL) methods based on artificial intelligence (AI) are gaining importance.
Current technological trends such as Industry 4.0 (also known as the Industrial Internet of Things) and the smart factory are changing industrial value creation processes at a profound level, bringing a higher degree of digitization, connectivity, and automation. All involved components, including machines, robots, transfer and handling systems, sensors, and image acquisition devices, are consistently networked and communicate with one another via a variety of protocols. Innovative trends in robotics are also changing the face of industrial production. A new generation of smaller, more compact, and more mobile robots is shaping the image of highly automated assembly halls. Collaborative robots (cobots) share certain tasks with their human colleagues, work closely alongside them, and often even hand off workpieces to one another. In addition, cobots can be quickly and flexibly retooled, enabling them to be used for a variety of production tasks.
Machine vision has become an indispensable part of this universally automated production scenario. The technology plays a key role here: a number of image acquisition devices – such as cameras, scanners, and 3D sensors – positioned at different locations continuously record the production processes. Integrated machine vision software then processes the resulting digital image data and makes it available for numerous applications in the production chain. The software can, for example, unambiguously identify many different objects based on optical features and precisely position and align workpieces. The technology also supports defect inspection: defective products are reliably identified and automatically rejected. As the “eye of production,” machine vision comprehensively monitors the entire production situation, thus making processes safer and more efficient. This applies in particular to the interplay among cobots and their interactions with humans.
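To make this concrete, the following is a minimal sketch of a classic, rule-based inspection step of the kind such software automates. It uses Python with OpenCV; the image file, threshold, and minimum defect area are hypothetical values chosen for illustration, not part of any specific product.

```python
# Minimal rule-based inspection sketch: segment dark anomalies
# (e.g., scratches) on a bright surface and reject the part if
# any anomaly exceeds a minimum area. All parameters are illustrative.
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Pixels darker than the threshold are treated as potential defects.
_, mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)

# Group defect pixels into connected regions and measure their areas.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defective = any(cv2.contourArea(c) > 50.0 for c in contours)
print("reject part" if defective else "part OK")
```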
Compact devices become increasingly widespread
At the same time, it’s becoming more and more important that machine vision algorithms also run on, and are optimized for, embedded platforms. When the two technology worlds are seamlessly integrated with one another, this is known as embedded vision. Within the context of Industry 4.0, the use of compact devices with integrated embedded software – particularly smart cameras, mobile vision sensors, smartphones, tablets, and handheld devices – is increasing significantly. These devices are proliferating in industrial environments because they are now equipped with high-performance, industrial-grade processors with long-term availability. Such processors enable them to perform complex machine vision tasks – provided that they run powerful and robust machine vision software. For this software to run error-free, it has to be compatible with, and ideally optimized for, a wide variety of embedded platforms, including the popular Arm® processor architecture. MVTec serves as an example: the latest release of its HALCON standard machine vision software, HALCON 18.11, runs on these platforms, whether 64-bit or 32-bit. The benefit for users: robust machine vision functions that normally run only on stationary PCs can also be used on compact devices.
Modern embedded vision systems are able to meet the enormous demands of digitization – above all when they’re equipped with artificial intelligence (AI). These AI-based technologies include, for example, deep learning and convolutional neural networks (CNNs). What’s so special about these methods is that they enable extremely high and robust recognition rates.
In the case of deep learning, large volumes of digital image data, such as those generated by image acquisition devices, are first used to train a CNN. During this training process, features that are typical of each particular “class” are learned automatically – including, for example, specific object properties and distinguishing features. Based on the results of training, the objects to be identified can be precisely categorized and recognized, and then assigned directly to a specific class. With deep learning technologies, objects can not only be classified but also precisely localized, and the same applies to defects.
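As a rough illustration of this training step, here is a minimal sketch in Python using PyTorch. The blog does not prescribe a framework or architecture, so the network, the image size, and the two-class setup (“good” vs. “defective”) are assumptions made for the example.

```python
# Minimal PyTorch sketch of training a small CNN classifier.
# Architecture, image size, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch (real code would iterate a DataLoader
# over labeled production images).
images = torch.randn(8, 1, 64, 64)   # batch of grayscale 64x64 images
labels = torch.randint(0, 2, (8,))   # "good" vs. "defective" labels
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```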
Using deep learning in embedded vision applications
Today, deep learning features are already being used in many embedded vision applications. What these applications have in common is that they typically generate large volumes of data and frequently involve non-industrial scenarios, such as autonomous driving. Such vehicles are equipped with numerous sensors and cameras that collect digital data on the prevailing traffic conditions. Integrated vision software analyzes the data streams in real time with the aid of deep learning algorithms. This makes it possible, for example, to recognize situations, process the information they contain, and use it to precisely control the vehicle – which is what makes autonomous driving possible in the first place. Deep-learning-based embedded vision technologies are also often used in the smart city environment. In large cities, infrastructural processes such as street traffic, lighting, and power supply are digitally networked in order to provide residents with better services. And finally, these technologies are used in smart home applications – for example, in digital voice assistants and robotic vacuum cleaners.
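The real-time analysis described above boils down to a continuous capture–preprocess–infer loop. The sketch below assumes the SmallCNN from the previous example has already been trained and saved; the camera index, checkpoint file, and preprocessing are illustrative assumptions.

```python
# Sketch of a real-time inference loop on a video stream.
import cv2
import torch

from smallcnn import SmallCNN  # hypothetical module holding the class from the training sketch

model = SmallCNN()
model.load_state_dict(torch.load("smallcnn.pt"))  # hypothetical trained weights
model.eval()

cap = cv2.VideoCapture(0)  # first attached camera
with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Match the preprocessing assumed during training: grayscale, 64x64, [0, 1].
        gray = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (64, 64))
        x = torch.from_numpy(gray).float().div(255.0)[None, None]
        pred = model(x).argmax(dim=1).item()
        # ...act on the prediction, e.g., trigger an alert or a reject actuator
cap.release()
```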
Automation of machine vision processes
So what are the advantages of deep learning technologies in the embedded and machine vision environment?
Tedious, manual feature extraction is no longer necessary. Deep learning algorithms are able to automatically learn specific distinguishing features from the training data – such as texture, color, and gray-level gradations – and weight them according to relevance. Normally, this task would have to be performed manually by trained machine vision experts, making it extremely time-consuming and costly.
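For contrast, the sketch below shows the kind of hand-engineered feature extraction that deep learning replaces: a human expert picks statistics they believe are discriminative. The specific features here (gray-level histogram, mean color, a crude texture measure) are hypothetical examples, not taken from the blog.

```python
# Illustration of the manual feature engineering that deep learning
# makes unnecessary: hand-picked statistics computed per image.
import cv2
import numpy as np

def handcrafted_features(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [16], [0, 256]).ravel()  # gray-level distribution
    mean_color = bgr_image.reshape(-1, 3).mean(axis=0)              # average B, G, R
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()                 # crude texture/sharpness measure
    return np.concatenate([hist, mean_color, [texture]])
```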
Moreover, object features are usually highly complex and almost impossible for humans to interpret, so learning the distinguishing criteria automatically from the training data saves a tremendous amount of effort, time, and money. A further benefit of deep learning is that it can also distinguish among more abstract objects, whereas traditional, manual approaches can only classify objects that can be clearly described. These abstract objects include those with complex, delicate structures or those that appear against extremely noisy backgrounds. In most cases, a human would be unable to discern any unambiguous distinguishing features in such objects.
Because training requires extremely high computing power, complex neural networks are trained on correspondingly powerful PCs with high-end graphics processors (GPUs). However, the fully trained network can then be deployed on a large number of embedded devices, which means that compact, robust embedded vision solutions can also benefit from the highest possible recognition rates.
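In practice, this train-on-PC, run-on-device split often takes the form of exporting the trained network to a portable format that a lightweight runtime on the embedded target can execute. Below is a sketch using ONNX export in PyTorch; the file names and input shape are illustrative assumptions carried over from the earlier examples.

```python
# Sketch of exporting the trained network to ONNX so a lightweight runtime
# on the embedded target (e.g., ONNX Runtime) can execute it.
import torch

from smallcnn import SmallCNN  # hypothetical module holding the class from the training sketch

model = SmallCNN()
model.load_state_dict(torch.load("smallcnn.pt"))  # weights trained on the GPU workstation
model.eval()

# A dummy input fixes the exported graph's input shape: one grayscale 64x64 frame.
dummy_input = torch.randn(1, 1, 64, 64)
torch.onnx.export(model, dummy_input, "smallcnn.onnx",
                  input_names=["image"], output_names=["logits"])
```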
Conclusion
AI-based technologies such as deep learning and CNNs are becoming increasingly important, particularly in the highly automated Industry 4.0 environment. That’s why they are now an essential component of state-of-the-art machine vision solutions. If the algorithms also run on relevant embedded platforms such as the Arm® processor architecture, the machine vision software’s entire range of AI functions can be used on compact devices.