Design machine vision into your embedded system
April 06, 2016
Machine vision systems from companies like Cognex, Keyence, Banner, and others are used to check the quality of parts that an assembly line produces. In a consumer role, computer vision can be used in security systems, to unlock a phone, and perhaps in the future to help drive your car.
Cognex, one of the leaders in industrial vision, introduced its first industrial optical character recognition (OCR) system, the DataMan, in 1982. Amazing for its day, the system prompted the company's founders to celebrate with a champagne toast when it managed to read the number 6, a feat that took 90 seconds. That company is now worth billions of dollars, and it, along with its competitors, can perform far more complicated operations in a fraction of a second.
As an engineer, I’ve been involved with computer vision, using it to measure hole sizes, scan barcodes, and ensure that all the parts necessary for an assembly are present. They’re amazing systems, but given their complexity and expense (often thousands of dollars once the necessary lighting and fixtures are included), they can be overused. “Putting a camera” on something isn’t as simple as it sounds, and sometimes a physical gauge should at least be considered.
Along with the advances in commercial vision systems, capable computers and cameras have come down in price to the point where you don’t need a team of MIT researchers, or even a manufacturing engineer, to set one up. With tools such as OpenCV (Open Source Computer Vision) and the Microsoft Kinect software development kit, anyone with access to a webcam can try their hand at vision applications.
People have responded to this new opportunity in droves, and when Microsoft put on an accelerator program for the Kinect in 2012, it received almost 500 responses. Ideas included applications in the medical industry, 3D modeling, retail, and one with the aim of making any surface interactive.
OpenCV can be used from C++, Python, and Java, and can manipulate and analyze video and still images from virtually any source. You can see how to install and use it under Python in a YouTube video. As with the Kinect, potential applications go well beyond quality checks in a factory setting. It’ll be exciting to see what people come up with using this technology.
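To give a sense of how approachable this has become, here is a minimal sketch in Python. It assumes OpenCV has been installed (for example via pip install opencv-python) and that a webcam is available at device index 0; the edge-detection thresholds are arbitrary placeholders, not values from any particular application. It simply grabs webcam frames and runs Canny edge detection, the kind of basic building block a simple presence or measurement check might start from.

# Minimal OpenCV sketch: live webcam feed -> grayscale -> Canny edges.
# Assumes opencv-python is installed and a webcam exists at index 0.
import cv2

cap = cv2.VideoCapture(0)              # open the default webcam
if not cap.isOpened():
    raise RuntimeError("Could not open webcam")

while True:
    ok, frame = cap.read()             # grab one BGR frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # thresholds chosen arbitrarily here
    cv2.imshow("edges", edges)         # show the edge map in a window
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

From a starting point like this, the same library offers contour finding, template matching, and barcode or feature detection, which is why hobbyists and engineers alike can now prototype vision ideas in an afternoon.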
On the other hand, in a factory setting at least, engineers are usually hesitant to embrace new technologies. If there’s a long learning curve, meaning lost production, it’s tough to justify an unknown system. One also has to think about the potential need for long-term and outside support.
So whether you work with new or traditional vision systems, be sure to document your work. The next person to take over your job will be grateful. Even better, if you’re in a situation that allows you to publish your work, it could be a lifesaver for the startup searching the Internet for a solution to a problem you’ve already solved!
Jeremy S. Cook is a freelance tech journalist and engineering consultant with over 10 years of factory automation experience. An avid maker and experimenter, he shares his exploits on Twitter, @JeremySCook.