Computer Vision in Manufacturing
A long read on the basics of computer vision and its impact on manufacturing businesses. We will explore what computer vision is, why it is needed in manufacturing, and what its advantages (and disadvantages) are. Finally, we will look into specific examples of computer vision in manufacturing.
Manufacturing is one of the most important sectors of the US economy. According to the study The Manufacturing Footprint and the Importance of U.S. Manufacturing Jobs, it generated $2.1 trillion in GDP, with gross output of almost $6 trillion, in 2013. The industry employed 30 million people in direct and indirect jobs, supporting around 22% of the US population.
Yet, manufacturing has been in a steady decline for over 40 years. Both domestic policy failures and the rise of international competitors, first Japan and Germany in the 1970s and later the East Asian "tigers" and China, undermined the global standing of US manufacturing.
In recent years, however, the advance of automation and robotics powered by artificial intelligence and machine learning, as well as by their "niche" applications such as computer vision, image recognition, and pattern recognition, has enabled US manufacturers to regain some of their international competitiveness.
What Is Computer Vision?
Computer vision is one of the key technologies (alongside AI and ML) that enable Industry 4.0. Business leaders should pay close attention to computer vision in manufacturing to drive efficiencies and gain an edge over the competition.
Simply put, computer vision is a field of study (within the wider field of artificial intelligence) that develops techniques to help machines "see" in the human sense of the word; i.e. figure out what is depicted in digital images. Look deeper, however, and it becomes evident that the technology overlaps heavily with machine vision and image processing, and draws from a variety of disciplines.
Historically speaking, computer vision was born back in 1966 when a group of MIT staff started The Summer Vision Project, whose aim was "to construct a system of programs which will divide a vidisector picture into regions such as likely objects, likely background areas, and chaos." Basically, they connected a camera to a primitive computer and tried to teach the system to describe "what it sees."
Since then, computer vision has quickly developed into a viable, sophisticated technology that is powered by statistical learning techniques, and also incorporates AI, machine learning, and, most importantly, deep learning.
Technology Behind Computer Vision
The researchers at MIT who first attached a camera to a computer thought that "a machine that sees" was within a few years' grasp. They were badly mistaken: it turned out that we, humans, could not fully explain our own ability to see things and, thus, could not efficiently re-engineer it for machines.
It took at least four decades for machine vision to evolve from simple visual processing algorithms to 3D modeling and image processing, and on to machine/deep learning-powered vision systems.
The development of computer vision systems accelerated in 2012 when a convolutional neural network (CNN) called AlexNet successfully competed in the ImageNet Large Scale Visual Recognition Challenge. That event became a milestone in the evolution of machine vision systems, and it signified the transition from “machines that follow guidelines” to “machines that figure out what they see on their own.”
Compared to previous computer vision techniques, CNNs not only learn from data, but automatically discover the optimal features of what they see. Then, by combining these learned features, they identify parts of objects and, eventually, entire objects in images.
A typical visualization shows how a deep neural network first identifies lines, then constructs more complex features of a car, and finally figures out that it "sees" a car in the image. All of this happens hierarchically, in and across multiple layers of artificial neurons.
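To make the first step of that hierarchy concrete, here is a minimal sketch in plain Python of what the lowest layer of a CNN does: sliding a small filter over an image to respond to simple features such as vertical edges. The tiny image and the hand-written kernel are illustrative; in a real CNN the filter weights are learned from data.

```python
# Minimal 2D convolution: a vertical-edge filter slid over a tiny grayscale image.
# In a real CNN the kernel weights are learned, not hand-written like here.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny image: dark on the left (0), bright on the right (1) -> one vertical edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A classic vertical-edge kernel: responds where brightness changes left-to-right.
kernel = [
    [-1, 1],
    [-1, 1],
]

feature_map = convolve2d(image, kernel)
print(feature_map)  # -> [[0, 2, 0], [0, 2, 0]]: strongest response at the edge
```

Deeper layers repeat the same operation on the feature maps produced below them, which is how simple edge detectors compose into detectors for parts and whole objects.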
Beyond the rise of sophisticated CNNs (and the AI revolution in general), other fundamental factors also contribute to the growth of machine vision technology:
Easier access to cloud computing resources
Availability of high definition imaging (e.g. 4K & 10K formats)
Evolution of data storage, transmission, and processing standards
Development of a variety of CMOS sensors and industrial IoT
Accelerating transition towards Industry 4.0
Note: The use of deep learning and CNNs in machine vision has generated a great deal of hype lately. While neural networks are good at classification tasks, identifying characteristics learned from a set of training images, data quality issues and the lack of time and resources to support a full-scale machine vision project undercut their competitiveness. Cheaper and simpler machine learning solutions that can be integrated with embedded vision systems and surveillance cameras offer a viable alternative.
When and Where Is Computer Vision Needed?
Computer vision can be applied and implemented in pretty much any industry in which (a) there is a large number of tasks that can be automated; (b) superlative accuracy of results is required; (c) the cost of human error is high enough to justify investment in technology.
Apart from manufacturing, several industries fit the description:
In healthcare, medical imaging systems help doctors diagnose patients at a larger scale through automation. They must be highly accurate and cover a wide range of conditions to prevent error.
In the military, CV systems are used not only to identify enemies on the battlefield but also to guide missiles and drone strikes. Obviously, military systems must be superbly accurate to avoid civilian casualties.
In agriculture, given how labor-intensive the work is, automation is highly justified. Mounted on drones, intelligent cameras help farmers monitor the growth and yield of crops to water, cultivate, and harvest more efficiently.
Another honorable mention is the automotive industry. Most automotive manufacturers are investing heavily in the development of self-driving cars, specifically those powered by lidar technology. By contrast, Elon Musk's Tesla cars are equipped with eight CV cameras that provide 360 degrees of visibility around the car. Tesla's vision systems are more challenging to build and train (since they need huge amounts of data covering every situation on the road), but the company argues they are also more efficient than lidar.
At this point, it may seem that a considerable portion of computer vision systems are developed for the sole purpose of substituting a human worker, be it a radiologist, a defect inspector at a local factory, or a security officer at the airport. In reality, though, AI-powered machine vision solutions do not just assist workers in what they do; they can drastically improve the results of their work.
Let's say your business employs ten defect inspectors. They are good at what they do, finding defects and dealing with rejected parts, but their operations are not scalable. If you want to produce more, you will have to hire more inspectors.
Other problems appear as well.
Humans are prone to making mistakes. According to a Juran study of the effectiveness of manual visual inspection, 100% inspection is only about 87% effective. In practice, this means that organizations need to check every part at least three times to approach full defect coverage: three independent 87%-effective passes catch roughly 99.8% of defects, still short of 100%. The inefficiency of reviewers may be attributed to a wide range of so-called human factors that cover individual traits, environmental and organizational specifics, and social aspects.
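The arithmetic behind repeated inspection passes can be sketched as follows, assuming each pass is an independent check with the same ~87% single-pass effectiveness (a simplification, since real inspectors tend to miss the same hard-to-see defects):

```python
# Probability that a defect is caught after n independent inspection passes,
# each with the same single-pass effectiveness (Juran's ~87% for manual inspection).

def cumulative_catch_rate(single_pass, passes):
    """Chance a defect is caught by at least one of `passes` inspections."""
    return 1 - (1 - single_pass) ** passes

for n in range(1, 4):
    print(n, round(cumulative_catch_rate(0.87, n), 4))
# 1 -> 0.87
# 2 -> 0.9831
# 3 -> 0.9978  (close to, but never exactly, 100%)
```

Note that the curve flattens quickly: each additional pass adds less coverage, which is why piling on more human inspectors is an expensive way to chase the last fraction of a percent.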
Humans are biased. The human eye not only fails to catch errors and defects as accurately as machines, but it also suffers from various optical illusions. On top of that, eyesight is not as precise as machine vision: our capacity to measure depends on comparison (e.g. place two identical matchboxes on the table; then put one next to a large book and the other next to a small book, and the first matchbox will look smaller, the second larger).
The Role of Computer Vision in Manufacturing
Manufacturing is the sector that offers the most potential for application of machine vision.
For one thing, it is highly susceptible to automation. According to McKinsey, workers in manufacturing spend almost one-third of their time performing physical activities or operating machinery in a predictable environment, i.e. performing tasks that can be automated. Moreover, most activities in manufacturing, be it predictable or unpredictable physical work or more complex check-and-test tasks, require a high degree of accuracy. Errors at any stage of production are costly, and they cause defects, scrap, rework, and customer dissatisfaction.
And finally, as a technically advanced and ROI-intensive field, manufacturing has always fueled investment into new tools that can increase production, reduce cycle times, improve quality, and cut manufacturing lead times.
With the rapid transition towards Industry 4.0, computer vision has joined other technologies, from industrial IoT and wearables to predictive analytics and robot interfaces, in the workshops of smart factories.
In the industrial setting, computer vision augments human vision. While the latter is ideal for assessing and interpreting unstructured visual data, machines excel at speed, accuracy, and repeatability.
The advantages of machine vision are most noticeable at the visual inspection stage of production. Not only can computers simultaneously check tens of thousands of items on a production line, but they can also "see" defects too small for the human eye. No wonder industrial quality assurance is moving away from manual inspection towards automated optical inspection (AOI) and CV solutions powered by AI and deep learning.
Overall, computer vision brings about such substantial improvements as:
Better production agility and flexibility
Faster equipment setup time
Improved process control
Tighter inventory control
Reduced scrap rate
Lower production costs
Bear in mind, however, that computer vision has its drawbacks, too. First, CV solutions, specifically those with deep learning under the hood, are hard to build and support, which explains their price tag. Second, CV systems learn from data and also collect data to operate, which raises the issue of privacy (e.g. facial recognition technology).
Use Cases of Computer Vision in Manufacturing
Computer vision and machine vision are part of a wide variety of AI solutions powered by either machine learning or deep learning. Let’s have a closer look at some of them.
#1 Predictive Maintenance
Organizations are increasingly moving away from preventive to predictive maintenance, because the latter is more effective and cost-efficient; i.e. it allows them to proactively prevent breakdowns and to reduce downtime and material and labor costs. The fundamental difference is that predictive maintenance is administered only when it is needed, while preventive maintenance is scheduled at regular intervals no matter what.
For predictive maintenance systems to work, however, they need data about the current state of equipment and machinery, and this is where computer vision enters the scene. In combination with industrial IoT, high-resolution cameras collect the required data and push it to a local on-premises system or to the cloud for processing and analysis. Algorithms check and compare the data against specific "failure" patterns and, if any signs of failure are detected, alert engineers to schedule a maintenance session.
In such a system, computer vision captures images both of operating machinery and of products on the production line; any defects identified in the products also signal that something is wrong upstream. Proactive reaction to potential failures makes predictive maintenance and predictive analytics some of the major techniques facilitating the transition to Industry 4.0.
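The alerting step of the pipeline described above can be sketched in a few lines. The metric names and thresholds here are hypothetical; in a real deployment they would be derived from camera/IoT data and learned from historical failure patterns.

```python
# Minimal sketch of the alerting step in a predictive-maintenance pipeline.
# Metric names and thresholds are illustrative, not from any real system.

FAILURE_PATTERNS = {
    "bearing_vibration_mm_s": 7.1,   # alert above this vibration level
    "motor_temperature_c": 95.0,     # alert above this temperature
}

def check_readings(readings):
    """Compare current readings against failure thresholds; return alert messages."""
    alerts = []
    for metric, threshold in FAILURE_PATTERNS.items():
        value = readings.get(metric)
        if value is not None and value > threshold:
            alerts.append(f"{metric}: {value} exceeds {threshold}")
    return alerts

alerts = check_readings({"bearing_vibration_mm_s": 8.4, "motor_temperature_c": 80.0})
print(alerts)  # only the vibration reading triggers an alert
```

Production systems replace the fixed thresholds with learned models, but the shape of the loop, compare current state against known failure signatures and escalate, stays the same.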
#2 Defect Detection & Inspection
Other core use cases are defect detection and defect inspection. Defects and production errors are not just expensive; they are progressively expensive: the later in production a defect is detected, the costlier it becomes to fix. In practical terms, catching errors early pays for itself many times over.
As we have mentioned previously, computer vision systems greatly surpass humans in speed, accuracy, and repeatability. They catch defects that are not visible to the naked eye, faster and at a greater scale than any team of trained QA specialists.
Today, enterprises are abandoning traditional methods of defect detection (manual inspection and AOI) and moving towards computer vision solutions, either those powered by sensors and deep learning (more advanced and costly) or those that combine high-resolution cameras and ML algorithms in the cloud. In both cases, captured images and other data are processed by pre-trained algorithms that identify defects by (a) matching the data against a "list" of previously identified defects, or (b) flagging an entirely new defect, which DL systems can do.
Here is how. Say you run a huge steelworks that produces rolled steel. You need to detect a wide range of defects before rolls are shipped: roll marks, zipper cracks, wavy edges, edge cracks, etc. This is what human quality inspectors usually do. Alternatively, you could catch these defects at earlier stages of production with a CV solution for steel defect detection. The latter approach is not just more practical; it also greatly facilitates the final checks and tests by human inspectors.
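The "match against a list of known defects" strategy can be illustrated with a toy sketch. The defect signatures below are made-up three-number feature vectors; a real system would compare CNN embeddings of the defect image, but the nearest-match logic is the same.

```python
# Sketch of defect matching: compare a feature vector extracted from a defect
# image against a catalog of known defect "signatures" (hypothetical values).
import math

KNOWN_DEFECTS = {
    "roll_mark":    [0.9, 0.1, 0.2],
    "zipper_crack": [0.1, 0.8, 0.3],
    "wavy_edge":    [0.2, 0.2, 0.9],
}

def classify(features, threshold=0.5):
    """Return the closest known defect, or flag a potentially new defect type."""
    best_name, best_dist = None, float("inf")
    for name, signature in KNOWN_DEFECTS.items():
        dist = math.dist(features, signature)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # If nothing is close enough, treat it as a new, previously unseen defect.
    return best_name if best_dist <= threshold else "unknown_defect"

print(classify([0.85, 0.15, 0.25]))  # close to the roll_mark signature
print(classify([0.5, 0.5, 0.5]))     # far from everything -> unknown_defect
```

The `unknown_defect` branch is what separates deep-learning systems from rule-based AOI: flagged unknowns can be reviewed by humans and folded back into the catalog as a new class.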
#3 PPE Detection & Workplace Safety
Manufacturers are expected to ensure safety and security in the working environment. Not only do they face massive fines from OSHA if any fatal or non-fatal injuries occur to their employees, but they also have to cover all direct and indirect costs, including damage expenses, medical compensation, indemnity payments, loss of productivity, overtime costs, equipment damage, product and material damage, and more. No wonder organizations aim to prevent injuries rather than react to them.
Personal protective equipment (PPE) and a variety of safety gear are at the core of safety in the workplace. PPE protects workers from burn injuries, toxic exposure, trips and falls, as well as from hazardous contacts with objects and equipment — the most common worker injuries in manufacturing.
It takes time and resources to enforce PPE compliance, however. In many cases, workers treat PPE as a nuisance, simply because it can be uncomfortable or ill-fitting. Rigorous training and safety engineers help ensure that workers wear protective gear. Alternatively, this task can be outsourced to AI and computer vision.
A PPE detector for employee safety is an intelligent system that monitors the factory floor in real time to identify workers who are missing PPE or not wearing it properly. The identified PPE includes coats, protective glasses, gloves, masks, and hard hats. Whenever a violation is found, the system alerts safety engineers to take action. It is a perfect solution for manufacturers who want to scale security and safety while helping safety engineers work more efficiently.
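The compliance check behind such a detector reduces to a set comparison. In the sketch below, the detection step (which items a worker is actually wearing) would come from an object-detection model; here its output is stubbed as a set of labels.

```python
# Sketch of the compliance check in a PPE detector. The detected items would
# come from an object-detection model; here they are stubbed for illustration.

REQUIRED_PPE = {"coat", "protective_glasses", "gloves", "mask", "hard_hat"}

def find_violations(detected_items):
    """Return the PPE items a worker is missing, sorted for stable reporting."""
    return sorted(REQUIRED_PPE - set(detected_items))

missing = find_violations({"coat", "gloves", "mask"})
print(missing)  # -> ['hard_hat', 'protective_glasses']: alert safety engineers
```

In practice the check runs per worker per camera frame, with some debouncing so a briefly occluded hard hat does not trigger a false alert.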
#4 Package Inspection
If your website is the face of your company, packaging is the face of your product. The proper look of your products is key to their marketability, which affects customer confidence and either damages or improves company reputation. This is especially important in industries such as food and beverages, electronics, and retail goods.
Package inspection is a labor-intensive task that has traditionally required human intervention; i.e. each and every box should be inspected for integrity, completeness, labelling, and any signs of defects.
With the advance of AI and machine learning, however, manufacturers can now automate a significant portion of human checkers' work at every stage of production.
For instance, high-precision cameras mounted on production lines of a pharmaceutical company can inspect drug blister packs for integrity, count, and completeness. When packed, other cameras can check the packaging for labelling, barcodes, best before dates, and more.
In this example, the cameras are powered by machine learning and computer vision. Images captured from CCTV video streams are pushed over the internet to the cloud (or to an on-premises system) for processing and analysis. Any anomalies, be it improper container forming, incorrect fill level, or wrong count, are detected and labelled as defective for corrective action.
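The anomaly-labelling step reduces to checking measurements against a production spec. The field names and tolerances below are hypothetical; real values would come from the product specification.

```python
# Sketch of the anomaly check in automated package inspection.
# Field names and tolerances are illustrative (a hypothetical blister-pack line).

SPEC = {
    "fill_level_ml": (495, 505),  # acceptable fill range
    "pill_count": (30, 30),       # exact count required
}

def inspect_package(measurements):
    """Return a list of out-of-spec findings; an empty list means the package passes."""
    defects = []
    for field, (low, high) in SPEC.items():
        value = measurements[field]
        if not (low <= value <= high):
            defects.append(f"{field}={value} outside [{low}, {high}]")
    return defects

print(inspect_package({"fill_level_ml": 500, "pill_count": 30}))  # [] -> passes
print(inspect_package({"fill_level_ml": 480, "pill_count": 29}))  # two defects
```

The measurements themselves (fill level, count, label presence) are what the vision model extracts from each frame; once extracted, the pass/fail decision is this simple.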
#5 Product Tracking & Tracing
In manufacturing, seemingly simple errors like using wrong materials or applying wrong processes to materials can cause losses of millions of dollars a day. It is critical for manufacturers to track-and-trace; i.e. to be able to follow specific parts and products throughout the manufacturing process, from raw materials to the shipment of final products.
Historically, the logistics challenges associated with tracking and tracing products prevented manufacturers from effectively protecting customers from counterfeits and made recalls a challenge of their own. More than that, inefficient track-and-trace causes massive direct and indirect damages in the first place.
Advanced product tracking and tracing systems, used at smart factories, combine IoT, barcoding, and CV technologies. Every part of a product, as well as every carton and pallet, gets a unique identifier at each stage of production and shipment. In this ecosystem, AI and computer vision help identify items by reading IDs and barcodes, ensuring better visibility and transparency of operations. Overall, the application of AI enables manufacturers to improve inventory management and to drive efficiencies in compliance and supplier checks.
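The bookkeeping side of track-and-trace can be sketched as an append-only history per item ID. The IDs and stage names below are illustrative; the scanning itself would be done by a CV/barcode reader at each station.

```python
# Sketch of track-and-trace bookkeeping: every scanned ID (read by a CV or
# barcode system at each station) is appended to that item's history,
# producing a full production trail. IDs and stage names are illustrative.
from collections import defaultdict

history = defaultdict(list)

def record_scan(item_id, stage):
    """Append a production stage to the item's trail."""
    history[item_id].append(stage)

for stage in ["raw_material_intake", "assembly_line_2", "qa_check", "shipping"]:
    record_scan("PART-0001", stage)

print(history["PART-0001"])
# -> ['raw_material_intake', 'assembly_line_2', 'qa_check', 'shipping']
```

With such a trail per part, a recall becomes a query ("which shipped units contain parts from this batch?") rather than a forensic investigation.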
#6 Access Control & Monitoring
For manufacturers, it is critical that their facilities operate smoothly and continuously. Even a seemingly insignificant bottleneck can desynchronize production cycles, cause stoppages, and disrupt inventory and supply chain management. Sometimes such bottlenecks are caused by the physical limitations of access control.
In any industry, organizations should be able to (a) let workers through turnstiles; (b) check their IDs for security purposes; (c) do it as quickly as possible so as not to disrupt production cycles. Fortunately, these goals can be easily achieved with facial recognition technology.
An automated safety and access monitoring solution enables organizations not only to simplify workers' access to the facility, but also to monitor and track them on the shop floor. Workers are identified and granted access based on their qualifications, access priority, working hours, etc. This allows supervisors to improve security and enforce accountability while spending less time on surveillance.
Other notable use cases of computer vision in manufacturing: Additive manufacturing (also known as 3D printing), barcode reading, product assembly, supply chain optimization, facial recognition, object identification and avoidance.
Among the many technologies that fall under the umbrella of AI, computer vision (machine vision) is one with the highest and broadest potential for real-world applications like medical image analysis, facial recognition, video surveillance, and AR/VR. The technology proves equally useful in manufacturing; in fact, it is an essential part of the ecosystem of technologies driving the transition to Industry 4.0.
Manufacturers across the globe are racing to take advantage of AI-powered computer vision solutions to automate manual, routine tasks and to empower employees to work more efficiently. Manufacturers in the US are well positioned to reinvigorate the nation's manufacturing potential through AI, machine learning, and computer vision.
There are many computer vision use cases they can start with to enhance the manufacturing process, including but not limited to predictive maintenance, defect detection, defect inspection, PPE compliance and security monitoring, product and package inspection, product tracking and tracing, physical access control and monitoring, additive manufacturing, and barcode reading.
At VITech Lab, we strive to help businesses select appropriate use cases to start AI/ML transformation — those that fit an organization’s business goals and technology environment. Interested to learn more about our Computer Vision Solutions? Contact us at firstname.lastname@example.org or fill out the form here.
Thank you for reading!