Analyzing your defect suite and classifying your defects before purchasing a web inspection system is a key step on the journey to zero defects. This article examines how to categorize and classify your defect suite.
Understanding your defect suite
In our first blog post we looked at the effect of minimum defect size, defect width, and line speed on the complexity and cost of an inspection system. The next step is to analyze the defect suite and understand which defects are key.
5 Steps to categorizing defects:
- Identify all the defects present where the machine vision system will be installed, including those caused upstream, such as substrate issues.
- Identify the source of each defect.
- Identify the defects that are causing customer complaints; if possible, rank them with a Pareto chart.
- Identify any measurements that, if out of specification, would result in defective product downstream.
- Identify products and categorize according to color and substrate type.
The result is a table listing each defect, its minimum acceptable size, the root cause if known, and any notes on how the substrate affects it. It is then important to start a collection process so that there is a set of samples that vendors can test. See sample test collection.
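The defect table described above can be sketched as a simple data structure. This is a minimal illustration only: the field names, defects, and values below are hypothetical, not from any particular inspection project.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DefectRecord:
    """One row of the defect table (field names are illustrative)."""
    name: str
    min_size_mm: float         # minimum size that must be detected
    root_cause: Optional[str]  # None when the cause is not yet known
    substrate_note: str        # how the substrate affects visibility

# A small hypothetical defect table.
defect_table = [
    DefectRecord("streak", 0.5, "coater blade wear", "low contrast on matte film"),
    DefectRecord("insect", 1.0, None, "visible on all substrates"),
    DefectRecord("pinhole", 0.2, "substrate issue (upstream)", "backlight needed on opaque web"),
]

# Sort by minimum detectable size so the hardest-to-detect defects come first.
defect_table.sort(key=lambda d: d.min_size_mm)
```

Keeping the table in a structured form like this makes it easy to hand the same list to every vendor during sample testing.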
Which one is the Insect?
Now you have your defect list. The next step is to categorize each defect according to class and action. The following is an example of how to classify:
- Class 1 - Critical defects that must be alarmed immediately, requiring operator action. This may mean an alarm that must be reset, with the machine stopped until the reset is complete. It may also mean placing a physical mark on the product, such as a tag, so that it can be removed downstream prior to shipping.
- Class 2 - Defects that are not an issue on their own unless a density of such defects occurs within a time frame or a defined product area. Until then each defect is displayed as a warning; when the density of that type exceeds a threshold, it is elevated to class 1.
- Class 3 - Small defects that need to be tracked for statistical purposes, but with no action to be taken.
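The three-class action logic above, including the class 2 density escalation, can be sketched in a few lines. The window length and density threshold here are illustrative values, not recommendations, and the class names and actions are hypothetical.

```python
from collections import deque
import time

class DefectActionPolicy:
    """Sketch of the class 1/2/3 action logic described above."""

    def __init__(self, window_s=60.0, density_threshold=5):
        self.window_s = window_s                  # rolling time window (illustrative)
        self.density_threshold = density_threshold
        self._class2_times = deque()              # timestamps of recent class 2 events

    def action_for(self, defect_class, now=None):
        now = time.monotonic() if now is None else now
        if defect_class == 1:
            return "alarm"                        # operator must act immediately
        if defect_class == 2:
            self._class2_times.append(now)
            # Drop events that have fallen out of the rolling window.
            while self._class2_times and now - self._class2_times[0] > self.window_s:
                self._class2_times.popleft()
            if len(self._class2_times) >= self.density_threshold:
                return "alarm"                    # density exceeded: escalate to class 1
            return "warn"
        return "log"                              # class 3: track statistics only
```

For example, with a 10-second window and a threshold of 3, the third class 2 defect in quick succession escalates to an alarm while the first two only warn.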
Each defect on the list should be classified. The next step is to assign priority according to returns: a defect may be class 1, but does it occur on all substrates, and what percentage of returns is attributed to it? This is a key exercise.
You may ask why?
- Sometimes one defect class that seems easy for a human to see can define the scale of an automated optical inspection system. A simple example is a very subtle streak. Large, low-contrast defects are easily recognized by humans because we integrate over large areas and can segment these problems; a vision system cannot do this in the same way and may require special processing techniques.
- One defect out of a list of ten may require an extra optical setup, such as another bank of cameras and lights.
So a final column needs to be added to the list. This column defines whether each defect is detected by system specification A, by A+B, by A+B+C, or by C alone. This allows you to allocate your capital where it will best earn a return. To recap, the steps to complete are:
- List of all defects
- List of all measurements
- List sources of defects
- List substrates and colors
- List minimum acceptable sizes
- Classify according to action
- Determine minimum equipment requirements for each defect
- Understand which optical setup is required for each defect
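The final column, which optical setup each defect needs, directly drives the capital decision. Here is a minimal sketch of that allocation step; the defect names, the "A"/"B"/"C" setup labels, and the mapping between them are all hypothetical.

```python
# Hypothetical mapping from each defect to the optical setup(s) it needs.
# "A", "B", "C" stand for the system specifications discussed above.
detected_by = {
    "streak":  {"A"},
    "pinhole": {"A", "B"},   # needs both the base cameras and a second bank
    "insect":  {"C"},
    "gel":     {"B"},
}

def setups_needed(defects):
    """Return the union of optical setups required to cover the given defects."""
    required = set()
    for d in defects:
        required |= detected_by[d]
    return required

# Budgeting question: which hardware does the top-priority list actually justify?
priority_defects = ["streak", "pinhole"]
```

If the priority list justifies only setups A and B, setup C can be deferred until the defects it covers start showing up in returns.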
A little extra for those techies
Classification is always the most complex part of specifying a vision system. The ability to classify often depends on the number of camera angles, the lighting used, and the ability of the imaging algorithm to extract the appropriate information. To classify defects, they must first be segmented from the background, which is done using a variety of image processing techniques.
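To make segmentation concrete, here is a deliberately naive adaptive-threshold sketch: flag pixels that deviate strongly from their local mean. The window size and the `k` multiplier are illustrative tuning parameters, and a production system would use much faster filtering than this box-filter loop.

```python
import numpy as np

def segment_defects(image, window=5, k=3.0):
    """Naive adaptive threshold: flag pixels far from the local mean.

    `image` is a 2-D float array; `window` and `k` are illustrative
    values, not settings from any specific inspection system.
    """
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    # Local mean via a simple box filter (slow but easy to follow).
    local_mean = np.zeros_like(image)
    for dy in range(window):
        for dx in range(window):
            local_mean += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    local_mean /= window * window
    # A pixel is "defect" when its deviation from the local mean is an outlier.
    deviation = np.abs(image - local_mean)
    return deviation > k * deviation.std()
```

Because the threshold adapts to the local background, slow brightness variations across the web are ignored while small, sharp anomalies are flagged.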
Print inspection systems use a golden-template comparison, while surface inspection systems normally use adaptive thresholding. The image is typically passed through enhancement filters, a threshold is applied, and the resulting image with segmented areas of interest is fed to a feature extraction tool. Key features include location, width, height, intensity, perimeter analysis, and elongation factors; many more are calculated as well. This data is then fed to a classification algorithm, either a rule-based system or a self-learning statistical solution such as k-means. Deep learning or neural techniques are also used, but they can be slow to set up and complex to run in real time, as they require large amounts of CPU/GPU resources.
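The feature-extraction and rule-based-classification stages can be sketched together. This toy version assumes one segmented defect per binary mask (a real system would label connected components first), and the rule thresholds and class names are illustrative only.

```python
import numpy as np

def extract_features(mask):
    """Compute a few of the features named above from a binary defect mask.

    Assumes a single segmented defect; a real pipeline would run
    connected-component labeling first.
    """
    ys, xs = np.nonzero(mask)
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return {
        "area": int(mask.sum()),
        "width": width,
        "height": height,
        "elongation": max(width, height) / min(width, height),
    }

def classify(features):
    """Toy rule-based classifier; thresholds and names are hypothetical."""
    if features["elongation"] > 5:
        return "streak"      # long and thin
    if features["area"] < 10:
        return "pinhole"     # tiny and compact
    return "blob"            # everything else
```

A k-means or deep-learning classifier would consume the same feature vectors (or raw image patches); the rule-based version is simply the easiest to set up and to explain to operators.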