Defect detection in high-resolution imagery using two-stage Amazon Rekognition Custom Labels models
High-resolution imagery is prevalent in today's world, from satellite imagery to drones and DSLR cameras. From this imagery, we can capture damage due to natural disasters, anomalies in manufacturing equipment, or very small defects such as those on printed circuit boards (PCBs) or semiconductors. Building anomaly detection models using high-resolution imagery can be challenging because modern computer vision models typically resize images to a lower resolution to fit into memory for training and inference. Reducing the image resolution significantly degrades, or completely destroys, the visual information relating to the defect.
One approach to overcome these challenges is to build two-stage models. Stage 1 models detect a region of interest, and Stage 2 models detect defects within the cropped region of interest, thereby maintaining sufficient resolution for small defects.
In this post, we go over how to build an effective two-stage defect detection system using Amazon Rekognition Custom Labels and compare results for this specific use case with one-stage models. Note that several one-stage models are effective even at lower or resized image resolutions, and others may accommodate large images in smaller batches.
Solution overview
For our use case, we use a dataset of PCB images with synthetically generated missing-hole defects on pins, as shown in the following example.
We use this dataset to demonstrate that a one-stage approach using object detection results in subpar detection performance for the missing-hole defects. A two-stage model is preferred, in which we first use Rekognition Custom Labels for object detection to identify the pins, and then a second-stage model to classify cropped images of the pins as either missing holes or normal pins.
The training process for a Rekognition Custom Labels model consists of several steps, as illustrated in the following diagram.
First, we use Amazon Simple Storage Service (Amazon S3) to store the image data. The data is ingested into Amazon SageMaker Jupyter notebooks, where a data scientist typically inspects the images and preprocesses them, removing any that are of poor quality (such as blurred images or images with poor lighting) and resizing or cropping them. The data is then split into training and test sets, and Amazon SageMaker Ground Truth labeling jobs are run to label both sets of images and output a train and a test manifest file. Rekognition Custom Labels uses these manifest files for training.
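For reference, Ground Truth labeling jobs emit augmented manifest files in which each line is a JSON object describing one image and its annotations. The following is a minimal sketch of how such a line could be assembled for the object detection experiments; the bucket name, label attribute name, job name, and box coordinates are hypothetical placeholders.

```python
import json

# Hypothetical bucket and label attribute names -- substitute your own.
BUCKET = "my-pcb-images"
LABEL_ATTR = "bounding-box"

def to_manifest_line(key, width, height, boxes):
    """Build one Ground Truth-style manifest line for an object detection image.

    `boxes` is a list of (class_id, left, top, box_width, box_height)
    tuples in pixel coordinates.
    """
    return json.dumps({
        "source-ref": f"s3://{BUCKET}/{key}",
        LABEL_ATTR: {
            "image_size": [{"width": width, "height": height, "depth": 3}],
            "annotations": [
                {"class_id": c, "left": l, "top": t, "width": w, "height": h}
                for c, l, t, w, h in boxes
            ],
        },
        f"{LABEL_ATTR}-metadata": {
            "objects": [{"confidence": 1} for _ in boxes],
            "class-map": {"0": "missing_hole"},
            "type": "groundtruth/object-detection",
            "human-annotated": "yes",
            "creation-date": "2024-01-01T00:00:00",
            "job-name": "labeling-job/pcb-missing-holes",
        },
    })

# One manifest line per image; write one file each for the train and test sets.
line = to_manifest_line("pcb_001.jpg", 1920, 1080, [(0, 410, 220, 36, 36)])
```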
One-stage model approach
The first approach we take to identifying missing holes on the PCB is to label the missing holes and train an object detection model to identify the missing holes. The following is an image example from the dataset.
We train a model with a dataset with 95 images used as training and 20 images used for testing. The following table summarizes our results.
**Evaluation Results**

| F1 Score | Average Precision | Overall Recall |
| --- | --- | --- |
| 0.468 | 0.750 | 0.340 |

| Training Time | Training Dataset | Testing Dataset |
| --- | --- | --- |
| Trained in 1.791 hours | 1 label, 95 images | 1 label, 20 images |

**Per Label Performance**

| Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold |
| --- | --- | --- | --- | --- | --- |
| missing_hole | 0.468 | 20 | 0.750 | 0.340 | 0.053 |
The resulting model has high precision but low recall: when the model localizes a region as a missing hole, it's usually correct, but it misses many of the missing holes present on the PCB. To build an effective defect detection system, we need to improve recall. The low performance is likely because the defects are tiny relative to the high-resolution image of the PCB, and the model has no reference for what a healthy pin looks like.
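As a sanity check, the reported F1 score is the harmonic mean of precision and recall, which shows how the low recall drags the score down despite the high precision:

$$F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.750 \times 0.340}{0.750 + 0.340} \approx 0.468$$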
Next, we explore splitting the image into four or six crops, depending on the PCB size, and labeling both healthy pins and missing holes. The following is an example of a resulting cropped image.
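As an illustration, this cropping step can be done with a few lines of Pillow. The grid dimensions and the small overlap between tiles (which reduces the chance of a pin being cut in half at a tile boundary) are assumptions for this sketch, not the exact preprocessing used in our experiments.

```python
from PIL import Image

def tile_image(path, rows, cols, overlap=0.1):
    """Split a high-resolution board image into a rows x cols grid of crops.

    Each tile is padded by `overlap` (a fraction of the tile size) on every
    side so features at tile boundaries appear whole in at least one crop.
    """
    img = Image.open(path)
    W, H = img.size
    tile_w, tile_h = W // cols, H // rows
    pad_w, pad_h = int(tile_w * overlap), int(tile_h * overlap)
    crops = []
    for r in range(rows):
        for c in range(cols):
            left = max(c * tile_w - pad_w, 0)
            top = max(r * tile_h - pad_h, 0)
            right = min((c + 1) * tile_w + pad_w, W)
            bottom = min((r + 1) * tile_h + pad_h, H)
            crops.append(img.crop((left, top, right, bottom)))
    return crops

# A 2x2 grid for smaller boards, 2x3 for larger ones.
tiles = tile_image("pcb_board.jpg", rows=2, cols=2)
```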
We train a model with 524 images for training and 106 images for testing. We keep the same PCBs in the train and test sets as in the full-board model. The results for cropped healthy pins vs. missing holes are shown in the following table.
**Evaluation Results**

| F1 Score | Average Precision | Overall Recall |
| --- | --- | --- |
| 0.967 | 0.989 | 0.945 |

| Training Time | Training Dataset | Testing Dataset |
| --- | --- | --- |
| Trained in 2.118 hours | 2 labels, 524 images | 2 labels, 106 images |

**Per Label Performance**

| Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold |
| --- | --- | --- | --- | --- | --- |
| missing_hole | 0.949 | 42 | 0.980 | 0.920 | 0.536 |
| pin | 0.984 | 106 | 0.998 | 0.970 | 0.696 |
Both precision and recall have improved significantly. Training the model on zoomed-in crops, with healthy pins available as a reference class, helped. However, recall is still 92%, meaning we would miss 8% of the missing holes and let defects go unnoticed.
Next, we explore a two-stage model approach in which we can improve the model performance further.
Two-stage model approach
For the two-stage approach, we train two models: one for detecting pins, and one for classifying whether a pin has a missing hole, using zoomed-in crops of individual pins. The following is an image from the pin detection dataset.
The data is similar to our previous experiment, in which we cropped the PCB into four or six images. This time, we label all pins, making no distinction between pins with and without missing holes. We train this model with 522 images and test with 108 images, maintaining the same train/test split as in previous experiments. The results are shown in the following table.
**Evaluation Results**

| F1 Score | Average Precision | Overall Recall |
| --- | --- | --- |
| 1.000 | 0.999 | 1.000 |

| Training Time | Training Dataset | Testing Dataset |
| --- | --- | --- |
| Trained in 1.581 hours | 1 label, 522 images | 1 label, 108 images |

**Per Label Performance**

| Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold |
| --- | --- | --- | --- | --- | --- |
| pin | 1.000 | 108 | 0.999 | 1.000 | 0.617 |
The model detects the pins perfectly on this synthetic dataset.
Next, we build the model that makes the distinction for missing holes. We use cropped images of the holes to train the second-stage model, as shown in the following examples. This model is separate from the previous models because it's a classification model, focused on the narrow task of determining whether a pin has a missing hole.
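One way to generate these pin crops at scale is to reuse the stage 1 detector's output. The following is a minimal sketch, assuming the response format of the Rekognition detect_custom_labels API, whose BoundingBox values are ratios of the overall image dimensions; the 20% margin is an arbitrary choice for illustration.

```python
from PIL import Image

def crop_detections(image_path, custom_labels, margin=0.2):
    """Crop each detected pin out of the full-board image.

    `custom_labels` is the CustomLabels list returned by
    detect_custom_labels for an object detection model.
    """
    img = Image.open(image_path)
    W, H = img.size
    crops = []
    for label in custom_labels:
        box = label["Geometry"]["BoundingBox"]
        left, top = box["Left"] * W, box["Top"] * H
        w, h = box["Width"] * W, box["Height"] * H
        # Pad the box slightly so each hole's surroundings stay visible.
        pad_w, pad_h = w * margin, h * margin
        crops.append(img.crop((
            int(max(left - pad_w, 0)),
            int(max(top - pad_h, 0)),
            int(min(left + w + pad_w, W)),
            int(min(top + h + pad_h, H)),
        )))
    return crops
```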
We train this second-stage model on 16,624 images and test on 3,266 images, maintaining the same train/test splits as the previous experiments. The following table summarizes our results.
**Evaluation Results**

| F1 Score | Average Precision | Overall Recall |
| --- | --- | --- |
| 1.000 | 1.000 | 1.000 |

| Training Time | Training Dataset | Testing Dataset |
| --- | --- | --- |
| Trained in 6.660 hours | 2 labels, 16,624 images | 2 labels, 3,266 images |

**Per Label Performance**

| Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold |
| --- | --- | --- | --- | --- | --- |
| anomaly | 1.000 | 88 | 1.000 | 1.000 | 0.960 |
| normal | 1.000 | 3,178 | 1.000 | 1.000 | 0.996 |
Again, we achieve perfect precision and recall on this synthetic dataset. Combining the pin detection model with this second-stage missing hole classification model, we can build a system that outperforms either single-stage model.
The following table summarizes the experiments we conducted.
| Experiment | Type | Description | F1 Score | Precision | Recall |
| --- | --- | --- | --- | --- | --- |
| 1 | One-stage model | Object detection model to detect missing holes on full images | 0.468 | 0.750 | 0.340 |
| 2 | One-stage model | Object detection model to detect healthy pins and missing holes on cropped images | 0.967 | 0.989 | 0.945 |
| 3 | Two-stage model | Stage 1: Object detection on all pins | 1.000 | 0.999 | 1.000 |
| | | Stage 2: Image classification of healthy pin or missing holes | 1.000 | 1.000 | 1.000 |
| | | End-to-end average | 1.000 | 0.9995 | 1.000 |
Inference pipeline
You can use the following architecture to deploy the one-stage and two-stage models that we described in this post. The following main components are involved:
- Amazon API Gateway
- AWS Lambda
- An Amazon Rekognition Custom Labels model endpoint
For one-stage models, you can send an input image to the API Gateway endpoint, followed by Lambda for any basic image preprocessing, and route to the Rekognition Custom Labels trained model endpoint. In our experiments, we explored one-stage models that detect only missing holes, and one-stage models that detect both missing holes and healthy pins.
For two-stage models, you can similarly send an image to the API Gateway endpoint, followed by Lambda. Lambda acts as an orchestrator that first calls the object detection model (trained using Rekognition Custom Labels), which generates the region of interest. The original image is then cropped in the Lambda function, and sent to another Rekognition Custom Labels classification model for detecting defects in each cropped image.
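A minimal sketch of that Lambda orchestration might look like the following. The project version ARNs and the event shape are assumptions, and the Pillow dependency would need to ship as a Lambda layer; in practice you would also add error handling and account for Rekognition's image size limits.

```python
import base64
import io
import json

import boto3
from PIL import Image

rekognition = boto3.client("rekognition")

# Hypothetical project version ARNs for the two trained models.
DETECTOR_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/pcb-pins/version/1"
CLASSIFIER_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/pcb-holes/version/1"

def handler(event, context):
    """Two-stage inference: detect pins, crop them, classify each crop."""
    # Assumes API Gateway delivers the image as a base64-encoded body.
    image_bytes = base64.b64decode(event["body"])

    # Stage 1: localize every pin on the board.
    detection = rekognition.detect_custom_labels(
        ProjectVersionArn=DETECTOR_ARN,
        Image={"Bytes": image_bytes},
        MinConfidence=50,
    )

    img = Image.open(io.BytesIO(image_bytes))
    W, H = img.size
    results = []
    for label in detection["CustomLabels"]:
        box = label["Geometry"]["BoundingBox"]
        left, top = box["Left"] * W, box["Top"] * H
        w, h = box["Width"] * W, box["Height"] * H
        crop = img.crop((int(left), int(top), int(left + w), int(top + h)))
        buf = io.BytesIO()
        crop.save(buf, format="JPEG")

        # Stage 2: classify the cropped pin as anomaly or normal.
        classification = rekognition.detect_custom_labels(
            ProjectVersionArn=CLASSIFIER_ARN,
            Image={"Bytes": buf.getvalue()},
            MinConfidence=0,
        )
        if not classification["CustomLabels"]:
            continue
        best = max(classification["CustomLabels"], key=lambda c: c["Confidence"])
        results.append({
            "box": box,
            "label": best["Name"],
            "confidence": best["Confidence"],
        })

    return {"statusCode": 200, "body": json.dumps(results)}
```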
Conclusion
In this post, we trained one- and two-stage models to detect missing holes in PCBs using Rekognition Custom Labels. We reported results for various models; in our case, two-stage models outperformed other variants. We encourage customers with high-resolution imagery from other domains to test model performance with one- and two-stage models. Additionally, consider the following ways to expand the solution:
- Sliding window crops for your actual datasets
- Reusing your object detection models in the same pipeline
- Pre-labeling workflows using bounding box predictions
About the authors
Andreas Karagounis is a Data Science Manager at Accenture. He holds a master's in Computer Science from Brown University. He has a background in computer vision and works with customers to solve their business challenges using data science and machine learning.
Yogesh Chaturvedi is a Principal Solutions Architect at AWS with a focus in computer vision. He works with customers to address their business challenges using cloud technologies. Outside of work, he enjoys hiking, traveling, and watching sports.
Shreyas Subramanian is a Principal Data Scientist who helps customers solve their business challenges using machine learning on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning to accelerate optimization tasks.
Selimcan “Can” Sakar is a cloud-first developer and Solutions Architect at AWS Accenture Business Group with a focus on emerging technologies such as GenAI, ML, and blockchain. When he isn’t watching models converge, he can be seen biking or playing the clarinet.