Automated defect detection system for manufacturing lines with 99.4% accuracy, processing 1000+ products per minute with real-time quality assessment.
A Tier 1 automotive parts supplier in the West Midlands engaged YF Studio for a five-month project (January to May 2024) to replace their manual inspection process with an automated computer vision system. The client produces precision-machined engine brackets and housings for two major OEMs, running three production lines across a single facility with output of approximately 18,000 parts per shift.
5 months (Jan – May 2024)
2 computer vision engineers, 1 integration specialist
Automotive parts manufacturing (Tier 1 supplier)
Developed a comprehensive computer vision system for automated quality control in automotive parts manufacturing. The system detects surface defects, dimensional variations, and assembly issues in real time, ensuring consistent product quality while reducing manual inspection costs.
Industrial PCs, High-res Cameras, LED Lighting Systems
PyTorch, TensorFlow, OpenCV, Scikit-learn
ResNet, EfficientNet, YOLOv7, Image Segmentation
PLC Communication, SCADA Systems, MES
The client's existing quality control relied on a team of six manual inspectors working in rotation, achieving roughly 87% defect detection rates with significant variability between shifts. A previous attempt to introduce a rule-based machine vision system (using simple thresholding and template matching) had been trialled in 2022 but was abandoned after three months — it could not cope with the natural variation in part surfaces and produced an unacceptable false rejection rate of over 12%, causing costly production stoppages.
Lighting proved to be a critical obstacle in the facility. Harsh fluorescent overhead lighting created specular reflections on machined aluminium surfaces, while shadows cast by conveyor belt mechanisms and overhead gantries varied depending on part position. Colour temperature also shifted noticeably between the three production lines due to different fixture ages and bulb types. Any viable solution had to be robust to these conditions without requiring a full lighting retrofit across the plant.
Key requirements included:
Deployed a synchronized 6-camera setup with diffused LED dome lighting at each inspection station, replacing the reliance on ambient fluorescent lighting. Dome illumination was chosen over directional ring lights because it minimises specular highlights on curved aluminium surfaces — a critical requirement identified during the site survey. Each camera captures at 5 megapixels and 60 fps, positioned at optimised angles (0°, 30°, 60° from vertical, with two lateral views) to ensure full surface coverage. We evaluated line-scan cameras as an alternative but ruled them out due to the variable conveyor speed on Line 2, which would have required complex encoder synchronisation.
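The shared-trigger acquisition step can be sketched as a software-synchronised capture across all six cameras. The `capture_frame` stub and the `cam_0`…`cam_5` names below are hypothetical stand-ins for the vendor camera SDK calls used at the actual stations.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Camera:
    name: str  # hypothetical identifier for one of the six station cameras


def capture_frame(cam: Camera, frame_id: int) -> tuple:
    # Stand-in for a vendor SDK capture call; returns (camera, frame) metadata.
    return (cam.name, frame_id)


def acquire_synchronized(cameras: list, frame_id: int) -> list:
    """Fire all cameras on a shared trigger and wait for every frame."""
    with ThreadPoolExecutor(max_workers=len(cameras)) as pool:
        return list(pool.map(lambda c: capture_frame(c, frame_id), cameras))


station = [Camera(f"cam_{i}") for i in range(6)]
frames = acquire_synchronized(station, frame_id=1)
```

In production the trigger is tied to the part-presence sensor, so all six views correspond to the same conveyor position.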
Developed custom CNN models based on EfficientNet-B3 as the backbone, selected over ResNet-50 for its superior accuracy-to-compute ratio — critical given the per-station budget constraints on inference hardware. The models were trained on 50,000 annotated images of defective and non-defective parts, covering 15 defect categories including scratches, dents, porosity, burrs, corrosion spots, and dimensional variations. The annotation was carried out over 3 weeks by a team of 2 annotators using CVAT (Computer Vision Annotation Tool), with a structured QA review pass that rejected approximately 8% of initial labels due to ambiguous defect boundaries or mislabelling. We also applied offline augmentation (rotation, brightness jitter, synthetic shadow overlays) to improve robustness to the lighting variability observed across the three production lines.
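The offline augmentations can be illustrated with a toy grayscale implementation; in practice these transforms ran over the full annotated dataset via an image-augmentation library, and the parameter values below are illustrative only.

```python
import random


def brightness_jitter(img, max_delta=30.0, rng=random):
    """Shift all pixel intensities by one random delta, clamped to [0, 255]."""
    delta = rng.uniform(-max_delta, max_delta)
    return [[min(255.0, max(0.0, p + delta)) for p in row] for row in img]


def shadow_overlay(img, darken=0.5, rng=random):
    """Darken a random vertical band to mimic gantry and conveyor shadows."""
    width = len(img[0])
    x0 = rng.randrange(width)
    x1 = rng.randrange(x0, width) + 1  # band covers columns [x0, x1)
    return [[p * darken if x0 <= x < x1 else p for x, p in enumerate(row)]
            for row in img]


# Toy 4x4 grayscale "image" at mid-grey
img = [[128.0] * 4 for _ in range(4)]
augmented = shadow_overlay(brightness_jitter(img), darken=0.5)
```

Rotation was handled the same way as a standard geometric transform; the shadow overlay is the non-standard piece, added specifically because of the shadow variability observed on the lines.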
Implemented a high-performance processing pipeline using TensorRT for model inference on NVIDIA T4 GPUs, achieving sub-60ms end-to-end latency per part. TensorRT was chosen over ONNX Runtime because it delivered approximately 35% lower latency on the target hardware in our benchmarks. The pipeline is structured as an asynchronous queue to decouple image acquisition from inference, preventing camera frame drops during peak throughput.
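The acquisition/inference decoupling follows a standard bounded producer-consumer pattern. The sketch below uses Python's standard library in place of the real camera and TensorRT calls; `infer` is a caller-supplied stub, not the production inference function.

```python
import queue
import threading

_SENTINEL = object()  # signals the end of the frame stream


def run_pipeline(frames, infer, maxsize=8):
    """Decouple acquisition (producer) from inference (consumer) via a bounded queue."""
    q = queue.Queue(maxsize=maxsize)
    results = []

    def producer():
        for frame in frames:
            q.put(frame)  # blocks if inference falls behind, applying backpressure
        q.put(_SENTINEL)

    def consumer():
        while True:
            item = q.get()
            if item is _SENTINEL:
                break
            results.append(infer(item))

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Blocking on a bounded queue is one of several overflow policies; dropping stale frames is another, and the production system batches queued frames before each GPU call.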
Created a hierarchical classification system that categorises defects by severity (critical, major, minor) and type, enabling automated sorting into pass, rework, and reject bins. The severity thresholds were calibrated in collaboration with the client's quality engineering team to align with their existing IATF 16949 nonconformance criteria, ensuring the automated system's decisions were directly comparable to historical manual inspection records.
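The severity-to-bin routing reduces to a worst-defect-wins lookup. The mapping below (minor defects pass, majors go to rework, criticals are rejected) is illustrative; the real thresholds were taken from the client's IATF 16949 nonconformance criteria.

```python
# Illustrative severity ordering and bin mapping (real thresholds are client-specific)
_ORDER = {"minor": 0, "major": 1, "critical": 2}
_BIN = {"minor": "pass", "major": "rework", "critical": "reject"}


def route_part(defect_severities):
    """Route a part to a sorting bin based on its worst detected defect."""
    if not defect_severities:
        return "pass"  # no defects detected on any view
    worst = max(defect_severities, key=_ORDER.__getitem__)
    return _BIN[worst]
```

Because one part is seen by six cameras, `defect_severities` aggregates detections across all views before the bin decision is made.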
The system processes images through multiple stages:
Implemented a hybrid approach combining:
While the system achieves strong overall performance, several known limitations were documented during acceptance testing:
Detection Accuracy: 99.4% (baseline: 87%, manual)
Processing Time per Part: <60 ms (baseline: 4–6 s, manual)
Minimum Defect Size: baseline ~0.5 mm (manual)
False Rejection Rate: baseline 12% (rule-based system)
The system maintains strong performance across all quality indicators, validated against a held-out test set of 5,000 images and confirmed during a 2-week parallel run alongside manual inspectors:
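The parallel-run comparison reduces to straightforward counting over per-part verdicts. The sketch below shows one way detection accuracy and false rejection rate could be computed; the `"good"`/`"defect"` labels are hypothetical placeholders for the actual inspection records.

```python
def detection_accuracy(y_true, y_pred):
    """Fraction of parts where the system's verdict matches ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def false_rejection_rate(y_true, y_pred):
    """Fraction of genuinely good parts that the system rejected."""
    good_preds = [p for t, p in zip(y_true, y_pred) if t == "good"]
    return sum(p != "good" for p in good_preds) / len(good_preds)


# Toy example: five parts from a parallel-run log
y_true = ["good", "good", "defect", "good", "defect"]
y_pred = ["good", "defect", "defect", "good", "good"]
acc = detection_accuracy(y_true, y_pred)    # 3 of 5 verdicts correct
frr = false_rejection_rate(y_true, y_pred)  # 1 of 3 good parts rejected
```

The same counting was applied per defect category during acceptance testing so that the automated verdicts could be lined up against the manual inspectors' historical records.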
Following successful deployment, YF Studio continues to support the client with model maintenance and planned enhancements: