Industrial Quality Control System: Computer Vision Solution

Automated defect detection system for manufacturing lines with 99.4% accuracy, processing 1,000+ parts per minute with real-time quality assessment.

Project Context

A Tier 1 automotive parts supplier in the West Midlands engaged YF Studio for a 5-month engagement from January to May 2024 to replace their manual inspection process with an automated computer vision system. The client produces precision-machined engine brackets and housings for two major OEMs, running three production lines across a single facility with output of approximately 18,000 parts per shift.

Timeline

5 months (Jan – May 2024)

Team

2 computer vision engineers, 1 integration specialist

Industry

Automotive parts manufacturing (Tier 1 supplier)

Project Overview

Developed a comprehensive computer vision system for automated quality control in automotive parts manufacturing. The system detects surface defects, dimensional variations, and assembly issues in real time, ensuring consistent product quality while reducing manual inspection costs.

Hardware

Industrial PCs, High-res Cameras, LED Lighting Systems

AI Framework

PyTorch, TensorFlow, OpenCV, Scikit-learn

Computer Vision

ResNet, EfficientNet, YOLOv7, Image Segmentation

Integration

PLC Communication, SCADA Systems, MES

The Challenge

The client's existing quality control relied on a team of six manual inspectors working in rotation, achieving a defect detection rate of roughly 87% with significant variability between shifts. A previous attempt to introduce a rule-based machine vision system (using simple thresholding and template matching) had been trialled in 2022 but was abandoned after three months: it could not cope with the natural variation in part surfaces and produced an unacceptable false rejection rate of over 12%, causing costly production stoppages.

Lighting proved to be a critical obstacle in the facility. Harsh fluorescent overhead lighting created specular reflections on machined aluminium surfaces, while shadows cast by conveyor belt mechanisms and overhead gantries varied depending on part position. Colour temperature also shifted noticeably between the three production lines due to different fixture ages and bulb types. Any viable solution had to be robust to these conditions without requiring a full lighting retrofit across the plant.

Key requirements included:

  • Detecting micro-defects as small as 0.1mm on machined aluminium surfaces
  • Processing 1,000+ parts per minute across three production lines
  • Handling variable lighting conditions, surface finishes, and reflective materials
  • Achieving a false rejection rate below 1% to avoid production delays
  • Integration with existing PLC/SCADA infrastructure and MES reporting
  • Compliance with IATF 16949 automotive quality standards

Our Solution

1. Multi-Camera Inspection System

Deployed a synchronized 6-camera setup with diffused LED dome lighting at each inspection station, replacing the reliance on ambient fluorescent lighting. Dome illumination was chosen over directional ring lights because it minimises specular highlights on curved aluminium surfaces — a critical requirement identified during the site survey. Each camera captures at 5 megapixels and 60 fps, positioned at optimised angles (0°, 30°, 60° from vertical, with two lateral views) to ensure full surface coverage. We evaluated line-scan cameras as an alternative but ruled them out due to the variable conveyor speed on Line 2, which would have required complex encoder synchronisation.

2. Advanced Defect Detection Models

Developed custom CNN models based on EfficientNet-B3 as the backbone, selected over ResNet-50 for its superior accuracy-to-compute ratio — critical given the per-station budget constraints on inference hardware. The models were trained on 50,000 annotated images of defective and non-defective parts, covering 15 defect categories including scratches, dents, porosity, burrs, corrosion spots, and dimensional variations. The annotation was carried out over 3 weeks by a team of 2 annotators using CVAT (Computer Vision Annotation Tool), with a structured QA review pass that rejected approximately 8% of initial labels due to ambiguous defect boundaries or mislabelling. We also applied offline augmentation (rotation, brightness jitter, synthetic shadow overlays) to improve robustness to the lighting variability observed across the three production lines.
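As an illustration, the brightness-jitter portion of the offline augmentation can be sketched as follows. This is a simplified pure-Python stand-in (a grayscale image as a list of pixel rows) rather than the production augmentation code; rotation and synthetic shadow overlays follow the same pattern of randomised, clamped transforms.

```python
import random

def brightness_jitter(img, max_delta=30, rng=None):
    """Shift every pixel by one random offset, clamped to [0, 255].

    Simulates the brightness / colour-temperature drift observed
    between the three production lines. `img` is a grayscale image
    represented as a list of rows of 0-255 integers; `max_delta`
    here is illustrative, not the value used in training.
    """
    rng = rng or random.Random()
    delta = rng.randint(-max_delta, max_delta)
    return [[max(0, min(255, p + delta)) for p in row] for row in img]
```

Applying several such transforms offline multiplies the effective size of the 50,000-image dataset without further annotation effort.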

3. Real-time Processing Pipeline

Implemented a high-performance processing pipeline using TensorRT for model inference on NVIDIA T4 GPUs, achieving sub-60ms end-to-end latency per part. TensorRT was chosen over ONNX Runtime because it delivered approximately 35% lower latency on the target hardware in our benchmarks. The pipeline is structured as an asynchronous queue to decouple image acquisition from inference, preventing camera frame drops during peak throughput.
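The acquisition/inference decoupling can be sketched with a bounded producer-consumer queue. This is a minimal illustration in pure Python; the `infer` callable stands in for the TensorRT engine, and the buffer size is a hypothetical value, not the production setting.

```python
import queue
import threading

def run_pipeline(frames, infer, max_buffer=32):
    """Decouple image acquisition from inference via a bounded queue.

    `frames` is any iterable of captured images; `infer` is the
    inference callable. A bounded queue lets acquisition run ahead
    of inference without unbounded memory growth, which is what
    prevents camera frame drops during peak throughput.
    """
    buf = queue.Queue(maxsize=max_buffer)
    results = []
    SENTINEL = object()  # marks end of the frame stream

    def acquire():
        for frame in frames:
            buf.put(frame)   # blocks only when the buffer is full
        buf.put(SENTINEL)

    def consume():
        while True:
            frame = buf.get()
            if frame is SENTINEL:
                break
            results.append(infer(frame))

    producer = threading.Thread(target=acquire)
    consumer = threading.Thread(target=consume)
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    return results
```

In production the same structure holds, with the GPU-bound inference thread consuming batches while the camera interface keeps filling the queue.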

4. Intelligent Classification System

Created a hierarchical classification system that categorises defects by severity (critical, major, minor) and type, enabling automated sorting into pass, rework, and reject bins. The severity thresholds were calibrated in collaboration with the client's quality engineering team to align with their existing IATF 16949 nonconformance criteria, ensuring the automated system's decisions were directly comparable to historical manual inspection records.
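The routing logic can be sketched as below. The severity labels match those described above; the confidence threshold and the exact routing rules are illustrative placeholders, since the real values were calibrated against the client's IATF 16949 nonconformance criteria.

```python
# Hierarchical decision: severity first, then confidence.
# Threshold and rules are illustrative, not the calibrated values.
SEVERITY_RANK = {"critical": 2, "major": 1, "minor": 0}

def route_part(detections, review_threshold=0.5):
    """Map a list of (severity, confidence) detections to a bin.

    - any low-confidence detection -> "review" (flag for a human)
    - any critical defect          -> "reject"
    - any major defect             -> "rework"
    - only minor defects / none    -> "pass"
    """
    if any(conf < review_threshold for _, conf in detections):
        return "review"
    worst = max((SEVERITY_RANK[s] for s, _ in detections), default=-1)
    if worst == 2:
        return "reject"
    if worst == 1:
        return "rework"
    return "pass"
```

Keeping the decision logic separate from the detection models also made it auditable: the quality team could review and adjust routing rules without retraining.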

Technical Implementation

Image Processing Pipeline

The system processes images through multiple stages:

  • Preprocessing: Noise reduction, contrast enhancement, and normalization
  • Feature Extraction: Edge detection, texture analysis, and geometric measurements
  • Classification: Multi-class defect detection using ensemble models
  • Post-processing: Confidence scoring and decision making
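The staged structure above can be sketched as a chain of stage functions. The bodies here are deliberately trivial pure-Python placeholders (a 1-D scanline stands in for an image, and a crude gradient sum stands in for the real OpenCV feature extraction and ensemble classifier); only the shape of the pipeline is meant to be representative.

```python
def normalize(scanline):
    """Preprocessing placeholder: scale pixel values to [0, 1]."""
    lo, hi = min(scanline), max(scanline)
    span = (hi - lo) or 1
    return [(p - lo) / span for p in scanline]

def extract_features(scanline):
    """Feature placeholder: mean intensity and a crude edge measure."""
    mean = sum(scanline) / len(scanline)
    edges = sum(abs(a - b) for a, b in zip(scanline, scanline[1:]))
    return {"mean": mean, "edge_energy": edges}

def classify(features, edge_threshold=1.5):
    """Classification placeholder: high edge energy -> 'defect'."""
    return "defect" if features["edge_energy"] > edge_threshold else "ok"

def run_stages(scanline):
    """Preprocess -> extract -> classify, mirroring the staged pipeline."""
    return classify(extract_features(normalize(scanline)))
```

Structuring the pipeline as composable stages made it straightforward to benchmark and swap individual steps without touching the rest of the system.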

Model Architecture

Implemented a hybrid approach combining:

  • EfficientNet-B3 backbone for feature extraction (pretrained on ImageNet, fine-tuned on domain data)
  • Custom spatial attention modules for defect localisation within the part region of interest
  • Ensemble of 3 models (varying augmentation seeds) for improved accuracy and reduced variance
  • Transfer learning with progressive unfreezing to retain low-level feature representations
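The 3-model ensemble combines per-class probabilities by simple averaging before taking the argmax; a minimal sketch (the class names and probabilities below are illustrative):

```python
def ensemble_predict(prob_sets):
    """Average per-class probabilities from N models, pick the argmax.

    `prob_sets` is a list of dicts, one per model, each mapping
    class name -> probability. Averaging independently trained
    models reduces the variance of any single model's prediction.
    """
    classes = prob_sets[0].keys()
    avg = {c: sum(p[c] for p in prob_sets) / len(prob_sets) for c in classes}
    label = max(avg, key=avg.get)
    return label, avg[label]

# Hypothetical outputs from the three ensemble members:
models_out = [
    {"ok": 0.70, "scratch": 0.20, "porosity": 0.10},
    {"ok": 0.60, "scratch": 0.30, "porosity": 0.10},
    {"ok": 0.80, "scratch": 0.10, "porosity": 0.10},
]
label, conf = ensemble_predict(models_out)  # label -> "ok"
```

Varying only the augmentation seeds between members keeps the models architecturally identical, so the averaged probabilities remain directly comparable.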

Limitations & Edge Cases

While the system achieves strong overall performance, several known limitations were documented during acceptance testing:

  • Reflective surfaces: Performance drops to 96.2% detection accuracy on chrome-plated and highly reflective surfaces due to specular highlights saturating the camera sensor. A polarising filter attachment is recommended for production lines handling these part types, and is planned for retrofit in Q3 2024.
  • Novel defect types: The system is trained on 15 known defect categories. Entirely novel defect types (e.g., a new supplier's material exhibiting unfamiliar grain patterns) may be classified as "unknown" and flagged for human review rather than auto-rejected.
  • Part changeover: When switching between significantly different part geometries, a 15-minute recalibration cycle is required to update the region-of-interest templates. This is handled automatically but adds downtime during product changeovers.
  • Ambient light interference: Although the dome lighting largely isolates the inspection zone, direct sunlight from nearby loading bay doors during summer months caused a measurable (0.3%) accuracy dip in acceptance testing. Blackout curtains were recommended for the affected station.

Results & Impact

  • Detection Accuracy: 99.4% (baseline: 87%, manual)
  • Processing Time per Part: <60ms (baseline: 4–6 sec, manual)
  • Minimum Defect Size: 0.1mm (baseline: ~0.5mm, manual)
  • False Rejection Rate: 0.6% (baseline: 12%, rule-based system)

Business Impact

  • Reduced manual inspection headcount from 6 inspectors per shift to 1 supervisor overseeing the automated system — an estimated 70% reduction in inspection labour costs
  • Improved defect detection consistency: shift-to-shift detection variance dropped from ±8% (manual) to ±0.3% (automated)
  • Decreased false rejection rate from 12% (previous rule-based system) to 0.6%, significantly reducing unnecessary rework and scrap costs
  • Enabled 24/7 quality monitoring across all three production lines without human fatigue degradation
  • Generated per-part traceability logs and shift-level quality analytics, supporting the client's IATF 16949 audit requirements

Quality Metrics

The system maintains strong performance across all quality indicators, validated against a held-out test set of 5,000 images and confirmed during a 2-week parallel run alongside manual inspectors:

  • Precision: 98.8% — Low false positive rate, minimising unnecessary rejections
  • Recall: 99.4% — Captures the vast majority of genuine defects
  • F1-Score: 99.1% — Strong balance of precision and recall
  • System Availability: 99.2% — Accounts for planned recalibration and occasional camera cleaning cycles
  • Mean Time to Detection: 45ms — Well within the 60ms budget required for line-speed operation
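As a sanity check, the reported F1-score follows directly from the precision and recall figures above, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.988, 0.994)
# rounds to 0.991, matching the reported 99.1%
```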

Ongoing & Next Steps

Following successful deployment, YF Studio continues to support the client with model maintenance and planned enhancements:

  • Reflective surface retrofit (Q3 2024): Installing polarising filter attachments on Line 3 to address the 96.2% accuracy limitation on chrome-plated parts
  • Model retraining pipeline: Quarterly retraining on newly collected edge-case images to maintain accuracy as part geometry and supplier materials evolve
  • 3D defect analysis: Evaluating structured light scanning for volumetric defect measurement (depth of scratches and dents), expected to enter pilot in early 2025
  • Predictive quality analytics: Correlating defect trends with upstream process parameters (CNC tool wear, material batch) to enable proactive intervention before defect rates rise
  • Second facility rollout: The client has requested a scoping study for deploying the system at their second manufacturing site in the East Midlands