Robotic Vision System for Intelligent Automation

Advanced computer vision system for PCB component pick-and-place operations with 98.8% success rate, ±0.05mm placement tolerance, and real-time 6-DOF grasp planning for SMD components.

Project Context

Client

A consumer electronics contract manufacturer in Shenzhen with UK engineering office

Timeline

7-month engagement, April – October 2023

Team

1 robotics vision specialist, 1 controls engineer

The client manufactures consumer electronics for several major brands and needed to increase throughput on their SMD (surface-mount device) component placement lines. Their existing pick-and-place machines handled standard components well, but a growing proportion of their product mix involved irregularly-shaped connectors, shielding cans, and non-standard packages that defeated the fixed-template vision systems on their legacy machines. The UK engineering office managed the vision system R&D, with deployment and integration handled at the Shenzhen production facility.

Project Overview

Developed a sophisticated robotic vision system for automated PCB component pick-and-place operations. The system picks SMD components from feeder trays and places them on PCBs with ±0.05mm tolerance, combining advanced computer vision, machine learning, and 6-DOF robotic control to handle 50+ component types including irregularly-shaped connectors and shielding cans that defeated the client's legacy template-matching systems.

Hardware

Universal Robots UR5e, Intel RealSense, Industrial Cameras

Software Stack

ROS2, OpenCV, PyTorch, MoveIt

Computer Vision

YOLOv7, Point Cloud Processing, 3D Reconstruction

Control Systems

PID Control, Trajectory Planning, Force Feedback

The Challenge

The client needed to automate the placement of non-standard SMD components — irregularly-shaped connectors, metal shielding cans, and custom packages — that their existing template-based pick-and-place machines could not handle. Key challenges included:

  • Detecting and classifying 50+ different SMD component types from feeder trays
  • Handling components with varying shapes, sizes, and orientations (0.4mm pitch QFPs to 20mm shielding cans)
  • Achieving ±0.05mm placement precision on PCB pads
  • Maintaining detection accuracy under production-floor lighting conditions
  • Integrating with existing conveyor and feeder tray systems
  • Ensuring 24/7 operation with minimal maintenance across multiple production lines

What Had Been Tried Before

The client's legacy pick-and-place machines used fixed template matching for component detection. This worked reliably for standard rectangular passives and ICs but failed on the growing range of non-standard components in their product mix — shielding cans with variable reflectivity, connectors with protruding pins, and custom mechanical parts. The template library required manual re-tuning for each new component, taking 2–3 days per component type, and even after tuning the detection rate for irregular shapes rarely exceeded 88%. The client had also trialled a commercial AI-based vision add-on, but it was optimised for warehouse logistics (large objects, loose tolerance) and could not achieve the sub-millimetre precision required for PCB assembly.

Our Solution

1. Multi-Camera Vision System

Deployed a synchronised multi-camera setup with RGB and structured-light depth cameras to capture comprehensive 3D information about components in feeder trays and on PCBs. The system uses structured light rather than stereo vision for depth estimation.

Why structured light over stereo: At the sub-millimetre scale of SMD components, stereo matching struggles with the low-texture surfaces of bare PCBs and metallic component bodies. Structured-light projection provides reliable depth at 0.02mm resolution regardless of surface texture, which was essential for accurate pose estimation of reflective shielding cans.
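As a rough illustration of why the projector-camera geometry supports this resolution: depth in a rectified structured-light setup follows the same triangulation relation as stereo, z = f·b/d. A minimal sketch with illustrative (not production) focal length and baseline values:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulated depth for a rectified projector-camera pair.

    Illustrative model only: disparity is the pixel offset between the
    projected pattern's expected and observed position; focal length is
    in pixels, baseline in mm.
    """
    return focal_px * baseline_mm / np.asarray(disparity_px, dtype=float)

# Illustrative numbers: 2400 px focal length, 75 mm baseline
z = depth_from_disparity([1800.0, 1801.0], 2400.0, 75.0)
step_mm = z[0] - z[1]  # depth change per pixel of disparity (~0.06 mm here)
```

With sub-pixel pattern decoding (structured-light systems routinely resolve a fraction of a pixel), a per-pixel depth step in this range is what makes 0.02mm depth resolution plausible at short working distances.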

2. Advanced Object Detection & Classification

Developed a custom YOLOv7 model trained on a large-scale dataset of electronic components. More than 100,000 training images were collected over 4 weeks using an automated capture rig that photographed components in feeder trays under varied orientations and lighting. Synthetic augmentation generated an additional 300,000 samples covering edge-case orientations and partial occlusions. Manual annotation of edge cases — particularly components with ambiguous orientation markers — required 2 weeks of specialist labelling by engineers familiar with PCB assembly. The model achieves 98.8% detection and classification accuracy across all 50+ component types.

Why YOLOv7 over alternatives: We benchmarked YOLOv7 against EfficientDet and Detectron2 (Faster R-CNN). EfficientDet achieved comparable accuracy but at 3x the inference latency, which would have pushed cycle time above the 2.5-second target. Faster R-CNN was more accurate on the smallest components (0201 passives) but could not run at the required frame rate on the target GPU (NVIDIA RTX A4000). YOLOv7 provided the best accuracy-latency trade-off for our component size range.
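The detector's raw output still needs confidence filtering and non-maximum suppression before it feeds pose estimation. A minimal NumPy sketch of that post-processing step (thresholds here are illustrative, not the tuned production values):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if rest.size == 0:
            break
        # Intersection-over-union of the top box against the rest
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return np.array(keep)

# Two near-duplicate detections of the same part, plus one low-confidence hit
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.90, 0.80, 0.30])
conf_mask = scores >= 0.5                          # confidence filter first
kept = nms(boxes[conf_mask], scores[conf_mask])    # suppresses the duplicate
```

In production this runs per class, so overlapping detections of different component types are never suppressed against each other.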

3. 3D Pose Estimation

Implemented 6-DOF pose estimation combining point cloud registration with learned keypoint detection, enabling the robot to determine each component's exact position and orientation with sub-millimetre accuracy for precise placement on PCB pads.

Why hybrid over pure learned pose: Pure deep-learning pose estimators (e.g., PoseCNN) had difficulty generalising across the wide range of component geometries without per-component fine-tuning. Our hybrid approach uses learned keypoints to initialise an ICP (Iterative Closest Point) refinement step against CAD models, achieving consistent ±0.05mm accuracy across all component types without component-specific training.
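The refinement step can be made concrete: each ICP iteration solves a least-squares rigid transform between corresponded points, which has a closed-form SVD solution (Kabsch). A sketch of that inner solve, assuming correspondences have already been supplied by the learned keypoints:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src onto
    dst for corresponded 3D point sets -- the inner solve of each ICP
    iteration (Kabsch, no scale)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known pose: rotate a toy CAD point set 30 degrees about z
rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, size=(40, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
observed = model @ R_true.T + np.array([0.5, -0.2, 0.05])
R_est, t_est = rigid_transform(model, observed)
```

With exact correspondences the solve recovers the pose exactly; in practice the learned keypoints only need to get close enough for iterated nearest-neighbour matching to converge to the right alignment.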

4. Intelligent Grasp Planning

Created a grasp planning system that analyses component geometry from the 3D point cloud and selects optimal grasp points based on stability, accessibility, and collision avoidance with adjacent components in feeder trays.

Why analytical over learned grasping: Learned grasp planners (e.g., GraspNet) are effective for novel objects but introduce variability that is unacceptable at ±0.05mm placement tolerance. Our analytical planner uses component CAD models to compute deterministic grasp poses, guaranteeing repeatable pick orientation. For components without CAD models, we fall back to a constrained learned planner that is limited to a pre-validated set of grasp templates.
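A toy version of the scoring rule makes the stability/accessibility trade-off concrete. The weights, the 5mm clearance saturation radius, and the feature definitions below are illustrative assumptions, not the production planner:

```python
import numpy as np

def score_grasp(candidate_xy, surface_normals, neighbour_xys,
                w_stability=0.6, w_clearance=0.4):
    """Toy scoring rule for vacuum-grasp candidates (illustrative weights).

    stability: how closely the local surface normals match the nozzle
    axis (vertical) -- a flat, level patch seals best.
    clearance: distance to the nearest neighbouring component in the
    tray, saturating at a 5 mm radius.
    """
    vertical = np.array([0.0, 0.0, 1.0])
    stability = float(np.mean(surface_normals @ vertical))  # 1.0 = flat patch
    if len(neighbour_xys):
        nearest = np.min(np.linalg.norm(neighbour_xys - candidate_xy, axis=1))
    else:
        nearest = np.inf
    clearance = min(nearest / 5.0, 1.0)
    return w_stability * stability + w_clearance * clearance

# A flat, isolated patch should outscore a tilted, crowded one
flat = score_grasp(np.array([0.0, 0.0]),
                   np.tile([0.0, 0.0, 1.0], (10, 1)),
                   np.array([[10.0, 10.0]]))
tilted = score_grasp(np.array([0.0, 0.0]),
                     np.tile([0.0, 0.5, 0.8660254], (10, 1)),
                     np.array([[1.0, 0.0]]))
```

The real planner evaluates candidates generated from the CAD model and adds hard collision constraints before this ranking is applied.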

Technical Implementation

Vision Pipeline

The vision system processes data through multiple stages:

  • Image Acquisition: Synchronised capture from multiple cameras
  • Preprocessing: Calibration, distortion correction, and enhancement
  • Object Detection: YOLO-based detection and classification
  • 3D Reconstruction: Point cloud generation and processing
  • Pose Estimation: 6-DOF pose calculation for each object
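The staged structure maps naturally onto a simple sequential pipeline object. A skeleton of that composition (stage names and signatures are illustrative, not the production API):

```python
from typing import Any, Callable, List, Tuple

class VisionPipeline:
    """Chains acquisition -> preprocessing -> detection -> reconstruction
    -> pose stages; each stage consumes the previous stage's output."""

    def __init__(self) -> None:
        self.stages: List[Tuple[str, Callable[[Any], Any]]] = []

    def add(self, name: str, fn: Callable[[Any], Any]) -> "VisionPipeline":
        self.stages.append((name, fn))
        return self

    def run(self, frame: Any) -> Any:
        for _, fn in self.stages:
            frame = fn(frame)
        return frame

# Dummy stages standing in for the real processing
pipeline = (VisionPipeline()
            .add("preprocess", lambda f: f + 1)
            .add("detect", lambda f: f * 2))
result = pipeline.run(3)
```

Keeping stages behind a uniform interface is what allows per-stage timing and the swap-in of alternative detectors during benchmarking.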

Robotic Control

Advanced control algorithms ensure precise manipulation:

  • Trajectory planning with collision avoidance
  • Force feedback for delicate object handling
  • Adaptive control for varying object properties
  • Error recovery and retry mechanisms
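Of these, the PID layer is the most self-contained to sketch. A textbook discrete PID loop driving a toy first-order plant; the gains, timestep, and plant model are placeholders, not the tuned axis controllers:

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float) -> None:
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint: float, measured: float) -> float:
        err = setpoint - measured
        self.integral += err * self.dt                     # I term state
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order axis (x' = u) to a 1.0 mm setpoint
ctrl = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):          # 20 s of simulated time
    x += ctrl.step(1.0, x) * ctrl.dt
```

The production loops add output saturation and integrator anti-windup, which matter as soon as the actuator hits its velocity limits.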

Limitations & Edge Cases

  • Transparent & Reflective Components: Transparent components (glass display covers) and highly reflective surfaces (polished metal shields) reduce detection accuracy to 94.2%. The structured-light depth sensor produces noisy point clouds on these surfaces, degrading pose estimation. We mitigate this with polarised lighting and surface-specific detection thresholds, but it remains the primary failure mode.
  • Controlled Lighting Requirement: The system requires controlled lighting conditions. Ambient light variations above 200 lux (e.g., from skylights or open loading bay doors) cause measurable drift in pose estimation, increasing placement error by up to 0.03mm. Production cells must be enclosed or fitted with consistent LED overhead lighting.
  • New Component Onboarding: Adding a new component type requires a CAD model (or 200+ annotated training images if no CAD is available) and approximately 4 hours of calibration and validation. This is significantly faster than the 2–3 days required by the legacy template system but is not instantaneous.
  • Vacuum Gripper Limitations: Components smaller than 0.4mm x 0.2mm (metric 0201 package size) cannot be reliably picked with the current vacuum nozzle. Handling these requires a dedicated micro-placement head, which is outside the current system scope.
  • Thermal Drift: After 8+ hours of continuous operation, thermal expansion of the camera mounting bracket introduces up to 0.02mm systematic offset. The system runs an automated recalibration cycle every 6 hours to compensate.

Results & Impact

98.8%

Success Rate (up from 88% with legacy template matching)

±0.05mm

Placement Accuracy (was ±0.5mm with legacy system)

2.3s

Full Pick-Place-Verify Cycle Time

50+

Component Types (was 12 with legacy system)

Performance Metrics (Before / After)

  • Detection accuracy on non-standard components: 88% → 98.8%
  • Placement accuracy: ±0.5mm → ±0.05mm (10x improvement)
  • Supported component types: 12 → 50+ without per-component manual tuning
  • New component onboarding time: 2–3 days → 4 hours
  • System uptime: 94% → 99.4% with automated error recovery and retry

Cycle Time Breakdown

The 2.3-second cycle time reflects the full pick-place-verify loop for precision PCB assembly, not just the pick or place motion. The breakdown is as follows:

  • Grasp planning: 0.8s — evaluating multiple grasp candidates for irregularly-shaped components, selecting the most stable vacuum nozzle contact point
  • Pick motion + vacuum engagement: 0.4s
  • Transfer + 6-DOF alignment: 0.5s — including in-transit rotation to match target pad orientation
  • Precision placement at ±0.05mm tolerance: 0.3s — final approach with force feedback to confirm contact
  • Post-placement visual verification: 0.3s — confirming component is correctly seated before proceeding to next pick

For standard rectangular passives with known orientation, grasp planning is deterministic and the cycle time drops to 1.6 seconds.
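Summing the stage budget confirms the arithmetic:

```python
# Stage budget from the breakdown above, in seconds
budget = {
    "grasp_planning": 0.8,
    "pick_and_vacuum": 0.4,
    "transfer_and_align": 0.5,
    "precision_place": 0.3,
    "verify": 0.3,
}
total = round(sum(budget.values()), 2)
```

Note that removing grasp planning entirely would give 1.5 seconds; the quoted 1.6 seconds for standard passives suggests the deterministic lookup still costs roughly 0.1 seconds per cycle.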

System Capabilities

The robotic vision system can handle various tasks:

  • Object Detection: Identify and locate components in cluttered environments
  • Classification: Distinguish between different component types and variants
  • Pose Estimation: Determine 6-DOF pose for precise manipulation
  • Grasp Planning: Select optimal grasp points for stable manipulation
  • Quality Inspection: Detect defects and quality issues during handling
  • Adaptive Behaviour: Learn and adapt to new component types

System Integration

Seamlessly integrated with existing manufacturing infrastructure:

  • Production Line Integration: Direct communication with conveyor systems
  • Quality Control: Integration with inspection and testing systems
  • Data Management: Real-time data logging and analytics
  • Maintenance Systems: Predictive maintenance and health monitoring
  • Safety Systems: Integration with safety sensors and emergency stops

Deployment Scope

The system is deployed across the client's PCB assembly operations:

  • Non-Standard SMD Placement: Primary application — irregularly-shaped connectors, shielding cans, and custom mechanical components that defeat template matching
  • Mixed-Component Trays: Handling feeder trays with multiple component types in a single pass, reducing tray changeover downtime
  • Post-Reflow Inspection: Visual verification of component placement after solder reflow, flagging misaligned or tombstoned components
  • Rework Assistance: Identifying and picking misplaced components for rework, guided by inspection system feedback

Ongoing & Next Steps

Active development and planned near-term improvements:

  • In progress: Expanding the training dataset with polarised-light captures to improve detection of transparent and reflective components (targeting 97%+ accuracy, up from current 94.2%)
  • In progress: Online learning pipeline that incorporates production-line failure cases into nightly model retraining, reducing manual annotation effort for new edge cases
  • Planned (Q1 2024): Dual-arm coordination for simultaneous pick-and-place on opposite sides of the PCB, targeting 40% cycle time reduction for double-sided boards
  • Under evaluation: Integration with solder paste inspection (SPI) data to adapt placement force based on paste volume, reducing defect rates on fine-pitch components
  • Under evaluation: Deploying a second system at the client's Dongguan facility, which would require adapting the lighting enclosure for a different production line layout