Here is a visual tour of the authors’ work on implementing visual obstacle detection in Duckietown.
Figure 1. Example Image from Monocular Camera.
Figure 2. Image Transformed to Bird’s Eye View.
Figure 3. Final Detection Output.
Figure 4. Cropped Image for Efficient Detection.
Figure 5. Display of Obstacle Boxes in Bird’s Eye View.
Figure 6. Position and Radius of Obstacle.
Figure 7. Dangerous vs. Non-Dangerous Obstacles.
Figure 8. Search Lines for Lane Boundary Detection.
Figure 9. Initial Logic Stages for Commissioning.
Figure 10. Top-View Variable Definitions.
Figure 11. Geometry of Scene and Obstacle Positioning.
Figure 12. Software Architecture Overview.
Figure 13. Motion Blur Impact on Obstacle Detection.
Figure 14. Adaptive Bounding Box for Lane Curvature.
Visual Obstacle Detection: objective and importance
This project aims to develop a visual obstacle detection system using inverse perspective mapping, with the goal of enabling autonomous systems to detect obstacles in real time using images from a monocular RGB camera. It focuses on identifying specific obstacles, such as yellow duckies and orange cones, in Duckietown.
The system ensures safe navigation by avoiding obstacles within the vehicle's lane, or by stopping when avoidance is not feasible. Due to hardware constraints, it does not use learning algorithms, relying instead on a hand-engineered approach. A further objective is to make obstacle detection reliable under varying illumination and object properties.
The setup is intended to simulate realistic scenarios for autonomous driving systems. The key evaluation metrics are detection accuracy, false-positive rate, and missed obstacles under diverse conditions.
The method and the challenges: visual obstacle detection using Inverse Perspective Mapping
The system processes images from a monocular RGB camera by applying inverse perspective mapping to generate a bird's-eye view, assuming all pixels lie on the ground plane. Obstacles violate this assumption and therefore appear distorted in the transformed view, which makes them easier to detect. Obstacle detection combines HSV color filtering, image segmentation, and classification of the segmented blobs using eigenvalue analysis. The reaction strategies include trajectory planning or stopping, based on the detected obstacle's position and lane constraints.
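The two geometric steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the homography `H` is a made-up placeholder (in practice it comes from the camera's extrinsic calibration), and the eigenvalue-ratio threshold of 0.2 is an assumed value chosen only to separate the two toy blobs below.

```python
import numpy as np

# Hypothetical homography mapping image pixels (u, v, 1) to ground-plane
# coordinates (x, y, 1); a real system obtains this from camera calibration.
H = np.array([[0.002, 0.0,   -0.64],
              [0.0,   0.002, -0.48],
              [0.0,   0.004,  1.0]])

def to_birds_eye(pixels, H):
    """Inverse perspective mapping for a set of pixel coordinates.
    Assumes every pixel lies on the road surface (the ground plane)."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    ground = pts @ H.T
    return ground[:, :2] / ground[:, 2:3]                 # dehomogenize

def classify_blob(points):
    """Classify a segmented blob by the eigenvalues of its 2-D covariance:
    a near-isotropic blob (eigenvalue ratio close to 1) suggests a compact
    obstacle such as a duckie or cone; a very elongated blob is more likely
    a line-like structure such as a lane marking."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]      # descending order
    ratio = eigvals[1] / eigvals[0]
    return "obstacle" if ratio > 0.2 else "line-like"     # assumed threshold

# Toy example: a roughly circular blob vs. a thin stripe of pixels.
rng = np.random.default_rng(0)
round_blob = rng.normal(0.0, 1.0, size=(200, 2))
stripe = np.column_stack([rng.normal(0, 5.0, 200), rng.normal(0, 0.1, 200)])
print(classify_blob(round_blob))  # obstacle
print(classify_blob(stripe))      # line-like
```

The eigenvalue ratio is scale-invariant, which is convenient here: after the bird's-eye transform, blob size varies with distance, but elongation does not.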
Computational efficiency is a significant challenge given the hardware limitations of the Raspberry Pi, which makes it necessary to avoid recomputing color corrections in real time. Variability in lighting and motion blur degrade detection reliability, while accurate calibration of camera parameters is essential for precise 3D obstacle localization. Integrating the avoidance strategies poses additional challenges due to inaccuracies in pose estimation and trajectory planning.
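One standard way to avoid per-frame recomputation of color corrections, sketched below under assumptions not taken from the report: build an 8-bit lookup table once at start-up and apply it to each frame with a single array index. The `gain` and `gamma` parameters are hypothetical illumination-compensation knobs, used here only to give the table some content.

```python
import numpy as np

def build_correction_lut(gain=1.2, gamma=0.8):
    """Precompute an 8-bit color-correction lookup table once at start-up.
    gain/gamma are placeholder illumination-compensation parameters."""
    x = np.arange(256) / 255.0
    y = np.clip((x * gain) ** gamma, 0.0, 1.0)
    return (y * 255).astype(np.uint8)

LUT = build_correction_lut()

def correct_frame(frame):
    """Correct a full frame with one vectorized table lookup, instead of
    re-evaluating the transfer function for every pixel of every frame."""
    return LUT[frame]

# A synthetic 640x480 RGB frame stands in for a camera image.
frame = np.random.default_rng(1).integers(0, 256, size=(480, 640, 3),
                                          dtype=np.uint8)
corrected = correct_frame(frame)
```

The per-frame cost becomes a single memory gather, which is the kind of trade (memory for arithmetic) that matters on a Raspberry Pi-class CPU.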
Visual Obstacle Detection using Inverse Perspective Mapping: Full Report
Visual Obstacle Detection using Inverse Perspective Mapping: Authors
Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.
Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.
These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.