Visual Feedback for Autonomous Navigation in Duckietown

Features for Efficient Autonomous Navigation in Duckietown

Project highlights

Visual Feedback for Autonomous Navigation in Duckietown - the objectives

This project by students at TUM (Technical University of Munich) builds on the preexisting Duckietown autonomy stack to add, reintegrate, and improve much-needed autonomous navigation features: improved control (pure pursuit instead of PID), red stop line detection, AprilTag detection, intersection navigation, and obstacle detection (using YOLOv3), making Duckietowns more complex and interesting!

The resulting agent includes modules for lane following, stop line detection, and intersection handling with AprilTags, all built on the legacy Duckietown infrastructure.

The autonomy pipeline relies heavily on vision as the primary means of perception: lane edges are projected from image space to the ground plane using an inverse perspective mapping derived from the camera calibration procedure.
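At its core, the projection applies a 3×3 homography to each detected edge pixel. A minimal sketch is below; the matrix values are placeholders standing in for the output of the calibration procedure, not the project's actual extrinsics.

```python
# Minimal sketch of ground projection via a calibrated homography.
# H is an assumption: in practice it comes from the camera extrinsic
# calibration, not from the illustrative values shown here.
import numpy as np

H = np.array([                       # hypothetical image-to-ground homography
    [-1.5e-5,  4.0e-4, -2.0e-1],
    [ 8.0e-4,  1.0e-5, -3.0e-1],
    [ 2.0e-5,  1.1e-2, -1.0    ],
])

def project_to_ground(u, v):
    """Map a pixel (u, v) on a detected lane edge to (x, y) on the ground plane."""
    p = H @ np.array([u, v, 1.0])    # push the homogeneous image point through H
    return p[0] / p[2], p[1] / p[2]  # normalize to metric ground coordinates

x, y = project_to_ground(320, 400)   # e.g. a point near the bottom of the frame
```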

The Duckiebot then estimates a dynamic target point by offsetting the yellow or white lane markers, depending on which are visible. The curvature is computed from the geometric relation between the Duckiebot and this goal point, and the steering command is derived from the curvature.
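As a sketch of this geometry, assuming ground coordinates with x pointing forward and y to the left, and an illustrative lane-offset value rather than the project's tuned parameter:

```python
# Hedged pure pursuit sketch: offset visible markers to get a goal point,
# then steer along the circular arc through that point.
import numpy as np

LANE_OFFSET = 0.22  # m, assumed half lane width (illustrative value)

def target_from_markers(marker_xy, from_yellow=True):
    """Offset the mean of visible yellow (left) or white (right) points toward the lane center."""
    x, y = np.mean(marker_xy, axis=0)
    return (x, y - LANE_OFFSET) if from_yellow else (x, y + LANE_OFFSET)

def pure_pursuit(target, v):
    """Return (v, omega): curvature 2*y/d^2 of the arc through the goal point."""
    x, y = target
    d2 = x**2 + y**2         # squared distance to the goal point
    kappa = 2.0 * y / d2     # pure pursuit curvature
    return v, v * kappa      # angular velocity follows from curvature
```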

The Duckiebot's linear and angular velocities are then modulated by a second-degree polynomial function of the detected path geometry.
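The report does not spell out the polynomial, so the following only illustrates the idea: a quadratic in curvature that slows the Duckiebot on curves and leaves it near top speed on straights. All coefficient values are placeholders.

```python
# Sketch of second-degree speed modulation; a, b, and v_max are assumptions.
def modulate_speed(kappa, v_max=0.35, a=0.8, b=2.5):
    """Slow down quadratically with path curvature; straight road -> v_max."""
    v = v_max * (1.0 - a * abs(kappa) - b * kappa**2)
    return max(v, 0.1)  # keep a minimum forward speed
```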

Visual input from the onboard monocular camera is processed through a lane filter whose Gaussian variance scales adaptively with frame timing.
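One way to realize this scaling, as a sketch: the filter's process-noise variances grow with the time elapsed since the previous frame, so delayed frames widen the belief more. The parameter names and values here are assumptions, not the stack's own.

```python
# Hedged sketch of timing-adaptive variance for the lane filter's prediction step.
import time

SIGMA_D_0, SIGMA_PHI_0 = 0.004, 0.08  # baseline variances (illustrative)
NOMINAL_DT = 1.0 / 30.0               # expected camera period at 30 fps

last_stamp = time.time()

def predicted_variances(now):
    """Scale the baseline variances by how late the current frame arrived."""
    global last_stamp
    dt = now - last_stamp
    last_stamp = now
    scale = max(dt / NOMINAL_DT, 1.0)  # never shrink below the baseline
    return SIGMA_D_0 * scale, SIGMA_PHI_0 * scale
```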

When approaching an intersection, stop lines are detected using HSV color segmentation. AprilTag detection then determines the intersection decision, with tag IDs mapped to turn directions.
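A minimal sketch of both steps follows; the HSV bounds and the tag-ID-to-turn table are illustrative placeholders, since the project's actual values live in its configuration files.

```python
# Sketch of stop line detection and tag-based turn selection.
# Red wraps around hue 0 in HSV, hence the two threshold ranges.
import cv2

TAG_TO_TURN = {61: "left", 62: "straight", 63: "right"}  # hypothetical tag IDs

def detect_stop_line(bgr, min_pixels=500):
    """Return True if enough red stop-line pixels survive HSV thresholding."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 120, 100), (10, 255, 255))      # red near hue 0
    high = cv2.inRange(hsv, (170, 120, 100), (180, 255, 255))  # red near hue 180
    return cv2.countNonZero(low | high) > min_pixels
```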

Every module is implemented as an independent ROS package with dedicated launch files, coordinated via a central launch file. A YOLOv3 object detection model, trained on a custom Duckietown dataset, provides real-time obstacle recognition.
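For the detection side, a hedged sketch of running a custom-trained YOLOv3 through OpenCV's DNN module is shown below; the file names and class list are assumptions about the training artifacts, not the project's own.

```python
# Sketch of YOLOv3 inference with OpenCV DNN; cfg/weights paths are placeholders.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-duckie.cfg", "yolov3-duckie.weights")
CLASSES = ["duckie", "duckiebot", "cone"]  # hypothetical obstacle classes

def detect(bgr):
    """One forward pass; returns the raw YOLO output tensors, one per scale."""
    blob = cv2.dnn.blobFromImage(bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward(net.getUnconnectedOutLayersNames())
```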

The challenges and approach

One major hurdle was integrating object detection models like Single-Shot Detector (SSD) and YOLO with the Duckiebot’s ROS-based camera system.

While the SSD model was trained on a custom Duckietown dataset, ROS publisher-subscriber mismatches prevented live inference. Transitioning to YOLO involved adapting annotation formats and retraining for compatibility with the YOLO architecture. In lane following, the default controller from the Duckietown demos showed high deviation, prompting the implementation of a modified pure pursuit approach.
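To illustrate the annotation-format adaptation mentioned above, assuming the earlier labels were Pascal VOC XML (an assumption; the report does not name the format), a conversion to YOLO's normalized text format could look like this:

```python
# Sketch of VOC-XML-to-YOLO label conversion; the class list is hypothetical.
import xml.etree.ElementTree as ET

CLASSES = ["duckie", "duckiebot", "cone"]  # assumed class order

def voc_to_yolo(xml_path):
    """Return YOLO label lines: class x_center y_center width height in [0, 1]."""
    root = ET.parse(xml_path).getroot()
    W = float(root.find("size/width").text)
    H = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.find("name").text)
        b = obj.find("bndbox")
        x1, y1 = float(b.find("xmin").text), float(b.find("ymin").text)
        x2, y2 = float(b.find("xmax").text), float(b.find("ymax").text)
        lines.append(
            f"{cls} {(x1 + x2) / 2 / W:.6f} {(y1 + y2) / 2 / H:.6f} "
            f"{(x2 - x1) / W:.6f} {(y2 - y1) / H:.6f}"
        )
    return "\n".join(lines)
```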

Additional challenges arose from limited computational resources on the Duckiebot, with CPU overuse causing processing delays when running all modules concurrently. The approach focused on modular development, isolating lane following, stop line detection, and intersection navigation into separate ROS packages with fine-tuned parameters. The pure pursuit algorithm was adapted for ground-projected lane estimation, dynamic speed control, and target point calculation based on visible lane markers. Integration of AprilTag-based intersection logic and LED signaling provided directional control at intersections.

This structured, iterative methodology enabled real-time, vision-guided behavior while operating within the platform's computational constraints.

Visual Feedback for Autonomous Navigation in Duckietown: Authors

Servesh Khandwe is currently working as a Software Engineer at Porsche Digital, Germany.

Ayush Kumar is currently working as a Research Assistant at Fraunhofer IIS, Germany.

Parth Karkar is currently working as an Analytical Consultant at Mutares SE & Co. KGaA, Germany.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics while boosting competence and job prospects.