
Pure Pursuit Lane Following with Obstacle Avoidance

Project Resources

Project highlights

Pure Pursuit Controller with Dynamic Speed and Turn Handling
Duckiebot lane following with pure pursuit and obstacle avoidance using image processing in Duckietown
Pure Pursuit with Image Processing-Based Obstacle Detection
Duckiebots navigating curves in Duckietown using pure pursuit and obstacle avoidance with onboard object detection
Duckiebots Avoiding Obstacles with Pure Pursuit Control

Pure Pursuit Lane Following with Obstacle Avoidance - the objectives

Pure pursuit is a geometric path tracking algorithm used in autonomous vehicle control systems. It determines a target point on the trajectory ahead, computes the curvature of the arc connecting the vehicle to that point, and derives the angular velocity required to reach it based on the vehicle’s kinematics.

Unlike proportional-integral-derivative (PID) control, which adjusts control outputs based on continuous error correction, pure pursuit uses a lookahead point to guide the vehicle along a trajectory, enabling stable convergence to the path without oscillations. This method avoids direct dependency on derivative or integral feedback, reducing complexity in environments with sparse or noisy error signals.
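
To make the geometry concrete, here is a minimal pure pursuit sketch in Python. It illustrates the standard steering law rather than the authors' exact controller: given a lookahead point expressed in the robot frame, the commanded angular velocity follows from the curvature of the arc joining the vehicle to that point.

```python
import math

def pure_pursuit(target_x, target_y, v=0.2):
    """Compute (v, omega) steering toward a lookahead point.

    target_x, target_y: lookahead point in the robot frame [m]
                        (x forward, y to the left).
    v: forward speed [m/s].
    """
    lookahead = math.hypot(target_x, target_y)  # distance to the target point
    alpha = math.atan2(target_y, target_x)      # heading error to the target
    kappa = 2.0 * math.sin(alpha) / lookahead   # curvature of the connecting arc
    return v, v * kappa                         # angular velocity = v * curvature

# Example: a target 0.30 m ahead and 0.05 m to the left yields a gentle left turn.
print(pure_pursuit(0.30, 0.05))
```

A shorter lookahead tracks the lane more tightly but reacts more sharply, a tradeoff that motivates dynamic speed and turn handling.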

This project aims to implement a pure pursuit-based lane following system integrated with obstacle avoidance for autonomous Duckiebot navigation. The goal is to enable real-time tracking of lane centerlines while maintaining safety through detection and response to dynamic obstacles such as other Duckiebots or cones.

The pipeline includes a modified ground projection system, an adaptive pure pursuit controller for path tracking, and both image processing and deep learning-based object detection modules for obstacle recognition and avoidance.

The challenges and approach

The primary challenges in this project include robust target point estimation under variable lighting and environmental conditions, real-time object detection with limited computational resources, and smooth trajectory control in the presence of dynamic obstacles.

The approach involves modular integration of perception, planning, and control subsystems.

For perception, the system uses both classical image processing methods and a trained deep learning model for object detection, enabling redundancy and simulation compatibility.

For planning and control, the pure pursuit controller dynamically adjusts speed and steering based on the estimated target point and obstacle proximity. Target point estimation is achieved through ground projection, a transformation that maps image coordinates to real-world planar coordinates using a calibrated camera model. Real-time parameter tuning and feedback mechanisms are included to handle variations in frame rate and sensor noise.
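
As a sketch of that ground projection step, the homography below is a placeholder standing in for the matrix obtained from the Duckiebot's extrinsic camera calibration; it maps a detected pixel to planar coordinates in the robot frame.

```python
import numpy as np

# Placeholder homography (a real one comes from extrinsic calibration);
# it maps homogeneous pixel coordinates to the ground plane.
H = np.array([[ 0.0,   -0.002,   1.0 ],
              [-0.001,  0.0,     0.32],
              [ 0.0,   -0.0012,  1.0 ]])

def ground_project(u, v):
    """Map an image pixel (u, v) to (x, y) on the road plane, in meters."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # normalize homogeneous coordinates

x, y = ground_project(320, 400)       # a pixel near the bottom-center of the image
print(f"target point at x={x:.2f} m, y={y:.2f} m in the robot frame")
```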

Obstacle positions are also ground-projected and used to trigger stop conditions within a defined safety zone, ensuring collision avoidance through reactive control.

Pure Pursuit Lane Following with Obstacle Avoidance: Authors

Soroush Saryazdi is currently leading the Neural Networks team at Matic, supervised by Navneet Dalal.

Dhaivat Bhatt is currently working as a machine learning research engineer at the Samsung AI Centre, Toronto.

Autonomous Navigation System Development in Duckietown

Project Resources

Project highlights

Autonomous Navigation System Development in Duckietown - the objectives

The primary objective of this project is to develop and refine an Autonomous Navigation System within the Duckietown environment, leveraging ROS-based control and computer vision to enable reliable lane following and safe intersection navigation. This includes calibrating sensor inputs, particularly from the camera, IMU, and encoders, and integrating advanced algorithms such as Dijkstra's algorithm for optimal path planning. The project aims to ensure that the Duckiebot can autonomously detect lanes, stop lines, and obstacles while dynamically computing the shortest path to any designated point within the mapped environment. Additionally, the system is designed to transition smoothly between operational states (lane following, intersection handling, and recovery) using a refined Finite State Machine approach, all while maintaining robust communication within the ROS ecosystem.

Project Report

The challenges and approach

The project faced several challenges, beginning with hardware constraints, such as the physical limitations of wheel traction and battery lifespan, which affected motion stability and operational time. The integration of various ROS packages, some with incomplete documentation and inconsistent coding practices, complicated the development of a reliable and maintainable codebase.

The method adopted involved precise sensor calibration to ensure accurate perception and control, incorporating camera intrinsic and extrinsic calibration for improved visual data interpretation, and adjusting wheel parameters to maintain balanced motion. The lane following module required parameter tuning for gain, trim, and heading correction to adapt to Duckietown’s environment.

The original FSM-based intersection navigation system was re-engineered due to unreliability in node transitions and replaced with a distance-based approach for intersection stops and turns, ensuring deterministic and reliable behavior. Dijkstra’s algorithm was implemented to create a structured graph representation of the city map, enabling dynamic path planning that adapts to real-time inputs from the perception system, as sketched below. Custom web dashboards built with React.js and roslibjs facilitated monitoring and debugging by providing live data feedback and control interfaces. Through this rigorous and iterative process, the project achieved a robust autonomous navigation system capable of precise path planning and safe maneuvering within Duckietown.
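
As an illustration of the planning step, here is a minimal Dijkstra's shortest-path sketch over a toy tile graph; node names and edge costs are invented for the example and do not reflect the project's actual map.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted digraph {node: [(neighbor, cost), ...]}."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [goal], goal
    while node != start:                          # walk predecessors back
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy city graph: straights cost 1, turns cost more.
city = {"A": [("B", 1), ("C", 2)], "B": [("D", 3)], "C": [("D", 1)], "D": []}
print(dijkstra(city, "A", "D"))  # -> ['A', 'C', 'D'] (total cost 3)
```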

Autonomous Navigation System Development in Duckietown: Authors

Julien-Alexandre Bertin Klein is currently pursuing a Bachelor of Science (BSc) in Information Engineering at the Technical University of Munich, Germany.

Andrea Pellegrin is currently pursuing a Bachelor of Science (BSc) in Information Engineering at the Technical University of Munich, Germany.

Fathia Ismail is currently pursuing a Bachelor of Science (BSc) in Information Engineering at the Technical University of Munich, Germany.

Extended Kalman Filter (EKF) SLAM for Duckiebots

Project Resources

Project highlights

In SLAM, everything that can drift will drift, and the role of the filter is to drift more slowly than entropy.

Extended Kalman Filter (EKF) SLAM for Duckiebots - the objectives

This SLAM-Duckietown project addresses a classic challenge in robotics: concurrently estimating the agent’s pose and mapping the environment under uncertainty.

This project implements an Extended Kalman Filter (EKF) SLAM algorithm on Duckiebots (DB21-J4), combining odometry from wheel encoders and landmark observations from AprilTags.

The objective is to maintain an evolving posterior over the Duckiebot’s pose (x,y,θ) and landmark positions by recursively integrating noisy control inputs and observations.

This upgrade turns Duckiebots from open-loop dead-reckoning units into closed-loop, state-estimating agents. For Duckietown, it reinforces the platform’s use as an experimental ground for real-world robotics challenges, including data association, observability, filter consistency, and multi-sensor fusion.

The challenges and approach

The system applies the EKF-SLAM pipeline in two stages: motion prediction and measurement correction.

Prediction propagates the robot’s belief through a non-holonomic kinematic model under process noise, using arc-based interpolation to reduce discretization error.

Correction incorporates AprilTag detections via a Perspective-n-Point (PnP) solution, updating the state with landmark-relative observations under observation noise. The state vector grows dynamically as new landmarks are observed, and the covariance matrix tracks both robot and landmark uncertainty.
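
The two stages can be sketched as follows. For brevity, the state here is the robot pose only, with a known landmark and a range-bearing measurement model; the project's EKF-SLAM additionally appends each newly observed landmark to the state vector. All noise values are illustrative.

```python
import numpy as np

def ekf_predict(mu, Sigma, v, omega, dt, Q):
    """Propagate the pose belief (x, y, theta) through unicycle kinematics."""
    x, y, th = mu
    mu_pred = np.array([x + v * dt * np.cos(th),
                        y + v * dt * np.sin(th),
                        th + omega * dt])
    G = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # motion-model Jacobian
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return mu_pred, G @ Sigma @ G.T + Q

def ekf_update(mu, Sigma, z, landmark, R):
    """Correct the belief with a range-bearing observation of a landmark."""
    dx, dy = landmark - mu[:2]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],  # measurement Jacobian
                  [ dy / q,          -dx / q,          -1.0]])
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)        # Kalman gain
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    return mu + K @ innovation, (np.eye(3) - K @ H) @ Sigma

# One predict/correct cycle with illustrative noise levels.
mu, Sigma = np.zeros(3), np.eye(3) * 0.01
Q, R = np.eye(3) * 1e-4, np.diag([0.02, 0.01])
mu, Sigma = ekf_predict(mu, Sigma, v=0.2, omega=0.0, dt=0.1, Q=Q)
mu, Sigma = ekf_update(mu, Sigma, np.array([1.0, 0.05]), np.array([1.0, 0.05]), R)
print(mu)
```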

The technical challenges include maintaining filter consistency under linearization errors, ensuring landmark observability despite partial fields of view, and synchronizing asynchronous data from wheel encoders, camera frames, and Vicon ground-truth captures.

Moreover, AprilTag detection is constrained by lighting artifacts and pose ambiguity at shallow viewing angles, introducing non-Gaussian errors that the EKF must approximate linearly. 

Tuning noise parameters also presents the classical tradeoff: too little noise leads to overconfidence and divergence; too much leads to filter paralysis. Deployment exposes the systemic difference between simulation and physical experiments: real Duckiebots do not move with perfect kinematics, cameras suffer from radial distortion, and computation suffers from non-deterministic latency.

Extended Kalman Filter (EKF) SLAM for Duckiebots: Authors

AmirHossein Zamani is a former Duckietown student currently pursuing his Ph.D. in Computer Science at Mila (Quebec AI Institute) and Concordia University, Canada. He is also working as an AI Research Scientist Intern at Autodesk in Montreal, Canada.

Léonard Oest O’Leary is a former Duckietown student currently pursuing his Master of Science in Computer Science at the University of Montreal, Canada.

Kevin Lessard is a former Duckietown student currently pursuing his Master of Science in Machine Learning at Mila – Quebec AI Institute in Montreal, Canada.

Path Planning for Multi-Robot Navigation in Duckietown

Project Resources

Project highlights

Path planning for multi-robot navigation in Duckietown - the objectives

Navigating Duckietown should not feel like solving a maze blindfolded!

The “Goto-N” path planning algorithm gives Duckiebots the map, the plan, and the smarts to take the optimal path from here to there. By turning the map into a graph and every turn into a calculated choice, it replaces aimless wandering with deliberate navigation.

While Duckiebots have long been able to follow lanes and avoid obstacles, truly strategic navigation, thinking beyond the next tile, toward a distant goal, requires a higher level of reasoning. In a dynamic Duckietown, robots need more than instincts. They need a plan.

This project introduces a node-based path-planning system that represents Duckietown as a graph of interconnected positions. Using this abstraction, Duckiebots can evaluate both allowable and optimal routes, adapt to different goal positions, and plan their moves intelligently.

The Goto-N project integrates several key concepts:

  • Nodegraph representation: transforms the tile-based Duckietown map into a graph of quarter-tile nodes, capturing all possible robot positions and transitions.

  • Allowable and optimal move generation: differentiates between all legal movements and the most efficient moves toward a goal, supporting informed decision-making.

  • Termination-aware planning: computes optimal actions relative to a chosen destination, enabling precise goal-reaching behaviors.

  • Multi-robot scalability: validates the planner across one, two, and three Duckiebots to assess coordination, efficiency, and performance under shared conditions.

  • Real-world implementation and validation: demonstrates the effectiveness of Goto-N through trials in the Autolab, comparing planned movements to real robot behavior.

The challenges and approach

Navigating Duckietown poses several technical challenges: translating a continuous environment into a discrete planning space, handling edge cases like partial tile positions, and enabling efficient coordination among multiple autonomous agents.

The Goto-N project addresses these by discretizing the Duckietown map into a graph of ¼-tile resolution nodes, capturing all possible robot poses and orientations. 

Using this representation, the system classifies allowable moves based on physical constraints and tile connectivity, then computes optimal moves to minimize distance or steps to a termination node using heuristics and precomputed lookup tables.

A Python-based pipeline ingests the map layout, builds the nodegraph, and generates movement policies, which are then validated through simulated and physical trials. The system scales to multiple Duckiebots by assigning independent paths while analyzing overlap and bottlenecks in shared spaces, ensuring robust, efficient multi-robot planning.
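
A minimal sketch of the planning idea follows; the (tile, quadrant, heading) encoding and the toy moves are invented for illustration and are not the project's exact scheme. Distances to the termination node are precomputed with a reverse breadth-first search, acting as the lookup table from which the optimal move is read off.

```python
from collections import deque

def distances_to_goal(graph, goal):
    """BFS over reversed edges: steps from every node to the termination node."""
    reverse = {}
    for node, successors in graph.items():
        for s in successors:
            reverse.setdefault(s, []).append(node)
    dist, queue = {goal: 0}, deque([goal])
    while queue:
        node = queue.popleft()
        for pred in reverse.get(node, []):
            if pred not in dist:
                dist[pred] = dist[node] + 1
                queue.append(pred)
    return dist

def optimal_move(graph, node, dist):
    """Among the allowable successors, pick the one closest to the goal."""
    return min(graph[node], key=lambda s: dist.get(s, float("inf")))

# Toy nodegraph: nodes are (tile, quadrant, heading) states, edges are
# allowable moves given tile connectivity (values invented for the example).
graph = {
    ("T1", 0, "E"): [("T2", 0, "E"), ("T1", 1, "S")],
    ("T1", 1, "S"): [("T3", 0, "S")],
    ("T2", 0, "E"): [("T3", 0, "S")],
    ("T3", 0, "S"): [],
}
dist = distances_to_goal(graph, ("T3", 0, "S"))
print(optimal_move(graph, ("T1", 0, "E"), dist))   # -> ('T2', 0, 'E')
```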

Path planning (Goto-N) in Duckietown: full report

The design and implementation of this path planning algorithm is documented in the following report.

Path planning (Goto-N) in Duckietown: Authors

Alexander Hatteland is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Marc-Philippe Frey is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Demetris Chrysostomou is currently a PhD candidate at Delft University of Technology, Netherlands.

City Rescue: Autonomous Recovery System for Duckiebots

Project Resources

Project highlights

City rescue: autonomous recovery system for Duckiebots - the objectives

Would it not be desirable to have the city we drive in monitor our vehicles, like a guardian angel ready to intervene and offer autonomous recovery services in case of distress?

The “City Rescue” project is a first step towards enabling a continuous monitoring system based on traffic lights and watchtowers, the smart infrastructure of Duckietown, aimed at localizing and communicating with Duckiebots as they autonomously operate in town.

Despite the robust autonomy algorithms guiding the behavior of Duckiebots in Duckietown, distress situations such as lane departures, crashes, or stoppages might still happen. In these cases, human intervention is often necessary to reset experiments.

This project introduces an automated monitoring and rescue system that identifies distressed agents, classifies their distress state, and calculates and communicates corrective actions to restore Duckiebots to normal operation.

The City-Rescue project incorporates several key components to achieve autonomous monitoring and recovery of distressed Duckiebots:

  • Distress detection: classifies failure states such as lane departure, collision, and immobility using real-time localization data.

  • Lightweight real-time localization: implements a simplified localization system using AprilTags and watchtower cameras, optimizing computational efficiency for real-time tracking.

  • Decentralized rescue architecture: employs a central Rescue Center and multiple Rescue Agents, each dedicated to an individual Duckiebot, enabling simultaneous rescues.

  • Closed-loop control for recovery: uses a proportional-integral (PI) controller to execute corrective movements, bringing Duckiebots back to lane-following mode.

City Rescue is a great example of vehicle-to-infrastructure (V2I) interaction in Duckietown.

The challenges and approach

The City Rescue autonomous recovery system employs a server-based architecture, where a central “Rescue Center” continuously processes localization data and assigns rescue tasks to dedicated Rescue Agents.

The localization system uses appropriately placed reference AprilTags and watchtower cameras, tuned for low-latency operation by bypassing computationally expensive optimization routines. The rescue mechanism is driven by a PI controller, which calculates corrective movements based on deviations from an ideal trajectory.
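
A minimal PI controller sketch shows the structure of this recovery loop; the gains and the error sequence are illustrative, not the project's tuned values. The heading error relative to the ideal trajectory is driven toward zero, and the output becomes the angular-velocity command sent to the Duckiebot.

```python
class PIController:
    """Proportional-integral controller for a single error channel."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt          # accumulate steady-state error
        return self.kp * error + self.ki * self.integral

heading_pi = PIController(kp=2.0, ki=0.1, dt=0.1)
for error in [0.4, 0.3, 0.15, 0.05]:              # deviation shrinking per frame
    omega = heading_pi.step(error)
    print(f"omega command: {omega:.3f} rad/s")
```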

The main challenges in implementing this city behavior include localization inaccuracies, due to the limited coverage of watchtower cameras, and the accurate positioning of distress events on the map.

The localization inaccuracies are mitigated by performing camera calibration procedures on the watchtower cameras, as well as by performing an initial city offset calibration procedure. The success rate of the executed maneuvers varies with map topographical complexity; recovery from curved road or intersection sections is less reliable than from straight lanes.

Finally, the lack of inter-robot communication can lead to cascading failure scenarios when multiple Duckiebots collide.

City rescue: full report

The design and implementation of this autonomous recovery system is documented in the following report.

City rescue in Duckietown: Authors

Carl Philipp Biagosch is a co-founder of Mantis Ropeway Technologies, Switzerland.

Jason Hu is currently working as a Scientific Assistant at ETH Zurich, Switzerland.

Martin Xu is currently working as a data scientist at QuantCo, Germany.

Adaptive Lane Following with Auto-Trim Tuning

Project Resources

Project highlights

Calibration of sensors and actuators is always important when setting up robot systems, especially in the context of autonomous operations. Manually tweaking calibration parameters, though, is a nuisance, albeit a necessary one when every physical robot is slightly different from the next.

In this project, the authors developed a process to automatically calibrate the trim parameter of the Duckiebot, i.e., allowing it to go straight when an equal command is provided to both wheel motors.

Adaptive lane following in Duckietown: beyond manual odometry calibration

The objective of this project is to develop a process to autonomously calibrate the wheel trim parameter of Duckiebots, eliminating, or at least improving upon, manual tuning. Manual tuning of this parameter, as part of the odometry calibration procedure, is needed to account for the inevitable slight differences existing across Duckiebots, due to manufacturing, assembly, handling differences, etc.

Creating an automatic trim calibration procedure enhances the Duckiebot’s lane following behavior by continuously adjusting the wheel alignment based on real-time lane pose feedback. Duckiebots typically require manual odometry calibration, which introduces variability and reduces scalability in autonomous mobility experiments.

By implementing a Model-Reference Adaptive Control (MRAC) based approach, the project ensures consistent performance despite mechanical variations or external disturbances. This is desirable for large-scale Duckietown deployments where the robots need to maintain uniform behavior across different assemblies.

Adaptive control reduces dependence on predefined parameters, allowing Duckiebots to self-correct without external intervention. This enables more reproducible fleet-level performance, useful for research in autonomous navigation. This project supports experimentation in self-calibrating robotic systems through application of adaptive control research.

Model Reference Adaptive Control (MRAC) for adaptive lane following in Duckietown

The method employs a Model-Reference Adaptive Control (MRAC) framework that iteratively estimates the optimal trim value during lane following by processing lane pose feedback from the vision pipeline, and comparing expected and actual motion to compute a correction factor. An adaptation law updates the trim dynamically based on real-time error minimization.
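
As a hedged illustration of the idea, a simple gradient-style update (not the project's actual MRAC law, which compares motion against a reference model) nudges the trim estimate each cycle against the persistent lane-pose error:

```python
def update_trim(trim, lateral_error, heading_error, gamma=0.01):
    """One adaptation step: move the trim against the persistent drift.

    A persistent lane-pose error when commanding straight motion indicates
    wheel asymmetry; gamma sets how aggressively the estimate moves.
    Weights and gains are illustrative.
    """
    combined_error = lateral_error + 0.5 * heading_error
    return trim - gamma * combined_error

trim = 0.0
for d, phi in [(0.05, 0.10), (0.04, 0.08), (0.03, 0.05)]:  # lane filter feedback
    trim = update_trim(trim, d, phi)
print(f"estimated trim: {trim:.4f}")
```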

Pose estimation relies on a vision-based lane filter, which introduces latency and noise, affecting convergence stability. The adaptive controller must maintain stability while ensuring convergence to an optimal trim value within a finite time window. 

The performance of this approach is constrained by sensor inaccuracies, requiring threshold-based filtering to exclude unreliable pose data. The algorithm operates in real-world conditions where road surface variations, lighting changes, and mechanical wear affect performance. Synchronizing lane pose data with controller updates while minimizing computation delays is a key challenge, and ensuring that the adaptive controller does not introduce oscillations or instability in the control loop requires parameter tuning.

Adaptive lane following: full report

Check out the full report here. 

Adaptive lane following in Duckietown: Authors

Pietro Griffa is currently working as a Systems and Estimation Engineer at Verity, Switzerland.

Simone Arreghini is currently pursuing his Ph.D. at IDSIA USI-SUPSI, Switzerland.

Rohit Suri was a mentor on this project and is currently working as a Senior Research Scientist at Venti Technologies, Singapore.

Aleksandar Petrov was a mentor on this project and is currently pursuing his Ph.D. at the University of Oxford, United Kingdom.

Jacopo Tani was a supervisor on this project and is currently the CEO at Duckietown.

Flexible tether control in heterogeneous marsupial systems

Project Resources

Project highlights

Wouldn’t it be great to have a base station transfer power and data to other autonomous vehicles through a tethered connection? But how to deal with the challenges arising from controlling the length and tension of the tether?

Here is an overview of the authors’ results: 

Flexible tether control in Duckietown: objective and importance

Managing tethers effectively is an important challenge in autonomous robotic systems, especially in heterogeneous marsupial robot setups where multiple robots work together to achieve a task.

Tethers provide power and data connections between agents, but poor management can lead to tangling, restricted movement, or unnecessary strain.

This work implements a flexible tethering approach that balances slackness and tautness to improve system performance and reliability.

Using the Duckiebot DB21J as a test passenger agent, the study introduces a tether control system that adapts to different conditions, ensuring smoother operation and better resource sharing. By combining aspects of both taut and slacked tether models, this work contributes to making multi-robot systems more efficient and adaptable in various environments.

The method and challenges in implementing flexible tether control in Duckietown

The authors developed a custom-built spool mechanism that actively adjusts tether length using real-time sensor feedback.

To coordinate these adjustments, the system was implemented within a standard ROS-based framework, ensuring efficient data management.

To evaluate the system’s effectiveness, the authors tested different slackness and control gain parameters while the Duckiebot followed a predefined square path. By analyzing the spool’s reactivity and the consistency of the tether’s behavior, they assessed the system’s performance across varying conditions.
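
A minimal sketch of such a spool controller follows; it is a proportional law with invented parameter names and gains, not the authors' implementation. The spool is commanded to hold a target slack, reeling in when slack accumulates and paying out as the tether approaches tautness.

```python
def spool_command(paid_out_length, robot_distance, target_slack, k_gain=1.5):
    """Proportional spool-speed command holding a target tether slack.

    paid_out_length: tether currently off the spool [m] (from an encoder)
    robot_distance:  straight-line distance to the passenger robot [m]
    target_slack:    desired extra length beyond taut [m]
    Positive output pays out tether; negative output reels it in.
    """
    slack = paid_out_length - robot_distance
    return k_gain * (target_slack - slack)

# Slightly too much slack (0.3 m vs. a 0.2 m target): reel in.
print(spool_command(paid_out_length=2.3, robot_distance=2.0, target_slack=0.2))
```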

Several challenges emerged during testing. For example, maintaining the right balance of tether slackness was critical, as excess slack risked entanglement, while insufficient slack could restrict mobility.

Hardware limitations affected the spool’s responsiveness, requiring careful tuning of control parameters. Additionally, environmental factors, such as potential obstacles, underscored the need for a more adaptive control mechanism in future iterations.

Flexible tether control: full report

Check out the full report here. 

Flexible tether control in heterogeneous marsupial systems in Duckietown: Authors

Carson Duffy is a computer engineer who studied at Texas A&M University, USA.

Dr. Jason O’Kane is a faculty research advisor at Texas A&M. 

Deep Reinforcement Learning for Autonomous Lane Following

Project Resources

Project highlights

Here is a visual tour of the author’s work on implementing deep reinforcement learning for autonomous lane following in Duckietown.

Deep reinforcement learning for autonomous lane following in Duckietown: objective and importance

Would it not be great if we could train an end-to-end neural network in simulation, plug it into the physical robot, and have it drive safely on the road?

Inspired by this idea, Mickyas worked to implement deep reinforcement learning (DRL) for autonomous lane following in Duckietown, training the agent using sim-to-real transfer. 

The project focuses on training DRL agents, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC), to learn steering control using high-dimensional camera inputs. It integrates an autoencoder to compress image observations into a latent space, improving computational efficiency. 

The hope is for the trained DRL model to generalize from simulation to real-world deployment on a Duckiebot. This involves addressing domain adaptation, camera input variations, and real-time inference constraints, amongst other implementation challenges.

Autonomous lane following is a fundamental component of self-driving systems, requiring continuous adaptation to environmental changes, especially when using vision as the main sensing modality. This project identifies limitations in existing DRL algorithms when applied to real-world robotics, and explores modifications in reward functions, policy updates, and feature extraction methods, analyzing the results through real-world experimentation.

The method and challenges in implementing deep reinforcement learning in Duckietown

The method involves training a DRL agent in a simulated Duckietown environment (Gym Duckietown Simulator) using an autoencoder for feature extraction. 

The encoder compresses image data into a latent space, reducing input dimensions for policy learning. The agent receives sequential encoded frames as observations and optimizes steering actions based on reward-driven updates. The trained model is then transferred to a real Duckiebot using a ROS-based communication framework. 
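
As a sketch of the feature-extraction stage (the architecture and sizes are illustrative, not the author's exact network), a convolutional encoder in PyTorch compresses each camera frame into a compact latent vector, which the policy consumes in place of raw pixels; during training, a matching decoder reconstructs the frame so the latent space retains the road geometry.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Convolutional encoder: 64x64 RGB frame -> compact latent vector."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),    # 64 -> 31
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),   # 31 -> 14
            nn.Conv2d(64, 128, kernel_size=4, stride=2), nn.ReLU(),  # 14 -> 6
            nn.Flatten(),
        )
        self.fc = nn.Linear(128 * 6 * 6, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x))

encoder = FrameEncoder()
frames = torch.randn(4, 3, 64, 64)   # a batch of (normalized) camera frames
latents = encoder(frames)            # shape: (4, 32)
print(latents.shape)
```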

Challenges for pulling this off include accounting for discrepancies between simulated and real-world camera inputs, which affect performance and generalization. Differences in lighting, surface textures, and image normalization require domain adaptation techniques.

Moreover, computational limitations on the Duckiebot prevent direct onboard execution, requiring a distributed processing setup.

Reward shaping influences learning stability, and improper design of the reward function leads to policy exploitation or suboptimal behavior. Debugging DRL models is complex due to interdependencies between network architecture, exploration strategies, and training dynamics. 

The project addresses these challenges by refining preprocessing, incorporating domain randomization, and modifying policy structures.

Deep reinforcement learning for autonomous lane following: full report

Deep reinforcement learning for autonomous lane following in Duckietown: Authors

Mickyas Tamiru Asfaw is currently working as an AI Robotics and Innovation Engineer at the CESI lineact laboratory, France.

David Bertoin is currently working as a ML Applied Scientist at Photoroom, France.

Valentin Guillet is currently working as a Research engineer at IRT Saint Exupéry, France.

Visual Obstacle Detection using Inverse Perspective Mapping

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing visual obstacle detection in Duckietown.

Visual Obstacle Detection: objective and importance

This project aims to develop a visual obstacle detection system using inverse perspective mapping, with the goal of enabling autonomous systems to detect obstacles in real time using images from a monocular RGB camera. It focuses on identifying specific obstacles, such as yellow Duckies and orange cones, in Duckietown.

The system ensures safe navigation by avoiding obstacles within the vehicle’s lane or stopping when avoidance is not feasible. It does not utilize learning algorithms, prioritizing a hard-coded approach due to hardware constraints. The objective includes enhancing obstacle detection reliability under varying illumination and object properties.

It is intended to simulate realistic scenarios for autonomous driving systems. Key evaluation metrics include detection accuracy, false positives, and missed obstacles under diverse conditions.

The method and the challenges of visual obstacle detection using Inverse Perspective Mapping

The system processes images from a monocular RGB camera by applying inverse perspective mapping to generate a bird’s-eye view, assuming all pixels lie on the ground plane to simplify obstacle distortion detection. Obstacle detection involves HSV color filtering, image segmentation, and classification using eigenvalue analysis. The reaction strategies include trajectory planning or stopping based on the detected obstacle’s position and lane constraints.
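
A minimal OpenCV sketch of this pipeline follows; the homography points, HSV thresholds, and the synthetic test frame are illustrative placeholders, whereas real values come from camera calibration and tuning.

```python
import cv2
import numpy as np

# Synthetic camera frame standing in for the Duckiebot's view.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 380), 25, (0, 220, 255), -1)   # a "yellow duckie" (BGR)

# Inverse perspective mapping: pixel corners of a known ground rectangle
# (from extrinsic calibration; placeholders here) -> top-down coordinates.
src = np.float32([[220, 300], [420, 300], [600, 470], [40, 470]])
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])
H = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(frame, H, (400, 400))

# HSV color filtering for yellow obstacles (thresholds are illustrative),
# followed by segmentation into connected components.
hsv = cv2.cvtColor(birdseye, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
count, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, count):                    # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 50:      # ignore small specks
        print(f"obstacle at bird's-eye position {centroids[i]}")
```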

Computational efficiency is a significant challenge due to the hardware limitations of the Raspberry Pi, necessitating the avoidance of real-time re-computation of color corrections. Variability in lighting and motion blur impact detection reliability, while accurate calibration of camera parameters is essential for precise 3D obstacle localization. Integration of avoidance strategies faces additional challenges due to inaccuracies in pose estimation and trajectory planning.

Visual Obstacle Detection using Inverse Perspective Mapping: Full Report

Visual Obstacle Detection using Inverse Perspective Mapping: Authors

Julian Nubert is currently a Research Assistant & Doctoral Candidate at the Max Planck Institute for Intelligent Systems, Germany.

Niklas Funk is a Ph.D. student at Technische Universität Darmstadt, Germany.

Fabio Meier is currently working as the Head of Operational Data Intelligence at Sensirion Connected Solutions, Switzerland.

Fabrice Oehler is working as a Software Engineer at Sensirion, Switzerland.

Intersection Navigation in Duckietown Using 3D Image Features

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing intersection navigation using 3D image features in Duckietown.

Intersection Navigation in Duckietown: Advancing with 3D Image Features

Intersection navigation in Duckietown using 3D image features is an approach intended to improve autonomous intersection navigation, enhancing decision-making and path planning in complex Duckietown environments, i.e., those made of several road loops and intersections.

The traditional approach to intersection navigation in Duckietown is naive: (a) stop at the red line before the intersection; (b) read AprilTag-equipped traffic signs (providing information on the shape of, and coordination mechanism at, the intersection); (c) decide which direction to take; (d) coordinate with other vehicles at the intersection to avoid collisions; (e) navigate through the intersection. This last step is performed in an open-loop fashion, leveraging the known appearance specifications of intersections in Duckietown.

By incorporating 3D image features, extrapolated from the Duckietown road lines, into the perception pipeline, Duckiebots can maintain a representation of their pose while crossing the intersection, therefore closing the loop and improving navigation accuracy, in addition to facilitating the development of new strategies for intersection navigation, such as real-time path optimization.

Combining 3D image features with methods such as Bird’s Eye View (BEV) transformations allows for comprehensive representations of the intersection. The integration of these techniques improves the accuracy of stop line detection and obstacle avoidance, contributing to the advancement of autonomous navigation algorithms and supporting real-world deployment scenarios.

An AI-generated representation of Duckietown intersection navigation challenges

The method and the challenges of intersection navigation using 3D features

The thesis involves deploying the MILE model (Model-based Imitation LEarning for urban driving), trained in the CARLA simulator, in the Duckietown environment to evaluate its performance in navigating unprotected intersections.

Experiments were conducted using the Gym-Duckietown simulator, where Duckiebots navigated a 4-way intersection across multiple trajectories. Metrics such as success rate, drivable area compliance, and ride comfort were used to assess performance.

The findings indicate that while the MILE model achieved state-of-the-art performance in the CARLA simulator, its generalization to the Duckietown environment without additional training was, as probably expected due to the sim2real gap, limited.

The BEVs generated by MILE were not sufficiently representative of the actual road surface in Duckietown, leading to suboptimal navigation performance. In contrast, the homographic BEV method, despite its assumption of a flat world plane, provided more accurate representations for intersection navigation in this context.
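
For reference, under the flat-world assumption the homographic BEV reduces to a single matrix: for points on the road plane z = 0, the projection is pixel ~ K [r1 r2 t] [x, y, 1]^T, and inverting that matrix maps pixels back to planar coordinates. A toy numpy sketch with illustrative intrinsics and camera pose, verified by a round trip:

```python
import numpy as np

# Illustrative pinhole intrinsics and a camera pitched down at the road plane.
K = np.array([[320.0,   0.0, 320.0],
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])
pitch = np.deg2rad(20.0)
R = np.array([[1.0, 0.0,            0.0           ],
              [0.0, np.cos(pitch), -np.sin(pitch) ],
              [0.0, np.sin(pitch),  np.cos(pitch) ]])
t = np.array([0.0, 0.1, 0.3])                 # toy camera offset

# Ground-plane homography: columns are r1, r2, and t.
H = K @ np.column_stack((R[:, 0], R[:, 1], t))
H_inv = np.linalg.inv(H)                      # pixel -> road plane (the BEV map)

ground_pt = np.array([0.5, 1.0, 1.0])         # a point on the road plane
pixel = H @ ground_pt
pixel /= pixel[2]                             # normalize homogeneous pixel
recovered = H_inv @ pixel
print(recovered[:2] / recovered[2])           # -> [0.5, 1.0]
```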

As for most approaches in robotics, there are limitations and tradeoffs to analyze.

Here are some technical challenges of the proposed approach:

  • Generalization across environments: one of the challenges is ensuring that the 3D image feature representation generalizes well across different simulation environments, such as Duckietown and CARLA. The differences in scale, road structures, and dynamics between simulators can impact the performance of the navigation system.
  • Accuracy of BEV representations: the transformation of camera images into Bird’s Eye View (BEV) representations has reduced accuracy, especially when dealing with low-resolution or distorted input data.
  • Real-time processing: the integration of 3D image features for navigation requires substantially more computational resources than 2D features. Achieving near real-time processing speeds for tasks such as intersection navigation is challenging.

Intersection Navigation in Duckietown Using 3D Image Features: Full Report

Intersection Navigation in Duckietown Using 3D Image Features: Authors

Jasper Mulder is currently working as a Junior Outdoor expert at Bever, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.