Transformer Visual Control for Dynamic Obstacle Avoidance

General Information

Transformer Visual Control for Dynamic Obstacle Avoidance

This work details a transformer visual control approach for autonomous robotic obstacle avoidance in dynamic environments. It introduces the GAS-H-Trans model, which integrates a dual-coupling grouped aggregation strategy with transformer-based attention mechanisms. 

Key components of the approach include grouped spatial feature aggregation, Harris hawk optimization (HHO) for parameter tuning, and semantic segmentation for real-time visual perception. The output of the segmentation is used to compute potential fields for navigation. An artificial potential field (APF) method, further optimized using particle swarm optimization (PSO), enhances obstacle avoidance. The system was evaluated in Unity3D virtual environments and on datasets including KITTI and ImageNet. 
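As background for the navigation stage, here is a minimal sketch of a classical APF controller, not the paper's implementation: the attractive and repulsive gain coefficients (k_att, k_rep) are the parameters that PSO tunes in the proposed framework, and the numbers below are placeholders.

```python
import numpy as np

def apf_velocity(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=1.5):
    """Classical artificial potential field: attractive pull toward the goal plus
    repulsive push away from obstacles closer than the influence radius d0.
    k_att and k_rep are the gain coefficients a PSO search would tune."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    force = k_att * (goal - robot)                          # attractive term
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                                   # inside influence radius
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff   # repulsive term
    return force                                            # follow the negative gradient

# Usage: command direction for a robot at (0, 0) heading to (5, 5) past one obstacle.
v = apf_velocity(robot=(0, 0), goal=(5, 5), obstacles=[(2.5, 2.4)])
```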

The model architecture improves local and global feature extraction, enabling adaptive navigation. Simulation results demonstrate that GAS-H-Trans outperforms baseline models in segmentation accuracy and avoidance reliability. The implementation uses Transformer structures, self-attention, and heuristic optimization for enhanced environmental understanding.

Experiments using Duckietown-based simulations confirm that the proposed Transformer Visual Control strategy with GAS-H-Trans significantly improves obstacle avoidance reliability compared to typical approaches.

Highlights - Transformer Visual Control for Dynamic Obstacle Avoidance

Here is a visual tour of this work. For all the details, check out the full paper.

Abstract

In the author’s words:

Accurate obstacle recognition and avoidance are critical for ensuring the safety and operational efficiency of autonomous robots in dynamic and complex environments. Despite significant advances in deep-learning techniques in these areas, their adaptability in dynamic and complex environments remains a challenge. To address these challenges, we propose an improved Transformer-based architecture, GAS-H-Trans. 

This approach uses a grouped aggregation strategy to improve the robot’s semantic understanding of the environment and enhance the accuracy of its obstacle avoidance strategy. This method employs a Transformer-based dual-coupling grouped aggregation strategy to optimize feature extraction and improve global feature representation, allowing the model to capture both local and long-range dependencies. 

The Harris hawk optimization (HHO) algorithm is used for hyperparameter tuning, further improving model performance. A key innovation of applying the GAS-H-Trans model to obstacle avoidance tasks is the implementation of a secondary precise image segmentation strategy. By placing observation points near critical obstacles, this strategy refines obstacle recognition, thus improving segmentation accuracy and flexibility in dynamic motion planning. The particle swarm optimization (PSO) algorithm is incorporated to optimize the attractive and repulsive gain coefficients of the artificial potential field (APF) methods. 

This approach mitigates local minima issues and enhances the global stability of obstacle avoidance. Comprehensive experiments are conducted using multiple publicly available datasets and the Unity3D virtual robot environment. The results show that GAS-H-Trans significantly outperforms existing baseline models in image segmentation tasks, achieving the highest mIoU (85.2%). In virtual environment obstacle avoidance tasks, the GAS-H-Trans + PSO-optimized APF framework achieves an impressive obstacle avoidance success rate of 93.6%. These results demonstrate that the proposed approach provides superior performance in dynamic motion planning, offering a promising solution for real-world autonomous navigation applications.

Conclusion - Transformer Visual Control for Dynamic Obstacle Avoidance

Here is the author’s summary and overview of lessons learned from this work:

In this study, we proposed the GAS-H-Trans framework for image segmentation and dynamic obstacle avoidance in autonomous robots. The key contributions are summarized as follows. (1) Dual-coupling grouped aggregation strategy: A Transformer-based dual-coupling grouped aggregation method optimizes feature extraction and enhances global feature representation, thereby improving the model’s perception performance in dynamic motion planning. (2) Harris hawk optimization (HHO): The integration of the HHO algorithm into the GAS-Trans framework optimizes the number of Transformer layers and iterations, improving model accuracy and reducing computational costs. (3) PSO-optimized artificial potential field (APF): We integrated the PSO algorithm with APF to optimize the attractive and repulsive gain coefficients, addressing local minima issues and enhancing the global stability of the obstacle avoidance system. 

This study also proposes a secondary precise image segmentation strategy. By setting the observation points near critical obstacles for fine-tuned segmentation, the flexibility and accuracy of the segmentation model’s environmental perception are effectively enhanced, thereby improving the robot’s obstacle avoidance capabilities. 

Through the integration of PSO-optimized APF with image segmentation, the GAS-H-Trans + PSO-optimized APF framework demonstrated significant improvements in obstacle avoidance. In the experimental validation of this study, the obstacles remained static throughout the navigation process. Using this method, the autonomous robot dynamically adjusted its obstacle avoidance trajectory based on segmented environmental features. This integration significantly enhanced environmental perception capabilities and the accuracy of obstacle avoidance decisions, enabling more efficient navigation in static obstacle environments. 

Extensive experiments on publicly available datasets (Duckiebot, KITTI, ImageNet) and in the Unity3D virtual robot environment validate the effectiveness of the proposed framework. The GAS-H-Trans framework outperformed traditional models in image segmentation tasks, achieving the highest mIoU of 85.2%. Furthermore, in virtual obstacle avoidance experiments, the GAS-H-Trans + PSO-optimized APF framework achieved an obstacle avoidance success rate of 93.6%. 

These results effectively validate the proposed strategy, which combines secondary image segmentation from GAS-H-Trans with the PSO-optimized APF method, significantly improving obstacle avoidance performance in dynamic motion planning. Additionally, the GAS-H-Trans framework has the potential to be extended to fully dynamic environments by incorporating real-time object tracking and adaptive obstacle modeling. However, some limitations exist. The majority of the experiments were conducted in simulated environments, and future research will focus on validating the framework in real-world scenarios and improving real-time performance. 

Additionally, the integration of multi-modal sensor data (such as LiDAR and ultrasonic sensors) will be an important direction for future work to further enhance environmental perception and robustness. 

In conclusion, the new framework offers an innovative solution for autonomous robot obstacle avoidance in dynamic motion planning. Its powerful environmental perception and obstacle avoidance performance demonstrate significant potential for practical applications. With further optimization and real-world validation, this framework will play a crucial role in the future development of autonomous navigation and robotics technology.

Did this work spark your curiosity?

Project Authors

Yuhu Tang is affiliated with the School of Artificial Intelligence and Big Data, Hefei University, Hefei 230601, China.

Ying Bai is affiliated with the School of Artificial Intelligence and Big Data, Hefei University, Hefei 230601, China.

Qiang Chen is affiliated with the School of Electrical Engineering and Automation, National and Local Joint Engineering Laboratory for Renewable Energy Access to Grid Technology, Hefei University of Technology, Hefei, China, and with Hefei University, Hefei 230601, China.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Extended Kalman Filter (EKF) SLAM for Duckiebots

Project Resources

Project highlights

In SLAM, everything that can drift will drift, and the role of the filter is to drift more slowly than entropy.

Extended Kalman Filter (EKF) SLAM for Duckiebots - the objectives

This SLAM-Duckietown project addresses a foundational challenge in robotics: concurrently estimating the agent’s pose and mapping the environment under uncertainty.

This project implements an Extended Kalman Filter (EKF) SLAM algorithm on Duckiebots (DB21-J4), combining odometry from wheel encoders and landmark observations from AprilTags.

The objective is to maintain an evolving posterior over the Duckiebot’s pose (x,y,θ) and landmark positions by recursively integrating noisy control inputs and observations.

This upgrade shifts Duckiebots from open-loop dead reckoning units into closed-loop, state-estimating agents. For Duckietown, it reinforces its use as an experimental ground for real-world robotics challenges, including data association, observability, filter consistency, and multi-sensor fusion.

The challenges and approach

The system applies the EKF-SLAM pipeline in two stages: motion prediction and measurement correction.

Prediction propagates the robot’s belief through a non-holonomic kinematic model under process noise, using arc-based interpolation to reduce discretization error.

Correction incorporates AprilTag detections via a Perspective-n-Point (PnP) solution, updating the state with landmark-relative observations under observation noise. The state vector grows dynamically as new landmarks are observed, and the covariance matrix tracks both robot and landmark uncertainty.
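In code, the two stages might look like the sketch below: a minimal EKF-SLAM predict/correct pair assuming a unicycle motion model and range-bearing landmark observations. The Jacobians, noise matrices, and function names are illustrative assumptions; the project itself derives landmark observations from AprilTag PnP poses rather than range-bearing measurements.

```python
import numpy as np

def predict(mu, Sigma, v, w, dt, R_pose):
    """Motion prediction for state mu = [x, y, theta, l1x, l1y, ...]."""
    th = mu[2]
    mu = mu.copy()
    mu[0] += v * dt * np.cos(th)
    mu[1] += v * dt * np.sin(th)
    mu[2] += w * dt
    G = np.eye(len(mu))                        # Jacobian of the motion model
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] =  v * dt * np.cos(th)
    Sigma = G @ Sigma @ G.T
    Sigma[:3, :3] += R_pose                    # process noise on the pose block only
    return mu, Sigma

def correct(mu, Sigma, z, lm_idx, Q):
    """Range-bearing update for landmark lm_idx with measurement z = [r, phi]."""
    j = 3 + 2 * lm_idx
    dx, dy = mu[j] - mu[0], mu[j + 1] - mu[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])
    H = np.zeros((2, len(mu)))                 # measurement Jacobian
    H[0, 0], H[0, 1], H[0, j], H[0, j + 1] = -dx / r, -dy / r, dx / r, dy / r
    H[1, 0], H[1, 1], H[1, 2] = dy / q, -dx / q, -1.0
    H[1, j], H[1, j + 1] = -dy / q, dx / q
    S = H @ Sigma @ H.T + Q
    K = Sigma @ H.T @ np.linalg.inv(S)         # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing error
    mu = mu + K @ innov
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```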

The technical challenges include maintaining filter consistency under linearization errors, ensuring landmark observability despite partial fields of view, and synchronizing asynchronous data from wheel encoders, camera frames, and Vicon ground-truth captures.

Moreover, AprilTag detection is constrained by lighting artifacts and pose ambiguity at shallow viewing angles, introducing non-Gaussian errors that the EKF must approximate linearly. 

Tuning noise parameters presents the classical tradeoff: too little noise leads to overconfidence and divergence; too much noise leads to filter paralysis. Deployment exposes the systemic difference between simulation and physical experiments: real Duckiebots do not move with perfect kinematics, cameras suffer from radial distortion, and computation suffers from non-deterministic latency.

In SLAM, everything that can drift will drift, and the role of the filter is to drift more slowly than entropy.

Did this work spark your curiosity?

Extended Kalman Filter (EKF) SLAM for Duckiebots: Authors

AmirHossein Zamani is a former Duckietown student and is currently pursuing his Ph.D. in Computer Science at Mila (Quebec AI Institute) and Concordia University, Canada. He is also working as an AI Research Scientist Intern at Autodesk in Montreal, Canada.

Léonard Oest O’Leary is a former Duckietown student and is currently pursuing his Master of Science in Computer Science at the University of Montreal, Canada.

Kevin Lessard is a former Duckietown student and is currently pursuing his Master of Science in Machine Learning at Mila – Quebec AI Institute in Montreal, Canada.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

VAE-Based Out-of-Distribution Detectors for Embedded Deployment

VAE-Based Out-of-Distribution Detectors for Embedded Systems

General Information

VAE-Based Out-of-Distribution Detectors for Embedded Systems

Out-of-distribution (OOD) detection is essential for maintaining safety in machine learning systems, especially those operating in the real world. It helps identify inputs that differ significantly from the training data, which could lead to unexpected or unsafe behavior.

Variational Autoencoders (VAEs) are neural networks that compress input data into a smaller latent space (a compact set of features) and reconstruct the input from this compressed version.

In OOD detection, if the reconstruction fails or doesn’t fit the expected latent space, the input is flagged as unfamiliar, i.e., out-of-distribution. While VAEs are effective, they are computationally expensive, making them hard to deploy on small, embedded devices like Duckiebots.
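To make the mechanism concrete, here is a minimal sketch of a latent-space OOD score, not the detectors evaluated in the paper: the KL divergence between the encoder's posterior and the unit-Gaussian prior, thresholded at a quantile calibrated on in-distribution data. The `encoder` callable is a hypothetical stand-in for a trained VAE encoder returning posterior parameters.

```python
import numpy as np

def kl_to_prior(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def fit_threshold(encoder, in_dist_images, quantile=0.99):
    """Calibrate the OOD threshold on in-distribution data (encoder is assumed
    to return the posterior parameters mu, logvar for a batch of images)."""
    mu, logvar = encoder(in_dist_images)
    return np.quantile(kl_to_prior(mu, logvar), quantile)

def is_ood(encoder, image, threshold):
    """Flag an input as out-of-distribution if its latent-space score exceeds the threshold."""
    mu, logvar = encoder(image[None])          # add a batch dimension
    return kl_to_prior(mu, logvar)[0] > threshold
```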

To solve this challenge, building upon previous work (Embedded Out-of-Distribution Detection on an Autonomous Robot Platform), the researchers applied three model compression techniques:

  • Pruning: Removes low-importance weights or neurons to shrink and speed up the model.
  • Knowledge distillation: Trains a smaller “student” model to mimic a larger “teacher” model.
  • Quantization: Lowers numerical precision (e.g., from 32-bit to 8-bit) to save memory and improve speed.

Two VAE-based OOD detectors were evaluated:

  • β-VAE: A variant of VAE that learns more interpretable latent features (controlled by a parameter called β).
  • Optical Flow Detector: Analyzes how pixels move across video frames to detect unusual motion.

Both models were trained and tested using data collected in Duckietown and evaluated on the Area under the Receiver Operating Characteristic Curve (AUROC), which shows how well the model separates known from unknown inputs, as well as on memory footprint and execution latency. The compressed models achieved faster inference times, smaller memory usage, and only minor drops in detection accuracy.
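For illustration, pruning and dynamic quantization of a small PyTorch model might look like the sketch below. This uses generic torch utilities on a toy stand-in network, not the authors' compression pipeline or their convolutional VAE architectures.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a VAE encoder head; the real detectors use convolutional VAEs.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))

# Pruning: zero out the 30% lowest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")          # make the pruning permanent

# Quantization: store Linear weights as 8-bit integers for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Knowledge distillation (not shown) would train a smaller student model to
# match this teacher's latent codes before pruning and quantizing it.
```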

Highlights - VAE-Based Out-of-Distribution Detectors for Embedded Systems

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

Out-of-distribution (OOD) detectors can act as safety monitors in embedded cyber-physical systems by identifying samples outside a machine learning model’s training distribution to prevent potentially unsafe actions. However, OOD detectors are often implemented using deep neural networks, which makes it difficult to meet real-time deadlines on embedded systems with memory and power constraints. We consider the class of variational autoencoder (VAE) based OOD detectors where OOD detection is performed in latent space, and apply quantization, pruning, and knowledge distillation. 

These techniques have been explored for other deep models, but no work has considered their combined effect on latent space OOD detection. While these techniques increase the VAE’s test loss, this does not correspond to a proportional decrease in OOD detection performance and we leverage this to develop lean OOD detectors capable of real-time inference on embedded CPUs and GPUs. We propose a design methodology that combines all three compression techniques and yields a significant decrease in memory and execution time while maintaining AUROC for a given OOD detector. 

We demonstrate this methodology with two existing OOD detectors on a Jetson Nano and reduce GPU and CPU inference time by 20% and 28% respectively while keeping AUROC within 5% of the baseline.

Conclusion - VAE-Based Out-of-Distribution Detectors for Embedded Systems

Here are the conclusions from the author of this paper:

We explored different neural network compression techniques on β-VAE and optical flow OOD detectors using a mobile robot powered by a Jetson Nano. Based on our analysis of results for quantization, knowledge distillation, and pruning, we proposed a design strategy to find the model with the best execution time and memory usage while maintaining some accuracy metric for a given VAE-based OOD detector. We successfully demonstrated this methodology on an optical flow OOD detector and showed that our methodology’s ability to aggressively prune and compress a model is due to the unique attributes of VAE-based OOD detection. 

Despite our methodology’s good performance, it requires access to OOD samples at design time to act as a cross-validation set. In our case study, we assume OOD samples arise from a particular generating distribution, but this may not be the case in general. Furthermore, it only guides the search for a faster architecture, but does not guarantee the optimum result. Nevertheless, we believe having a design methodology that combines quantization, knowledge distillation, and pruning allows engineers to exploit the combined powers of these techniques instead of considering them individually.

Project Authors

Aditya Bansal is currently working as a Machine Learning Engineer at Adobe, United States.

Michael Yuhas is currently working as a Research Assistant at Nanyang Technological University, Singapore.

Arvind Easwaran is an Associate Professor at Nanyang Technological University, Singapore.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Path Planning for Multi-Robot Navigation in Duckietown

Project Resources

Project highlights

Path planning for multi-robot navigation in Duckietown - the objectives

Navigating Duckietown should not feel like solving a maze blindfolded!

The “Goto-N” path planning algorithm gives Duckiebots the map, the plan, and the smarts to take the optimal path from here to there without wandering around, by turning the map into a graph and every turn into a calculated choice.

While Duckiebots have long been able to follow lanes and avoid obstacles, truly strategic navigation, thinking beyond the next tile, toward a distant goal, requires a higher level of reasoning. In a dynamic Duckietown, robots need more than instincts. They need a plan.

This project introduces a node-based path-planning system that represents Duckietown as a graph of interconnected positions. Using this abstraction, Duckiebots can evaluate both allowable and optimal routes, adapt to different goal positions, and plan their moves intelligently.

The Goto-N project integrates several key concepts, including:

  • Nodegraph representation: transforms the tile-based Duckietown map into a graph of quarter-tile nodes, capturing all possible robot positions and transitions.

  • Allowable and optimal move generation: differentiates between all legal movements and the most efficient moves toward a goal, supporting informed decision-making.

  • Termination-aware planning: computes optimal actions relative to a chosen destination, enabling precise goal-reaching behaviors.

  • Multi-robot scalability: validates the planner across one, two, and three Duckiebots to assess coordination, efficiency, and performance under shared conditions.

  • Real-world implementation and validation: demonstrates the effectiveness of Goto-N through trials in the Autolab, comparing planned movements to real robot behavior.

The challenges and approach

Navigating Duckietown poses several technical challenges: translating a continuous environment into a discrete planning space, handling edge cases like partial tile positions, and enabling efficient coordination among multiple autonomous agents.

The Goto-N project addresses these by discretizing the Duckietown map into a graph of ¼-tile resolution nodes, capturing all possible robot poses and orientations. 

Using this representation, the system classifies allowable moves based on physical constraints and tile connectivity, then computes optimal moves to minimize distance or steps to a termination node using heuristics and precomputed lookup tables.

A Python-based pipeline ingests the map layout, builds the nodegraph, and generates movement policies, which are then validated through simulated and physical trials. The system scales to multiple Duckiebots by assigning independent paths while analyzing overlap and bottlenecks in shared spaces, ensuring robust, efficient multi-robot planning.
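As an illustration of node-based planning, the sketch below finds the shortest sequence of allowable moves on a toy nodegraph using breadth-first search. The node names and graph layout are invented for illustration; the actual Goto-N pipeline generates the nodegraph from the map layout and uses heuristics and precomputed lookup tables.

```python
from collections import deque

# Nodegraph: each node stands for a robot pose (tile, quarter-tile, heading);
# edges are allowable moves. This toy graph is written out by hand.
graph = {
    "A": ["B"],          # e.g., drive straight
    "B": ["C", "D"],     # e.g., continue straight or turn
    "C": ["goal"],
    "D": ["goal"],
    "goal": [],
}

def optimal_moves(graph, start, goal):
    """Breadth-first search returning the shortest sequence of allowable moves."""
    queue, parents = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for nxt in graph[node]:
            if nxt not in parents:               # not visited yet
                parents[nxt] = node
                queue.append(nxt)
    return None                                   # goal unreachable

print(optimal_moves(graph, "A", "goal"))          # e.g., ['A', 'B', 'C', 'goal']
```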

Path planning (Goto-n) in Duckietown: full report

The design and implementation of this path planning algorithm is documented in the following report.

Path planning (goto-n) in Duckietown: Authors

Alexander Hatteland is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Marc-Philippe Frey is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Demetris Chrysostomou is currently a PhD candidate at Delft University of Technology, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Semantic Image Segmentation Methods in Duckietown

General Information

Semantic Image Segmentation Methods in Duckietown

In Duckietown, where self-driving agents (i.e., Duckiebots) operate in structured environments, segmentation is essential for lane detection, object recognition, and obstacle avoidance. Semantic Image Segmentation assigns a class label to each pixel in an image, allowing autonomous systems to interpret their surroundings. 

This research evaluates four deep learning models – SegNet, U-Net, FC-DenseNet, and DeepLab-v3 – by comparing their efficiency, accuracy, and real-time applicability. Understanding the trade-offs between these models helps optimize perception for Duckiebots navigating Duckietown.

These models rely on Convolutional Neural Networks (CNNs) to extract hierarchical features. SegNet prioritizes memory efficiency, U-Net incorporates skip connections for improved localization, FC-DenseNet enhances feature reuse through dense connectivity, and DeepLab-v3 captures multi-scale context with atrous spatial pyramid pooling. Each model presents a balance between computational cost and segmentation accuracy, influencing its suitability for embedded systems like Duckiebots.

Implementing semantic segmentation in Duckietown enhances autonomy by enabling self-driving agents to interpret complex visual inputs. The selection of an appropriate segmentation model depends on processing constraints and real-time performance needs. By integrating optimized segmentation techniques, Duckiebots improve decision-making in structured environments.
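For reference, a generic example of running a pretrained DeepLab-v3 from torchvision on a single camera frame is sketched below. This assumes a recent torchvision release; it is a usage sketch only, and the input path and pretrained weights are not the models or Duckietown data used in the paper.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained DeepLab-v3; Duckietown classes (road, background, signs)
# would require fine-tuning on a Duckietown dataset.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("camera_frame.png").convert("RGB")    # hypothetical input frame
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]    # (1, num_classes, H, W)
labels = out.argmax(dim=1)[0]                           # per-pixel class indices
```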

Highlights - Semantic Image Segmentation Methods in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

The article focuses on evaluation of the applicability of existing semantic segmentation algorithms for the Duckietown simulator. Duckietown is an open research project in the field of autonomously controlled robots. The article explores classical semantic image segmentation algorithms. Their analysis for applicability in Duckietown is carried out.

With their help, we want to make a dataset for training neural networks. The following was investigated: edge-detection techniques, threshold algorithms, region growing, segmentation algorithms based on clustering, neural networks. The article also reviewed networks designed for semantic image segmentation and machine learning frameworks, taking into account all the limitations of the Duckietown simulator.

Experiments were conducted to evaluate the accuracy of semantic segmentation algorithms on such classes of Duckietown objects as road and background. Based on the results of the analysis, region growing algorithms and clustering algorithms were selected and implemented.

Experiments were conducted to evaluate the accuracy on such classes of Duckietown objects as road, background and traffic signs. After evaluating the accuracy of the algorithms considered, it was decided to use Color segmentation, Mean Shift, Thresholding algorithms and Segmentation of signs by April-tag for image preprocessing. For neural networks, experiments were conducted to evaluate the accuracy of semantic segmentation algorithms on such classes of Duckietown objects as road and background. After evaluating the accuracy of the algorithms considered, it was decided to select the DeepLab-v3 neural network. Separate module was created for semantic image segmentation in Duckietown.

Conclusion - Semantic Image Segmentation Methods in Duckietown

Here are the conclusions from the author of this paper:

The article analyzes the applicability of semantic segmentation algorithms in the Duckietown simulator, which simulates autopilot robots in an urban environment. 

It was found that methods based on classical computer vision algorithms are inferior to methods based on neural networks in terms of stability, segmentation accuracy and speed of operation. It was proposed to use classical computer vision algorithms for marking images and preparing datasets and neural networks for segmentation on robots. 

CV algorithms were selected taking into account the features of the Duckietown simulator. Thus, classical computer vision algorithms, such as region-growing algorithms and clustering algorithms, were chosen for image preprocessing. OpenCV and Scikit-image libraries were selected for the experiment. The best result during testing was obtained using MeanShift and cv2.threshold together, and road signs were segmented most successfully using AprilTags. 

Also, after testing the selected neural networks, it was decided to select the DeepLab-v3 neural network as an adapted semantic segmentation algorithm for the Duckietown simulator. After testing the trained DeepLab-v3 neural network model on Duckiebot, a separate module for semantic image segmentation was created in the Duckietown open research project. In the future, it is planned to add such classes of Duckietown objects as a duck in the role of a pedestrian, road markings (red, yellow, white) and Duckiebot.

Project Authors

Kristina S. Lanchukovskaya is affiliated with the department of IT, Novosibirsk State University, Novosibirsk, Russia.

Dasha E. Shabalina is affiliated with the department of IT, Novosibirsk State University, Novosibirsk, Russia.

Tatiana V. Liakh is a Senior Lecturer at the Department of Computer Science, Electrical and Space Engineering, Novosibirsk State University, Novosibirsk, Russia.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

City Rescue: Autonomous Duckiebot Recovery System

City Rescue: Autonomous Recovery System for Duckiebots

Project Resources

Project highlights

City rescue: autonomous recovery system for Duckiebots - the objectives

Would it not be desirable to have the city we drive in monitor our vehicles, like a guardian angel ready to intervene and offer autonomous recovery services in case of distress?

The “City Rescue” project is a first step toward a continuous monitoring system built on traffic lights and watchtowers, the smart infrastructure in Duckietown, aimed at localizing and communicating with Duckiebots as they autonomously operate in town.

Despite the robust autonomy algorithms guiding the behaviors of Duckiebots in Duckietowns, distress situations such as lane departures, crashes, or stoppages might still happen. In these cases, human intervention is often necessary to reset experiments.

This project introduces an automated monitoring and rescue system that identifies distressed agents, classifies their distress state, and calculates and communicates corrective actions to restore Duckiebots to normal operation.

The City-Rescue project incorporates several key components to achieve autonomous monitoring and recovery of distressed Duckiebots:

  • Distress detection: classifies failure states such as lane departure, collision, and immobility using real-time localization data.

  • Lightweight real-time localization: implements a simplified localization system using AprilTags and watchtower cameras, optimizing computational efficiency for real-time tracking.

  • Decentralized rescue architecture: employs a central Rescue Center and multiple Rescue Agents, each dedicated to an individual Duckiebot, enabling simultaneous rescues.

  • Closed-loop control for recovery: uses a proportional-integral (PI) controller to execute corrective movements, bringing Duckiebots back to lane-following mode.

City Rescue is a great example of vehicle-to-infrastructure (v2i) interactions in Duckietown.

The challenges and approach

The City Rescue autonomous recovery system employs a server-based architecture, where a central “Rescue Center” continuously processes localization data and assigns rescue tasks to dedicated Rescue Agents.

The localization system uses appropriately placed reference AprilTags and watchtower cameras, tuned for low-latency operation by bypassing computationally expensive optimization routines. The rescue mechanism is driven by a PI controller, which calculates corrective movements based on deviations from an ideal trajectory.
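As an illustration of the control idea, here is a minimal PI controller sketch; the gains, time step, and error signal are placeholders, not the tuned values used in the project.

```python
class PIController:
    """Minimal PI controller: drives the deviation from the ideal recovery
    trajectory to zero. Gains and the error signal are placeholders."""
    def __init__(self, kp=1.0, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Usage: turn a lateral offset (from watchtower localization) into a steering command.
controller = PIController()
steering = controller.step(error=0.12, dt=0.05)
```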

The main challenges in implementing this city behavior include localization inaccuracies, due to the limited coverage of watchtower cameras, and distress event positioning on the map.

The localization inaccuracies are mitigated by performing camera calibration procedures on the watchtower cameras, as well as by performing an initial city offset calibration procedure. The success rate of the executed maneuvers varies with the topographical complexity of the map; recovery from curved road sections or intersections is less reliable than from straight lanes.

Finally, the lack of inter-robot communication can lead to cascading failure scenarios when multiple Duckiebots collide.

City rescue: full report

The design and implementation of this autonomous recovery system is documented in the following report.

City rescue in Duckietown: Authors

Carl Philipp Biagosch is the co-founder at Mantis Ropeway Technologies, Switzerland.

Jason Hu is currently working as a Scientific Assistant at ETH Zurich, Switzerland.

Martin Xu is currently working as a data scientist at QuantCo, Germany.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Proxy Domains for Evaluation and Learning

General Information

Proxy Domains for Evaluation and Learning

Running robotics experiments in the real world is often costly in terms of time, money, and effort. For this reason, robotics development and testing rely on proxy domains (e.g., simulations) before real-world deployment. But how can the usefulness of a proxy domain in the development process be gauged, and are all domains equally useful? 

Intuitively, the answer to the above questions will depend on the type of robot, the task it has to achieve, and the environment in which it operates. Evaluating a proxy domain’s usefulness for a specific combination of these circumstances, specifically for the training of autonomous agents, is tackled in this work by establishing quantification metrics and assessing them in Duckietown.

The key aspects of this work are:

  • Proxy Usefulness Metrics: introduction of Proxy Relative Predictivity Value (PRPV) and Proxy Learning Value (PLV) to measure a proxy’s ability to predict real-world performance and aid agent learning. PRPV helps identify simulations that accurately predict real-world results, while PLV measures their effectiveness in training agents.

  • Prediction vs. Learning: differentiation of proxies used for accurate performance prediction from those for data generation in training.

  • Experiments: demonstration of how tuning proxy domain parameters (e.g., sensor delays, camera angle) affects predictivity and learning efficiency.

These metrics improve proxy selection and tuning for robotics research and education, and Duckietown enables rapid prototyping of these ideas for mobile autonomous vehicles. 
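To make the idea concrete, the sketch below ranks two candidate proxy domains by how well agent scores obtained in each proxy predict scores measured in the target domain. Using Spearman rank correlation as the predictivity measure is an assumption made for illustration; it is not the PRPV definition from the paper, and the numbers are invented.

```python
import numpy as np
from scipy.stats import spearmanr

# Scores of the same set of agents evaluated on one task in each domain (illustrative numbers).
target_scores = np.array([0.52, 0.61, 0.70, 0.74, 0.88])
proxy_scores = {
    "low_fidelity_sim":  np.array([0.40, 0.55, 0.58, 0.80, 0.79]),
    "high_fidelity_sim": np.array([0.50, 0.63, 0.68, 0.77, 0.90]),
}

# Rank proxies by how well they predict target-domain performance for this task.
for name, scores in proxy_scores.items():
    rho, _ = spearmanr(scores, target_scores)
    print(f"{name}: rank correlation with target = {rho:.2f}")
```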

Highlights - Proxy Domains for Evaluation and Learning in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

In many situations it is either impossible or impractical to develop and evaluate agents entirely on the target domain on which they will be deployed. This is particularly true in robotics, where doing experiments on hardware is much more arduous than in simulation. This has become arguably more so in the case of learning-based agents. To this end, considerable recent effort has been devoted to developing increasingly realistic and higher fidelity simulators. However, we lack any principled way to evaluate how good a “proxy domain” is, specifically in terms of how useful it is in helping us achieve our end objective of building an agent that performs well in the target domain. In this work, we investigate methods to address this need. We begin by clearly separating two uses of proxy domains that are often conflated: 1) their ability to be a faithful predictor of agent performance and 2) their ability to be a useful tool for learning. In this paper, we attempt to clarify the role of proxy domains and establish new proxy usefulness (PU) metrics to compare the usefulness of different proxy domains. We propose the relative predictive PU to assess the predictive ability of a proxy domain and the learning PU to quantify the usefulness of a proxy as a tool to generate learning data. Furthermore, we argue that the value of a proxy is conditioned on the task that it is being used to help solve. We demonstrate how these new metrics can be used to optimize parameters of the proxy domain for which obtaining ground truth via system identification is not trivial.

Conclusion - Proxy Domains for Evaluation and Learning in Duckietown

Here are the conclusions from the author of this paper:

“We introduce new metrics to assess the usefulness of proxy domains for agent learning. In a robotics setting it is common to use simulators for development and evaluation to reduce the need to deploy on real hardware. We argue that it is necessary to take into account the specific task when evaluating the usefulness of the proxy. We establish novel metrics for two specific uses of a proxy. When the proxy domain is used to predict performance in the target domain, we offer the PRPV to assess the usefulness of the proxy as a predictor, and we argue that the task needs to be imposed but not the agent. When a proxy is used to generate training data for a learning algorithm, we propose the PLV as a metric to assess usefulness of the source domain, which is dependent on a specific task and a learning algorithm. We demonstrated the use of these measures for predicting parameters in the Duckietown environment. Future work will involve more rigorous treatment of the optimization problems posed to find optimal parameters, possibly in connection with differentiable simulation environments.”

Project Authors

Anthony Courchesne is currently working as an MLOps Engineer at Maneva, Canada.

Andrea Censi is currently working as the Deputy Director, Chair of Dynamic Systems and Control at ETH Zurich, Switzerland.

Liam Paull is an Associate Professor at the Universite de Montreal, Canada and also serves as the Chief Education Officer at Duckietown.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Adaptive Lane Following with Auto-Trim Tuning

Project Resources

Project highlights

Calibration of sensors and actuators is always important when setting up robot systems, especially in the context of autonomous operations. Manual tweaking of calibration parameters, though, is a nuisance, albeit a necessary one, since every physical instance of a robot is slightly different from the others. 

In this project, the authors developed a process to automatically calibrate the trim parameter in the Duckiebot, i.e., allowing it to go straight when an equal command to both wheel motors is provided. 

Adaptive lane following in Duckietown: beyond manual odometry calibration

The objective of this project is to develop a process to autonomously calibrate the wheel trim parameter of Duckiebots, eliminating or improving upon manual tuning. Manual tuning of this parameter, as part of the odometry calibration procedure, is needed to account for the inevitable slight differences existing across different Duckiebots, due to manufacturing, assembly, handling differences, etc.

Creating an automatic trim calibration procedure enhances the Duckiebot’s lane following behavior by continuously adjusting the wheel alignment based on real-time lane pose feedback. Duckiebots typically require manual calibration of the odometry, which introduces variability and reduces scalability in autonomous mobility experiments. 

By implementing a Model-Reference Adaptive Control (MRAC) based approach, the project ensures consistent performance despite mechanical variations or external disturbances. This is desirable for large-scale Duckietown deployments where the robots need to maintain uniform behavior across different assemblies. 

Adaptive control reduces dependence on predefined parameters, allowing Duckiebots to self-correct without external intervention. This enables more reproducible fleet-level performance, useful for research in autonomous navigation. This project supports experimentation in self-calibrating robotic systems through application of adaptive control research.

Model Reference Adaptive Control (MRAC) for adaptive lane following in Duckietown

The method employs a Model-Reference Adaptive Control (MRAC) framework that iteratively estimates the optimal trim value during lane following by processing lane pose feedback from the vision pipeline, and comparing expected and actual motion to compute a correction factor. An adaptation law updates the trim dynamically based on real-time error minimization.
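A minimal sketch of the adaptation idea follows: a gradient-style online update of the trim from the lane-pose error. The update law, gain, variable names, and filtering condition are illustrative assumptions, not the project's actual adaptation law or tuned parameters.

```python
class TrimAdapter:
    """Adapt the wheel trim online from lane-pose feedback.
    The trim biases the commands between the two wheels to correct systematic drift."""
    def __init__(self, trim=0.0, gamma=0.05):
        self.trim = trim
        self.gamma = gamma                      # adaptation gain (placeholder value)

    def update(self, d_ref, d_meas, reliable=True):
        """d_ref / d_meas: expected vs. measured lateral offset in the lane."""
        if not reliable:                        # threshold-based filtering of noisy pose estimates
            return self.trim
        error = d_meas - d_ref
        self.trim -= self.gamma * error         # drive the tracking error toward zero
        return self.trim

# Usage: the robot keeps drifting to one side of the reference; the trim converges.
adapter = TrimAdapter()
for d in [0.06, 0.05, 0.03, 0.02]:
    trim = adapter.update(d_ref=0.0, d_meas=d)
```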

Pose estimation relies on a vision-based lane filter, which introduces latency and noise, affecting convergence stability. The adaptive controller must maintain stability while ensuring convergence to an optimal trim value within a finite time window. 

The performance of this approach is constrained by sensor inaccuracies, requiring threshold-based filtering to exclude unreliable pose data. The algorithm operates in real-world conditions where road surface variations, lighting changes, and mechanical wear affect performance. Synchronizing lane pose data with controller updates while minimizing computation delays is a key challenge, and ensuring that the adaptive controller does not introduce oscillations or instability in the control loop requires parameter tuning.

Adaptive lane following: full report

Check out the full report here. 

Adaptive lane following in Duckietown: Authors

Pietro Griffa is currently working as a Systems and Estimation Engineer at Verity, Switzerland.

Simone Arreghini is currently pursuing his Ph.D. at IDSIA USI-SUPSI, Switzerland.

Rohit Suri was a mentor on this project and is currently working as a Senior Research Scientist at Venti Technologies, Singapore.

Aleksandar Petrov was a mentor on this project and is currently pursuing his Ph.D. at the University of Oxford, United Kingdom.

Jacopo Tani was a supervisor on this project and is currently the CEO at Duckietown.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Ozgur Erkent: robotic rescue operations with Duckietown

Meet Ozgur Erkent, Assistant Professor at Hacettepe University’s Computer Engineering Department in Turkey, who is teaching and doing research with Duckietown.

Ankara, Turkey, January 2025: Prof. Ozgur Erkent shares how Duckietown is shaping robotics education at Hacettepe University. From hands-on learning in his Introduction to Robotics course, to real-world applications in rescue operations, he explains why he believes Duckietown is an invaluable tool for students exploring autonomous systems.

Bringing hands-on robotics to the classroom

At Hacettepe University, Professor Ozgur Erkent is using Duckietown in his curriculum and providing students with hands-on learning experiences that bridge theory and real-world applications. 

Good morning and welcome! Could you introduce yourself and your work?

My name is Ozgur Erkent and I am an Assistant Professor at Hacettepe University’s Computer Engineering Department. I have been here for nearly three years, focusing on mobile robots and autonomous vehicles. My work involves both teaching and research in these areas.

How did you first discover Duckietown?

I first heard about Duckietown while working as a researcher in France. A colleague returning from Colombia shared how undergraduates were using Duckiebots in their projects. That caught my interest, and when I joined Hacettepe University, I saw an opportunity to integrate it into my courses.

What course do you use Duckietown for, and what does it involve?

I use Duckietown in my Introduction to Robotics course, which is open to third- and fourth-year students in the Artificial Intelligence Engineering program. The course has a laboratory component where students work with Duckiebots and Duckiedrones to apply robotics concepts practically.

I also wrote a project funded by NVIDIA through the “Bridge To Turkiye Fund” that focuses on rescue robotics. After the devastating earthquake in Turkey two years ago, NVIDIA launched an initiative to support research aimed at disaster response. With NVIDIA as the sponsor, we were able to purchase the Duckiebots, Duckiedrones and related tools for the Robotics Lab course. I proposed a project that leverages Duckietown kits to train students in SLAM (Simultaneous Localization and Mapping), sensor integration, and autonomous navigation—key skills for robotics applications in search and rescue operations. Through this project, students may gain hands-on experience in developing robotic systems that could one day assist in real-world disaster relief efforts.

Robotics is more than just algorithms; it’s about solving real-world challenges. Duckietown helps students bridge that gap in a meaningful way.

How have students reacted to working with Duckietown?

Many students come from a software background, so working with real hardware is a new challenge. Some find it difficult at first, but those who enjoy hands-on work really thrive. They even help their peers with assembly and troubleshooting. It’s a valuable learning experience. If I were to design something for undergraduate students learning robotics, it would probably look a lot like Duckietown. I think it would be a great addition, as it would help students get hands-on experience with the basics of robotics.

If I were to design something for undergraduate students learning robotics, it would probably look a lot like Duckietown. I think it would be a great addition, as it would help students get hands-on experience with the basics of robotics.

Besides Duckiebots, are you using any other tools?

Yes, I have also introduced Duckiedrones, which are especially popular in Turkey. The national foundation supports drone projects, and students are eager to explore them. Several groups are already working on Duckiedrone-based initiatives.

What do you think about the Duckietown community and support?

The community is a big advantage. Universities considering Duckietown should definitely check out its forums and resources. The support available makes a big difference in implementing the platform effectively.

Any final thoughts?

I’m excited to see where these projects lead. Robotics is more than just algorithms; it’s about solving real-world challenges. Duckietown helps students bridge that gap in a meaningful way.

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!

Deep Reinforcement Learning for Agent-Based Autonomous Robot

Deep Reinforcement and Transfer Learning for Robot Autonomy

General Information

Deep Reinforcement and Transfer Learning for Robot Autonomy

Developing autonomous robotic systems is challenging. When using machine learning based approaches, one of the main challenges is the high cost and complexity of real-world training. Running real-world experiments is time-consuming and, depending on the application, can be expensive as well.

This work uses Deep Reinforcement Learning (DRL) and tackles this challenge through Transfer Learning (TL). DRL enables robots to learn optimal behaviors through trial-and-error, guided by reward-based feedback. Transfer Learning then addresses the high cost of generating training data by leveraging simulation environments.

Running experiments in simulation is time- and cost-efficient; the trained agent can then be deployed on a physical robot in a process known as Sim2Real transfer. Ideally, this approach significantly reduces training costs and accelerates real-world deployment.

In this work, training occurs in a simulated Duckietown environment using Deep Deterministic Policy Gradient (DDPG) and TL techniques to mitigate the expected difference between simulated and real-world environments. The resulting agent is then deployed on a custom-built robot in a physical Duckietown city for evaluation.
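For readers unfamiliar with DDPG, the sketch below compresses one actor-critic update step into PyTorch. The network sizes, optimizers, hyperparameters, and batch format are assumptions for illustration; this is not the authors' code, and a full agent would also need a replay buffer and exploration noise.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, out_act=nn.Identity()):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim), out_act)

obs_dim, act_dim = 64, 2                      # e.g., encoded camera features -> wheel commands
actor,  actor_tgt  = mlp(obs_dim, act_dim, nn.Tanh()), mlp(obs_dim, act_dim, nn.Tanh())
critic, critic_tgt = mlp(obs_dim + act_dim, 1), mlp(obs_dim + act_dim, 1)
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt  = torch.optim.Adam(actor.parameters(),  lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch, gamma=0.99, tau=0.005):
    obs, act, rew, next_obs, done = batch     # tensors sampled from a replay buffer
    # Critic: regress Q(s, a) toward the one-step bootstrapped target.
    with torch.no_grad():
        next_q = critic_tgt(torch.cat([next_obs, actor_tgt(next_obs)], dim=1))
        target = rew + gamma * (1 - done) * next_q
    q = critic(torch.cat([obs, act], dim=1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, p_tgt in zip(net.parameters(), tgt.parameters()):
            p_tgt.data.mul_(1 - tau).add_(tau * p.data)
```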

Results show that the DRL-based model successfully learns autonomous lane-following and navigation behaviors in simulation, and a performance comparison with real-world experiments is provided.  

Highlights - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

Real robots have different constraints, such as battery capacity limit, hardware cost, etc., which make it harder to train models and conduct experiments on physical robots. Transfer learning can be used to omit those constraints by training a self-driving system in a simulated environment, with a goal of running it later in a real world. Simulated environment should resemble a real one as much as possible to enhance transfer process. This paper proposes a specification of an autonomous robotic system using agent-based approach. It is modular and consists of various types of components (agents), which vary in functionality and purpose. 

Thanks to system’s general structure, it may be transferred to other environments with minimal adjustments to agents’ modules. The autonomous robotic system is implemented and trained in simulation and then transferred to real robot and evaluated on a model of a city. A two-wheeled robot uses a single camera to get observations of the environment in which it operates. Those images are then processed and given as an input to the deep neural network, that predicts appropriate action in the current state. Additionally, the simulator provides a reward for each action, which is used by the reinforcement learning algorithm to optimize weights in the neural network, in order to improve overall performance.

Conclusion - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here are the conclusions from the author of this paper:

“After several breakthroughs in the field of Deep Reinforcement Learning, it became one of the most popular researched topics in Machine Learning and a common approach to the problem of autonomous driving. This paper presents the process of training an autonomous robotic system using popular actor-critic algorithm in the simulator, which may then also be run on real robot. It was possible to train an agent in real-time using trial-and-error approach without the need to collect vast amounts of labeled data. The neural network learned how to control the robot and how to follow the lanes, without any explicit guidelines. Only a few functions have been used to transform the data sent between environment and the agent, in order to make the learning process smoother and faster. 

For evaluation purposes, a real robot and a small city model have been built, based on the Duckietown platform specification. This hardware has been used to evaluate in the real world the performance of the system, trained in simulator. Also, additional Transfer Learning techniques were used, in order to adjust the observations and actions in the real robot, due to the differences with simulated environment. Although, the performance in real environment was worse than in simulator, certain trained models were still able to guide the robot around a simple road loop, which shows a potential for such approach. As a result, the use of the simulator greatly reduced the time and effort needed to train the system, and transfer methods were used to deploy it in the real world. 

The Duckietown platform provides a baseline, which was modified and refactored to follow the system structure. The simulator and its components are thoroughly documented, the detailed instructions explain how to train and run the robot both in simulation and in real world and evaluate the results. Duckietown provides complete sets of parts, necessary to build the robot and small city, however, it was decided to build custom robot, according to the guidelines. The robot uses a single camera to get observations of the surrounding environment. 

The reinforcement learning algorithm was used to learn a policy, which tries to choose optimal actions based on those observations with the help of reward function, that provides a feedback for previous decisions. It was possible to significantly reduce the effort required to train a model, thanks to the simulator, as the process does not require constant human supervision and involvement. Such approach proves to be very promising, as the agent learned how to do the lane-following task without any explicit labels, and has shown good performance in the simulated environment. Although, there is still a room for improvement, when it comes to transferring the model to real world, which requires various adaptations and adjustments to be made for the robot to properly execute maneuvers and show stability in its actions.”

Project Authors

Vladyslav Kyryk is currently working as a Data Scientist at Finitec, Warsaw, Poland.

Maksym Figat is working as an Assistant Professor at Warsaw University of Technology, Poland.

Maryan Kyryk is currently serving as the Co-Founder & CEO at Maxitech, Ukraine.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.