VAE-Based Out-of-Distribution Detectors for Embedded Systems

General Information

Out-of-distribution (OOD) detection is essential for maintaining safety in machine learning systems, especially those operating in the real world. It helps identify inputs that differ significantly from the training data, which could lead to unexpected or unsafe behavior.

Variational Autoencoders (VAEs) are neural networks that compress input data into a smaller latent space (a compact set of features) and reconstruct the input from this compressed representation.

In OOD detection, if the reconstruction fails or doesn’t fit the expected latent space, the input is flagged as unfamiliar, i.e., out-of-distribution. While VAEs are effective, they are computationally expensive, making them hard to deploy on small, embedded devices like Duckiebots.

To solve this challenge, building upon previous work (Embedded Out-of-Distribution Detection on an Autonomous Robot Platform), the researchers applied three model compression techniques (see the sketch after this list):

  • Pruning: Removes low-importance weights or neurons to shrink and speed up the model.
  • Knowledge distillation: Trains a smaller “student” model to mimic a larger “teacher” model.
  • Quantization: Lowers numerical precision (e.g., from 32-bit to 8-bit) to save memory and improve speed.
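
To make these concrete, here is a minimal PyTorch sketch applying all three techniques to a toy encoder. This is an illustration under assumed layer sizes and hyperparameters, not the authors' implementation:

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    teacher = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 32))
    student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 32))

    # 1) Pruning: zero out the 30% lowest-magnitude weights of the first layer.
    prune.l1_unstructured(teacher[0], name="weight", amount=0.3)

    # 2) Knowledge distillation: one training step where the smaller "student"
    #    mimics the larger "teacher" on a stand-in batch of flattened images.
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(128, 784)
    with torch.no_grad():
        target = teacher(x)                      # teacher's latent encoding
    loss = nn.functional.mse_loss(student(x), target)
    opt.zero_grad(); loss.backward(); opt.step()

    # 3) Quantization: store weights as 8-bit integers instead of 32-bit floats.
    quantized = torch.quantization.quantize_dynamic(
        student, {nn.Linear}, dtype=torch.qint8
    )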

Two VAE-based OOD detectors were evaluated:

  • β-VAE: A variant of VAE that learns more interpretable latent features (controlled by a parameter called β).
  • Optical Flow Detector: Analyzes how pixels move across video frames to detect unusual motion.

Both models were trained and tested on data collected in Duckietown and evaluated on three metrics: Area Under the Receiver Operating Characteristic Curve (AUROC), which shows how well a model separates known from unknown inputs; memory footprint; and execution latency. The compressed models achieved faster inference times, smaller memory usage, and only minor drops in detection accuracy.
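
For intuition, AUROC can be computed from per-input anomaly scores (e.g., reconstruction or latent-space errors). A small scikit-learn sketch with synthetic stand-in scores, not the paper's data:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    errors_id = rng.random(100) * 0.2          # stand-in: low error in-distribution
    errors_ood = rng.random(100) * 0.2 + 0.3   # stand-in: higher error for OOD
    scores = np.concatenate([errors_id, errors_ood])
    labels = np.concatenate([np.zeros(100), np.ones(100)])  # 1 = OOD

    print("AUROC:", roc_auc_score(labels, scores))  # 1.0 = perfect separation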

Highlights - VAE-Based Out-of-Distribution Detectors for Embedded Systems

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

Out-of-distribution (OOD) detectors can act as safety monitors in embedded cyber-physical systems by identifying samples outside a machine learning model’s training distribution to prevent potentially unsafe actions. However, OOD detectors are often implemented using deep neural networks, which makes it difficult to meet real-time deadlines on embedded systems with memory and power constraints. We consider the class of variational autoencoder (VAE) based OOD detectors where OOD detection is performed in latent space, and apply quantization, pruning, and knowledge distillation. 

These techniques have been explored for other deep models, but no work has considered their combined effect on latent space OOD detection. While these techniques increase the VAE’s test loss, this does not correspond to a proportional decrease in OOD detection performance and we leverage this to develop lean OOD detectors capable of real-time inference on embedded CPUs and GPUs. We propose a design methodology that combines all three compression techniques and yields a significant decrease in memory and execution time while maintaining AUROC for a given OOD detector. 

We demonstrate this methodology with two existing OOD detectors on a Jetson Nano and reduce GPU and CPU inference time by 20% and 28% respectively while keeping AUROC within 5% of the baseline.

Conclusion - VAE-Based Out-of-Distribution Detectors for Embedded Systems

Here are the conclusions from the authors of this paper:

We explored different neural network compression techniques on β-VAE and optical flow OOD detectors using a mobile robot powered by a Jetson Nano. Based on our analysis of results for quantization, knowledge distillation, and pruning, we proposed a design strategy to find the model with the best execution time and memory usage while maintaining some accuracy metric for a given VAE-based OOD detector. We successfully demonstrated this methodology on an optical flow OOD detector and showed that our methodology’s ability to aggressively prune and compress a model is due to the unique attributes of VAE-based OOD detection. 

Despite our methodology’s good performance, it requires access to OOD samples at design time to act as a cross-validation set. In our case study, we assume OOD samples arise from a particular generating distribution, but this may not be the case in general. Furthermore, it only guides the search for a faster architecture, but does not guarantee the optimum result. Nevertheless, we believe having a design methodology that combines quantization, knowledge distillation, and pruning allows engineers to exploit the combined powers of these techniques instead of considering them individually.

Project Authors

Aditya Bansal is currently working as a Machine Learning Engineer at Adobe, United States.

Michael Yuhas is currently working as a Research Assistant at Nanyang Technological University, Singapore.

Arvind Easwaran is an Associate Professor at Nanyang Technological University, Singapore.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Path Planning for Multi-Robot Navigation in Duckietown

Project Resources

Project highlights

Path planning for multi-robot navigation in Duckietown - the objectives

Navigating Duckietown should not feel like solving a maze blindfolded!

The “Goto-N” path planning algorithm gives Duckiebots the map, the plan, and the smarts to take the optimal path from here to there without wandering around: it turns the map into a graph and every turn into a calculated choice.

While Duckiebots have long been able to follow lanes and avoid obstacles, truly strategic navigation, thinking beyond the next tile, toward a distant goal, requires a higher level of reasoning. In a dynamic Duckietown, robots need more than instincts. They need a plan.

This project introduces a node-based path-planning system that represents Duckietown as a graph of interconnected positions. Using this abstraction, Duckiebots can evaluate both allowable and optimal routes, adapt to different goal positions, and plan their moves intelligently.

The Goto-N project integrates several key concepts (see the planning sketch after this list):

  • Nodegraph representation: transforms the tile-based Duckietown map into a graph of quarter-tile nodes, capturing all possible robot positions and transitions.

  • Allowable and optimal move generation: differentiates between all legal movements and the most efficient moves toward a goal, supporting informed decision-making.

  • Termination-aware planning: computes optimal actions relative to a chosen destination, enabling precise goal-reaching behaviors.

  • Multi-robot scalability: validates the planner across one, two, and three Duckiebots to assess coordination, efficiency, and performance under shared conditions.

  • Real-world implementation and validation: demonstrates the effectiveness of Goto-N through trials in the Autolab, comparing planned movements to real robot behavior.
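
A minimal planning sketch of this idea, assuming a networkx-style directed graph; the node names and moves below are hypothetical, not the project's actual nodegraph:

    import networkx as nx

    G = nx.DiGraph()
    # Hypothetical quarter-tile nodes; directed edges are "allowable moves".
    G.add_edge(("A", 0), ("A", 1))   # advance a quarter tile
    G.add_edge(("A", 1), ("B", 0))   # cross into the next tile
    G.add_edge(("A", 1), ("C", 0))   # alternative legal turn
    G.add_edge(("C", 0), ("B", 0))

    # "Optimal moves" = fewest transitions to the termination node.
    path = nx.shortest_path(G, source=("A", 0), target=("B", 0))
    print(path)   # [('A', 0), ('A', 1), ('B', 0)]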

The challenges and approach

Navigating Duckietown poses several technical challenges: translating a continuous environment into a discrete planning space, handling edge cases like partial tile positions, and enabling efficient coordination among multiple autonomous agents.

The Goto-N project addresses these by discretizing the Duckietown map into a graph of ¼-tile resolution nodes, capturing all possible robot poses and orientations. 

Using this representation, the system classifies allowable moves based on physical constraints and tile connectivity, then computes optimal moves to minimize distance or steps to a termination node using heuristics and precomputed lookup tables.

A Python-based pipeline ingests the map layout, builds the nodegraph, and generates movement policies, which are then validated through simulated and physical trials. The system scales to multiple Duckiebots by assigning independent paths while analyzing overlap and bottlenecks in shared spaces, ensuring robust, efficient multi-robot planning.

Path planning (Goto-n) in Duckietown: full report

The design and implementation of this path planning algorithm is documented in the following report.

Path planning (goto-n) in Duckietown: Authors

Alexander Hatteland is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Marc-Philippe Frey is currently working as a Consultant at Boston Consulting Group (BCG), Switzerland.

Demetris Chrysostomou is currently a PhD candidate at Delft University of Technology, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Semantic Image Segmentation Methods in Duckietown

General Information

In Duckietown, where self-driving agents (i.e., Duckiebots) operate in structured environments, segmentation is essential for lane detection, object recognition, and obstacle avoidance. Semantic Image Segmentation assigns a class label to each pixel in an image, allowing autonomous systems to interpret their surroundings. 

This research evaluates four deep learning models – SegNet, U-Net, FC-DenseNet, and DeepLab-v3 – by comparing their efficiency, accuracy, and real-time applicability. Understanding the trade-offs between these models helps optimize perception for Duckiebots navigating Duckietown.

These models rely on Convolutional Neural Networks (CNNs) to extract hierarchical features. SegNet prioritizes memory efficiency, U-Net incorporates skip connections for improved localization, FC-DenseNet enhances feature reuse through dense connectivity, and DeepLab-v3 captures multi-scale context with atrous spatial pyramid pooling. Each model presents a balance between computational cost and segmentation accuracy, influencing its suitability for embedded systems like Duckiebots.
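
As a sketch of what running one of these models looks like in practice, here is a torchvision DeepLab-v3 inference snippet (assuming a recent torchvision; the paper trains on Duckietown-specific classes such as road and background, so the pretrained weights and input frame below are placeholders):

    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("duckietown_frame.png").convert("RGB")  # hypothetical frame
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))["out"]     # (1, classes, H, W)
    mask = out.argmax(dim=1)                                 # per-pixel class ids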

Implementing semantic segmentation in Duckietown enhances autonomy by enabling self-driving agents to interpret complex visual inputs. The selection of an appropriate segmentation model depends on processing constraints and real-time performance needs. By integrating optimized segmentation techniques, Duckiebots improve decision-making in structured environments.

Highlights - Semantic Image Segmentation Methods in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

The article focuses on evaluation of the applicability of existing semantic segmentation algorithms for the Duckietown simulator. Duckietown is an open research project in the field of autonomously controlled robots. The article explores classical semantic image segmentation algorithms. Their analysis for applicability in Duckietown is carried out.

With the help of them, we want to make a dataset for training neural networks. The following was investigated: edge-detection techniques, threshold algorithms, region growing, segmentation algorithms based on clustering, neural networks. The article also reviewed networks designed for semantic image segmentation and machine learning frameworks, taking into account all the limitations of the Duckietown simulator.

Experiments were conducted to evaluate the accuracy of semantic segmentation algorithms on such classes of Duckietown objects as road and background. Based on the results of the analysis, region growing algorithms and clustering algorithms were selected and implemented.

Experiments were conducted to evaluate the accuracy on such classes of Duckietown objects as road, background and traffic signs. After evaluating the accuracy of the algorithms considered, it was decided to use Color segmentation, Mean Shift, Thresholding algorithms and Segmentation of signs by April-tag for image preprocessing. For neural networks, experiments were conducted to evaluate the accuracy of semantic segmentation algorithms on such classes of Duckietown objects as road and background. After evaluating the accuracy of the algorithms considered, it was decided to select the DeepLab-v3 neural network. Separate module was created for semantic image segmentation in Duckietown.

Conclusion - Semantic Image Segmentation Methods in Duckietown

Here are the conclusions from the authors of this paper:

The article analyzes the applicability of semantic segmentation algorithms in the Duckietown simulator, which simulates autopilot robots in an urban environment. 

It was found that methods based on classical computer vision algorithms are inferior to methods based on neural networks in terms of stability, segmentation accuracy and speed of operation. It was proposed to use classical computer vision algorithms for marking images and preparing datasets and neural networks for segmentation on robots. 

The CV algorithms were selected taking into account the features of the Duckietown simulator. Thus, classical computer vision algorithms, such as region-growing algorithms and clustering algorithms, were chosen for image preprocessing. OpenCV and Scikit-image libraries were selected for the experiment. The best result during testing was obtained using MeanShift and cv2.threshold together, and road signs were segmented most successfully using April tags.

Also, after testing the selected neural networks, it was decided to select the DeepLab-v3 neural network as an adapted semantic segmentation algorithm for the Duckietown simulator. After testing the trained DeepLab-v3 neural network model on Duckiebot, a separate module for semantic image segmentation was created in the Duckietown open research project. In the future, it is planned to add such classes of Duckietown objects as a duck in the role of a pedestrian, road markings (red, yellow, white) and Duckiebot.

Project Authors

Kristina S. Lanchukovskaya is affiliated with the department of IT, Novosibirsk State University, Novosibirsk, Russia.

Dasha E. Shabalina is affiliated with the department of IT, Novosibirsk State University, Novosibirsk, Russia.

Tatiana V. Liakh is a Senior Lecturer at the Department of Computer Science, Electrical and Space Engineering, Novosibirsk State University, Novosibirsk, Russia.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

City Rescue: Autonomous Recovery System for Duckiebots

Project Resources

Project highlights

City rescue: autonomous recovery system for Duckiebots - the objectives

Would it not be desirable to have the city we drive in monitor our vehicle, like a guardian angel ready to intervene and offer autonomous recovery services in case of distress?

The “City Rescue” project is a first step towards a continuous monitoring system based on traffic lights and watchtowers, the smart infrastructure of Duckietown, aimed at localizing and communicating with Duckiebots as they autonomously operate in town.

Despite the robust autonomy algorithms guiding the behavior of Duckiebots in Duckietown, distress situations such as lane departures, crashes, or stoppages might happen. In these cases, human intervention is often necessary to reset experiments.

This project introduces an automated monitoring and rescue system that identifies distressed agents, classifies their distress state, and calculates and communicates corrective actions to restore Duckiebots to normal operation.

The City-Rescue project incorporates several key components to achieve autonomous monitoring and recovery of distressed Duckiebots:

  • Distress detection: classifies failure states such as lane departure, collision, and immobility using real-time localization data.

  • Lightweight real-time localization: implements a simplified localization system using AprilTags and watchtower cameras, optimizing computational efficiency for real-time tracking.

  • Decentralized rescue architecture: employs a central Rescue Center and multiple Rescue Agents, each dedicated to an individual Duckiebot, enabling simultaneous rescues.

  • Closed-loop control for recovery: uses a proportional-integral (PI) controller to execute corrective movements, bringing Duckiebots back to lane-following mode.

City Rescue is a great example of vehicle-to-infrastructure (v2i) interactions in Duckietown.

The challenges and approach

The City Rescue autonomous recovery system employs a server-based architecture, where a central “Rescue Center” continuously processes localization data and assigns rescue tasks to dedicated Rescue Agents.

The localization system uses appropriately placed reference AprilTags and watchtower cameras, tuned for low-latency operation by bypassing computationally expensive optimization routines. The rescue mechanism is driven by a PI controller, which calculates corrective movements based on deviations from an ideal trajectory.
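
A bare-bones sketch of such a PI correction loop (the gains and error signal here are illustrative assumptions, not the project's tuned values):

    class PIController:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def step(self, error):
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    heading_pi = PIController(kp=2.0, ki=0.5, dt=0.05)
    # error: angular deviation from the ideal rescue trajectory,
    # estimated from watchtower localization
    omega = heading_pi.step(error=0.3)   # corrective angular-velocity command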

The main challenges in implementing this city behavior include localization inaccuracies, due to the limited coverage of watchtower cameras, and distress event positioning on the map.

The localization inaccuracies are mitigated by performing camera calibration procedures on the watchtower cameras, as well as by performing an initial city offset calibration procedure. The success rate of the executed maneuvers varies with map topographical complexity; recovery from curved road or intersection sections is less reliable than from straight lanes.

Finally, the lack of inter-robot communication can lead to cascading failure scenarios when multiple Duckiebots collide.

City rescue: full report

The design and implementation of this autonomous recovery system is documented in the following report.

City rescue in Duckietown: Authors

Carl Philipp Biagosch is a co-founder of Mantis Ropeway Technologies, Switzerland.

Jason Hu is currently working as a Scientific Assistant at ETH Zurich, Switzerland.

Martin Xu is currently working as a data scientist at QuantCo, Germany.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Proxy Domains for Evaluation and Learning

General Information

Running robotics experiments in the real world is often costly in terms of time, money, and effort. For this reason, robotics development and testing rely on proxy domains (e.g., simulations) before real-world deployment. But how can the usefulness of a proxy domain in the development process be gauged, and are all domains equally useful?

Intuitively, the answer to the above questions will depend on the type of robot, the task it has to achieve, and the environment in which it operates. Evaluating a proxy domain’s usefulness for a specific combination of these circumstances, specifically for the training of autonomous agents, is tackled in this work by establishing quantification metrics and assessing them in Duckietown.

The key aspects of this work are:

  • Proxy Usefulness Metrics: introduction of Proxy Relative Predictivity Value (PRPV) and Proxy Learning Value (PLV) to measure a proxy’s ability to predict real-world performance and aid agent learning. PRPV helps identify simulations that accurately predict real-world results, while PLV measures their effectiveness in training agents.

  • Prediction vs. Learning: differentiation of proxies used for accurate performance prediction from those for data generation in training.

  • Experiments: demonstration of how tuning proxy domain parameters (e.g., sensor delays, camera angle) affects predictivity and learning efficiency.

These metrics improve proxy selection and tuning for robotics research and education, and Duckietown enables rapid prototyping of these ideas for mobile autonomous vehicles. 
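
The paper defines PRPV and PLV precisely; as a loose intuition only, predictivity can be sketched as rank agreement between performance measured in the proxy and in the target domain across candidate agents (the numbers below are invented):

    from scipy.stats import spearmanr

    proxy_scores  = [0.71, 0.55, 0.90, 0.42]  # task performance of 4 agents in sim
    target_scores = [0.65, 0.50, 0.88, 0.47]  # same agents on the real robot

    rho, _ = spearmanr(proxy_scores, target_scores)
    print(f"rank predictivity: {rho:.2f}")    # closer to 1 = better predictor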

Highlights - Proxy Domains for Evaluation and Learning in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

In many situations it is either impossible or impractical to develop and evaluate agents entirely on the target domain on which they will be deployed. This is particularly true in robotics, where doing experiments on hardware is much more arduous than in simulation. This has become arguably more so in the case of learning-based agents. To this end, considerable recent effort has been devoted to developing increasingly realistic and higher fidelity simulators. However, we lack any principled way to evaluate how good a “proxy domain” is, specifically in terms of how useful it is in helping us achieve our end objective of building an agent that performs well in the target domain. In this work, we investigate methods to address this need. We begin by clearly separating two uses of proxy domains that are often conflated: 1) their ability to be a faithful predictor of agent performance and 2) their ability to be a useful tool for learning. In this paper, we attempt to clarify the role of proxy domains and establish new proxy usefulness (PU) metrics to compare the usefulness of different proxy domains. We propose the relative predictive PU to assess the predictive ability of a proxy domain and the learning PU to quantify the usefulness of a proxy as a tool to generate learning data. Furthermore, we argue that the value of a proxy is conditioned on the task that it is being used to help solve. We demonstrate how these new metrics can be used to optimize parameters of the proxy domain for which obtaining ground truth via system identification is not trivial.

Conclusion - Proxy Domains for Evaluation and Learning in Duckietown

Here are the conclusions from the authors of this paper:

“We introduce new metrics to assess the usefulness of proxy domains for agent learning. In a robotics setting it is common to use simulators for development and evaluation to reduce the need to deploy on real hardware. We argue that it is necessary to take into account the specific task when evaluating the usefulness of the proxy. We establish novel metrics for two specific uses of a proxy. When the proxy domain is used to predict performance in the target domain, we offer the PRPV to assess the usefulness of the proxy as a predictor, and we argue that the task needs to be imposed but not the agent. When a proxy is used to generate training data for a learning algorithm, we propose the PLV as a metric to assess usefulness of the source domain, which is dependent on a specific task and a learning algorithm. We demonstrated the use of these measures for predicting parameters in the Duckietown environment. Future work will involve more rigorous treatment of the optimization problems posed to find optimal parameters, possibly in connection with differentiable simulation environments.”

Project Authors

Anthony Courchesne is currently working as an MLOps Engineer at Maneva, Canada.

Andrea Censi is currently working as the Deputy Director, Chair of Dynamic Systems and Control at ETH Zurich, Switzerland.

Liam Paull is an Associate Professor at the Université de Montréal, Canada, and also serves as the Chief Education Officer at Duckietown.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Adaptive Lane Following with Auto-Trim Tuning

Project Resources

Project highlights

Calibration of sensors and actuators is always important in setting up robot systems, especially in the context of autonomous operations. Manual tweaking of calibration parameters, though, is a nuisance, albeit a necessary one when every physical instance of the robot is slightly different from the others.

In this project, the authors developed a process to automatically calibrate the trim parameter in the Duckiebot, i.e., allowing it to go straight when an equal command to both wheel motors is provided. 

Adaptive lane following in Duckietown: beyond manual odometry calibration

The objective of this project is to develop a process to autonomously calibrate the wheel trim parameter of Duckiebots, eliminating the need for manual tuning or improving upon it. Manual tuning of this parameter, as part of the odometry calibration procedure, is needed to account for the inevitable slight differences existing across Duckiebots, due to manufacturing, assembly, handling differences, etc.

Creating an automatic trim calibration procedure enhances the Duckiebot’s lane following behavior by continuously adjusting the wheel alignment based on real-time lane pose feedback. Duckiebots typically require manual odometry calibration, which introduces variability and reduces scalability in autonomous mobility experiments.

By implementing a Model-Reference Adaptive Control (MRAC) based approach, the project ensures consistent performance despite mechanical variations or external disturbances. This is desirable for large-scale Duckietown deployments where the robots need to maintain uniform behavior across different assemblies.

Adaptive control reduces dependence on predefined parameters, allowing Duckiebots to self-correct without external intervention. This enables more reproducible fleet-level performance, useful for research in autonomous navigation. This project supports experimentation in self-calibrating robotic systems through application of adaptive control research.

Model Reference Adaptive Control (MRAC) for adaptive lane following in Duckietown

The method employs a Model-Reference Adaptive Control (MRAC) framework that iteratively estimates the optimal trim value during lane following by processing lane pose feedback from the vision pipeline, and comparing expected and actual motion to compute a correction factor. An adaptation law updates the trim dynamically based on real-time error minimization.
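
A minimal sketch of an adaptation law of this kind, MIT-rule style, under assumed signals and gain; the project's exact update may differ:

    class TrimAdapter:
        """Iteratively estimate the trim from lane pose feedback (illustrative)."""

        def __init__(self, gamma=0.01):
            self.gamma = gamma   # assumed adaptation gain
            self.trim = 0.0

        def update(self, expected_d, observed_d):
            # expected_d: lateral displacement predicted by the reference model
            # observed_d: displacement estimated from the lane pose filter
            error = observed_d - expected_d
            self.trim -= self.gamma * error   # step the trim toward zero error
            return self.trim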

Pose estimation relies on a vision-based lane filter, which introduces latency and noise, affecting convergence stability. The adaptive controller must maintain stability while ensuring convergence to an optimal trim value within a finite time window. 

The performance of this approach is constrained by sensor inaccuracies, requiring threshold-based filtering to exclude unreliable pose data. The algorithm operates in real-world conditions where road surface variations, lighting changes, and mechanical wear affect performance. Synchronizing lane pose data with controller updates while minimizing computation delays is a key challenge, and ensuring that the adaptive controller does not introduce oscillations or instability in the control loop requires parameter tuning.

Adaptive lane following: full report

Check out the full report here. 

Adaptive lane following in Duckietown: Authors

Pietro Griffa is currently working as a Systems and Estimation Engineer at Verity, Switzerland.

Simone Arreghini is currently pursuing his Ph.D. at IDSIA USI-SUPSI, Switzerland.

Rohit Suri was a mentor on this project and is currently working as a Senior Research Scientist at Venti Technologies, Singapore.

Aleksandar Petrov was a mentor on this project and is currently pursuing his Ph.D. at the University of Oxford, United Kingdom.

Jacopo Tani was a supervisor on this project and is currently the CEO at Duckietown.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Ozgur Erkent: robotic rescue operations with Duckietown

Meet Ozgur Erkent, Assistant Professor at Hacettepe University’s Computer Engineering Department in Turkey, who is teaching and doing research with Duckietown.

Ankara, Turkey, January 2025: Prof. Ozgur Erkent shares how Duckietown is shaping robotics education at Hacettepe University. From hands-on learning in his Introduction to Robotics course, to real-world applications in rescue operations, he explains why he believes Duckietown is an invaluable tool for students exploring autonomous systems.

Bringing hands-on robotics to the classroom

At Hacettepe University, Professor Ozgur Erkent is using Duckietown in his curriculum and providing students with hands-on learning experiences that bridge theory and real-world applications. 

Good morning and welcome! Could you introduce yourself and your work?

My name is Ozgur Erkent and I am an Assistant Professor at Hacettepe University’s Computer Engineering Department. I have been here for nearly three years, focusing on mobile robots and autonomous vehicles. My work involves both teaching and research in these areas.

How did you first discover Duckietown?

I first heard about Duckietown while working as a researcher in France. A colleague returning from Colombia shared how undergraduates were using Duckiebots in their projects. That caught my interest, and when I joined Hacettepe University, I saw an opportunity to integrate it into my courses.

What course do you use Duckietown for, and what does it involve?

I use Duckietown in my Introduction to Robotics course, which is open to third- and fourth-year students in the Artificial Intelligence Engineering program. The course has a laboratory component where students work with Duckiebots and Duckiedrones to apply robotics concepts practically.

I also wrote a project funded by NVIDIA through the “Bridge To Turkiye Fund”, that focuses on rescue robotics. After the devastating earthquake in Turkey two years ago, NVIDIA launched an initiative to support research aimed at disaster response. With NVIDIA as the sponsor, we were able to purchase the Duckiebots, Duckiedrones and related tools for the Robotics Lab course. I proposed a project that leverages Duckietown kits to train students in SLAM (Simultaneous Localization and Mapping), sensor integration, and autonomous navigation—key skills for robotics applications in search and rescue operations. Through this project, students may gain hands-on experience in developing robotic systems that could one day assist in real-world disaster relief efforts.

How have students reacted to working with Duckietown?

Many students come from a software background, so working with real hardware is a new challenge. Some find it difficult at first, but those who enjoy hands-on work really thrive. They even help their peers with assembly and troubleshooting. It’s a valuable learning experience. If I were to design something for undergraduate students learning robotics, it would probably look a lot like Duckietown. I think it would be a great addition, as it would help students get hands-on experience with the basics of robotics.

Besides Duckiebots, are you using any other tools?

Yes, I have also introduced Duckiedrones, which are especially popular in Turkey. The national foundation supports drone projects, and students are eager to explore them. Several groups are already working on Duckiedrone-based initiatives.

What do you think about the Duckietown community and support?

The community is a big advantage. Universities considering Duckietown should definitely check out its forums and resources. The support available makes a big difference in implementing the platform effectively.

Any final thoughts?

I’m excited to see where these projects lead. Robotics is more than just algorithms; it’s about solving real-world challenges. Duckietown helps students bridge that gap in a meaningful way.

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!

Deep Reinforcement and Transfer Learning for Robot Autonomy

General Information

Developing autonomous robotic systems is challenging. When using machine learning-based approaches, one of the main challenges is the high cost and complexity of real-world training: running real-world experiments is time-consuming and, depending on the application, can be expensive as well.

This work uses Deep Reinforcement Learning (DRL) and tackles this challenge through Transfer Learning (TL). DRL enables robots to learn optimal behaviors through trial-and-error, guided by reward-based feedback. Transfer Learning then addresses the high cost of generating training data by leveraging simulation environments.

Running experiments in simulation is time- and cost-efficient; the trained agent can then be deployed on a physical robot, in a process known as Sim2Real transfer. Ideally, this approach significantly reduces training costs and accelerates real-world deployment.

In this work, training occurs in a simulated Duckietown environment using Deep Deterministic Policy Gradient (DDPG) and TL techniques to mitigate the expected difference between simulated and real-world environments. The resulting agent is then deployed on a custom-built robot in a physical Duckietown city for evaluation.
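
For intuition, here is a compact single-update DDPG sketch in PyTorch on a toy vector observation. The paper works from camera images, and standard DDPG also keeps separate target networks, omitted here for brevity:

    import torch
    import torch.nn as nn

    obs_dim, act_dim, gamma = 8, 2, 0.99
    actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, act_dim), nn.Tanh())
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                           nn.Linear(64, 1))
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # One gradient step from a sampled batch (s, a, r, s'); stand-in data here.
    s, a = torch.randn(32, obs_dim), torch.rand(32, act_dim) * 2 - 1
    r, s2 = torch.randn(32, 1), torch.randn(32, obs_dim)

    with torch.no_grad():   # bootstrapped critic target
        q_target = r + gamma * critic(torch.cat([s2, actor(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(
        critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor ascends the critic's value estimate.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()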

Results show that the DRL-based model successfully learns autonomous lane-following and navigation behaviors in simulation, and a performance comparison with real-world experiments is provided.

Highlights - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

Real robots have different constraints, such as battery capacity limit, hardware cost, etc., which make it harder to train models and conduct experiments on physical robots. Transfer learning can be used to omit those constraints by training a self-driving system in a simulated environment, with a goal of running it later in a real world. Simulated environment should resemble a real one as much as possible to enhance transfer process. This paper proposes a specification of an autonomous robotic system using agent-based approach. It is modular and consists of various types of components (agents), which vary in functionality and purpose. 

Thanks to system’s general structure, it may be transferred to other environments with minimal adjustments to agents’ modules. The autonomous robotic system is implemented and trained in simulation and then transferred to real robot and evaluated on a model of a city. A two-wheeled robot uses a single camera to get observations of the environment in which it is operates. Those images are then processed and given as an input to the deep neural network, that predicts appropriate action in the current state. Additionally, the simulator provides a reward for each action, which is used by the reinforcement learning algorithm to optimize weights in the neural network, in order to improve overall performance.

Conclusion - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here are the conclusions from the authors of this paper:

“After several breakthroughs in the field of Deep Reinforcement Learning, it became one of the most popular researched topics in Machine Learning and a common approach to the problem of autonomous driving. This paper presents the process of training an autonomous robotic system using popular actor-critic algorithm in the simulator, which may then also be run on real robot. It was possible to train an agent in real-time using trial-and-error approach without the need to collect vast amounts of labeled data. The neural network learned how to control the robot and how to follow the lanes, without any explicit guidelines. Only a few functions have been used to transform the data sent between environment and the agent, in order to make the learning process smoother and faster. 

For evaluation purposes, a real robot and a small city model have been built, based on the Duckietown platform specification. This hardware has been used to evaluate in the real world the performance of the system, trained in simulator. Also, additional Transfer Learning techniques were used, in order to adjust the observations and actions in the real robot, due to the differences with simulated environment. Although, the performance in real environment was worse than in simulator, certain trained models were still able to guide the robot around a simple road loop, which shows a potential for such approach. As a result, the use of the simulator greatly reduced the time and effort needed to train the system, and transfer methods were used to deploy it in the real world. 

The Duckietown platform provides a baseline, which was modified and refactored to follow the system structure. The simulator and its components are thoroughly documented, the detailed instructions explain how to train and run the robot both in simulation and in real world and evaluate the results. Duckietown provides complete sets of parts, necessary to build the robot and small city, however, it was decided to build custom robot, according to the guidelines. The robot uses a single camera to get observations of the surrounding environment. 

The reinforcement learning algorithm was used to learn a policy, which tries to choose optimal actions based on those observations with the help of a reward function, that provides a feedback for previous decisions. It was possible to significantly reduce the effort required to train a model, thanks to the simulator, as the process does not require constant human supervision and involvement. Such approach proves to be very promising, as the agent learned how to do the lane-following task without any explicit labels, and has shown good performance in the simulated environment. Although, there is still a room for improvement, when it comes to transferring the model to real world, which requires various adaptations and adjustments to be made for the robot to properly execute maneuvers and show stability in its actions.”

Project Authors

Vladyslav Kyryk is currently working as a Data Scientist at Finitec, Warsaw, Poland.

Maksym Figat is working as an Assistant Professor at Warsaw University of Technology, Poland.

Maryan Kyryk is currently serving as the Co-Founder & CEO at Maxitech, Ukraine.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Flexible tether control in marsupial systems

Project Resources

Project highlights

Wouldn’t it be great to have a base station transfer power and data to other autonomous vehicles through a tethered connection? But how to deal with the challenges arising from controlling the length and tension of the tether?

Here is an overview of the authors’ results: 

Flexible tether control in Duckietown: objective and importance

Managing tethers effectively is an important challenge in autonomous robotic systems, especially in heterogeneous marsupial robot setups where multiple robots work together to achieve a task.

Tethers provide power and data connections between agents, but poor management can lead to tangling, restricted movement, or unnecessary strain.

This work implements a flexible tethering approach that balances slackness and tautness to improve system performance and reliability.

Using the Duckiebot DB21J as a test passenger agent, the study introduces a tether control system that adapts to different conditions, ensuring smoother operation and better resource sharing. By combining aspects of both taut and slacked tether models, this work contributes to making multi-robot systems more efficient and adaptable in various environments.

The method and challenges in implementing flexible tether control in Duckietown

The authors developed a custom-built spool mechanism that actively adjusts tether length using real-time sensor feedback.

To coordinate these adjustments, the system was implemented within a standard ROS-based framework, ensuring efficient data management.

To evaluate the system’s effectiveness, the authors tested different slackness and control gain parameters while the Duckiebot followed a predefined square path. By analyzing the spool’s reactivity and the consistency of the tether’s behavior, they assessed the system’s performance across varying conditions.

Several challenges emerged during testing, e.g., maintaining the right balance of tether slackness was critical, as excess slack risked entanglement, while insufficient slack could restrict mobility.

Hardware limitations affected the spool’s responsiveness, requiring careful tuning of control parameters. Additionally, environmental factors, such as potential obstacles, underscored the need for a more adaptive control mechanism in future iterations.
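
A simple proportional sketch of the slack-balancing idea (names and gains are assumptions for illustration, not the authors' controller):

    K_SPOOL = 1.5          # assumed proportional gain
    TARGET_SLACK = 0.15    # desired tether slack [m]

    def spool_command(paid_out_length, robot_distance):
        """Spool motor velocity: positive reels tether out, negative reels in."""
        slack = paid_out_length - robot_distance   # rough slack estimate
        error = TARGET_SLACK - slack               # too taut -> pay out more
        return K_SPOOL * error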

Flexible tether control: full report

Check out the full report here. 

Flexible tether control in heterogeneous marsupial systems in Duckietown: Authors

Carson Duffy is a computer engineer who studied at Texas A&M University, USA.

Dr. Jason O’Kane is a faculty research advisor at Texas A&M. 

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

PID and Convolutional Neural Networks (CNN) in Duckietown

General Information

Ever wondered how the legendary PID controller compares to a more “modern” convolutional neural network (CNN) design in controlling a Duckiebot driving in Duckietown?

This work analyzes the performance differences between classical control techniques and machine learning-based approaches for autonomous navigation. The Duckiebot follows a designated path using image-based feedback, where the PID controller corrects deviations through proportional, integral, and derivative adjustments. The CNN-based method leverages image feature extraction to generate control commands, reducing reliance on predefined system models. 

Key aspects covered include differential drive mechanics, real-time image processing, and ROS-based implementation. The study also outlines the impact of training data selection on CNN performance. Comparative analysis highlights the strengths and limitations of both approaches. The conclusions emphasize the applicability of PID and CNN techniques in Duckietown, demonstrating their role in advancing robotic autonomy.
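
As a concrete illustration of the PID side, here is a sketch with assumed variable names and gains; the students' implementation runs on ROS with image-based feedback:

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev = 0.0, 0.0

        def step(self, error):
            self.integral += error * self.dt
            deriv = (error - self.prev) / self.dt
            self.prev = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=0.05)
    offset = -0.12                   # line offset from image processing [m]
    omega = pid.step(offset)         # steering correction
    v_left, v_right = 0.3 + omega, 0.3 - omega   # differential-drive commands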

Highlights - PID and Convolutional Neural Network (CNN) in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

The paper presents the design and practical implementation by students of a control system using a classic PID controller and a controller using artificial neural networks. The control object is a Duckiebot robot, and the task it is to perform is to drive the robot along a designated line (line follower). 

The purpose of the proposed activities is to familiarize students with the advantages and disadvantages of the two controllers used and for them to acquire the ability to implement control systems in practice. The article briefly describes how the two controllers work, how to practically implement them, and how to practically implement the exercise.

Conclusion - PID and Convolutional Neural Network (CNN) in Duckietown

Here are the conclusions from the authors of this paper:

“The PID controller is used successfully in many control systems, and its implementation is relatively simple. There are also a number of methods and algorithms for adjusting controller parameters for this type of controller. 

PID controllers, on the other hand, are not free of disadvantages. One of them is the requirement of prior knowledge of, even roughly, the model of the process one wants to control. Thus, it is necessary to identify both the structure of the process model and its parameters. Identification tasks are complex tasks, requiring a great deal of knowledge about the nature of the process itself. There are also methods for identifying process models based on the results of practical experiments, however sometimes it may not be possible to conduct such experiments. When using a PID controller, one should also be aware that it was developed for processes, operation of which can be described by linear models. Unfortunately, the behavior of the vast majority of dynamic systems is described by non-linear models. 

The consequence of this fact is that, in such cases, the PID controller works using linear approximations of nonlinear systems, which can lead to various errors, inaccuracies, etc. Unlike the classic PID controller, controllers using artificial neural networks do not need to know the mathematical model of the process they control and its parameters. 

The ability to design different neural network architectures, such as convolutional, recurrent, or deep neural networks, makes it possible to adapt the neural regulator to the specific process it is supposed to control. On the other hand, the multiplicity of neural network architectures and their design means that we can never be sure whether a given neural network structure is optimal.

The selection of neural controller parameters is done automatically using appropriate network training algorithms. The key element influencing the accuracy of neural regulator operation is the data used for training the neural network. The disadvantage of regulators using neural networks is the inability to demonstrate the stability of operation of the systems they control.

In case of the PID regulator, despite the use of approximate models of the process, it is very often possible to prove that a closed control system will operate stably in any or a certain range of values of variables. Unfortunately, such an analysis cannot be carried out in the case of neural regulators. In summary, the implementation of two different controllers to perform the same task provides an opportunity to learn the advantages and disadvantages of each.”

Project Authors

Marek Długosz is a Professor at the Akademia Górniczo-Hutnicza (AGH) – University of Science and Technology, Poland.

Paweł Skruch is currently working as the Manager and Principal Engineer AI at Aptiv, Switzerland.

Marcin Szelest is currently affiliated with the AGH University of Krakow, Kraków, Poland.

Artur Morys-Magiera is a PhD candidate at AGH University of Krakow, Poland.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.