
Ozgur Erkent: robotic rescue operations with Duckietown

Meet Ozgur Erkent, Assistant Professor at Hacettepe University’s Computer Engineering Department in Turkey, who is teaching and doing research with Duckietown.

Ankara, Turkey, January 2025: Prof. Ozgur Erkent shares how Duckietown is shaping robotics education at Hacettepe University. From hands-on learning in his Introduction to Robotics course, to real-world applications in rescue operations, he explains why he believes Duckietown is an invaluable tool for students exploring autonomous systems.

Bringing hands-on robotics to the classroom

At Hacettepe University, Professor Ozgur Erkent is integrating Duckietown into his curriculum, providing students with hands-on learning experiences that bridge theory and real-world applications.

Good morning and welcome! Could you introduce yourself and your work?

My name is Ozgur Erkent and I am an Assistant Professor at Hacettepe University’s Computer Engineering Department. I have been here for nearly three years, focusing on mobile robots and autonomous vehicles. My work involves both teaching and research in these areas.

How did you first discover Duckietown?
I first heard about Duckietown while working as a researcher in France. A colleague returning from Colombia shared how undergraduates were using Duckiebots in their projects. That caught my interest, and when I joined Hacettepe University, I saw an opportunity to integrate it into my courses.
What course do you use Duckietown for, and what does it involve?
I use Duckietown in my Introduction to Robotics course, which is open to third- and fourth-year students in the Artificial Intelligence Engineering program. The course has a laboratory component where students work with Duckiebots and Duckiedrones to apply robotics concepts practically.

I also wrote a project funded by NVIDIA through the “Bridge To Turkiye Fund”, which focuses on rescue robotics. After the devastating earthquake in Turkey two years ago, NVIDIA launched an initiative to support research aimed at disaster response. With NVIDIA as the sponsor, we were able to purchase the Duckiebots, Duckiedrones, and related tools for the Robotics Lab course. I proposed a project that leverages Duckietown kits to train students in SLAM (Simultaneous Localization and Mapping), sensor integration, and autonomous navigation: key skills for robotics applications in search and rescue operations. Through this project, students may gain hands-on experience in developing robotic systems that could one day assist in real-world disaster relief efforts.

Robotics is more than just algorithms; it’s about solving real-world challenges. Duckietown helps students bridge that gap in a meaningful way.

How have students reacted to working with Duckietown?

Many students come from a software background, so working with real hardware is a new challenge. Some find it difficult at first, but those who enjoy hands-on work really thrive. They even help their peers with assembly and troubleshooting. It’s a valuable learning experience. If I were to design something for undergraduate students learning robotics, it would probably look a lot like Duckietown. I think it would be a great addition, as it would help students get hands-on experience with the basics of robotics.


If I were to design something for undergraduate students learning robotics, it would probably look a lot like Duckietown. I think it would be a great addition, as it would help students get hands-on experience with the basics of robotics.

Besides Duckiebots, are you using any other tools?

Yes, I have also introduced Duckiedrones, which are especially popular in Turkey. The national foundation supports drone projects, and students are eager to explore them. Several groups are already working on Duckiedrone-based initiatives.

What do you think about the Duckietown community and support?
The community is a big advantage. Universities considering Duckietown should definitely check out its forums and resources. The support available makes a big difference in implementing the platform effectively.
Any final thoughts?
I’m excited to see where these projects lead. Robotics is more than just algorithms; it’s about solving real-world challenges. Duckietown helps students bridge that gap in a meaningful way.

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!

Flexible tether control in heterogeneous marsupial systems

Flexible tether control in marsupial systems

Project Resources

Project highlights

Wouldn’t it be great to have a base station transfer power, data, and other information to autonomous vehicles through a tethered connection? But how should the challenges of controlling the tether’s length and tension be handled?

Here is an overview of the authors’ results: 

Flexible tether control in Duckietown: objective and importance

Managing tethers effectively is an important challenge in autonomous robotic systems, especially in heterogeneous marsupial robot setups where multiple robots work together to achieve a task.

Tethers provide power and data connections between agents, but poor management can lead to tangling, restricted movement, or unnecessary strain.

This work implements a flexible tethering approach that balances slackness and tautness to improve system performance and reliability.

Using the Duckiebot DB21J as a test passenger agent, the study introduces a tether control system that adapts to different conditions, ensuring smoother operation and better resource sharing. By combining aspects of both taut and slacked tether models, this work contributes to making multi-robot systems more efficient and adaptable in various environments.

The method and challenges in implementing flexible tether control in Duckietown

The authors developed a custom-built spool mechanism that actively adjusts tether length in real time using sensor feedback.

To coordinate these adjustments, the system was implemented within a standard ROS-based framework, ensuring efficient data management.

To evaluate the system’s effectiveness, the authors tested different slackness and control gain parameters while the Duckiebot followed a predefined square path. By analyzing the spool’s reactivity and the consistency of the tether’s behavior, they assessed the system’s performance across varying conditions.
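The control loop in this parameter study can be sketched as a simple proportional law; the function name, gain, and slack convention below are illustrative assumptions, not the authors' implementation:

```python
def spool_speed(measured_length, robot_distance, desired_slack, k_p=1.5):
    """Proportional spool control (illustrative sketch): pay out or reel
    in tether so the deployed length tracks the passenger robot's
    distance plus a slack margin."""
    target_length = robot_distance + desired_slack
    error = target_length - measured_length
    # Positive speed pays out tether; negative speed reels it in.
    return k_p * error
```

Sweeping `desired_slack` and `k_p` while the robot drives a square path would mirror the slackness and control-gain experiments described above.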

Several challenges emerged during testing, e.g., maintaining the right balance of tether slackness was critical, as excess slack risked entanglement, while insufficient slack could restrict mobility.

Hardware limitations affected the spool’s responsiveness, requiring careful tuning of control parameters. Additionally, environmental factors, such as potential obstacles, underscored the need for a more adaptive control mechanism in future iterations.

Flexible tether control: full report

Check out the full report here. 

Flexible tether control in heterogeneous marsupial systems in Duckietown: Authors

Carson Duffy is a computer engineer who studied at Texas A&M University, USA.

Dr. Jason O’Kane is a faculty research advisor at Texas A&M. 

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

PID and Convolutional Neural Network (CNN) in Duckietown

PID and Convolutional Neural Networks (CNN) in Duckietown

General Information

Ever wondered how the legendary PID controller compares to a more “modern” convolutional neural network (CNN) design in controlling a Duckiebot driving in Duckietown?

This work analyzes the performance differences between classical control techniques and machine learning-based approaches for autonomous navigation. The Duckiebot follows a designated path using image-based feedback, where the PID controller corrects deviations through proportional, integral, and derivative adjustments. The CNN-based method leverages image feature extraction to generate control commands, reducing reliance on predefined system models. 
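As a rough sketch, the PID correction described above can be written as follows; the gains and the choice of error signal (lateral offset from the lane center) are illustrative assumptions, not the paper's tuning:

```python
class PID:
    """Discrete PID controller for lane keeping (illustrative sketch).
    The error is the lateral offset from the lane center."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over the sampling interval dt.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Steering command: sum of proportional, integral, derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each camera frame yields one error measurement, and `step` returns the steering correction applied to the differential drive.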

Key aspects covered include differential drive mechanics, real-time image processing, and ROS-based implementation. The study also outlines the impact of training data selection on CNN performance. Comparative analysis highlights the strengths and limitations of both approaches. The conclusions emphasize the applicability of PID and CNN techniques in Duckietown, demonstrating their role in advancing robotic autonomy.
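To make the CNN idea concrete, here is a toy forward pass in plain NumPy: one convolutional filter extracts spatial features from the camera image and a linear layer maps them to a single steering command. The shapes, weights, and single-layer structure are assumptions for illustration, not the authors' network:

```python
import numpy as np

def cnn_steering(image, kernel, weights, bias=0.0):
    """Toy CNN-style controller: one valid convolution, a ReLU, and a
    linear readout producing a scalar steering command."""
    h, w = image.shape
    kh, kw = kernel.shape
    # Valid convolution (no padding): slide the kernel over the image.
    feat = np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    feat = np.maximum(feat, 0.0)                 # ReLU activation
    return float(feat.ravel() @ weights + bias)  # steering scalar
```

In the learning-based approach, `kernel`, `weights`, and `bias` would be fitted to recorded driving data rather than hand-set, which is where the choice of training data affects performance.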

Highlights - PID and Convolutional Neural Network (CNN) in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

The paper presents the design and practical implementation by students of a control system using a classic PID controller and a controller using artificial neural networks. The control object is a Duckiebot robot, and the task it is to perform is to drive the robot along a designated line (line follower). 

The purpose of the proposed activities is to familiarize students with the advantages and disadvantages of the two controllers used and for them to acquire the ability to implement control systems in practice. The article briefly describes how the two controllers work, how to practically implement them, and how to practically implement the exercise.

Conclusion - PID and Convolutional Neural Network (CNN) in Duckietown

Here are the conclusions from the author of this paper:

“The PID controller is used successfully in many control systems, and its implementation is relatively simple. There are also a number of methods and algorithms for adjusting controller parameters for this type of controller. 

PID controllers, on the other hand, are not free of disadvantages. One of them is the requirement of prior knowledge of, even roughly, the model of the process one wants to control. Thus, it is necessary to identify both the structure of the process model and its parameters. Identification tasks are complex tasks, requiring a great deal of knowledge about the nature of the process itself. There are also methods for identifying process models based on the results of practical experiments, however sometimes it may not be possible to conduct such experiments. When using a PID controller, one should also be aware that it was developed for processes, operation of which can be described by linear models. Unfortunately, the behavior of the vast majority of dynamic systems is described by non-linear models. 

The consequence of this fact is that, in such cases, the PID controller works using linear approximations of nonlinear systems, which can lead to various errors, inaccuracies, etc. Unlike the classic PID controller, controllers using artificial neural networks do not need to know the mathematical model of the process they control and its parameters. 

The ability to design different neural network architectures, such as convolutional, recurrent, or deep neural networks, makes it possible to adapt the neural regulator to the specific process it is supposed to control. On the other hand, the multiplicity of neural network architectures and their design means that we can never be sure whether a given neural network structure is optimal.

 The selection of neural controller parameters is done automatically using appropriate network training algorithms. The key element influencing the accuracy of neural regulator operation is the data used for training the neural network. The disadvantage of regulators using neural networks is the inability to demonstrate the stability of operation of the systems they control.

In case of the PID regulator, despite the use of approximate models of the process, it is very often possible to prove that a closed control system will operate stably in any or a certain range of values of variables. Unfortunately, such an analysis cannot be carried out in the case of neural regulators. In summary, the implementation of two different controllers to perform the same task provides an opportunity to learn the advantages and disadvantages of each.”

Project Authors

Marek Długosz is a Professor at the Akademia Górniczo-Hutnicza (AGH) – University of Science and Technology, Poland.

Paweł Skruch is currently working as the Manager and Principal Engineer AI at Aptiv, Switzerland.

Marcin Szelest is currently affiliated with the AGH University of Krakow, Poland.

Artur Morys-Magiera is a PhD candidate at AGH University of Krakow, Poland.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Deep Reinforcement Learning for Autonomous Lane Following


Project Resources

Project highlights

Here is a visual tour of the author’s work on implementing deep reinforcement learning for autonomous lane following in Duckietown.

Deep reinforcement learning for autonomous lane following in Duckietown: objective and importance

Would it not be great if we could train an end-to-end neural network in simulation, plug it into the physical robot, and have it drive safely on the road?

Inspired by this idea, Mickyas worked to implement deep reinforcement learning (DRL) for autonomous lane following in Duckietown, training the agent using sim-to-real transfer. 

The project focuses on training DRL agents, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC), to learn steering control using high-dimensional camera inputs. It integrates an autoencoder to compress image observations into a latent space, improving computational efficiency. 

The hope is for the trained DRL model to generalize from simulation to real-world deployment on a Duckiebot. This involves addressing domain adaptation, camera input variations, and real-time inference constraints, amongst other implementation challenges.

Autonomous lane following is a fundamental component of self-driving systems, requiring continuous adaptation to environmental changes, especially when using vision as the main sensing modality. This project identifies limitations in existing DRL algorithms when applied to real-world robotics, and explores modifications to reward functions, policy updates, and feature extraction methods, analyzing the results through real-world experimentation.

The method and challenges in implementing deep reinforcement learning in Duckietown

The method involves training a DRL agent in a simulated Duckietown environment (Gym Duckietown Simulator) using an autoencoder for feature extraction. 

The encoder compresses image data into a latent space, reducing input dimensions for policy learning. The agent receives sequential encoded frames as observations and optimizes steering actions based on reward-driven updates. The trained model is then transferred to a real Duckiebot using a ROS-based communication framework. 
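The observation pipeline described above can be sketched as follows, with a fixed random linear projection standing in for the trained autoencoder (an assumption for illustration; the real encoder is learned):

```python
import numpy as np

class LatentObservation:
    """Compress each camera frame to a latent vector and stack the
    last n latents into one policy observation (illustrative sketch)."""

    def __init__(self, image_dim, latent_dim, n_frames, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for the trained encoder: a fixed linear projection.
        self.proj = rng.normal(size=(latent_dim, image_dim)) / np.sqrt(image_dim)
        self.frames = [np.zeros(latent_dim) for _ in range(n_frames)]

    def observe(self, image):
        z = self.proj @ image.ravel()        # compress image to latent space
        self.frames = self.frames[1:] + [z]  # drop oldest frame, append newest
        return np.concatenate(self.frames)   # observation fed to the policy
```

Stacking sequential latents gives the policy a short temporal context while keeping the observation far smaller than raw pixels.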

Challenges for pulling this off include accounting for discrepancies between simulated and real-world camera inputs, which affect performance and generalization. Differences in lighting, surface textures, and image normalization require domain adaptation techniques.

Moreover, computational limitations on the Duckiebot prevent direct onboard execution, requiring a distributed processing setup.

Reward shaping influences learning stability, and improper design of the reward function leads to policy exploitation or suboptimal behavior. Debugging DRL models is complex due to interdependencies between network architecture, exploration strategies, and training dynamics. 
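As an illustration of the reward-shaping issue, a typical shaped reward for lane following trades off forward progress against lateral deviation; all constants and the exact form here are assumptions, not the project's actual reward function:

```python
import math

def lane_reward(speed, heading_error, lateral_offset,
                k_offset=2.0, off_road_penalty=-10.0, lane_half_width=0.25):
    """Illustrative shaped reward: reward progress along the lane,
    penalize lateral deviation, and heavily penalize leaving the lane."""
    if abs(lateral_offset) > lane_half_width:
        return off_road_penalty
    progress = speed * math.cos(heading_error)  # progress along lane direction
    return progress - k_offset * abs(lateral_offset)
```

A poorly balanced `k_offset` is one way policy exploitation arises: too small and the agent cuts corners, too large and it learns to crawl.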

The project addresses these challenges by refining preprocessing, incorporating domain randomization, and modifying policy structures.

Deep reinforcement learning for autonomous lane following: full report

Deep reinforcement learning for autonomous lane following in Duckietown: Authors

Mickyas Tamiru Asfaw is currently working as an AI Robotics and Innovation Engineer at the CESI lineact laboratory, France.

David Bertoin is currently working as a ML Applied Scientist at Photoroom, France.

Valentin Guillet is currently working as a Research engineer at IRT Saint Exupéry, France.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Visual control of automated guided vehicles in Duckietown

Visual monitoring of automated guided vehicles in Duckietown

General Information

The increasing use of robotics in industrial automation has created a need for systems that ensure safety and efficiency when monitoring automated guided vehicles (AGVs). This research proposes a visual system for monitoring the trajectory and behavior of AGVs in industrial environments.

The system utilizes a network of cameras mounted on towers to detect, identify, and track AGVs. The visual data is transmitted to a central server, where the robots’ trajectories are evaluated and compared against predefined ideal paths. The system operates independently of specific hardware or software configurations, offering flexibility in its deployment.

Duckietown was used as the test environment for this system, allowing for controlled experiments with simulated robotic fleets. A prototype of the system demonstrated its capability to track AGVs using Aruco tags and evaluate rectilinear trajectories.
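Server-side evaluation of a rectilinear trajectory can be sketched as a least-squares line fit over the tracked positions; the function below is a hypothetical stand-in for the authors' evaluation algorithm:

```python
import numpy as np

def straightness_error(points):
    """Fit a line through tracked 2D positions and return the maximum
    perpendicular deviation (illustrative sketch of rectilinear
    trajectory evaluation)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the point cloud via SVD of centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    deviations = np.abs((pts - centroid) @ normal)
    return float(deviations.max())
```

Comparing this error against a tolerance would decide whether an AGV's tracked path matches the predefined ideal straight segment.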

Key aspects and concepts:

  • Use of camera towers for visual control of AGVs;
  • Transmission of visual data to a central server for trajectory evaluation;
  • Compatibility with multiple robot types and operating systems;
  • Integration of Aruco tags for robot identification;
  • Modular architecture enabling future expansions;
  • Testing in Duckietown for controlled evaluation.

This research demonstrates a modular approach to monitoring AGVs using a visual control system tested in the Duckietown platform. Future work will extend the system’s capability to handle more complex trajectories such as turns and arcs, further leveraging Duckietown as a scalable research and testing environment.

Highlights - Visual monitoring of automated guided vehicles in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

With the increasing automation of industry and the introduction of robotics in every step of the production chain, the problem of safety has become acute. The article proposes a solution to the problem of safety in production using a visual control system for the fleet of loading automated guided vehicles (AGV). The visual control system is built as towers equipped with cameras. This approach allows to be independent of equipment vendors and allows flexible reconfiguration of the AGV fleet. The cameras detect the appearance of a loading robot, identify it and track its trajectory. Data about the robots’ movements is collected and analyzed on a server. A prototype of the visual control system was tested with the Duckietown project.

Conclusion - Visual monitoring of automated guided vehicles in Duckietown

Here are the conclusions from the author of this paper:

“In the course of this work, a prototype visual evaluation system for Duckietown project was implemented. The system supports flexible seamless integration of third-party detection algorithms and trajectory evaluation algorithms. The visual control system was tested with client imitator module, which does not require the presence of the real robot on the field. At this stage of the work, the prototype is able to recognize rectilinear trajectory of motion. In the future, we plan to develop evaluation algorithms for other types of trajectories: 90 degree turns, large angle turns, arc movement, etc. Another promising area of research is the integration of the system with cloud-based integrated development environments (IDEs) for industrial control algorithms.”

Project Authors

Anastasia Kravchenko is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Alexey Sychev is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Vladimir Zyubin is currently working as an Associate Professor at the Institute of Automation and Electrometry, Russia.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Visual Obstacle Detection using Inverse Perspective Mapping


Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing visual obstacle detection in Duckietown.

Visual Obstacle Detection: objective and importance

This project aims to develop a visual obstacle detection system using inverse perspective mapping, with the goal of enabling autonomous systems to detect obstacles in real time using images from a monocular RGB camera. It focuses on identifying specific obstacles, such as yellow Duckies and orange cones, in Duckietown.

The system ensures safe navigation by avoiding obstacles within the vehicle’s lane or stopping when avoidance is not feasible. It does not utilize learning algorithms, prioritizing a hard-coded approach due to hardware constraints. The objective includes enhancing obstacle detection reliability under varying illumination and object properties.

It is intended to simulate realistic scenarios for autonomous driving systems. The evaluation metrics chosen were detection accuracy, false positives, and missed obstacles under diverse conditions.

The method and challenges of visual obstacle detection using Inverse Perspective Mapping

The system processes images from a monocular RGB camera by applying inverse perspective mapping to generate a bird’s-eye view, assuming all pixels lie on the ground plane to simplify obstacle distortion detection. Obstacle detection involves HSV color filtering, image segmentation, and classification using eigenvalue analysis. The reaction strategies include trajectory planning or stopping based on the detected obstacle’s position and lane constraints.
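Under the flat-ground assumption, inverse perspective mapping reduces to applying a homography to each pixel. A minimal sketch follows; the 3x3 matrix H would come from the camera calibration and is treated as given here:

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Map image pixel (u, v) to bird's-eye ground coordinates (x, y)
    via the homography H, assuming the pixel lies on the ground plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # homogeneous normalization
```

Objects that violate the ground-plane assumption appear distorted in the remapped view, which is one cue the segmentation and classification stages can exploit.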

Computational efficiency is a significant challenge due to the hardware limitations of Raspberry Pi, necessitating the avoidance of real-time re-computation of color corrections. Variability in lighting and motion blur impact detection reliability, while accurate calibration of camera parameters is essential for precise 3D obstacle localization. Integration of avoidance strategies faces additional challenges due to inaccuracies in pose estimation and trajectory planning.

Visual Obstacle Detection using Inverse Perspective Mapping: Full Report

Visual Obstacle Detection using Inverse Perspective Mapping: Authors

Julian Nubert is currently a Research Assistant & Doctoral Candidate at the Max Planck Institute for Intelligent Systems, Germany.

Niklas Funk is a PhD student at Technische Universität Darmstadt, Germany.

Fabio Meier is currently working as the Head of Operational Data Intelligence at Sensirion Connected Solutions, Switzerland.

Fabrice Oehler is working as a Software Engineer at Sensirion, Switzerland.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.


Intelligent and autonomous mobility systems

Research Associates Yikai Zeng and Xinyu Zhang from the Technische Universität Dresden tell us about their work in developing autonomous mobility systems.

Dresden, Germany, November 22, 2024: Research Associates Yikai Zeng and Xinyu Zhang talk with us about the future of autonomous mobility and intelligent transportation systems that promise to redefine how we think about movement and connectivity in urban spaces.

Connected, cooperative and autonomous mobility

We talked with Yikai Zeng and Xinyu Zhang from the Chair of Traffic Process Automation at TU Dresden about their research and teaching activities, and how Duckietown is used at the MiniCCAM lab to teach autonomous mobility.

Hello and welcome! May I ask you to start by introducing yourself?

X. Zhang: Hi! I will start! My name is Xinyu. I’m a Research Associate at TU Dresden and currently work on computational basics and tools of traffic process automation. That’s why I got involved in this Duckiedrone demonstration. Apart from that, I am also responsible for the basic autonomous driving courses, where we use Duckiebots as our learning materials and tools for the students.

Y. Zeng: Hello, my name is Yikai. I’m also a Research Associate at TU Dresden, in Prof. Meng Wang’s laboratory.

Thank you very much, when did you first discover Duckietown?

Y. Zeng: The idea came from Professor Wang, who asked us to continue the Control course of a former colleague using, among other things, Duckiebots. When we took over the course, it was during the Covid period. We have since developed the MiniCCAM lab.

Could you tell us more about the miniCCAM lab?

Y. Zeng: Sure! The scope of the miniCCAM laboratory, for us researchers in the transportation and autonomous mobility field, is to look at the greater picture in terms of urban mobility, so slightly different in terms of scope than the course previously mentioned. We use Duckietown for autonomous driving. The current miniCCAM lab is on one hand a good tool for demonstrating to students and general audiences what we are able to do in terms of future transportation systems; on the other hand, it provides us with an opportunity to conduct research. For example, we implemented a higher logic controller for intersection navigation and tested it in both a simulated environment and on the model smart-city Duckietown setup. Duckietown is very practical because organizing an actual field test would be very expensive.

That's great to hear. Why did you decide to use Duckiebots to teach autonomous mobility?

Y. Zeng: The decision was taken before us, but I heard stories about that time. So this course has a long history, over ten years, and every few years the course was redesigned.

Around 2019 the decision was taken to upgrade our fleet of robots; among various solutions, we initially chose Lego, but it didn’t work very well for us.

So my former colleague found out about Duckietown, and that’s when the choice was made. It all came in a single box, which was considered very positive. It also came with complete teaching materials and very well-structured courses. This was extremely useful in helping us organize our courses; we just needed to modify what was already there for our own context. So that would be the main motivation: it’s very easy to deploy course materials, and the economic aspects were considered very attractive.

X. Zhang: Duckiebots are also good because they come with a camera and wheel encoders, making it easier to get students started, and having them learn about the fundamentals of autonomous driving. 

It came all in a single box, complete teaching materials and very well-structured courses. This was considered to be extremely useful, we just needed to modify what was already there for our own context.

Did students appreciate using Duckiebots?

Y. Zeng: Certainly Duckietown succeeded as a teaching tool, attracting many students to our courses. I would say Duckietown has this characteristic of motivating and capturing the attention of many students. It also provides the first real hands-on experience in the field of robotics and autonomous mobility.

In our course on Computational basics and tools of traffic process automation (Rechentechnische Grundlagen und Werkzeuge der Verkehrsprozessautomatisierung), we use Duckiebots to teach students about general control, group control, and swarm control. Duckietown is also the main, shall we say, “tourist attraction” of our department. Every time we hold events, many students come to us to see the Duckiebots cooperating, going through intersections, and so forth. We’ve been using Duckietown for two years, and already it is very popular, inspiring many interesting discussions with our audiences with scientific backgrounds.

Much more efficient than a simple presentation, I’d say! 

Duckiebots come with a camera and wheel encoders, making it easier to get students started, and having them learn about the fundamentals of autonomous mobility.

Would you recommend Duckietown to colleagues and students?

Y. Zeng: Yes absolutely, in fact, I’m a bit sad that you’re not producing the old model anymore! We definitely want to try the latest models, test them as a fleet, and introduce them to our lab in the future. Our main focus is always on the interaction between groups of bots and how they work together.

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!


Embedded Out-of-Distribution Detection in Duckietown

General Information


The project “Embedded Out-of-Distribution (OOD) Detection on an Autonomous Robot Platform” focuses on safety in Duckietown by implementing real-time OOD detection on Duckiebots. The approach uses a machine learning-based OOD detector, specifically a β-Variational Autoencoder (β-VAE), to identify test inputs that deviate from the distribution of the training data. Such inputs can lead to unreliable behavior in machine learning systems, a critical concern for safety on autonomous platforms like the Duckiebot.

Key aspects of the project include:

  • Integration: The β-VAE OOD detector is integrated with the Duckiebot’s ROS-based architecture, alongside lane-following and motor control modules.
  • Emergency Braking: An emergency braking mechanism halts the Duckiebot when OOD inputs are detected, ensuring safety during operation.
  • Evaluation: Performance was evaluated in scenarios where the Duckiebot navigated a track and avoided obstacles. The system achieved an 87.5% success rate in emergency stops.

This work demonstrates a method to mitigate safety risks in autonomous robotics. By providing a framework for OOD detection on low-cost platforms, the project contributes to the broader applicability of safe machine learning in cyber-physical systems.
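The detect-then-brake decision described above can be sketched as a reconstruction-error threshold test. This is a minimal, hypothetical illustration: the threshold value, array sizes, and stand-in reconstructions are made up, and the actual system computes its OOD score with a trained β-VAE on camera images rather than this toy error function.

```python
import numpy as np

# Hypothetical sketch: OOD detection by reconstruction error. A trained
# (beta-)VAE reconstructs in-distribution inputs well, so a large
# reconstruction error suggests an out-of-distribution input.

THRESHOLD = 0.5  # illustrative OOD score threshold

def reconstruction_error(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Mean squared error between input and its reconstruction."""
    return float(np.mean((x - x_hat) ** 2))

def should_emergency_stop(x: np.ndarray, x_hat: np.ndarray,
                          threshold: float = THRESHOLD) -> bool:
    """Trigger the emergency brake when the OOD score exceeds the threshold."""
    return reconstruction_error(x, x_hat) > threshold

# In-distribution: the reconstruction closely matches the input -> keep driving.
x_in = np.ones(16)
assert not should_emergency_stop(x_in, x_in + 0.01)

# Out-of-distribution: the reconstruction diverges -> emergency stop.
x_ood = np.ones(16)
assert should_emergency_stop(x_ood, x_ood + 2.0)
```

As the authors' conclusions note, the choice of threshold trades off stopping distance against false positives, so in practice it would be tuned on held-out in-distribution data.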

Highlights - Embedded Out-of-Distribution Detection in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

Machine learning (ML) is actively finding its way into modern cyber-physical systems (CPS), many of which are safety-critical real-time systems. It is well known that ML outputs are not reliable when testing data are novel with regards to model training and validation data, i.e., out-of-distribution (OOD) test data. We implement an unsupervised deep neural network-based OOD detector on a real-time embedded autonomous Duckiebot and evaluate detection performance. Our OOD detector produces a success rate of 87.5% for emergency stopping a Duckiebot on a braking test bed we designed. We also provide case analysis on computing resource challenges specific to the Robot Operating System (ROS) middleware on the Duckiebot.

Conclusion - Embedded Out-of-Distribution Detection in Duckietown

Here are the conclusions from the authors of this paper:

“We successfully demonstrated that the 𝛽-VAE OOD detection algorithm could run on an embedded platform and provides a safety check on the control of an autonomous robot. We also showed that performance is dependent on real-time performance of the embedded system, particularly the OOD detector execution time. Lastly, we showed that there is a trade-off involved in choosing an OOD detection threshold; a smaller threshold value increases the average stopping distance from an obstacle, but leads to an increase in false positives.

This work also generates new questions that we hope to investigate in the future. The system architecture demonstrated in this paper was not utilizing a real-time OS and did not take advantage of technologies such as GPUs or TPUs, which are now becoming common on embedded systems. There is still much work that can be done to optimize process scheduling and resource utilization while maintaining the goal of using low-cost, off-the-shelf hardware and open-source software. Understanding what quality of service can be provided by a system with these constraints and whether it suffices for reliable operations of OOD detection algorithms is an ongoing research theme.

From the OOD detection perspective, we would like to run additional OOD detection algorithms on the same architecture and compare performance in terms of accuracy and computational efficiency. We would also like to develop a more comprehensive set of test scenarios to serve as a benchmark for OOD detection on embedded systems. These should include dynamic as well as static obstacles, operation in various environments and lighting conditions, and OOD scenarios that occur while the robot is performing more complex tasks like navigating corners, intersections, or merging with other traffic.

Demonstrating OOD detection on the Duckietown platform opens the door for more embedded applications of OOD detectors. This will serve to better evaluate their usefulness as a tool to enhance the safety of ML systems deployed as part of critical CPS.”

Project Authors

Michael Yuhas is currently working as a Research Assistant and pursuing his PhD at Nanyang Technological University, Singapore.

Yeli Feng is currently working as a Lead Data Scientist at Amplify Health, Singapore.

Daniel Jun Xian Ng is currently working as a Mobile Robot Software Engineer at the Hyundai Motor Group Innovation Center Singapore (HMGICS), Singapore.

Zahra Rahiminasab is currently working as a Postdoctoral Researcher at Aalto University, Finland.

Arvind Easwaran is currently working as an Associate Professor at Nanyang Technological University, Singapore.



Intersection Navigation in Duckietown Using 3D Image Features


Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing intersection navigation using 3D image features in Duckietown.

Intersection Navigation in Duckietown: Advancing with 3D Image Features

Intersection navigation in Duckietown using 3D image features is an approach intended to improve autonomous intersection navigation, enhancing decision-making and path planning in complex Duckietown environments, i.e., those made of several road loops and road intersections.

The traditional approach to intersection navigation in Duckietown is naive: (a) stop at the red line before the intersection; (b) read the AprilTag-equipped traffic signs (which provide information on the shape of the intersection and the coordination mechanism); (c) decide which direction to take; (d) coordinate with other vehicles at the intersection to avoid collisions; (e) navigate through the intersection. This last step is performed in an open-loop fashion, leveraging the known appearance specifications of intersections in Duckietown.
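The five steps above can be sketched as a small finite-state machine. This is a hypothetical illustration: the state and event names are invented for clarity and do not correspond to the actual Duckietown software interfaces.

```python
from enum import Enum, auto

# Hypothetical FSM for the stop -> read sign -> decide -> coordinate -> cross
# pipeline described in the text.

class State(Enum):
    LANE_FOLLOWING = auto()
    STOPPED_AT_RED_LINE = auto()
    SIGN_READ = auto()
    DIRECTION_CHOSEN = auto()
    COORDINATED = auto()
    CROSSING = auto()

# (state, event) -> next state; events are illustrative names.
TRANSITIONS = {
    (State.LANE_FOLLOWING, "red_line_detected"): State.STOPPED_AT_RED_LINE,
    (State.STOPPED_AT_RED_LINE, "apriltag_decoded"): State.SIGN_READ,
    (State.SIGN_READ, "direction_selected"): State.DIRECTION_CHOSEN,
    (State.DIRECTION_CHOSEN, "right_of_way_granted"): State.COORDINATED,
    (State.COORDINATED, "start_crossing"): State.CROSSING,
    (State.CROSSING, "intersection_cleared"): State.LANE_FOLLOWING,
}

def step(state: State, event: str) -> State:
    """Advance the FSM; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk one full intersection crossing back to lane following.
s = State.LANE_FOLLOWING
for e in ["red_line_detected", "apriltag_decoded", "direction_selected",
          "right_of_way_granted", "start_crossing", "intersection_cleared"]:
    s = step(s, e)
assert s is State.LANE_FOLLOWING
```

The open-loop character of the final step corresponds to `CROSSING` having no feedback events until the intersection is cleared; closing the loop with pose feedback is exactly what the 3D-feature approach targets.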

By incorporating 3D image features, extracted from the Duckietown road lines, into the perception pipeline, Duckiebots can estimate their pose while crossing the intersection. This closes the control loop, improving navigation accuracy and facilitating the development of new intersection navigation strategies, such as real-time path optimization.

Combining 3D image features with methods such as Bird’s Eye View (BEV) transformations allows for comprehensive representations of the intersection. The integration of these techniques improves the accuracy of stop line detection and obstacle avoidance, contributing to the advancement of autonomous navigation algorithms and supporting real-world deployment scenarios.
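Under a flat-world assumption, a homographic BEV reduces to applying a 3×3 homography to pixel coordinates. Here is a minimal sketch, assuming a calibrated homography is available; the matrices below are illustrative stand-ins, not real camera calibrations.

```python
import numpy as np

# Hypothetical sketch of a flat-world homographic BEV projection: a 3x3
# homography H maps a pixel (u, v) to ground-plane coordinates (x, y).
# In practice H comes from camera calibration.

def pixel_to_ground(H: np.ndarray, u: float, v: float) -> tuple:
    """Apply the homography to homogeneous pixel coords and de-homogenize."""
    p = H @ np.array([u, v, 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Identity homography: pixels map straight onto the ground plane.
H = np.eye(3)
assert pixel_to_ground(H, 10.0, 20.0) == (10.0, 20.0)

# A pure scaling homography doubles the ground-plane coordinates.
H_scale = np.diag([2.0, 2.0, 1.0])
assert pixel_to_ground(H_scale, 3.0, 4.0) == (6.0, 8.0)
```

Applying this per-pixel to road-line detections yields the homographic BEV that, as reported below, proved more accurate than the learned BEVs in this context.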

An AI representation of Duckietown intersection navigation challenges

The method and the challenges of intersection navigation using 3D features

The thesis involves implementing the MILE model (Model-based Imitation LEarning for urban driving), trained on the CARLA simulator, into the Duckietown environment to evaluate its performance in navigating unprotected intersections.

Experiments were conducted using the Gym-Duckietown simulator, where Duckiebots navigated a 4-way intersection across multiple trajectories. Metrics such as success rate, drivable area compliance, and ride comfort were used to assess performance.

The findings indicate that while the MILE model achieved state-of-the-art performance in the CARLA simulator, its generalization to the Duckietown environment without additional training was, as probably expected due to the sim2real gap, limited.

The BEVs generated by MILE were not sufficiently representative of the actual road surface in Duckietown, leading to suboptimal navigation performance. In contrast, the homographic BEV method, despite its assumption of a flat world plane, provided more accurate representations for intersection navigation in this context.

As for most approaches in robotics, there are limitations and tradeoffs to analyze.

Here are some technical challenges of the proposed approach:

  • Generalization across environments: one of the challenges is ensuring that the 3D image feature representation generalizes well across different simulation environments, such as Duckietown and CARLA. The differences in scale, road structures, and dynamics between simulators can impact the performance of the navigation system.
  • Accuracy of BEV representations: the transformation of camera images into Bird’s Eye View (BEV) representations has reduced accuracy, especially when dealing with low-resolution or distorted input data.
  • Real-time processing: integrating 3D image features for navigation requires substantially more computational resources than using 2D features. Achieving near real-time processing speeds for tasks such as intersection navigation is challenging.

Intersection Navigation in Duckietown Using 3D Image Features: Full Report

Intersection Navigation in Duckietown Using 3D Image Features: Authors

Jasper Mulder is currently working as a Junior Outdoor expert at Bever, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.


Variational Autoencoder for autonomous driving in Duckietown

General Information


This project explored using reinforcement learning (RL) and a Variational Autoencoder (VAE) to train an autonomous agent for lane following in the Duckietown Gym simulator. The VAE encodes high-dimensional raw images into a low-dimensional latent space, reducing the complexity of the input to the RL algorithm (Deep Deterministic Policy Gradient, DDPG). The goal was to evaluate whether this dimensionality reduction improved training efficiency and agent performance.
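The dimensionality reduction described above can be sketched as follows. Everything here is hypothetical: the encoder weights are random stand-ins for a trained network, and the image and latent sizes are illustrative rather than those used in the project.

```python
import numpy as np

# Hypothetical sketch: a VAE encoder maps a flattened camera image to a
# latent mean and log-variance, and the reparameterization trick samples
# z = mu + sigma * eps. A linear encoder with random weights stands in
# for the trained network.

rng = np.random.default_rng(0)
IMAGE_DIM, LATENT_DIM = 80 * 60 * 3, 32   # illustrative sizes

W_mu = rng.normal(scale=0.01, size=(LATENT_DIM, IMAGE_DIM))
W_logvar = rng.normal(scale=0.01, size=(LATENT_DIM, IMAGE_DIM))

def encode(image: np.ndarray) -> np.ndarray:
    """Encode a flattened image into a latent sample fed to the RL policy."""
    mu = W_mu @ image
    logvar = W_logvar @ image
    eps = rng.normal(size=LATENT_DIM)
    return mu + np.exp(0.5 * logvar) * eps  # reparameterization trick

x = rng.uniform(size=IMAGE_DIM)  # stand-in for a raw camera frame
z = encode(x)
assert z.shape == (LATENT_DIM,)  # the policy sees 32 numbers, not 14400
```

The point of the experiment was whether DDPG learns faster from the compact `z` than from the raw pixels `x`; as reported below, the difference turned out to be small for this task.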

The agent successfully learned to follow straight lanes using both raw images and VAE-encoded representations. However, training with raw images performed similarly to training with the VAE encodings, likely because the task was simple and offered limited variability in road configurations.

The agent also displayed discrete control behaviors, such as extreme steering, in a task requiring continuous actions. These issues were attributed to the network architecture and limited reward function design.

While the VAE reduced training time slightly, it did not significantly improve performance. The project highlighted the complexity of RL applications, emphasizing the need for robust reward functions and network designs. 

Highlights - Variational Autoencoder and RL for Duckietown Lane Following

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

The use of deep reinforcement learning (RL) for following the center of a lane has been studied for this project. Lane following with RL is a push towards general artificial intelligence (AI) which eliminates the use for hand crafted rules, features, and sensors. 

A project called Duckietown has created the Artificial Intelligence Driving Olympics, which aims to promote AI education and embodied AI tasks. The AIDO team has released an open-sourced simulator which was used as an environment for this study. This approach uses the Deep Deterministic Policy Gradient (DDPG) with raw images as input to learn a policy for driving in the middle of a lane for two experiments. A comparison was also done with using an encoded version of the state as input using a Variational Autoencoder (VAE) on one experiment. 

A variety of reward functions were tested to achieve the desired behavior of the agent. The agent was able to learn how to drive in a straight line, but was unable to learn how to drive on curves. It was shown that the VAE did not perform better than the raw image variant for driving in the straight line for these experiments. Further exploration of reward functions should be considered for optimal results and other improvements are suggested in the concluding statements.

Conclusion - Variational Autoencoder and RL for Duckietown Lane Following

Here are the conclusions from the author of this paper:

“After the completion of this project, I have gained insight on how difficult it is to get RL applications to work well. Most of my time was spent trying to tune the reward function. I have a list of improvements that are suggested as future work. 

  • Different network architectures – I used fully connected networks for all the architectures. I would think CNN architectures may be better at creating features for state representations. 
  • Tuning Networks – Since most of my time was spent on the reward exploration, I did not change any parameters at all. I followed the paper in the original DDPG paper [4]. A hyperparameter search may prove to be beneficial to find parameters that work best for my problem instead of all the problems in the paper. 
  • More training images for VAE 
  • Different Algorithm – Maybe an algorithm like PPO may be able to learn a better policy? 
  • Linear Function Approximation – Deep reinforcement learning has proven to be difficult to tune and work well. Maybe I could receive similar or better results using a different function approximator than a neural network. Wayve explains the use of prioritized experience replay [7], which is a method to improve on randomly sampled tuples of experiences during RL training and is based on sorting the tuples. This may improve performance of both of my algorithms. 
  • Exploring different Ornstein-Uhlenbeck process parameters to encourage, discourage more/less exploration 
  • Other dimensionality reducing methods instead of VAE. Maybe something like PCA? 

As for the AIDO competition, I have made the decision not to submit this work. It became apparent to me as I progressed through the project how difficult it is to get a perfectly working model using reinforcement learning. If I was to continue with this work for the submission, I think I would rather go towards the track of imitation learning. While this would introduce a wide range of new problems, I think intuitively it makes more sense to ”show” the robot how it should drive on the road rather than having it learn from scratch. I even think classical control methods may work better or just as good as any machine learning based algorithm. Although I will not submit to this competition, I am glad I got to express two interests of mine in reinforcement learning and variational autoencoders. 

The supplementary documents for this report include the training set for the VAE, a video of experiment 1 working properly for both DDPG+Raw and DDPG+VAE, and a video of experiment 2 not working properly. The code has been posted to GitHub (Click for link).”

Project Authors

Bryon Kucharski is currently working as a Lead Data Scientist at Gartner, United States.
