YOLO based object detection in Duckietown at night and day

YOLO-based Robust Object Detection in Duckietown

Project Resources

Why Robust Object Detection?

Object detection is the ability of a robot to identify a feature in its surroundings that might influence its actions. For example, if an object is laid on the road it might represent an obstacle, i.e., a region of space that the Duckiebot cannot occupy. Robust object detection becomes particularly important when operating in dynamic environmental conditions.

Obstacles can vary in shape and color, and they can be detected through different sensing modalities, for example, vision or lidar scanning.

In this project, students use a purely vision-based approach for obstacle detection. Using vision is very tricky because small nuisances such as intra-class variations (think of the many different types of duckies) or environmental lighting conditions can dramatically affect the outcome.
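To make the lighting problem concrete, here is a minimal sketch of how lighting variation is often simulated at training time. It uses torchvision's ColorJitter; the transform values and filename are illustrative assumptions, not the students' actual training recipe.

```python
import torchvision.transforms as T
from PIL import Image

# Randomly vary brightness, contrast, and saturation so the detector sees
# a spread of lighting conditions during training.
lighting_augmentation = T.Compose([
    T.ColorJitter(brightness=0.6, contrast=0.4, saturation=0.4),
    T.ToTensor(),
])

image = Image.open("duckie.jpg")          # hypothetical training image
augmented = lighting_augmentation(image)  # different lighting on each call
```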

Robust object detection refers to the ability of a system to detect objects in a broad spectrum of operating conditions, and to do so reliably. 

Detecting objects in Duckietown is therefore important to avoid static and moving obstacles, detect traffic signs, and otherwise guarantee safe driving.

Model Performance Under Normal and Low Lighting Conditions

Robust Object Detection: the challenges

Some of the key challenges associated with vision-based object detection are the following:

Robustness across variable lighting conditions: Ensuring accurate object detection under diverse lighting is complex due to changes in object appearance (check out why in our computer vision classes). The model must handle different lighting scenarios effectively.

Balancing robustness and performance: There’s a trade-off between robustness to lighting variations and achieving high accuracy in standard operating conditions. Prioritizing one may affect the other.

Integration and real-time performance: Integrating the trained neural network (NN) model into the Duckiebot’s onboard system is required for real-time operation, avoiding the lags associated with transporting images across networks. The model’s complexity therefore must align with the computational resources available. This project was executed on DB19 model Duckiebots, equipped with a Raspberry Pi 3B+ and a Coral board.

Data quality and generalization: Ensuring the model generalizes well despite potential biases in the training dataset and transfer learning challenges is crucial. Proper dataset curation and validation are essential.
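For a rough idea of what running a trained detector looks like in practice, here is a minimal sketch using the public YOLOv5 hub API. The weights and image path are placeholders; a project like this one would load a model fine-tuned on Duckietown data instead.

```python
import torch

# Load a small pretrained YOLOv5 model from the public hub; in this project
# the weights would instead be fine-tuned on Duckietown images.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("street_scene.jpg")  # hypothetical camera frame
detections = results.xyxy[0]         # rows: (x1, y1, x2, y2, confidence, class)
print(detections)
```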

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Robust Object Detection: Results

Robust Object Detection: Authors

Maximilian Stölzle is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at MIT as a Visiting Researcher.

Stefan Lionar is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently an Industrial PhD student at Sea AI Lab (SAIL), Singapore.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning

Reward Consistency for Interpretable Feature Discovery in RL

General Information

Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning

Interpretable feature discovery RL

What is interpretable feature discovery in reinforcement learning?

To understand this, let’s introduce a few important topics:

Reinforcement Learning (RL): A machine learning approach where an agent gains the ability to make decisions by engaging with an environment to accomplish a specific objective. Interpretable Feature Discovery in RL is an approach that aims to make the decision-making process of RL agents more understandable to humans.

The need for interpretability: In real-world applications, especially in safety-critical domains like self-driving cars, it is crucial to understand why an RL agent makes a certain decision. Interpretability helps:

  • Build trust in the system
  • Debug and improve the model
  • Ensure compliance with regulations and ethical standards
  • Determine fault if accidents arise

Feature discovery: Feature discovery in this context refers to identifying the key artifacts (features) of the environment that the RL agent is focusing on while making decisions. For example, in a self-driving car simulation, relevant features might include the position of other cars, road signs, or lane markings.
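For intuition, here is a minimal sketch of the gradient-based, action-oriented saliency that post-hoc methods often compute. Note that this illustrates the action matching style of attribution that the paper argues against, and `policy_net` is an assumed interface, not the authors' code.

```python
import torch

def saliency_map(policy_net, observation):
    """Which pixels most influence the chosen action? (action-matching style)
    `observation` is a (C, H, W) image tensor; `policy_net` returns logits."""
    obs = observation.clone().requires_grad_(True)
    logits = policy_net(obs.unsqueeze(0))
    action = logits.argmax(dim=1).item()
    # Backpropagate from the chosen action's logit down to the input pixels.
    logits[0, action].backward()
    return obs.grad.abs().max(dim=0).values  # per-pixel importance, (H, W)
```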

Learn about RL, navigation, and other robot autonomy topics at the link below!

Abstract

The black-box nature of deep reinforcement learning (RL) hinders them from real-world applications. Therefore, interpreting and explaining RL agents have been active research topics in recent years. Existing methods for post-hoc explanations usually adopt the action matching principle to enable an easy understanding of vision-based RL agents. In this article, it is argued that the commonly used action matching principle is more like an explanation of deep neural networks (DNNs) than the interpretation of RL agents. 

It may lead to irrelevant or misplaced feature attribution when different DNNs’ outputs lead to the same rewards or different rewards result from the same outputs. Therefore, we propose to consider rewards, the essential objective of RL agents, as the essential objective of interpreting RL agents as well. To ensure reward consistency during interpretable feature discovery, a novel framework (RL interpreting RL, denoted as RL-in-RL) is proposed to solve the gradient disconnection from actions to rewards. 

We verify and evaluate our method on the Atari 2600 games as well as Duckietown, a challenging self-driving car simulator environment. The results show that our method manages to keep reward (or return) consistency and achieves high-quality feature attribution. Further, a series of analytical experiments validate our assumption of the action matching principle’s limitations.

Highlights - Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning

Here is a visual tour of the work of the authors. For more details, check out the full paper.

Conclusion

Here are the conclusions from the authors of this paper:

“In this article, we discussed the limitations of the commonly used assumption, the action matching principle, in RL interpretation methods. It is suggested that action matching cannot truly interpret the agent since it differs from the reward-oriented goal of RL. Hence, the proposed method first leverages reward consistency during feature attribution and models the interpretation problem as a new RL problem, denoted as RL-in-RL. 

Moreover, it provides an adjustable observation length for one-step reward or multistep reward (or return) consistency, depending on the requirements of behavior analyses. Extensive experiments validate the proposed model and support our concerns that action matching would lead to redundant and noncausal attention during interpretation since it is dedicated to exactly identical actions and thus results in a sort of “overfitting.”

 Nevertheless, although RL-in-RL shows superior interpretability and dispenses with redundant attention, further exploration of interpreting RL tasks with explicit causality is left for future work.”

Project Authors

Qisen Yang is an Artificial Intelligence PhD Student at Tsinghua University, China.

Huanqian Wang is currently pursuing the B.E. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China.

Mukun Tong is currently pursuing the B.E. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China.

Wenjie Shi received his Ph.D. degree in control science and engineering from the Department of Automation, Institute of Industrial Intelligence and System, Tsinghua University, Beijing, China, in 2022.

Guang-Bin Huang is in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.

Shiji Song is currently a Professor with the Department of Automation, Tsinghua University, Beijing, China.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Dynamic Obstacle Avoidance

Implementing vision-based dynamic obstacle avoidance


Project Resources

Why dynamic obstacle avoidance?

Dynamic obstacle avoidance is the process of detecting a region of space that is not navigable (an obstacle), planning a path around it, and executing that plan.

When the obstacle moves, the plan needs to account for the future positions of the object as well, making the process significantly more complicated than passing a static obstacle. 

With this aim, the authors of this project designed and implemented a robust passing algorithm for Duckiebots in Duckietown.

The approach adopted was to develop a new LED-based detection system, modify the typical Duckietown lane following pipeline to plan around obstacles, and deploy a new controller to execute manoeuvres.

Dynamic obstacle avoidance: the challenges

Some of the key challenges associated with this project are the following:

Detection Accuracy: The Duckiebot and Duckies detection systems occasionally produce false positives. Light sources from other Duckiebots or shiny objects can interfere with the LED detection, while yellow line segments can be mistaken for Duckies. Improving the reliability of detection under varying lighting conditions is essential.

Lane Following Stability: The Duckiebots sometimes become unstable while overtaking, especially when driving in the left lane. The lane-following system struggles with large lane pose angles or rapid changes in lane position, which can cause the Duckiebot to veer off the road. Enhancing the lane-following algorithm to maintain stability during lane changes is critical.

Velocity Estimation: Estimating the speed of moving Duckiebots accurately is challenging. The current position data obtained from LED detection fluctuates too much to provide a reliable velocity measurement. Developing a more robust method for estimating the velocity of other Duckiebots is needed to ensure safe and efficient overtaking.
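To illustrate why raw position differencing is noisy, here is a minimal smoothing-then-differencing estimator. It is a hypothetical sketch, not the project's solution; as the authors note, something more robust is ultimately needed.

```python
class VelocityEstimator:
    """Exponentially smooths noisy position fixes before differencing them."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # smoothing factor in (0, 1]; assumed value
        self.position = None
        self.velocity = 0.0

    def update(self, measured_position, dt):
        if self.position is None:
            self.position = measured_position
            return self.velocity
        # Smooth the new fix, then take a finite difference over dt seconds.
        smoothed = self.alpha * measured_position + (1 - self.alpha) * self.position
        self.velocity = (smoothed - self.position) / dt
        self.position = smoothed
        return self.velocity
```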

Variable Speed Control: Implementing variable speed control during overtaking is problematic due to instability in the lane-following pipeline when speeds are dynamically adjusted. Adjusting speed based on the detected obstacle’s speed without losing lane stability is difficult, necessitating improvements in the lane control model to handle speed changes effectively.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Dynamic Obstacle Avoidance: Results

Dynamic Obstacle Avoidance: Authors

Nikolaj Witting is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at Trackman as an Algorithm Developer.

Fidel Esquivel Estay is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the Co-Founder at UpCircle.

Johannes Lienhart is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the CTO at Tethys Robotics.

Paula Wulkop is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, where she is currently pursuing her Ph.D.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Graph autonomous bots history

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

General Information

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development


Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development by Dianzhao Li, Paul Auerbach, and Ostap Okhrin is a review that highlights the rapid development of the industry and the important contributions of small-scale car platforms to robot autonomy research.

This survey is a valuable resource for anyone looking to get their bearings in the landscape of autonomous driving research.

We are glad to see Duckietown not only included in the list, but identified as one of the platforms that started a marked increase in the trend of yearly published papers.

The mission of Duckietown, since we started as a class at MIT, is to democratize access to the science and technology of robot autonomy. Part of how we intended to achieve this mission was to streamline the way autonomous behaviors for non-trivial robots were developed, tested, and deployed in the real world.

From 2018 to 2021 we ran several editions of the AI Driving Olympics (AI-DO): an international competition to benchmark the state of the art of embodied AI for safety-critical applications. It was a great experience, not only because it led to the development of the Challenges infrastructure, the Autolab infrastructure, and many agent baselines that catalyzed further developments and are now available to the broader community, but also because it was the first time physical robots were brought to the world's leading scientific conference in machine learning (NeurIPS: the Neural Information Processing Systems conference, known as NIPS the first time AI-DO was launched).

All this infrastructure development and testing might have been instrumental in making R&D in autonomous mobile robotics more efficient. Practitioners in the field know that doing R&D is particularly difficult because final outcomes are the result of the tuple (robot) x (environment) x (task). Not standardizing everything other than the specific feature under development (i.e., not following the ceteris paribus principle) often leads to apples-and-pears comparisons, i.e., bad science, which hampers the overall progress of the field.

We are happy to see Duckietown recognized as a contributor to facilitating the making of good science in the field. We believe that even better and more science will come in the next years, as the students being educated with the Duckietown system start their professional journeys in academia or the workforce.

We are excited to see what the future of robot autonomy will look like, and we will continue doing our best by providing tools, workflows, and comprehensive resources to facilitate the professional development of the next generations of scientists, engineers, and practitioners in the field!

To learn more about Duckietown teaching resources follow the link below.

Starting around 2016, with the introduction of Duckietown, BARC, and Autorally, there was a significant increase in research papers.

Abstract

We report the abstract of the authors’ work:

“While engaging with the unfolding revolution in autonomous driving, a challenge presents itself, how can we effectively raise awareness within society about this transformative trend? While full-scale autonomous driving vehicles often come with a hefty price tag, the emergence of small-scale car platforms offers a compelling alternative. 

These platforms not only serve as valuable educational tools for the broader public and young generations but also function as robust research platforms, contributing significantly to the ongoing advancements in autonomous driving technology. 

This survey outlines various small-scale car platforms, categorizing them and detailing the research advancements accomplished through their usage. The conclusion provides proposals for promising future directions in the field.”

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

Here is a visual tour of the work. For more details, check out the full paper.

Summary and conclusion

Here is what the authors learned from this survey:

“In this paper, we offer an overview of the current state-of-the-art developments in small-scale autonomous cars. Through a detailed exploration of both past and ongoing research in this domain, we illuminate the promising trajectory for the advancement of autonomous driving technology with small-scale cars. We initially enumerate the presently predominant small-scale car platforms widely employed in academic and educational domains and present the configuration specifics of each platform. Similar to their full-size counterparts, the deployment of hyper-realistic simulation environments is imperative for training, validating, and testing autonomous systems before real-world implementation. To this end, we show the commonly employed universal simulators and platform-specific simulators.

Furthermore, we provide a detailed summary and categorization of tasks accomplished by small-scale cars, encompassing localization and mapping, path planning and following, lane-keeping, car following, overtaking, racing, obstacle avoidance, and more. Within each benchmarked task, we classify the literature into distinct categories: end-to-end systems versus modular systems and traditional methods versus ML-based methods. This classification facilitates a nuanced understanding of the diverse approaches adopted in the field. The collective achievements of small-scale cars are thus showcased through this systematic categorization. Since this paper aims to provide a holistic review and guide, we also outline the commonly utilized in various well-known platforms. This information serves as a valuable resource, enabling readers to leverage our survey as a guide for constructing their own platforms or making informed decisions when considering commercial options within the community.

We additionally present future trends concerning small-scale car platforms, focusing on different primary aspects. Firstly, enhancing accessibility across a broad spectrum of enthusiasts: from elementary students and colleagues to researchers, demands the implementation of a comprehensive learning pipeline with diverse entry levels for the platform. Next, to complete the whole ecosystem of the platform, a powerful car body, varying weather conditions, and communications issues should be addressed in a smart city setup. These trends are anticipated to shape the trajectory of the field, contributing significantly to advancements in real-world autonomous driving research.
While we have aimed to achieve maximum comprehensiveness, the expansive nature of this topic makes it challenging to encompass all noteworthy works. Nonetheless, by illustrating the current state of small-scale cars, we hope to offer a distinctive perspective to the community, which would generate more discussions and ideas leading to a brighter future of autonomous driving with small-scale cars.”

Project Authors

Dianzhao Li

Dianzhao Li is a research assistant at the Technische Universität Dresden, Dresden, Germany.

Paul Auerbach

Paul Auerbach is with Barkhausen Institut gGmbH, Dresden, Germany.

Ostap Okhrin

Ostap Okhrin is Chair of Statistics and Econometrics at the Institute of Economics and Transport, School of Transportation, Technische Universität Dresden, Germany.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

 


Ackermann steering Duckiebots and rocket

Development of an Ackermann steering autonomous vehicle


Project Resources

Why Ackermann steering?

Ackermann steering is a configuration of wheels on a vehicle characterized by four wheels: two in the back, powered by a DC motor, and two in the front that steer through commands received by a servo motor. In contrast, differential drive robots have two wheels that are independently powered by two DC motors, with a passive omnidirectional third wheel that acts as support.

The dynamics (i.e., the “kind of movement”) of differential drive robots are quite different from those of real-world automobiles, which, e.g., cannot turn on the spot. Ackermann steering achieves more realistic vehicle dynamics at a cost: increased hardware complexity and mathematical modeling. But neither of these challenges has stopped talented Duckietown students from designing and implementing an Ackermann steering Duckiebot!

 

(Duckietown trivia: careful Duckietown observers will have noticed that the Duckiebot models historically have been called DB18, DB19, DB21, etc. Ever wondered which would have been the DB20?)

Ackermann steering in Duckietown: the challenges

Ackermann steering introduces more complex mathematical modeling, with respect to differential drive robots, in order to predict future movement and elaborate pose estimates on the fly. The kinematic modeling of the front steering apparatus is non-trivial, and the radius of curvature that Ackermann steering robots showcase is very different from that of differential drive robots.
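For a sense of the modeling difference, here is a minimal sketch of the kinematic bicycle model commonly used to approximate Ackermann steering, next to the differential drive update. It is a generic textbook sketch, not the students' exact model.

```python
import math

def ackermann_step(x, y, theta, v, steering_angle, wheelbase, dt):
    """One Euler step of the kinematic bicycle model: the turning rate is
    limited by the steering angle and the wheelbase."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steering_angle) * dt
    return x, y, theta

def differential_drive_step(x, y, theta, v_left, v_right, baseline, dt):
    """Differential drive, for contrast: with v_left = -v_right the robot
    rotates in place, which the bicycle model cannot do."""
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / baseline
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```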

Differential drive robots are capable of turning on the spot (applying equal and opposite commands to the two wheels), while anyone who has ever tried parallel parking a real car knows that this is not possible.

How complex it would be for Ackermann steering robots to navigate Duckietown is the real challenge of this fun project.

The authors start from basic design elements through CAD, iterate through various bills of materials, make prototypes, and program them leveraging the Duckietown software infrastructure to achieve autonomous behaviors in Duckietown. 

Project Highlights

Here is the output of their work. Check out the documents for more details!

Ackermann steering: Results

(Turn on the sound for best experience!)

The autonomous behaviors of the Ackermann steering Duckiebot, a.k.a. DB20 or DBv2, shown above are the work of Timothy Scott, a former Duckietown student. 

Ackermann steering Duckiebot: Authors

Merlin Hosner is a former Duckietown student in the Institute for Dynamic Systems and Control (IDSC) of ETH Zurich (D-MAVT), and currently works at Climeworks as a Process Development Engineer.

Rafael Fröhlich is a former Duckietown student in the Institute for Dynamic Systems and Control (IDSC) of ETH Zurich (D-MAVT), where he is currently a Research Assistant.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Vision-based reinforcement learning for lane-tracking control

Vision-based Reinforcement Learning for Lane-Tracking Control

General Information

Vision-based reinforcement learning for lane-tracking control

a) Test track used for simulated reinforcement learning and baseline evaluations; b) and c) real and simulated test track used for the evaluation of the simulation-to-reality transfer

What is Vision-based Reinforcement Learning? A few important topics:

Reinforcement Learning: a machine learning paradigm where an agent learns to make decisions by interacting with an environment to achieve a goal. In this context, reinforcement learning is used to teach a vehicle how to drive within Duckietown lanes by providing rewards or penalties based on its actions.

Vision-based Control: The control of the vehicle is based on visual inputs, specifically images captured by a forward-facing camera. These images are processed by a neural network to determine appropriate steering actions, allowing the vehicle to track lanes and avoid collisions.

Simulation-to-Reality (sim2real) Transfer Learning: The trained policy, which learns to control the vehicle in a simulated environment, is transferred to real-world scenarios. The effectiveness of the trained model in real-world driving situations is evaluated, demonstrating the ability to generalize learning from simulation to reality.

Domain Randomization: This technique involves introducing variations or randomizations into the simulation environment during training. By exposing the agent to a wide range of simulated scenarios with different lighting conditions, road surfaces, and other environmental factors, domain randomization helps improve the model’s ability to generalize to unseen real-world conditions.
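A minimal sketch of what domain randomization can look like in code, assuming a simulator object with tunable parameters. The attribute names and ranges are hypothetical; gym-duckietown exposes similar knobs through its own configuration.

```python
import random

def randomize_episode(sim):
    """Re-sample nuisance parameters before each training episode so the
    policy cannot overfit to one fixed rendering of the world."""
    sim.light_intensity = random.uniform(0.5, 1.5)        # lighting variation
    sim.camera_noise_std = random.uniform(0.0, 0.02)      # sensor noise
    sim.road_texture = random.choice(["asphalt_a", "asphalt_b", "asphalt_c"])
    sim.camera_angle_offset_deg = random.uniform(-2.0, 2.0)  # mounting error
```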

Learn about RL, navigation and other robot autonomy topics at the link below!

Abstract

The present study focused on vision-based end-to-end reinforcement learning in relation to vehicle control problems such as lane following and collision avoidance. The controller policy presented in this paper is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, although its training has only been carried out in a simulation.

This model, realised by a simple, convolutional network, relies on images of a forward-facing monocular camera and generates continuous actions that directly control the vehicle. To train this policy, proximal policy optimization was used, and to achieve the generalisation capability required for real performance, domain randomisation was used. A thorough analysis of the trained policy was conducted by measuring multiple performance metrics and comparing these to baselines that rely on other methods.

To assess the quality of the simulation-to-reality transfer learning process and the performance of the controller in the real world, simple metrics were measured on a real track and compared with results from a matching simulation. Further analysis was carried out by visualising salient object maps.

Highlights - Vision-based reinforcement learning for lane-tracking control

Here is a visual tour of the work of the authors. For more details, check out the full paper.

Conclusion

Here are the conclusions from the authors of this paper:

“This work presented a solution to the problem of complex, vision-based lane following in the Duckietown environment using reinforcement learning to train an end-to-end steering policy capable of simulation-to-real transfer learning. It was found that the training is sensitive to problem formulation, such as the representation of actions. 

This study has demonstrated that by using domain randomisation, a moderately detailed and accurate simulation is sufficient for training end-to-end lane-following agents that operate in a real environment. The performance of these agents was evaluated by comparing some basic metrics to match real and simulated scenarios. 

Agents were also successfully trained to perform collision avoidance in addition to lane following. Finally, salient object visualisation was used to give an illustrative explanation of the inner workings of the policies in both the real and simulated domains.”

Project Authors

András Kalapos

András Kalapos is a Machine Learning PhD Student at Budapest University of Technology and Economics, Hungary.

Csaba Gór

Csaba Gór is a Machine Learning Engineer at Turbine, in Hungary.

Róbert Moni

Róbert Moni is a Senior Machine Learning Engineer at Continental.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

 

End-to-end Deep RL (DRL) systems in autonomous driving environments that rely on visual input for vehicle control face potential security risks, including:

  • State Adversarial Perturbations: Subtle alterations to visual input that mislead the DRL agent, causing incorrect decision-making.
  • Reward Tampering: Manipulation of the reward signal to misguide the learning process, leading the agent to adopt unsafe or inefficient policies.

These vulnerabilities can compromise the safety and reliability of self-driving vehicles.

Deep Reinforcement Learning for Autonomous Navigation on Duckietown Platform: Evaluation of Adversarial Robustness

Evaluating Adversarial Robustness in Duckietown Navigation

General Information

Deep RL for Autonomous Navigation on Duckietown Platform: Evaluation of Adversarial Robustness

Adversarial Navigation Robustness - Sequence of robot positions with a DRL agent trained under adversarial and non-adversarial settings in a lane following experiment. The UAPFGSM method makes the agent move in circles with minimal perturbations, while adversarial reward tampering forces it to move in the opposite direction of the road.

What is adversarial robustness in navigation tasks all about? A few important topics:

Reinforcement Learning (RL) is a type of machine learning where agents learn to make decisions by receiving rewards or penalties based on their actions in an environment. This is great because it removes the need for curated training datasets.

Deep Reinforcement Learning (DRL) enhances RL by using deep neural networks to process complex inputs and make decisions. Deep networks are neural networks with multiple layers.

Adversarial Robustness refers to a system’s ability to resist and maintain performance despite deliberate attacks or input perturbations.
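As a concrete example of a state adversarial perturbation, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), the attack family behind the UAPFGSM method studied in this paper. It is a generic textbook sketch, not the authors' exact formulation, and `policy_net` is an assumed interface.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(policy_net, observation, action_taken, epsilon=0.01):
    """Return the observation shifted by a small step that most increases
    the loss of the action the agent would otherwise take."""
    obs = observation.clone().requires_grad_(True)
    logits = policy_net(obs.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([action_taken]))
    loss.backward()
    # epsilon bounds the perturbation: tiny per pixel, yet chosen in the
    # direction that most misleads the network.
    return (obs + epsilon * obs.grad.sign()).detach()
```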

Navigation is the task of finding feasible paths between points in the environment, as Google Maps or similar systems do for us in everyday life.

Learn about RL, navigation and other robot autonomy topics at the link below.

Abstract

Self-driving cars have gained widespread attention in recent years due to their potential to revolutionize the transportation industry. However, their success critically depends on the ability of reinforcement learning (RL) algorithms to navigate complex environments safely. In this paper, we investigate the potential security risks associated with end-to-end deep RL (DRL) systems in autonomous driving environments that rely on visual input for vehicle control, using the open-source Duckietown platform for robotics and self-driving vehicles.

We demonstrate that current DRL algorithms are inherently susceptible to attacks by designing a general state adversarial perturbation and a reward tampering approach. Our strategy involves evaluating how attacks can manipulate the agent’s decision-making process and using this understanding to create a corrupted environment that can lead the agent towards low-performing policies. We introduce our state perturbation method, accompanied by empirical analysis and extensive evaluation, and then demonstrate a targeted attack using reward tampering that leads the agent to catastrophic situations.

Our experiments show that our attacks are effective in poisoning the learning of the agent when using the gradient-based Proximal Policy Optimization algorithm within the Duckietown environment. The results of this study are of interest to researchers and practitioners working in the field of autonomous driving, DRL, and computer security, and they can help inform the development of safer and more reliable autonomous driving systems.

Highlights - Evaluation of Adversarial Robustness Results

Here is a visual tour of the work of the authors. For more details, check out the paper link.

Conclusion

Here are the conclusions from the authors of this paper:

“The focus of our study was to address adversarial attacks on deep reinforcement learning (DRL) agents, specifically examining state adversarial attacks and reward-tampering attacks. 

We developed a parametric framework for state adversarial attacks and a non-parametric framework for reward tampering attacks, which enabled us to create effective attacks. We found that the performance of a DRL agent declined rapidly after the attack, and the deviation from the road was worse than that of standard DRL. 

We used salient maps to provide a clear explanation of the policies’ internal operations in both the adversarial and non-adversarial aspects. Our research provides insight into the potential vulnerabilities of DRL agents and highlights the need for more robust and secure agents to mitigate the risk of adversarial attacks. 

Moving forward, future work will focus on incorporating real-world analysis to test the performance of the DuckieBot under both adversarial and non-adversarial settings.”

Project Authors

Abdullah Hosseini is a Research and Development Specialist at Weill Cornell Medicine in Qatar.

Junaid Qadir is a Professor of Computer Engineering at Qatar University.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

 


Project parking in Duckietown

Introducing Autonomous Parking in Duckietown Cities


Project Resources

Why Autonomous Parking?

Parking is notoriously a hard task for many humans to master. Hence, students of the Autonomous Mobility on Demand course at ETH Zurich wanted to determine to what degree this applies to autonomous parking with Duckiebots.

The goal of the Autonomous Parking project was to design, implement, and test a complete autonomous parking solution compliant with the Duckietown ecosystem.

Duckiebots should be able to enter and exit a parking area, identify viable parking spots, actually park in and exit their spots safely, and avoid collisions with other Duckiebots during the entire process.

The vision is to integrate autonomous charging solutions into the parking area, so Duckiebots can charge themselves when needed.

Autonomous parking in Duckietown: the challenges

Leveraging the Duckietown lane following vision baseline provided a basic infrastructure to build upon.

Some technical challenges specific to this project were:

Backward Lane Following: Duckiebots must drive backward to exit parking spots but only have a camera on the front. This requires adjusting the Duckiebot’s control system for stable backward driving, by changing the pose estimation process and re-tuning the PID controller.
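For reference, the controller being re-tuned is conceptually a textbook PID on the lane-pose error; here is a minimal sketch, with placeholder gains rather than the project's values.

```python
class PID:
    """Textbook PID controller; backward driving requires re-tuned
    (and partly sign-flipped) gains."""

    def __init__(self, kp=1.0, ki=0.0, kd=0.1):  # placeholder gains
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```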

Dynamic Color Adaptation: the new parking lot design introduced additional appearance specifications to the Duckietown city setup, such as blue lines identifying parking areas. Modifying the Duckiebots’ native lane detector to recognize blue lines in addition to yellow, red, and white allows for additional flexibility in lane following based on specified colors.
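A minimal sketch of what adding a new color to the detector might look like, using an HSV threshold in OpenCV. The hue range is an assumption; real thresholds would be calibrated on Duckietown footage.

```python
import cv2
import numpy as np

def detect_blue_lines(bgr_image):
    """Segment blue parking-area lines in HSV space."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([100, 80, 80])    # assumed lower bound for blue hues
    upper = np.array([130, 255, 255])  # assumed upper bound
    mask = cv2.inRange(hsv, lower, upper)
    return mask  # non-zero pixels mark candidate blue line segments
```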

Time Slot Coordination: Managing the availability of parking spaces is crucial to minimize the probability of collisions between Duckiebots. This project tackled this challenge by implementing a time-slot system to manage parking exits to prevent collisions, using red LEDs for signaling to other Duckiebots.

Project Highlights

Here is a visual tour of the work of the authors.

Check out the documents for more details!

Project Parking Results

(Turn on the sound for best experience!)

Project Authors

Trevor Phillips is a former Duckietown student, now a Machine Learning SWE at Apple in Switzerland. 

Vincenzo Polizzi

Vincenzo Polizzi is a former Duckietown student, now a Ph.D. student at the University of Toronto, Canada.

Linus Lingg

Linus Lingg is a former Duckietown student, now the Co-Founder and CTO of bottleplus in Switzerland. 

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Dino Claro: a Duckietown journey from project to thesis

Dino Claro's Duckietown journey: from project to graduate thesis

Dino Claro, a mechanical and mechatronics engineering graduate from the University of Cape Town, shares his Duckietown journey: with challenges and results.

Cape Town, February 13th, 2024: Dino Claro, Graduate Mechanical and Mechatronics Engineer at the University of Cape Town, shares his experience with Duckietown and the project he developed using Duckiebots for his master's thesis.

Duckinator: odometry pose-estimation for the Duckiebot robotic car platform

Duckies - abbey road
Hello and welcome Mr. Dino Claro! Could you introduce yourself?

My name is Dino Claro and I’m a Graduate Mechanical and Mechatronics Engineer at the University of Cape Town.

Thanks for accepting to share your experience with us. When did you first run into Duckietown?

During vacation work at the University of Cape Town (UCT) Mechatronic Systems Group, I was given the open-ended task of estimating the pose of a robot car. The goal of the vacation work was to solve a problem independently but also free from the stresses of receiving a mark or grade. There was no expectation for novel work. In fact, the vacation work was only two weeks, and the expected solution would have been straightforward, probably odometry-based. Thus, Duckinator was born.

That's when you decided to use Duckietown?

With two platforms available, a basic Arduino 4WD kit and the Duckiebot, I could simply not resist the Duckies’ pull. The idea of using a Linux-based platform geared toward AI was extremely exciting. 

At the end of the two-week vacation work, I was still ploughing through Duckietown documentation, the EdX: Self-Driving Cars with Duckietown MOOC, and ROS tutorials. My pose estimation solution seemed very far down the road. At that point, I should have realised that the DB (besides its cute exterior) is nuanced, to say the least.

duckies pyramid
Could you describe your project for us?

The early phase of my project was extremely rudimentary. I had only had a couple of weeks during the vacation work to play with the DB [Duckiebot]. I planned to continue with the EdX MOOC [Self-Driving Cars with Duckietown, 2023 edition] while researching Docker and ROS on the side for the first couple of weeks and then begin development. A pitfall with this technique was completing a section of the MOOC or some other tutorial and believing I could implement it myself. My initial thinking was that if the MOOC could be completed in 10 weeks or so, and given that I already had a couple of weeks' head start from the vacation work, I should be able to implement my standalone autonomous solution for the DB in the 12-week time frame.

Spoiler alert, Duckinator did not rival Tesla. I made the realisation about 4 weeks into the project. At that stage, I was in the Object Detection activity of the MOOC. With the world in a frenzy over AI and ML, I was itching to dip my hands in some of this mysterious ML stuff. 

Dr. Pretorius obliged, and my plan from this point was to implement my own standalone Duckietown-compliant Docker image for the YOLOv5. Charged with the excitement of the new project direction, I began researching ML, computer vision algorithms and YOLO itself. Implementing the YOLOv5 model was relatively smooth sailing and I loved learning computer vision. In all honesty, my YOLOv5 model was just organising the Object Detection MOOC into a standalone Docker image as the MOOC hides the Docker image from the student. I obtained the training data using the MOOC helper files and then trained the YOLOv5 model using a very similar Google Colab script as provided by the MOOC. 

I slightly extended the YOLOv5 model from the MOOC by training the model to detect DBs, which proved to be sort of successful. As I only had one Duckiebot, I tested the model by parking Duckinator in front of a mirror or putting it in front of my laptop showing photos of other DBs. Due to this shabby testing, I left this extension out of my write-up. This was all completed after week 7.

With the world in a frenzy over AI and ML, I was itching to dip my hands in some of this mysterious ML stuff.

Duckiebot image detection
Did you meet your objectives?

Completing the Object Detection model effectively meant that my revised project brief had been met but as I still had some time, I needed to extend the model in some way. 

Duckinator had eyes but I wanted to make it move … autonomously. I had the idea of creating a safety controller where the distance of objects from the duckiebot could be inferred using the predicted bounding box and perspective geometry.

My theory went as follows: knowing the real-world size of all the objects the DB could detect and comparing this to the dimensions of the bounding box provided by the YOLO model, it would be possible to infer the depth of the object, and this depth could then be used as the basis for autonomous controller commands. This led me to research autonomous vehicle safety architectures/controllers and modern depth estimation algorithms. I soon realised that much more advanced autonomous architectures existed. For instance, modern autonomous vehicles fuse camera feeds, object detection models, kinematic models and various other sensors to generate vector or depth maps. The creation of these depth maps is extremely complex and a field of intense research.
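The theory described here boils down to the pinhole-camera relation Z ≈ f·H/h: depth is the focal length times the object's real-world height over its height in pixels. A toy sketch with made-up numbers:

```python
def depth_from_bbox(real_height_m, bbox_height_px, focal_length_px):
    """Pinhole-model depth estimate from a bounding box of known class.
    Breaks down for objects off the assumed plane, as discussed below."""
    return focal_length_px * real_height_m / bbox_height_px

# Illustrative numbers only: a 4 cm duckie spanning 80 px with a focal
# length of 300 px would sit at roughly 0.15 m.
print(depth_from_bbox(0.04, 80, 300))
```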

imposter_marked
What were the challenges you encountered during your project?

After coding up my perspective projection algorithm, I obtained unexpected results, negative in most cases. When I described my algorithm more clearly to Dr. Pretorius, he made it clear that a simple perspective projection would not work in this case.

I was projecting everything from the camera image to the ground plane, but of course the duckies and any other objects do not exist solely on the ground plane. This being week 10 of the project, I had simply run out of time; I had to scrap the perspective projection approach and had no time to implement any of the more complex algorithms out there.

I was devastated at the fact that Duckinator was not going to move. 

Upon some reflection though, the YOLOv5 model was working quite well, and I had all this research about autonomous architectures and depth estimation. One of the autonomous architectures I researched was Braitenberg vehicles acting as, possibly, the simplest autonomous architecture. A basic Braitenberg controller was simple enough to implement and would mean once again Duckinator could move. 

Bounding boxes were populated onto a black image and then divided into left and right region maps. These maps were then element-wise multiplied with a weight matrix to provide a scalar value which can be used for wheel commands. Using the ‘fear’ Braitenberg vehicle the DB would then steer away from any detected objects. Another realisation was that my project was experimental with one of the main goals being for it to act as a stepping stone for future projects. 
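A minimal sketch of the left/right weighting scheme described above; the weight matrix, gain, and speeds are illustrative, not the project's actual values.

```python
import numpy as np

def braitenberg_commands(bboxes, img_w=640, img_h=480, gain=1e-4):
    """'Fear' Braitenberg control from detections: paint boxes onto a blank
    map, weight it, sum each half, and speed up the wheel on the side with
    more obstacle mass so the robot turns away."""
    occupancy = np.zeros((img_h, img_w))
    for x1, y1, x2, y2 in bboxes:
        occupancy[int(y1):int(y2), int(x1):int(x2)] = 1.0

    # Pixels near the bottom of the image (closer to the robot) weigh more.
    weights = np.tile(np.linspace(0.0, 1.0, img_h)[:, None], (1, img_w))
    weighted = occupancy * weights
    left = weighted[:, : img_w // 2].sum()
    right = weighted[:, img_w // 2 :].sum()

    base_speed = 0.3
    return base_speed + gain * left, base_speed + gain * right  # (v_left, v_right)
```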

At this stage (two weeks before my project was due) I was satisfied with a newly engineered aim: evaluating the viability of the Duckietown platform at the undergraduate level by implementing an ML object detection model. The key outputs were the YOLOv5 model (Duckie Detector) as well as possible future projects and trajectories for students.

The real learning occurs when getting your hands dirty, experimenting and troubleshooting.

duckie_avoidance
What are your final considerations?

Reflecting on my journey from a complete beginner to a slightly more competent beginner, here’s my advice for those on a similar journey: 

Begin with a blank Duckietown-compliant Docker image and dive into coding a demo, whether it’s based on my solution or another. Ultimately, the goal is to first understand the code and then attempt to recreate it without directly copying. 

While documentation and EdX activities are useful in providing broad overviews and points of contact for debugging, relying solely on them may create a deceptive sense of competency. 

The real learning occurs when getting your hands dirty, experimenting and troubleshooting.

Thank you very much for taking the time; we really appreciated your story! Is there anything else you would like to add?

Yes, I would say that embracing the hands-on experience is key to understanding the platform and being immersed in the infectious ethos surrounding Duckietown.

On that note, I would like to express my gratitude to Dr. Pretorius for granting me the freedom to experiment and the opportunity to work with the Duckiebot. I eagerly await future projects and the growth of the Duckietown community.

DB and Duckie pyramid

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!

BCI Initiative event screen

Massachusetts Institute of Technology: first BCI hackathon

Cambridge, MA, USA – McGovern Institute, 24-25 February 2024: Over 100 participants took part in the first MIT BCI hackathon, competing in teams to control Duckiebots using brain-computer interfaces.

Quick links

Controlling Duckiebots using brain-computer interfaces

Over 100 participants gathered at the Massachusetts Institute of Technology for the first BCI hackathon, organized by Dr. Federico Claudi. The participants tried to control a Duckiebot using only brain-computer interfaces and competed in a series of tasks.

BCI is the field of research that studies how to measure, amplify, filter and utilize electrical signals from the brain to interact with external devices.
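As a toy illustration of this measure-filter-utilize pipeline, here is a sketch that band-passes one EEG channel to the 8-12 Hz alpha band and returns its power, a quantity a team might map to a drive command. The sampling rate, band, and filter order are assumptions, not the teams' actual decoders.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_band_power(eeg_channel, fs=250.0):
    """Band-pass a 1-D EEG signal to 8-12 Hz and return its mean power."""
    nyquist = fs / 2.0
    b, a = butter(4, [8.0 / nyquist, 12.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, eeg_channel)  # zero-phase filtering
    return float(np.mean(filtered ** 2))
```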

MIT BCI Hackathon man wearing headset
MIT hackathon woman wearing headset

What made this hackathon distinctive was the hands-on challenge, where participants were tasked with controlling a physical robot. This not only tested participants’ technical skills but also showcased their ability to tackle real-world problems through innovative BCI applications.

The task teams competed on was having Duckiebots (DB21-J4) navigate a road loop as fast as possible while avoiding Duckies. Here is an example:

The hardware used in this competition included X.on EEG headsets and Duckiebots for control. Also, the winning team’s solution will soon be made available as a reproducible Learning Experience with Duckietown – stay tuned!

The Duckiebot used here (DB21-J4) is a DIY robot, powered by an NVIDIA Jetson Nano, designed to introduce learners to autonomous technologies.

If you would like to contribute to developing accessible BCI LXs with Duckietown and support the dissemination of BCI research, reach out to us at [email protected].

Learn more about Duckietown

The Duckietown platform enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.