Survey on Testbeds for Vehicle Autonomy & Robot Swarms

General Information

Collage showcasing diverse testbeds in the realm of Connected and Automated Vehicles, Vehicle Autonomy and Robot Swarms

“A Survey on Small-Scale Testbeds for Connected and Automated Vehicles and Robot Swarms” by Armin Mokhtarian et al. offers a comparison of current small-scale testbeds for Connected and Automated Vehicles (CAVs), Vehicle Autonomy and Robot Swarms (RS).

As the authors note, small-scale autonomous vehicle testbeds are paving the way to faster and more meaningful research and development in vehicle autonomy, embodied AI, and AI robotics as a whole.

Although small-scale, often made of off-the-shelf components and relatively low-cost, these platforms provide the opportunity for deep insights into specific scientific and technological challenges of autonomy. 

Duckietown, in particular, is highlighted for its modular, miniature-scale smart-city environment, which facilitates the study of autonomous vehicle localization and traffic management through onboard sensors.

Learn about robot autonomy, traditional robotics autonomy architectures, agent training, sim2real, navigation, and other topics with Duckietown, starting from the link below!

Abstract

Connected and Automated Vehicles (CAVs) and Robot Swarms (RS) have the potential to transform the transportation and manufacturing sectors into safer, more efficient, sustainable systems.

However, extensive testing and validation of their algorithms are required. Small-scale testbeds offer a cost-effective and controlled environment for testing algorithms, bridging the gap between full-scale experiments and simulations. This paper provides a structured overview of characteristics of testbeds based on the sense-plan-act paradigm, enabling the classification of existing testbeds.

Its aim is to present a comprehensive survey of various testbeds and their capabilities. We investigated 17 testbeds and present our results on the public webpage www.cpm-remote.de/testbeds.

Furthermore, this paper examines seven testbeds in detail to demonstrate how the identified characteristics can be used for classification purposes.

Highlights - Survey on Testbeds for Vehicle Autonomy & Robot Swarms

Here is a visual tour of the authors’ work. For more details, check out the full paper or the corresponding up-to-date project website.

 

Conclusion - Survey on Testbeds for Vehicle Autonomy & Robot Swarms

Here are the conclusions from the authors of this paper:

“This survey provides a detailed overview of small-scale CAV/RS testbeds, with the aim of helping researchers in these fields to select or build the most suitable testbed for their experiments and to identify potential research focus areas. We structured the survey according to characteristics derived from potential use cases and research topics within the sense-plan-act paradigm.

Through an extensive investigation of 17 testbeds, we have evaluated 56 characteristics and have made the results of this analysis available on our webpage. We invited the testbed creators to assist in the initial process of gathering information and updating the content of this webpage. This collaborative approach ensures that the survey maintains its relevance and remains up to date with the latest developments.

The ongoing maintenance will allow researchers to access the most recent information. In addition, this paper can serve as a guide for those interested in creating a new testbed. The characteristics and overview of the testbeds presented in this survey can help identify potential gaps and areas for improvement.

One ongoing challenge that we identified with small-scale testbeds is the enhancement of their ability to accurately map to real-world conditions, ensuring that experiments conducted are as realistic and applicable as possible.

Overall, this paper provides a resource for researchers and developers in the fields of connected and automated vehicles and robot swarms, enabling them to make informed decisions when selecting or replicating a testbed and supporting the advancement of testbed technologies by identifying research gaps.”

Project Authors

Armin Mokhtarian is currently working as a Research Associate & PhD Candidate at RWTH Aachen University, Germany.

 

Patrick Scheffe is a Research Associate at Lehrstuhl Informatik 11 – Embedded Software, Germany.

 

Maximilian Kloock is working as a Team Manager Advanced Battery Management System Technologies at FEV Europe, Germany.

Heeseung Bang is currently a Postdoctoral Associate at Cornell University, USA.

 

Viet-Anh Le is a Visiting Graduate Student at Cornell University, USA.

Johannes Betz is an Assistant Professor at Technische Universität München, Germany.

 

Sean Wilson is a Senior Research Engineer at Georgia Institute of Technology, USA.

 

Spring Berman is an Associate Professor at Arizona State University, USA.

Liam Paull is an Associate Professor at Université de Montréal, Canada and he is also the Chief Education Officer at Duckietown, USA.

 

Amanda Prorok is an Associate Professor at University of Cambridge, UK.

 

Bassam Alrifaee is a Professor at Bundeswehr University Munich, Germany.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Monocular Visual Odometry

Monocular Visual Odometry for Duckiebot Navigation

Project Resources

Why Monocular Visual Odometry?

Monocular Visual Odometry (VO) falls under the “perception” block of the traditional robot autonomy architecture. 

Perception in robot autonomy involves transforming sensor data into actionable information to accomplish a given task in the environment.

Perception is crucial because it allows robots to create a representation of themselves in the environment they are operating within, which in turn enables them to navigate and avoid static or dynamic obstacles, forming the foundation for effective autonomy.

The function of monocular visual odometry is to estimate the robot’s pose over time by analyzing the sequence of images captured by a single camera. 

VO in this project is implemented through the following steps:

  1. Image acquisition: the node receives images from the camera, which serve as the primary source of data for motion estimation.

  2. Feature extraction: key features (points of interest) are extracted from the images using methods like ORB, SURF, or SIFT, which highlight salient details in the scene.

  3. Feature matching: the extracted features from consecutive images are matched, identifying how certain points have moved from one image to the next.

  4. Outlier filtering: erroneous or mismatched features are filtered out, improving the accuracy of the feature matches. In this project, histogram fitting to discard outliers is used.

  5. Rotation estimation: the filtered feature matches are used to estimate the rotation of the Duckiebot, determining how the orientation has changed.

  6. Translation estimation: simultaneously, the node estimates the translation, i.e., how much the Duckiebot has moved in space.

  7. Camera information and kinematics inputs: additional information from the camera (e.g., intrinsic parameters) and kinematic data (e.g., velocity) help refine the translation and rotation estimations.

  8. Path and odometry outputs: the final estimated motion is used to update the Duckiebot’s odometry (evolution of pose estimate over time) and the path it follows within the environment.

Monocular visual odometry is challenging, but it provides a low-cost, camera-based solution for real-time motion estimation in dynamic environments.
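The outlier-filtering stage (step 4 above) can be illustrated with a small, self-contained sketch. The code below is a simplified, pure-Python illustration of histogram-based match filtering, not the project's actual implementation (which operates on ORB features via OpenCV): matches whose motion direction falls outside the dominant histogram bin are treated as outliers.

```python
import math

def filter_matches_by_histogram(displacements, n_bins=18, keep_neighbors=1):
    """Histogram-based outlier rejection for feature matches.
    displacements: list of (dx, dy) pixel motions of matched features.
    Keeps matches whose motion direction falls in (or next to) the dominant
    histogram bin; scattered mismatches land in sparse bins and are dropped."""
    bins = [0] * n_bins
    bin_of = []
    for dx, dy in displacements:
        a = math.atan2(dy, dx)  # motion direction in [-pi, pi]
        b = int((a + math.pi) / (2 * math.pi) * n_bins) % n_bins
        bins[b] += 1
        bin_of.append(b)
    dominant = max(range(n_bins), key=bins.__getitem__)
    ok = {(dominant + k) % n_bins for k in range(-keep_neighbors, keep_neighbors + 1)}
    return [d for d, b in zip(displacements, bin_of) if b in ok]

# Mostly rightward motion, plus two wild mismatches:
matches = [(10, 1), (9, -1), (11, 0), (10, 2), (-8, 7), (0, -12)]
inliers = filter_matches_by_histogram(matches)
# The two mismatches are rejected; the four consistent matches remain.
```

In the real pipeline the surviving matches would then feed the rotation and translation estimation steps.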

Monocular Visual Odometry: the challenges

Implementing Monocular Visual Odometry involves processing images at runtime, which presents several challenges that affect performance:
  • Extracting and matching visual features from consecutive images is a fundamental task in monocular VO. This process can be hindered by factors such as low texture areas, motion blur, variations in lighting conditions and occlusions.
  • Monocular VO systems face inherent scale ambiguity since a single camera cannot directly measure depth. The system must infer scale from visual features, which can be error-prone and less accurate in the absence of depth cues.
  • Running VO algorithms requires significant computational resources, particularly when processing high-resolution images at a high frequency. The Raspberry Pi used in the Duckiebot has limited processing power and memory, which constrains the performance of the visual odometry pipeline (the newer DB21J Duckiebots use a Jetson Nano for computation).
  • Monocular VO systems, like all odometry systems relying on dead-reckoning models, are susceptible to long-term drift and divergence due to cumulative errors in feature tracking and pose estimation.
This project addresses visual odometry challenges by implementing robust feature extraction and matching algorithms (ORB by default) and optimizing parameters to handle dynamic environments and computational constraints. Moreover, it integrates visual odometry with the existing Duckiebot autonomy pipeline, leveraging the finite state machine for accurate pose estimation and navigation.

Project Highlights

Here is the output of the authors’ work. Check out the GitHub repository for more details!

 

Monocular Visual Odometry: Results

Monocular Visual Odometry: Authors

Gianmarco Bernasconi is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Research Engineer at Motional, Singapore.

 

Tomasz Firynowicz is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as a Software Engineer at Dentsply Sirona, Switzerland. Tomasz was a mentor on this project.

 

Guillem Torrente Martí is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as a Robotics Engineer at SonyAI, Japan. Guillem was a mentor on this project.

Yang Liu is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is a Doctoral Student at EPFL, Switzerland. Yang was a mentor on this project

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Sim2Real Transfer of Multi-Agent Policies for Self-Driving

General Information

Flowchart illustrating the step update loop in the Duckie-MAAD architecture, detailing the process of agent action, path following, wheel velocity calculation, pose estimation, and policy update when training multi-agent reinforcement learning (MARL).

In the field of autonomous driving, transferring policies from simulation to the real world (Sim-to-real transfer, or Sim2Real) is theoretically desirable, as it is much faster and more cost-effective to train agents in simulation rather than in the real world. 

Given that simulations are just that – representations of the real world – the question of whether the trained policies will actually perform well enough in the real world is always open. This challenge is known as the “Sim-to-Real gap”.

This gap is especially pronounced in Multi-Agent Reinforcement Learning (MARL), where agent collaboration and environmental synchronization significantly complicate policy transfer.

The authors of this work propose employing “Multi-Agent Proximal Policy Optimization” (MAPPO) in conjunction with domain randomization techniques, to create a robust pipeline for training MARL policies that is not only effective in simulation but also adaptable to real-world conditions.
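At the core of MAPPO is the PPO-style clipped surrogate objective, optimized per agent with a centralized critic. As a reference, the standard formulation (not reproduced from the paper) reads:

```latex
r_t(\theta) = \frac{\pi_\theta(a_t \mid o_t)}{\pi_{\theta_\mathrm{old}}(a_t \mid o_t)},
\qquad
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}\!\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\right)\right]
```

where the advantage estimate $\hat{A}_t$ comes from a centralized value function that conditions on the global state, while each agent's policy $\pi_\theta$ acts on its local observation $o_t$.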

Through varying levels of parameter randomization—such as altering lighting conditions, lane markings, and agent behaviors— the authors enhance the robustness of trained policies, ensuring they generalize effectively across a wide range of real-world scenarios.

Learn about training, sim2real, navigation, and other robot autonomy topics with Duckietown starting from the link below!

Abstract

Autonomous Driving requires high levels of coordination and collaboration between agents. Achieving effective coordination in multi-agent systems is a difficult task that remains largely unresolved. Multi-Agent Reinforcement Learning has arisen as a powerful method to accomplish this task because it considers the interaction between agents and also allows for decentralized training—which makes it highly scalable. 

However, transferring policies from simulation to the real world is a big challenge, even for single-agent applications. Multi-agent systems add additional complexities to the Sim-to-Real gap due to agent collaboration and environment synchronization. 

In this paper, we propose a method to transfer multi-agent autonomous driving policies to the real world. For this, we create a multi-agent environment that imitates the dynamics of the Duckietown multi-robot testbed, and train multi-agent policies using the MAPPO algorithm with different levels of domain randomization. We then transfer the trained policies to the Duckietown testbed and show that when using our method, domain randomization can reduce the reality gap by 90%. 

Moreover, we show that different levels of parameter randomization have a substantial impact on the Sim-to-Real gap. Finally, our approach achieves significantly better results than a rule-based benchmark.

 

Highlights - Sim2Real Transfer of Multi-Agent Policies for Self-Driving

Here is a visual tour of the work of the authors. For more details, check out the full paper.

 

Conclusion - Sim2Real Transfer of Multi-Agent Policies for Self-Driving

Here are the conclusions from the authors of this paper:

“AVs will lead to enormous safety and efficiency benefits across multiple fields, once the complex problem of multiagent coordination and collaboration is solved. MARL can help towards this, as it enables agents to learn to collaborate by sharing observations and rewards. 

However, the successful application of MARL is heavily dependent on the fidelity of the simulation environment they were trained in. We present a method to train policies using MARL and to reduce the reality gap when transferring them to the real world via adding domain randomization during training, which we show has a significant and positive impact in real performance compared to rule-based methods or policies trained without different levels of domain randomization. 

It is important to mention that despite the performance improvements observed when using domain randomization, its use presents diminishing returns as seen with the overly conservative policy, for it cannot completely close the reality gap without increasing the fidelity of the simulator. Additionally, the amount of domain randomization to be used is case-specific and a theory for the selection of domain randomization remains an open question. The quantification and description of reality gaps presents another opportunity for future research.”

Project Authors

Eduardo Candela

Eduardo Candela is currently working as the Co-Founder & CTO of MAIHEM (YC W24), California.

 
Leandro Parada

Leandro Parada is a Research Associate at Imperial College London, United Kingdom.

 

Luís Marques is a Doctoral Researcher in the Department of Robotics at the University of Michigan, USA.

 
 
 
Tiberiu Andrei Georgescu

Tiberiu Andrei Georgescu is a Doctoral Researcher at Imperial College London, United Kingdom.

 
 
 
 
Yiannis Demiris

Yiannis Demiris is a Professor of Human-Centred Robotics and Royal Academy of Engineering Chair in Emerging Technologies at Imperial College London, United Kingdom.

 
Panagiotis Angeloudis

Panagiotis Angeloudis is a Reader in Transport Systems and Logistics at Imperial College London, United Kingdom.

 

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Goto-1: Autonomous Navigation using Dijkstra

Goto-1: Planning with Dijkstra

Project Resources

Why planning with Dijkstra?

Planning is one of the three main components, or “blocks”, in a traditional robotics architecture for autonomy: “to see, to plan, to act” (perception, planning, and control). 

The function of the planning “block” is to provide the autonomous decision-making part of the robot’s mind, i.e., the controller, with a reference path to follow.

In the context of Duckietown, planning is applied at different hierarchical levels, from lane following to city navigation. 

This project aimed to build upon the vision-based lane following pipeline, introducing a deterministic planning algorithm to allow one Duckiebot to go from any location (or tile) on a compliant Duckietown map to a specific target tile (hence the name: Goto-1).

Dijkstra’s algorithm is a graph-based method to determine, in a computationally efficient manner, the shortest path between two nodes in a graph.
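As a sketch of the idea, here is Dijkstra’s algorithm run on a toy road graph (tile names and edge costs are invented for illustration; the project builds its graph from the Duckietown map and traffic sign IDs):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: {neighbor: cost}}."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]           # priority queue of (distance, node)
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if goal not in dist:
        return None, float("inf")
    # Walk back through predecessors to recover the path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy city graph: nodes are tiles, edge weights are travel costs.
city = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
path, cost = dijkstra(city, "A", "D")
# path == ["A", "B", "C", "D"], cost == 3
```

Once the shortest path is known, the planner only needs to translate each graph edge into a turn command at the corresponding intersection.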

Goto-1: Autonomous Navigation using Dijkstra
Navigation State Estimation

Autonomous Navigation: the challenges

The new planning capabilities of Duckiebots enable autonomous navigation building on pre-existing functionalities, such as “lane following”, “intersection detection and identification”, and “intersection navigation” (we are operating in a scenario with only one agent on the map, so coordination and obstacle avoidance are not central to this project).

Lane following in Duckietown is mainly vision-based, and as such suffers from the typical challenges of vision in robotics: motion blur, occlusions, sensitivity to environmental lighting conditions and “slow” sampling.

Intersection detection in Duckietown relies on the identification of the red lines on the road layer. Identification of the type of intersection, and relative location of the Duckiebot with respect to it, is instead achieved through the detection and interpretation of fiducial markers, appropriately specified and located on the map. In the case of Duckietown, April Tags (ATs) are used. Each AT, in addition to providing the necessary information regarding the type of intersection (3- or 4-way) and the position of the Duckiebot with respect to the intersection, is mapped to a unique ID in the Duckietown traffic sign database. 

These traffic sign IDs can be used to unambiguously define the graph of the city roads. Based on this, and leveraging the lane following pipeline state estimator, it is possible to estimate the location (with tile accuracy) of the Duckiebot with respect to a global map reference frame, providing the agent sufficient information to know when to stop.

After stopping at an intersection, detecting and identifying it, Duckiebots are ready to choose which direction to go next. This is where the Dijkstra planning algorithm comes into play. After the planner communicates the desired turn to take, the Duckiebot drives through the intersection, before switching back to lane following behavior after completing the crossing. In Duckietown, we refer to the combined operation of these states as “indefinite navigation”. 

Switching between different “states” of the robot mind (lane following, intersection detection and identification, intersection navigation, and then back to lane following) requires the careful design and implementation of a “finite state machine” which, triggered by specific events, allows for the Duckiebot to transition between these states. 
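A finite state machine of this kind can be sketched as a simple transition table. The state and event names below are illustrative, not the actual Duckietown FSM configuration:

```python
# Transition table: (current state, event) -> next state.
# Events not listed for a state leave the state unchanged.
TRANSITIONS = {
    ("LANE_FOLLOWING", "red_line_detected"): "AT_INTERSECTION",
    ("AT_INTERSECTION", "turn_command_received"): "INTERSECTION_NAV",
    ("INTERSECTION_NAV", "crossing_complete"): "LANE_FOLLOWING",
}

class NavigationFSM:
    """Minimal event-driven finite state machine for indefinite navigation."""

    def __init__(self):
        self.state = "LANE_FOLLOWING"

    def on_event(self, event):
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = NavigationFSM()
fsm.on_event("red_line_detected")      # -> AT_INTERSECTION
fsm.on_event("turn_command_received")  # -> INTERSECTION_NAV
fsm.on_event("crossing_complete")      # -> LANE_FOLLOWING
```

In the real system, events such as “red_line_detected” are published by the perception nodes, and each state activates or deactivates the corresponding controllers.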

Integrating a new package within the existing indefinite navigation framework can cause inconsistencies and undefined behaviors, including unreliable AT detection, lane following difficulties, and inconsistent intersection navigation.

Performance evaluation of the GOTO-1 project involved testing three implementations with ten trials each, revealing variability in success rates.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Autonomous Navigation: Results

Autonomous Navigation: Authors

Johannes Boghaert is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the CEO of Superlab Suisse, Switzerland.

Merlin Hosner is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as Process Development Engineer at Climeworks, Switzerland. Merlin was a mentor on this project.

Gioele Zardini is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is an Assistant Professor at MIT. Gioele was a mentor on this project.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer

Enhancing Visual Domain Randomization for Sim2Real Transfer

General Information

Image showing the high level overview of the proposed method in the research Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer

One of the classical objections made to machine learning approaches to embedded autonomy (i.e., to creating agents that are deployed on real, physical robots) is that training requires data, data requires experiments, and experiments are “expensive” (time, money, etc.). 

The natural counterargument is to use simulation to create the training data, because simulations are much less expensive than real-world experiments: they can be run continuously, with accelerated time, they don’t require supervision, nobody gets tired, etc. 

But, as the experienced roboticist knows, “simulations are doomed to succeed”. This phrase captures the notion that simulations do not contain the same wealth of information as the real world: they include only what their programmers designed them to include, and so do not capture the full complexity of the real world. Eventually things will “work” in simulation, but does that mean they will “work” in the real world, too?

As Carl Sagan once said: “If you wish to make an apple pie from scratch, you must first invent the universe”. 

Domain randomization is an approach to mitigate the limitations of simulations. Instead of training an agent on one set of parameters defining the simulation, many simulations are run, with different values of these parameters. E.g., in the context of a driving simulator like Duckietown, one set of parameters could make the sky purple instead of blue, or give the lane markings slightly different geometric properties, etc. The idea behind this approach is that the agent will be trained on a distribution of datasets that are all slightly different, hopefully making the agent more robust to real world nuisances once deployed in a physical body. 
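In code, domain randomization often boils down to re-sampling the simulator’s parameters at the start of every training episode. A minimal sketch, with parameter names and ranges invented for illustration (not taken from the paper or the Duckietown simulator):

```python
import random

def sample_visual_params(rng):
    """Draw one randomized visual configuration for a training episode.
    Parameter names and ranges are illustrative placeholders."""
    return {
        "sky_rgb": tuple(rng.randint(0, 255) for _ in range(3)),
        "lane_marking_width_m": rng.uniform(0.04, 0.06),
        "light_intensity": rng.uniform(0.6, 1.4),
        "camera_noise_std": rng.uniform(0.0, 0.02),
    }

rng = random.Random(0)  # seeded for reproducibility
episodes = [sample_visual_params(rng) for _ in range(3)]
# Each training episode sees a slightly different world.
```

The agent never sees the same visual configuration twice, so it cannot overfit to any single rendering of the scene.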

In this paper, the authors investigate visual domain randomization specifically. 

Learn about RL, navigation, and other robot autonomy topics at the link below!

Abstract

In order to train reinforcement learning algorithms, a significant amount of experience is required, so it is common practice to train them in simulation, even when they are intended to be applied in the real world. To improve robustness, camera-based agents can be trained using visual domain randomization, which involves changing the visual characteristics of the simulator between training episodes in order to improve their resilience to visual changes in their environment.

In this work, we propose a method, which includes real-world images alongside visual domain randomization in the reinforcement learning training procedure to further enhance the performance after sim-to-real transfer. We train variational autoencoders using both real and simulated frames, and the representations produced by the encoders are then used to train reinforcement learning agents.

The proposed method is evaluated against a variety of baselines, including direct and indirect visual domain randomization, end-to-end reinforcement learning, and supervised and unsupervised state representation learning.

By controlling a differential drive vehicle using only camera images, the method is tested in the Duckietown self-driving car environment. We demonstrate through our experimental results that our method improves learnt representation effectiveness and robustness by achieving the best performance of all tested methods.

Highlights - Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer

Here is a visual tour of the work of the authors. For more details, check out the full paper.

Conclusion - Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer

Here are the conclusions from the authors of this paper:

“In this work we proposed a novel method for learning effective image representations for reinforcement learning, whose core idea is to train a variational autoencoder using visually randomized images from the simulator, but include images from the real world as well, as if it was just another visually different version of the simulator.

We evaluated the method in the Duckietown self-driving environment on the lane-following task, and our experimental results showed that the image representations of our proposed method improved the performance of the tested reinforcement learning agents both in simulation and reality. This demonstrates the effectiveness and robustness of the representations learned by the proposed method. We benchmarked our method against a wide range of baselines, and the proposed method performed among the best in all cases.

Our experiments showed that using some type of visual domain randomization is necessary for a successful sim-to-real transfer. Variational autoencoder-based representations tended to outperform supervised representations, and both outperformed representations learned during end-to-end reinforcement learning. Also, for visual domain randomization, when using no real images, invariance regularization-based methods seemed to outperform direct methods. Based on our results, we conclude that including real images in simulation-based reinforcement learning training is able to enhance the real world performance of the agent – when using the two-stage approach, proposed in this paper.”

Project Authors

András Béres is currently working as a Junior Deep Learning Engineer at Continental, Hungary.

Bálint Gyires-Tóth is an Associate Professor at Budapest University of Technology and Economics, Hungary.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

YOLO based object detection in Duckietown at night and day

YOLO-based Robust Object Detection in Duckietown

Project Resources

Why Robust Object Detection?

Object detection is the ability of a robot to identify a feature in its surroundings that might influence its actions. For example, if an object is laid on the road it might represent an obstacle, i.e., a region of space that the Duckiebot cannot occupy. Robust object detection becomes particularly important when operating in dynamic environmental conditions.

Obstacles can be of various shapes or colors, and they can be detected through different sensing modalities, for example, through vision or lidar scanning. 

In this project, students use a purely vision-based approach for obstacle detection. Using vision is very tricky because small nuisances such as in-class variations (think of the many different types of duckies) or environmental lighting conditions will dramatically affect the outcome. 

Robust object detection refers to the ability of a system to detect objects in a broad spectrum of operating conditions, and to do so reliably. 

Detecting objects in Duckietown is therefore important to avoid static and moving obstacles, detect traffic signs, and otherwise guarantee safe driving. 

Model Performance Under Normal and Low Lighting Conditions

Robust Object Detection: the challenges

Some of the key challenges associated with vision-based object detection are the following:

Robustness across variable lighting conditions: Ensuring accurate object detection under diverse lighting is complex due to changes in object appearance (check out why in our computer vision classes). The model must handle different lighting scenarios effectively.

Balancing robustness and performance: There’s a trade-off between robustness to lighting variations and achieving high accuracy in standard operating conditions. Prioritizing one may affect the other.

Integration and real-time performance: Integrating the trained neural network (NN) model into the Duckiebot’s system is required for real-time operation, avoiding the lags associated with transporting images across networks. The model’s complexity must therefore align with the computational resources available. This project was executed on DB19 model Duckiebots, equipped with a Raspberry Pi 3B+ and a Coral board.

Data quality and generalization: Ensuring the model generalizes well despite potential biases in the training dataset and transfer learning challenges is crucial. Proper dataset curation and validation are essential.
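One concrete post-processing step shared by YOLO-style detectors is non-maximum suppression (NMS), which removes duplicate boxes for the same object. A minimal pure-Python sketch of the idea (illustrative, not the project’s actual code, which runs a trained YOLO network):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thresh=0.5):
    """detections: list of (box, score). Greedily keep the highest-scoring
    boxes, discarding any box that overlaps an already-kept one too much."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

# Two overlapping detections of one duckie, plus one distant detection:
dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
kept = nms(dets)
# The lower-scoring duplicate is suppressed; two boxes remain.
```

Running NMS on-board keeps the detector’s output compact, which matters on the limited hardware discussed above.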

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Robust Obstacle Detection: Results

Robust Object Detection: Authors

Maximilian Stölzle is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at MIT as a Visiting Researcher.

Stefan Lionar is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently an Industrial PhD student at Sea AI Lab (SAIL), Singapore.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning

Reward Consistency for Interpretable Feature Discovery in RL

General Information

Interpretable feature discovery RL

What is interpretable feature discovery in reinforcement learning?

To understand this, let’s introduce a few important topics:

Reinforcement Learning (RL): A machine learning approach where an agent gains the ability to make decisions by engaging with an environment to accomplish a specific objective. Interpretable Feature Discovery in RL is an approach that aims to make the decision-making process of RL agents more understandable to humans.
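The state-action-reward cycle underlying this definition can be sketched in a few lines. The environment and policy below are toy stand-ins of our own (not Duckietown code), meant only to show the interaction loop:

```python
def step(state, action):
    """Toy 1-D environment: the agent moves left (0) or right (1) along a
    line and is rewarded for reaching position 3. A hypothetical stand-in
    for a real environment such as a driving simulator."""
    next_state = state + (1 if action == 1 else -1)
    done = next_state == 3
    reward = 1.0 if done else -0.1   # small penalty per step, bonus at the goal
    return next_state, reward, done

def run_episode(policy, max_steps=20):
    """The core RL interaction loop: observe state, act, collect reward."""
    state, total_reward = 0, 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = step(state, action)
        total_reward += reward
        if done:
            break
    return total_reward

def always_right(state):
    return 1

print(run_episode(always_right))  # reaches the goal in 3 steps
```

An RL algorithm would adjust the policy to maximize this episode return; interpretability methods then ask which parts of the state the learned policy actually relies on.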

The need for interpretability: In real-world applications, especially in safety-critical domains like self-driving cars, it is crucial to understand why an RL agent makes a certain decision. Interpretability helps:

  • Build trust in the system
  • Debug and improve the model
  • Ensure compliance with regulations and ethical standards
  • Attribute fault when accidents occur

Feature discovery: Feature discovery in this context refers to identifying the key aspects (features) of the environment that the RL agent focuses on while making decisions. For example, in a self-driving car simulation, relevant features might include the position of other cars, road signs, or lane markings.

Learn about RL, navigation, and other robot autonomy topics at the link below!

Abstract

The black-box nature of deep reinforcement learning (RL) hinders them from real-world applications. Therefore, interpreting and explaining RL agents have been active research topics in recent years. Existing methods for post-hoc explanations usually adopt the action matching principle to enable an easy understanding of vision-based RL agents. In this article, it is argued that the commonly used action matching principle is more like an explanation of deep neural networks (DNNs) than the interpretation of RL agents. 

It may lead to irrelevant or misplaced feature attribution when different DNNs’ outputs lead to the same rewards or different rewards result from the same outputs. Therefore, we propose to consider rewards, the essential objective of RL agents, as the essential objective of interpreting RL agents as well. To ensure reward consistency during interpretable feature discovery, a novel framework (RL interpreting RL, denoted as RL-in-RL) is proposed to solve the gradient disconnection from actions to rewards. 

We verify and evaluate our method on the Atari 2600 games as well as Duckietown, a challenging self-driving car simulator environment. The results show that our method manages to keep reward (or return) consistency and achieves high-quality feature attribution. Further, a series of analytical experiments validate our assumption of the action matching principle’s limitations.

Highlights - Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning

Here is a visual tour of the work of the authors. For more details, check out the full paper.

Conclusion

Here are the conclusions from the authors of this paper:

“In this article, we discussed the limitations of the commonly used assumption, the action matching principle, in RL interpretation methods. It is suggested that action matching cannot truly interpret the agent since it differs from the reward-oriented goal of RL. Hence, the proposed method first leverages reward consistency during feature attribution and models the interpretation problem as a new RL problem, denoted as RL-in-RL. 

Moreover, it provides an adjustable observation length for one-step reward or multistep reward (or return) consistency, depending on the requirements of behavior analyses. Extensive experiments validate the proposed model and support our concerns that action matching would lead to redundant and noncausal attention during interpretation since it is dedicated to exactly identical actions and thus results in a sort of “overfitting.”

 Nevertheless, although RL-in-RL shows superior interpretability and dispenses with redundant attention, further exploration of interpreting RL tasks with explicit causality is left for future work.”

Project Authors

Qisen Yang is an Artificial Intelligence PhD Student at Tsinghua University, China.

Huanqian Wang is currently pursuing the B.E. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China.

Mukun Tong is currently pursuing the B.E. degree in control science and engineering with the Department of Automation, Tsinghua University, Beijing, China.

Wenjie Shi received his Ph.D. degree in control science and engineering from the Department of Automation, Institute of Industrial Intelligence and System, Tsinghua University, Beijing, China, in 2022.

Guang-Bin Huang is in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.

Shiji Song is currently a Professor with the Department of Automation, Tsinghua University, Beijing, China.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Dynamic Obstacle Avoidance

Implementing vision based dynamic obstacle avoidance

Implementing vision based dynamic obstacle avoidance

Project Resources

Why dynamic obstacle avoidance?

Dynamic obstacle avoidance is the process of detecting a region of space that is not navigable (an obstacle), planning a path around it, and executing that plan.

When the obstacle moves, the plan needs to account for the future positions of the object as well, making the process significantly more complicated than passing a static obstacle. 

With this aim, the authors of this project designed and implemented a robust passing algorithm for Duckiebots in Duckietown.

The approach adopted was to develop a new LED-based detection system, to modify the typical Duckietown lane following pipeline to plan around obstacles, and to deploy a new controller to execute the passing manoeuvres.
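To give a flavor of the planning side, here is a minimal constant-velocity prediction and collision check. This is an illustrative sketch under our own assumptions (function names, coordinates, and the safety radius are hypothetical), not the authors' exact planner:

```python
def predict_positions(pos, vel, dt, horizon):
    """Constant-velocity prediction of an obstacle's future positions.
    pos, vel: (x, y) tuples in lane coordinates; returns one point per step."""
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, horizon + 1)]

def is_clear(ego_path, obstacle_path, safety_radius=0.15):
    """Check that ego and obstacle are never within safety_radius of each
    other at the same time step (both paths sampled at the same dt)."""
    for (ex, ey), (ox, oy) in zip(ego_path, obstacle_path):
        if (ex - ox) ** 2 + (ey - oy) ** 2 < safety_radius ** 2:
            return False
    return True

# A slow obstacle ahead in the right lane; the ego plans a pass in the left lane.
obstacle = predict_positions((0.5, 0.0), (0.1, 0.0), dt=0.1, horizon=10)
ego = predict_positions((0.0, 0.25), (0.4, 0.0), dt=0.1, horizon=10)
print(is_clear(ego, obstacle))  # -> True
```

The same check with the ego staying in the right lane would flag a collision, which is what triggers the overtaking manoeuvre in the first place.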

Dynamic obstacle avoidance: the challenges

Some of the key challenges associated with this project are the following:

Detection Accuracy: The Duckiebot and Duckies detection systems occasionally produce false positives. Light sources from other Duckiebots or shiny objects can interfere with the LED detection, while yellow line segments can be mistaken for Duckies. Improving the reliability of detection under varying lighting conditions is essential.
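As a rough illustration of why bright reflections cause false positives, here is a minimal brightness-threshold detector in NumPy. The function and threshold are our own assumptions for illustration, not the project's actual LED detection code:

```python
import numpy as np

def detect_led_centroid(gray, threshold=220):
    """Return the centroid (row, col) of pixels brighter than `threshold`,
    or None if no pixel qualifies. Any sufficiently bright reflection also
    passes this test, which is exactly the false-positive mode described."""
    ys, xs = np.nonzero(gray > threshold)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 60x80 grayscale frame with a bright 3x3 blob centered at (20, 40).
frame = np.zeros((60, 80), dtype=np.uint8)
frame[19:22, 39:42] = 255
print(detect_led_centroid(frame))  # -> (20.0, 40.0)
```

Making such a detector robust typically requires additional cues (blinking frequency, color, blob shape) rather than brightness alone.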

Lane Following Stability: The Duckiebots sometimes become unstable while overtaking, especially when driving in the left lane. The lane-following system struggles with large lane pose angles or rapid changes in lane position, which can cause the Duckiebot to veer off the road. Enhancing the lane-following algorithm to maintain stability during lane changes is critical.

Velocity Estimation: Estimating the speed of moving Duckiebots accurately is challenging. The current position data obtained from LED detection fluctuates too much to provide a reliable velocity measurement. Developing a more robust method for estimating the velocity of other Duckiebots is needed to ensure safe and efficient overtaking.
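One standard remedy for fluctuating position data, shown here purely as a sketch (not the project's final solution), is to low-pass filter finite-difference estimates with an exponential moving average:

```python
def estimate_velocity(positions, dt, alpha=0.3):
    """Estimate speed from noisy 1-D position samples: finite differences
    smoothed with an exponential moving average (smaller alpha = smoother)."""
    v = 0.0
    for prev, curr in zip(positions, positions[1:]):
        raw = (curr - prev) / dt           # noisy instantaneous estimate
        v = alpha * raw + (1 - alpha) * v  # low-pass filter
    return v

# Positions of a bot moving at 0.2 m/s, sampled at 10 Hz with measurement jitter.
samples = [0.2 * 0.1 * k + 0.005 * (-1) ** k for k in range(30)]
print(estimate_velocity(samples, dt=0.1))  # close to the true 0.2 m/s
```

The raw per-step estimates in this example swing between 0.1 and 0.3 m/s, while the filtered value settles near 0.2 m/s, at the cost of some lag when the obstacle's speed changes.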

Variable Speed Control: Implementing variable speed control during overtaking is problematic due to instability in the lane-following pipeline when speeds are dynamically adjusted. Adjusting speed based on the detected obstacle’s speed without losing lane stability is difficult, necessitating improvements in the lane control model to handle speed changes effectively.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Dynamic Obstacle Avoidance: Results

Dynamic Obstacle Avoidance: Authors

Nikolaj Witting is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at Trackman as an Algorithm Developer.

Fidel Esquivel Estay is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the Co-Founder at UpCircle.

Johannes Lienhart is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the CTO at Tethys Robotics.

Paula Wulkop is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, where she is currently pursuing her Ph.D.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Graph autonomous bots history

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

General Information

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development by Dianzhao Li, Paul Auerbach, and Ostap Okhrin is a review that highlights the rapid development of the industry and the important contributions of small-scale car platforms to robot autonomy research.

This survey is a valuable resource for anyone looking to get their bearings in the landscape of autonomous driving research.

We are glad to see Duckietown not only included in the list, but identified as one of the platforms that started a marked increase in the trend of yearly published papers.

The mission of Duckietown, since we started as a class at MIT, is to democratize access to the science and technology of robot autonomy. Part of how we intended to achieve this mission was to streamline the way autonomous behaviors for non-trivial robots were developed, tested, and deployed in the real world.

From 2018 to 2021 we ran several editions of the AI Driving Olympics (AI-DO): an international competition to benchmark the state of the art of embodied AI for safety-critical applications. It was a great experience, not only because it led to the development of the Challenges infrastructure, the Autolab infrastructure, and many agent baselines that catalyzed further developments now available to the broader community, but also because it was the first time physical robots were brought to the world’s leading scientific conference in Machine Learning (NeurIPS: the Neural Information Processing Systems conference, known as NIPS when AI-DO was first launched).

All this infrastructure development and testing might have been instrumental in making R&D in autonomous mobile robotics more efficient. Practitioners in the field know that doing R&D is particularly difficult because final outcomes are the result of the tuple (robot) x (environment) x (task): not standardizing everything other than the specific feature under development (i.e., not following the ceteris paribus principle) often leads to apples-and-pears comparisons, i.e., bad science, which hampers the overall progress of the field.

We are happy to see Duckietown recognized as a contributor to facilitating the making of good science in the field. We believe that even better and more science will come in the next years, as the students being educated with the Duckietown system start their professional journeys in academia or the workforce.

We are excited to see what the future of robot autonomy will look like, and we will continue doing our best by providing tools, workflows, and comprehensive resources to facilitate the professional development of the next generations of scientists, engineers, and practitioners in the field!

To learn more about Duckietown teaching resources follow the link below.

Starting around 2016, with the introduction of Duckietown, BARC, and Autorally, there was a significant increase in research papers.

Abstract

We report the abstract of the authors’ work:

“While engaging with the unfolding revolution in autonomous driving, a challenge presents itself, how can we effectively raise awareness within society about this transformative trend? While full-scale autonomous driving vehicles often come with a hefty price tag, the emergence of small-scale car platforms offers a compelling alternative. 

These platforms not only serve as valuable educational tools for the broader public and young generations but also function as robust research platforms, contributing significantly to the ongoing advancements in autonomous driving technology. 

This survey outlines various small-scale car platforms, categorizing them and detailing the research advancements accomplished through their usage. The conclusion provides proposals for promising future directions in the field.”

Towards Autonomous Driving with Small-Scale Cars: A Survey of Recent Development

Here is a visual tour of the work. For more details, check out the full paper.

Summary and conclusion

Here is what the authors learned from this survey:

“In this paper, we offer an overview of the current state-of-the-art developments in small-scale autonomous cars. Through a detailed exploration of both past and ongoing research in this domain, we illuminate the promising trajectory for the advancement of autonomous driving technology with small-scale cars. We initially enumerate the presently predominant small-scale car platforms widely employed in academic and educational domains and present the configuration specifics of each platform. Similar to their full-size counterparts, the deployment of hyper-realistic simulation environments is imperative for training, validating, and testing autonomous systems before real-world implementation. To this end, we show the commonly employed universal simulators and platform-specific simulators.

Furthermore, we provide a detailed summary and categorization of tasks accomplished by small-scale cars, encompassing localization and mapping, path planning and following, lane-keeping, car following, overtaking, racing, obstacle avoidance, and more. Within each benchmarked task, we classify the literature into distinct categories: end-to-end systems versus modular systems and traditional methods versus ML-based methods. This classification facilitates a nuanced understanding of the diverse approaches adopted in the field. The collective achievements of small-scale cars are thus showcased through this systematic categorization. Since this paper aims to provide a holistic review and guide, we also outline the commonly utilized in various well-known platforms. This information serves as a valuable resource, enabling readers to leverage our survey as a guide for constructing their own platforms or making informed decisions when considering commercial options within the community.

We additionally present future trends concerning small-scale car platforms, focusing on different primary aspects. Firstly, enhancing accessibility across a broad spectrum of enthusiasts: from elementary students and colleagues to researchers, demands the implementation of a comprehensive learning pipeline with diverse entry levels for the platform. Next, to complete the whole ecosystem of the platform, a powerful car body, varying weather conditions, and communications issues should be addressed in a smart city setup. These trends are anticipated to shape the trajectory of the field, contributing significantly to advancements in real-world autonomous driving research.
While we have aimed to achieve maximum comprehensiveness, the expansive nature of this topic makes it challenging to encompass all noteworthy works. Nonetheless, by illustrating the current state of small-scale cars, we hope to offer a distinctive perspective to the community, which would generate more discussions and ideas leading to a brighter future of autonomous driving with small-scale cars.”

Project Authors

Dianzhao Li

Dianzhao Li is a research assistant at the Technische Universität Dresden, Dresden, Germany.

Paul Auerbach

Paul Auerbach is with the Barkhausen Institut gGmbH, Dresden, Germany.

Ostap Okhrin

Ostap Okhrin is Chair of Statistics and Econometrics at the Institute of Economics and Transport, School of Transportation, Technische Universität Dresden, Germany.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

 

End-to-end Deep RL (DRL) systems in autonomous driving environments that rely on visual input for vehicle control face potential security risks, including:

  • State Adversarial Perturbations: Subtle alterations to visual input that mislead the DRL agent, causing incorrect decision-making.
  • Reward Tampering: Manipulation of the reward signal to misguide the learning process, leading the agent to adopt unsafe or inefficient policies.

These vulnerabilities can compromise the safety and reliability of self-driving vehicles.
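A state adversarial perturbation can be illustrated with the Fast Gradient Sign Method (FGSM) applied to a toy linear policy. The weights and observation below are synthetic stand-ins of our own for a trained network and a camera frame; a real attack would use autodiff on the actual DRL model:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """Fast Gradient Sign Method: shift every input component by +/- eps
    in the gradient's direction, a tiny change per pixel with a large
    combined effect on the output."""
    return x + eps * np.sign(grad)

# Toy linear "policy": steering = w . x. Its gradient w.r.t. the input is
# simply w, so no autodiff library is needed for this illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=16)           # stand-in for trained policy weights
x = rng.uniform(size=16)          # stand-in for a camera observation
x_adv = fgsm_perturb(x, grad=w)   # push the steering output upward
shift = w @ x_adv - w @ x         # equals eps * sum(|w|)
print(shift)
```

Although no pixel moves by more than 0.05, the perturbations all align with the policy's sensitive directions, so the steering output shifts substantially; defenses aim to bound exactly this kind of amplification.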

Ackermann steering Duckiebots and rocket

Development of an Ackermann steering autonomous vehicle

Development of an Ackermann steering autonomous vehicle

Ackermann steering Duckiebots and rocket
Project Resources

Why Ackermann steering?

Ackermann steering is a wheel configuration characterized by four wheels: two in the back, powered by a DC motor, and two in the front that steer through commands received from a servo motor. In contrast, differential drive robots have two independently powered wheels, each driven by its own DC motor, with a passive omnidirectional third wheel acting as support.

The dynamics (i.e., the “kind of movement”) of differential drive robots are quite different from those of real-world automobiles, which, e.g., cannot turn on the spot. Ackermann steering achieves more realistic vehicle dynamics at a cost: increased hardware complexity and more involved mathematical modeling. But neither of these challenges has stopped talented Duckietown students from designing and implementing an Ackermann steering Duckiebot!

 

(Duckietown trivia: careful Duckietown observers will have noticed that the Duckiebot models historically have been called DB18, DB19, DB21, etc. Ever wondered which one would have been the DB20?)

Ackermann steering in Duckietown: the challenges

Ackermann steering requires more complex mathematical modeling than differential drive in order to predict future movement and hence elaborate pose estimates on the fly. The kinematic modeling of the front steering apparatus is non-trivial, and the radius of curvature that Ackermann steering robots showcase is very different from that of differential drive robots.
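A common starting point for this modeling, sketched here with our own naming, is the kinematic bicycle model, which captures the Ackermann constraint that the heading can only change while the vehicle is moving:

```python
import math

def bicycle_step(x, y, theta, v, delta, L, dt):
    """One Euler step of the kinematic bicycle model often used for
    Ackermann-steered vehicles: wheelbase L, speed v, steering angle delta.
    The turn rate is v * tan(delta) / L, so with v = 0 the heading cannot
    change, unlike a differential drive robot."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v * math.tan(delta) / L * dt
    return x, y, theta

# Driving at 0.3 m/s with a 20-degree steering angle and a 0.1 m wheelbase:
pose = (0.0, 0.0, 0.0)
for _ in range(50):
    pose = bicycle_step(*pose, v=0.3, delta=math.radians(20), L=0.1, dt=0.05)
print(pose)  # the robot turns along an arc while moving forward
```

Integrating this model forward from wheel commands is what provides the on-the-fly pose estimates mentioned above (odometry); the real steering linkage adds further geometric corrections on top of it.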

Differential drive robots are capable of turning on the spot (by applying equal and opposite commands to the two wheels), while anyone who has ever tried parallel parking a real car knows that this is not possible for automobiles.
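The turn-on-the-spot capability follows directly from differential drive kinematics; a minimal sketch (variable names are our own):

```python
def diff_drive_twist(v_left, v_right, wheel_base):
    """Body twist of a differential drive robot from its wheel speeds:
    forward velocity v and turn rate omega."""
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / wheel_base
    return v, omega

# Equal and opposite wheel commands: zero forward speed, pure rotation.
print(diff_drive_twist(-0.2, 0.2, wheel_base=0.1))  # -> (0.0, 4.0)
```

No steering angle exists for which an Ackermann vehicle can produce a nonzero turn rate at zero speed, which is precisely the difference the project has to contend with.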

Determining how complex it is for Ackermann steering robots to navigate Duckietown is the real challenge of this fun project.

The authors start from basic design elements through CAD, iterate through various bills of materials, make prototypes, and program them leveraging the Duckietown software infrastructure to achieve autonomous behaviors in Duckietown. 

Project Highlights

Here is the output of their work. Check out the documents for more details!

Ackermann steering: Results

(Turn on the sound for best experience!)

The autonomous behaviors of the Ackermann steering Duckiebot, a.k.a. DB20 or DBv2, shown above are the work of Timothy Scott, a former Duckietown student. 

Ackermann steering Duckiebot: Authors

Merlin Hosner is a former Duckietown student in the Institute for Dynamic Systems and Controls (IDSC) of ETH Zurich (D-MAVT), and currently works at Climeworks as a Process Development Engineer.

Rafael Fröhlich is a former Duckietown student in the Institute for Dynamic Systems and Controls (IDSC) of ETH Zurich (D-MAVT), where he is currently a Research Assistant.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.