Deep Reinforcement Learning for Autonomous Lane Following

Project Resources

Project highlights

Here is a visual tour of the author’s work on implementing deep reinforcement learning for autonomous lane following in Duckietown.

Deep reinforcement learning for autonomous lane following in Duckietown: objective and importance

Would it not be great if we could train an end-to-end neural network in simulation, plug it into the physical robot, and have it drive safely on the road?

Inspired by this idea, Mickyas worked to implement deep reinforcement learning (DRL) for autonomous lane following in Duckietown, training the agent using sim-to-real transfer. 

The project focuses on training DRL agents, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC), to learn steering control using high-dimensional camera inputs. It integrates an autoencoder to compress image observations into a latent space, improving computational efficiency. 

The hope is for the trained DRL model to generalize from simulation to real-world deployment on a Duckiebot. This involves addressing domain adaptation, camera input variations, and real-time inference constraints, amongst other implementation challenges.

Autonomous lane following is a fundamental component of self-driving systems, requiring continuous adaptation to environmental changes, especially when using vision as the main sensing modality. This project identifies limitations in existing DRL algorithms when applied to real-world robotics, and explores modifications in reward functions, policy updates, and feature extraction methods, analyzing the results through real-world experimentation.

The method and challenges in implementing deep reinforcement learning in Duckietown

The method involves training a DRL agent in a simulated Duckietown environment (Gym Duckietown Simulator) using an autoencoder for feature extraction. 

The encoder compresses image data into a latent space, reducing input dimensions for policy learning. The agent receives sequential encoded frames as observations and optimizes steering actions based on reward-driven updates. The trained model is then transferred to a real Duckiebot using a ROS-based communication framework. 
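To make the pipeline concrete, here is a minimal sketch of how camera frames might be encoded into a latent vector and stacked into the policy observation. The `Encoder` architecture, latent dimension, and stack length are illustrative placeholders, not the author's exact implementation.

```python
from collections import deque

import numpy as np
import torch

# Hypothetical pretrained convolutional encoder (stand-in for the project's
# autoencoder); maps an RGB frame to a low-dimensional latent vector.
class Encoder(torch.nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 4, stride=2), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 64, 4, stride=2), torch.nn.ReLU(),
            torch.nn.Flatten(),
        )
        self.fc = torch.nn.LazyLinear(latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x))

def make_observation(encoder, frame_buffer, new_frame, n_stack=4):
    """Encode the newest camera frame and stack the last n latent vectors
    so the policy sees a short temporal history."""
    with torch.no_grad():
        x = torch.from_numpy(new_frame).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        z = encoder(x).squeeze(0).numpy()
    frame_buffer.append(z)
    while len(frame_buffer) < n_stack:   # pad at the start of an episode
        frame_buffer.append(z)
    return np.concatenate(list(frame_buffer)[-n_stack:])

encoder = Encoder()
buffer = deque(maxlen=4)
```

The stacked latent vector would then be fed to the DDPG/TD3/SAC actor and critic in place of raw pixels.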

Challenges for pulling this off include accounting for discrepancies between simulated and real-world camera inputs, which affect performance and generalization. Differences in lighting, surface textures, and image normalization require domain adaptation techniques.

Moreover, computational limitations on the Duckiebot prevent direct onboard execution, requiring a distributed processing setup.

Reward shaping influences learning stability, and improper design of the reward function leads to policy exploitation or suboptimal behavior. Debugging DRL models is complex due to interdependencies between network architecture, exploration strategies, and training dynamics. 
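As an illustration of what reward shaping involves, below is a hedged sketch of a typical distance-and-heading lane-following reward. The weights and thresholds are arbitrary placeholders, not the function actually used in the project; tuning exactly these terms is where policy exploitation tends to appear.

```python
import numpy as np

def lane_following_reward(dist_from_center, heading_error, speed,
                          w_dist=2.0, w_heading=1.0, w_speed=0.5):
    """Illustrative shaped reward: stay centered, stay aligned, keep moving.

    dist_from_center : signed lateral offset from the lane center [m]
    heading_error    : angle between robot heading and lane direction [rad]
    speed            : forward velocity [m/s]
    """
    reward = (w_speed * speed
              - w_dist * abs(dist_from_center)
              - w_heading * abs(heading_error))
    # A large penalty for leaving the drivable area discourages the policy
    # from exploiting the shaped terms (e.g., spinning in place).
    if abs(dist_from_center) > 0.15:
        reward = -10.0
    return float(np.clip(reward, -10.0, 10.0))
```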

The project addresses these challenges by refining preprocessing, incorporating domain randomization, and modifying policy structures.
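One common way to realize the domain randomization mentioned above is to perturb simulated camera frames photometrically during training, so the policy does not overfit to the simulator's fixed lighting. The ranges below are illustrative assumptions, not the project's settings.

```python
import numpy as np

def randomize_frame(rgb, rng=np.random.default_rng()):
    """Apply simple photometric domain randomization to a simulator frame.

    rgb: uint8 array of shape (H, W, 3). Brightness, contrast, per-channel
    color gains, and sensor noise are jittered with illustrative ranges.
    """
    img = rgb.astype(np.float32) / 255.0
    img = img * rng.uniform(0.7, 1.3)                    # global brightness
    img = (img - 0.5) * rng.uniform(0.8, 1.2) + 0.5      # contrast
    img = img * rng.uniform(0.9, 1.1, size=(1, 1, 3))    # color balance
    img += rng.normal(0.0, 0.02, size=img.shape)         # sensor noise
    return (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
```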

Deep reinforcement learning for autonomous lane following: full report

Deep reinforcement learning for autonomous lane following in Duckietown: Authors

Mickyas Tamiru Asfaw is currently pursuing a Master's (M2) in Mobile, Autonomous and Robotic Systems (MARS) at Grenoble INP – UGA, France.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Visual monitoring of automated guided vehicles in Duckietown

General Information

The increasing use of robotics in industrial automation has led to the need for systems that ensure safety and efficiency in monitoring automated guided vehicles (AGVs). This research proposes a visual system for monitoring the trajectory and behavior of AGVs in industrial environments.

The system utilizes a network of cameras mounted on towers to detect, identify, and track AGVs. The visual data is transmitted to a central server, where the robots’ trajectories are evaluated and compared against predefined ideal paths. The system operates independently of specific hardware or software configurations, offering flexibility in its deployment.

Duckietown was used as the test environment for this system, allowing for controlled experiments with simulated robotic fleets. A prototype of the system demonstrated its capability to track AGVs using Aruco tags and evaluate rectilinear trajectories.
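To give a sense of what such a camera-tower node involves, here is a minimal sketch of ArUco-based robot detection with OpenCV, plus a simple deviation measure for a rectilinear reference trajectory. The function names, dictionary choice, and coordinate conventions are assumptions for illustration, not the paper's implementation.

```python
import cv2
import numpy as np

# Classic OpenCV ArUco API (newer OpenCV versions expose the same
# functionality through cv2.aruco.ArucoDetector).
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_agvs(frame_bgr):
    """Return {marker_id: (u, v)} pixel centroids of ArUco-tagged AGVs."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
    detections = {}
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            centroid = marker_corners.reshape(4, 2).mean(axis=0)
            detections[int(marker_id)] = (float(centroid[0]), float(centroid[1]))
    return detections

def rectilinear_deviation(track, p0, p1):
    """Maximum perpendicular distance of a tracked point sequence from the
    ideal straight segment p0 -> p1 (all in the same coordinate frame)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    d /= np.linalg.norm(d)
    pts = np.asarray(track, float) - p0
    perp = pts - np.outer(pts @ d, d)
    return float(np.linalg.norm(perp, axis=1).max())
```

In a full system, the per-camera detections would be timestamped, sent to the central server, and fused into trajectories before evaluation.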

Key aspects and concepts:

  • Use of camera towers for visual control of AGVs;
  • Transmission of visual data to a central server for trajectory evaluation;
  • Compatibility with multiple robot types and operating systems;
  • Integration of Aruco tags for robot identification;
  • Modular architecture enabling future expansions;
  • Testing in Duckietown for controlled evaluation.

This research demonstrates a modular approach to monitoring AGVs using a visual control system tested in the Duckietown platform. Future work will extend the system’s capability to handle more complex trajectories such as turns and arcs, further leveraging Duckietown as a scalable research and testing environment.

Highlights - Visual monitoring of automated guided vehicles in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

With the increasing automation of industry and the introduction of robotics in every step of the production chain, the problem of safety has become acute. The article proposes a solution to the problem of safety in production using a visual control system for the fleet of loading automated guided vehicles (AGV). The visual control system is built as towers equipped with cameras. This approach allows to be independent of equipment vendors and allows flexible reconfiguration of the AGV fleet. The cameras detect the appearance of a loading robot, identify it and track its trajectory. Data about the robots’ movements is collected and analyzed on a server. A prototype of the visual control system was tested with the Duckietown project.

Conclusion - Visual monitoring of automated guided vehicles in Duckietown

Here are the conclusions from the author of this paper:

“In the course of this work, a prototype visual evaluation system for Duckietown project was implemented. The system supports flexible seamless integration of third-party detection algorithms and trajectory evaluation algorithms. The visual control system was tested with client imitator module, which does not require the presence of the real robot on the field. At this stage of the work, the prototype is able to recognize rectilinear trajectory of motion. In the future, we plan to develop evaluation algorithms for other types of trajectories: 90 degree turns, large angle turns, arc movement, etc. Another promising area of research is the integration of the system with cloud-based integrated development environments (IDEs) for industrial control algorithms.”

Project Authors

Anastasia Kravchenko is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Alexey Sychev is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Vladimir Zyubin is currently working as an Associate Professor at the Institute of Automation and Electrometry, Russia.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Visual Obstacle Detection using Inverse Perspective Mapping

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing visual obstacle detection in Duckietown.

Visual Obstacle Detection: objective and importance

This project aims to develop a visual obstacle detection system using inverse perspective mapping, with the goal of enabling autonomous systems to detect obstacles in real time using images from a monocular RGB camera. It focuses on identifying specific obstacles, such as yellow Duckies and orange cones, in Duckietown.

The system ensures safe navigation by avoiding obstacles within the vehicle’s lane or stopping when avoidance is not feasible. It does not utilize learning algorithms, prioritizing a hard-coded approach due to hardware constraints. The objective includes enhancing obstacle detection reliability under varying illumination and object properties.

It is intended to simulate realistic scenarios for autonomous driving systems. Key metrics of evaluation were selected to be detection accuracy, false positives, and missed obstacles under diverse conditions. 

The method and the challenges of visual obstacle detection using Inverse Perspective Mapping

The system processes images from a monocular RGB camera by applying inverse perspective mapping to generate a bird’s-eye view, under the assumption that all pixels lie on the ground plane, so that objects with height appear distorted and are easier to detect. Obstacle detection involves HSV color filtering, image segmentation, and classification using eigenvalue analysis. The reaction strategies include trajectory planning or stopping based on the detected obstacle’s position and lane constraints.
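The two core steps, the ground-plane warp and the color-based candidate segmentation, can be sketched with standard OpenCV calls. The calibration points and HSV bounds below are illustrative assumptions; in practice they come from camera calibration and per-environment tuning.

```python
import cv2
import numpy as np

def birds_eye_view(img_bgr, src_pts, dst_pts, out_size=(200, 300)):
    """Warp a forward-facing camera image into a ground-plane (bird's-eye) view.

    src_pts: four image points of a known ground rectangle (from calibration).
    dst_pts: the same four points in the desired top-down pixel frame.
    Anything with height violates the ground-plane assumption and gets
    stretched in the warped image, which helps obstacles stand out.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(img_bgr, H, out_size)

def duckie_mask(bev_bgr, lo=(20, 100, 100), hi=(35, 255, 255)):
    """HSV threshold for yellow duckies; bounds are illustrative and would
    be tuned per lighting condition in practice."""
    hsv = cv2.cvtColor(bev_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask
```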

Computational efficiency is a significant challenge due to the hardware limitations of Raspberry Pi, necessitating the avoidance of real-time re-computation of color corrections. Variability in lighting and motion blur impact detection reliability, while accurate calibration of camera parameters is essential for precise 3D obstacle localization. Integration of avoidance strategies faces additional challenges due to inaccuracies in pose estimation and trajectory planning.

Visual Obstacle Detection using Inverse Perspective Mapping: Full Report

Visual Obstacle Detection using Inverse Perspective Mapping: Authors

Julian Nubert is currently a Research Assistant & Doctoral Candidate at the Max Planck Institute for Intelligent Systems, Germany.

Niklas Funk is a PhD graduate student at Technische Universität Darmstadt, Germany.

Fabio Meier is currently working as the Head of Operational Data Intelligence at Sensirion Connected Solutions, Switzerland.

Fabrice Oehler is working as a Software Engineer at Sensirion, Switzerland.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Intelligent and autonomous mobility systems

Research Associates Yikai Zeng and Xinyu Zhang from the Technische Universität Dresden tell us about their work in developing autonomous mobility systems.

Dresden, Germany, November 22, 2024: Research Associates Yikai Zeng and Xinyu Zhang talk with us about the future of autonomous mobility and intelligent transportation systems that promise to redefine how we think about movement and connectivity in urban spaces.

Connected, cooperative and autonomous mobility

We talked with Yikai Zeng and Xinyu Zhang from the Chair of Traffic Process Automation at TU Dresden about their research and teaching activities, and how Duckietown is used at the MiniCCAM lab to teach autonomous mobility.

Hello and welcome! May I ask you to start by introducing yourself?

X. Zhang: Hi! I will start! My name is Xinyu. I’m a Research Associate at TU Dresden and currently work on computational basics and tools of traffic process automation. That’s why I got involved in this Duckiedrone demonstration. Apart from that, I am also responsible for the basic autonomous driving courses, where we use Duckiebots as our learning materials and tools for the students.

Y. Zeng: Hello, My name is Yikai. I’m also a Research Associate at TU Dresden, in Prof. Meng Wang’s laboratory.

Thank you very much, when did you first discover Duckietown?

Y. Zeng: The idea came from Professor Wang, who asked us to continue the Control course of a former colleague, using, among other things, Duckiebots. When we took over the course, it was during the Covid period. Since then, we have developed the MiniCCAM lab.

Could you tell us more about the miniCCAM lab?

Y. Zeng: Sure! The scope of the miniCCAM laboratory, for us researchers in the transportation and autonomous mobility field, is to look at the greater picture in terms of urban mobility, so slightly different in terms of scope than the course previously mentioned. We use Duckietown for autonomous driving. The current miniCCAM lab is on one hand a good tool for demonstrating to students and general audiences what we are able to do in terms of future transportation systems; on the other hand, it provides us with an opportunity to conduct research. For example, we implemented a higher logic controller for intersection navigation and tested it in both a simulated environment and on the model smart-city Duckietown setup. Duckietown is very practical because organizing an actual field test would be very expensive.

That's great to hear. Why did you decide to use Duckiebots to teach autonomous mobility?

Y. Zeng: The decision was taken before us, but I heard stories about that time. So this course has a long history, over ten years, and every few years the course was redesigned.

Around 2019 the decision was taken to upgrade our fleet of robots; among various solutions, we initially chose Lego, but it didn’t work very well for us.

So my former colleague found out about Duckietown, and that’s when the choice was made. It all came in a single box, and this was considered very positive. It also came with complete teaching materials and very well-structured courses already. This was considered extremely useful in helping us organize our courses; we just needed to modify what was already there for our own context. So this was the main motivation: it’s very easy to deploy course materials, and the economic aspects were considered very attractive.

X. Zhang: Duckiebots are also good because they come with a camera and wheel encoders, making it easier to get students started, and having them learn about the fundamentals of autonomous driving. 

It all came in a single box, with complete teaching materials and very well-structured courses. This was considered to be extremely useful; we just needed to modify what was already there for our own context.

Did students appreciate using Duckiebots?

Y. Zeng: Certainly Duckietown succeeded as a teaching tool, attracting many students to our courses. I would say Duckietown has this characteristic of motivating and capturing the attention of many students. It also provides the first real hands-on experience in the field of robotics and autonomous mobility.

In our course on Computational basics and tools of traffic process automation (Rechentechnische Grundlagen und Werkzeuge der Verkehrsprozessautomatisierung), we use Duckiebots to teach students about general control, group control, and swarm control. Duckietown is also the main, shall we say, “tourist attraction” of our department. Every time we hold events, many students come to us to see the Duckiebots cooperating, going through intersections, and so forth. We’ve been using Duckietown for two years, and already it is very popular, inspiring many interesting discussions with our audiences with scientific backgrounds.

Much more efficient than a simple presentation, I’d say! 

Duckiebots come with a camera and wheel encoders, making it easier to get students started, and having them learn about the fundamentals of autonomous mobility.

MiniCCAM city and bots: autonomous mobility

Would you recommend Duckietown to colleagues and students?

Y. Zeng: Yes absolutely, in fact, I’m a bit sad that you’re not producing the old model anymore! We definitely want to try the latest models, test them as a fleet, and introduce them to our lab in the future. Our main focus is always on the interaction between groups of bots and how they work together.

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!

Embedded Out-of-Distribution Detection in Duckietown

General Information

The project “Embedded Out-of-Distribution (OOD) Detection on an Autonomous Robot Platform” focuses on safety in Duckietown by implementing real-time OOD detection on Duckiebots. The concept involves using a machine learning-based OOD detector, specifically a β-Variational Autoencoder (β-VAE), to identify test inputs that deviate from the training data’s distribution. Such inputs can lead to unreliable behavior in machine learning systems, a critical concern for safety in autonomous platforms like the Duckiebot.

Key aspects of the project include:

  • Integration: The β-VAE OOD detector is integrated with the Duckiebot’s ROS-based architecture, alongside lane-following and motor control modules.
  • Emergency Braking: An emergency braking mechanism halts the Duckiebot when OOD inputs are detected, ensuring safety during operation.
  • Evaluation: Performance was evaluated in scenarios where the Duckiebot navigated a track and avoided obstacles. The system achieved an 87.5% success rate in emergency stops.

This work demonstrates a method to mitigate safety risks in autonomous robotics. By providing a framework for OOD detection on low-cost platforms, the project contributes to the broader applicability of safe machine learning in cyber-physical systems.
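The core of a VAE-based OOD detector is a per-frame score, typically reconstruction error plus a KL term, compared against a threshold calibrated on in-distribution data. The sketch below is a generic version under the assumption that the model exposes `encode`/`decode` methods; it is not the authors' exact architecture or scoring rule.

```python
import torch

@torch.no_grad()
def ood_score(vae, frame, beta=1.0):
    """Per-frame OOD score from a trained (beta-)VAE.

    `vae` is assumed to expose encode(x) -> (mu, logvar) and decode(z) -> x_hat.
    The score is the negative ELBO (reconstruction error + beta * KL), which
    tends to be larger for inputs unlike the training distribution.
    """
    mu, logvar = vae.encode(frame)
    z = mu                                    # use the posterior mean at test time
    x_hat = vae.decode(z)
    recon = torch.nn.functional.mse_loss(x_hat, frame, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + beta * kl).item()

def should_emergency_stop(score, threshold):
    """Trigger the stop if the score exceeds a threshold calibrated on
    in-distribution validation frames (e.g., a high percentile of their
    scores). A lower threshold stops earlier but raises false positives,
    the trade-off discussed in the paper's conclusions."""
    return score > threshold
```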

Highlights - Embedded Out-of-Distribution Detection in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

Machine learning (ML) is actively finding its way into modern cyber-physical systems (CPS), many of which are safety-critical real-time systems. It is well known that ML outputs are not reliable when testing data are novel with regards to model training and validation data, i.e., out-of-distribution (OOD) test data. We implement an unsupervised deep neural network-based OOD detector on a real-time embedded autonomous Duckiebot and evaluate detection performance. Our OOD detector produces a success rate of 87.5% for emergency stopping a Duckiebot on a braking test bed we designed. We also provide case analysis on computing resource challenges specific to the Robot Operating System (ROS) middleware on the Duckiebot.

Conclusion - Embedded Out-of-Distribution Detection in Duckietown

Here are the conclusions from the author of this paper:

“We successfully demonstrated that the 𝛽-VAE OOD detection algorithm could run on an embedded platform and provides a safety check on the control of an autonomous robot. We also showed that performance is dependent on real-time performance of the embedded system, particularly the OOD detector execution time. Lastly, we showed that there is a trade-off involved in choosing an OOD detection threshold; a smaller threshold value increases the average stopping distance from an obstacle, but leads to an increase in false positives.

This work also generates new questions that we hope to investigate in the future. The system architecture demonstrated in this paper was not utilizing a real-time OS and did not take advantage of technologies such as GPUs or TPUs, which are now becoming common on embedded systems. There is still much work that can be done to optimize process scheduling and resource utilization while maintaining the goal of using low-cost, off-the-shelf hardware and open-source software. Understanding what quality of service can be provided by a system with these constraints and whether it suffices for reliable operations of OOD detection algorithms is an ongoing research theme.

From the OOD detection perspective, we would like to run additional OOD detection algorithms on the same architecture and compare performance in terms of accuracy and computational efficiency. We would also like to develop a more comprehensive set of test scenarios to serve as a benchmark for OOD detection on embedded systems. These should include dynamic as well as static obstacles, operation in various environments and lighting conditions, and OOD scenarios that occur while the robot is performing more complex tasks like navigating corners, intersections, or merging with other traffic.

Demonstrating OOD detection on the Duckietown platform opens the door for more embedded applications of OOD detectors. This will serve to better evaluate their usefulness as a tool to enhance the safety of ML systems deployed as part of critical CPS.”

Project Authors

Michael Yuhas is currently working as a Research Assistant and pursuing his PhD at the Nanyang Technological University, Singapore.

Yeli Feng is currently working as a Lead Data Scientist at Amplify Health, Singapore.

Daniel Jun Xian Ng is currently working as a Mobile Robot Software Engineer at the Hyundai Motor Group Innovation Center Singapore (HMGICS), Singapore.

Zahra Rahiminasab is currently working as a Postdoctoral Researcher at Aalto University, Finland.

Arvind Easwaran is currently working as an Associate Professor at the Nanyang Technological University, Singapore.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Intersection Navigation in Duckietown Using 3D Image Features

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing intersection navigation using 3D image features in Duckietown.

Intersection Navigation in Duckietown: Advancing with 3D Image Features

Intersection navigation in Duckietown using 3D image features is an approach intended to improve autonomous intersection navigation, enhancing decision-making and path planning in complex Duckietown environments, i.e., maps made of several road loops and intersections.

The traditional approach to intersection navigation in Duckietown is naive: (a) stop at the red line before the intersection, (b) read Apriltag-equipped traffic signs (providing information on the shape and coordination mechanism at intersections); (c) decide which direction to take; (d) coordinate with other vehicles at the intersection to avoid collisions; (e) navigate through the intersection. This last step is performed in an open-loop fashion, leveraging the known appearance specifications of intersections in Duckietown. 
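The baseline pipeline described above can be summarized as a small finite-state sequence. The sketch below is purely illustrative: the `bot` methods are hypothetical placeholders for the corresponding Duckietown nodes, not actual API calls.

```python
import random

def navigate_intersection(bot):
    """Illustrative open-loop baseline for intersection handling."""
    bot.stop_at_red_line()                              # (a) stop before the intersection
    sign = bot.read_apriltag_sign()                     # (b) intersection type / coordination info
    direction = random.choice(sign.allowed_directions)  # (c) pick an exit
    bot.coordinate_with_other_vehicles()                # (d) e.g., LED-based coordination
    # (e) open-loop maneuver: replay a pre-recorded arc of wheel commands
    # matching the nominal intersection geometry. Nothing closes the loop
    # on the robot's actual pose, which is exactly what the 3D-feature
    # approach aims to fix.
    bot.execute_open_loop_turn(direction)
```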

By incorporating 3D image features in the perception pipeline, extrapolated from the Duckietown road lines, Duckiebots can achieve a representation of their pose while crossing the intersection, closing, therefore, the loop and improving navigation accuracy, in addition to facilitating the development of new strategies for intersection navigation, such as real-time path optimization. 

Combining 3D image features with methods such as Bird’s Eye View (BEV) transformations allows for comprehensive representations of the intersection. The integration of these techniques improves the accuracy of stop line detection and obstacle avoidance, contributes to advancing autonomous navigation algorithms, and supports real-world deployment scenarios.

An AI-generated representation of Duckietown intersection navigation challenges

The method and the challenges of intersection navigation using 3D features

The thesis involves implementing the MILE model (Model-based Imitation LEarning for urban driving), trained on the CARLA simulator, into the Duckietown environment to evaluate its performance in navigating unprotected intersections.

Experiments were conducted using the Gym-Duckietown simulator, where Duckiebots navigated a 4-way intersection across multiple trajectories. Metrics such as success rate, drivable area compliance, and ride comfort were used to assess performance.

The findings indicate that while the MILE model achieved state-of-the-art performance in the CARLA simulator, its generalization to the Duckietown environment without additional training was limited, as expected given the sim-to-real gap.

The BEVs generated by MILE were not sufficiently representative of the actual road surface in Duckietown, leading to suboptimal navigation performance. In contrast, the homographic BEV method, despite its assumption of a flat world plane, provided more accurate representations for intersection navigation in this context.

As for most approaches in robotics, there are limitations and tradeoffs to analyze.

Here are some technical challenges of the proposed approach:

  • Generalization across environments: one of the challenges is ensuring that the 3D image feature representation generalizes well across different simulation environments, such as Duckietown and CARLA. The differences in scale, road structures, and dynamics between simulators can impact the performance of the navigation system.
  • Accuracy of BEV representations: the transformation of camera images into Bird’s Eye View (BEV) representations has reduced accuracy, especially when dealing with low-resolution or distorted input data.
  • Real-time processing: the integration of 3D image features for navigation requires substantial computational resources compared to using 2D features. Achieving near real-time processing speeds for navigation tasks such as intersection navigation is challenging.

Intersection Navigation in Duckietown Using 3D Image Features: Full Report

Intersection Navigation in Duckietown Using 3D Image Features: Authors

Jasper Mulder is currently working as a Junior Outdoor expert at Bever, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Variational Autoencoder for autonomous driving in Duckietown

General Information

This project explored using reinforcement learning (RL) and a Variational Autoencoder (VAE) to train an autonomous agent for lane following in the Duckietown Gym simulator. The VAE was used to encode high-dimensional raw images into a low-dimensional latent space, reducing the complexity of the input for the RL algorithm (Deep Deterministic Policy Gradient, DDPG). The goal was to evaluate whether this dimensionality reduction improved training efficiency and agent performance.
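For readers unfamiliar with the encoding step, here is a toy VAE with the standard reparameterization trick and ELBO loss. The architecture and image size are placeholders, not the author's network; at test time only the latent mean would be passed to the DDPG actor and critic as the low-dimensional state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    """Toy VAE on flattened frames; dimensions are illustrative placeholders."""
    def __init__(self, input_dim=80 * 60 * 3, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    """ELBO loss; x is assumed to be flattened pixels normalized to [0, 1]."""
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```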

The agent successfully learned to follow straight lanes using both raw images and VAE-encoded representations. However, training with raw images performed similarly to VAEs, likely because the task was simple and had limited variability in road configurations.

The agent also displayed discrete control behaviors, such as extreme steering, in a task requiring continuous actions. These issues were attributed to the network architecture and limited reward function design.

While the VAE reduced training time slightly, it did not significantly improve performance. The project highlighted the complexity of RL applications, emphasizing the need for robust reward functions and network designs. 

Highlights - Variational Autoencoder and RL for Duckietown Lane Following

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

The use of deep reinforcement learning (RL) for following the center of a lane has been studied for this project. Lane following with RL is a push towards general artificial intelligence (AI) which eliminates the use for hand crafted rules, features, and sensors. 

A project called Duckietown has created the Artificial Intelligence Driving Olympics, which aims to promote AI education and embodied AI tasks. The AIDO team has released an open-sourced simulator which was used as an environment for this study. This approach uses the Deep Deterministic Policy Gradient (DDPG) with raw images as input to learn a policy for driving in the middle of a lane for two experiments. A comparison was also done with using an encoded version of the state as input using a Variational Autoencoder (VAE) on one experiment. 

A variety of reward functions were tested to achieve the desired behavior of the agent. The agent was able to learn how to drive in a straight line, but was unable to learn how to drive on curves. It was shown that the VAE did not perform better than the raw image variant for driving in the straight line for these experiments. Further exploration of reward functions should be considered for optimal results and other improvements are suggested in the concluding statements.

Conclusion - Variational Autoencoder and RL for Duckietown Lane Following

Here are the conclusions from the author of this paper:

“After the completion of this project, I have gained insight on how difficult it is to get RL applications to work well. Most of my time was spent trying to tune the reward function. I have a list of improvements that are suggested as future work. 

  • Different network architectures – I used fully connected networks for all the architectures. I would think CNN architectures may be better at creating features for state representations. 
  • Tuning Networks – Since most of my time was spent on the reward exploration, I did not change any parameters at all. I followed the paper in the original DDPG paper [4]. A hyperparameter search may prove to be beneficial to find parameters that work best for my problem instead of all the problems in the paper. 
  • More training images for VAE 
  • Different Algorithm – Maybe an algorithm like PPO may be able to learn a better policy? 
  • Linear Function Approximation – Deep reinforcement learning has proven to be difficult to tune and work well. Maybe I could receive similar or better results using a different function approximator than a neural network. Wayve explains the use of prioritized experience replay [7], which is a method to improve on randomly sampled tuples of experiences during RL training and is based on sorting the tuples. This may improve performance of both of my algorithms. 
  • Exploring different Ornstein-Uhlenbeck process parameters to encourage, discourage more/less exploration 
  • Other dimensionality reducing methods instead of VAE. Maybe something like PCA? 

As for the AIDO competition, I have made the decision not to submit this work. It became apparent to me as I progressed through the project how difficult it is to get a perfectly working model using reinforcement learning. If I was to continue with this work for the submission, I think I would rather go towards the track of imitation learning. While this would introduce a wide range of new problems, I think intuitively it moves more sense to ”show” the robot how it should drive on the road rather having it learn from scratch. I even think classical control methods may work better or just as good as any machine learning based algorithm. Although I will not submit to this competition, I am glad I got to express two interests of mine in reinforcement learning and variational autoencoders. 

The supplementary documents for this report include the training set for the VAE, a video of experiment 1 working properly for both DDPG+Raw and DDPG+VAE, and a video of experiment 2 not working properly. The code has been posted to GitHub (Click for link).”

Project Authors

Bryon Kucharski is currently working as a Lead Data Scientist at Gartner, United States.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Monocular Navigation in Duckietown Using LEDNet Architecture

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing monocular navigation using LEDNet architecture in Duckietown*.

*Images from “Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers”, M. Saavedra-Ruiz, S. Morin, L. Paull. arXiv: https://arxiv.org/pdf/2203.03682

Why monocular navigation?

Image sensors are ubiquitous for their well-known sensory traits (e.g., distance measurement, robustness, accessibility, variety of form factors, etc.). Achieving autonomy with monocular vision, i.e., using only one image sensor, is desirable, and much work has gone into approaches to achieve this task. Duckietown’s first Duckiebot, the DB17, was designed with only a camera as its sensor suite to highlight the importance of this challenge!

But images, due to the integrative nature of image sensors and the physics of the image generation process, are subject to motion blur, occlusions, and sensitivity to environmental lighting conditions, which challenge the effectiveness of “traditional” computer vision algorithms to extract information. 

In this work, the author uses “LEDNet” to mitigate some of the known limitations of image sensors for use in autonomy. LEDNet’s encoder-decoder architecture with high resolution enables lane-following and obstacle detection. The model processes images at high frame rates, allowing recognition of turns, bends, and obstacles, which are useful for timely decision-making. The resolution improves the ability to differentiate road markings from obstacles, and classification accuracy.

LEDNet’s obstacle-avoidance algorithm can classify and detect obstacles even at higher speeds. Unlike Vision Transformer (ViT) models, LEDNet avoids missing parts of obstacles, preventing robot collisions.

The model handles small obstacles by identifying them earlier and navigating around them. In the simulated Duckietown environment, LEDNet outperforms other models in lane-following and obstacle-detection tasks.

LEDNet uses “real-time” image segmentation to provide the Duckiebot with information for steering decisions. While the study was conducted in a simulation, the model’s performance indicates it would work in real-world scenarios with consistent lighting and predictable obstacles.
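A common way to turn a per-pixel segmentation into a steering command is to measure how far the drivable-lane mask's centroid sits from the image center, weighting the near-field rows more heavily. The sketch below is an illustrative controller under that assumption, not the project's actual decision logic.

```python
import numpy as np

def steering_from_lane_mask(lane_mask, gain=2.0):
    """Map a binary lane segmentation (H, W) to a normalized steering command.

    Lower rows are weighted more heavily because they correspond to the road
    just ahead of the robot. Returns a value in [-1, 1]; positive = steer
    right (the sign convention is an assumption).
    """
    h, w = lane_mask.shape
    rows = np.arange(h)[:, None]
    weights = (rows / h) ** 2 * lane_mask          # emphasize near-field pixels
    total = weights.sum()
    if total < 1e-6:
        return 0.0                                  # no lane visible: default action
    centroid_x = (weights * np.arange(w)[None, :]).sum() / total
    offset = (centroid_x - w / 2) / (w / 2)         # normalized lateral offset
    return float(np.clip(gain * offset, -1.0, 1.0))
```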

The next step is to try it out!

Monocular Navigation in Duckietown Using LEDNet Architecture - the challenges

In implementing monocular navigation in this project, the author faced several challenges: 

  1. Computational demands: LEDNet’s high-resolution processing requires substantial computational resources, particularly when handling real-time image segmentation and obstacle detection at high frame rates.

  2. Limited handling of complex environments: the lane-following and obstacle-avoidance algorithm used in this study does not handle crossroads or junctions, limiting the model’s ability to navigate complex road structures.

  3. Simulation vs. real-world application: The study relies on a simulated environment where lighting, obstacle behavior, and road conditions are consistent. Implementing the system in the real world introduces variability in these factors, which affects the model’s performance.

  4. Small obstacle detection: While LEDNet performs well in detecting small obstacles compared to ViT, the detection of small obstacles is still dependent on the resolution and segmentation quality.

Project Report

Project Author

Angelo Broere is currently working as an on-call worker (Oproepkracht) at Compressor Parts Service, Netherlands.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Networked Systems: Autonomy Education with Duckietown

Autonomy Education: Teaching Networked Systems

General Information

In this work, Prof. Qing-Shan Jia from Tsinghua University in China explores the challenges and innovations in teaching networked systems, a domain with applications ranging from smart buildings to autonomous systems.

The study reviews curriculum structures and introduces practical solutions developed by the Tsinghua University Center for Intelligent and Networked Systems (CFINS).

Over the past two decades, CFINS has designed courses, developed educational platforms, and authored textbooks to bridge the gap between theoretical knowledge and practical application.

They feature Duckietown as part of an educational platform for autonomous driving. Duckietown offers a low-cost, do-it-yourself (DIY) framework for students to construct and program Duckiebots – autonomous mobile robotic vehicles. Duckietown allows learners to apply theoretical concepts in areas related to robot autonomy, like signal processing, machine learning, reinforcement learning, and control systems.

Duckietown enables students to gain hands-on experience in systems engineering, from calibrating sensors and programming navigation algorithms to working on cooperative behaviors in multi-robot settings. This approach allows for the creation of complex cyber-physical systems using state-of-the-art science and technology, not only democratizing access to autonomy education but also fostering understanding, even in remote learning scenarios.

The integration of Duckietown into the curriculum exemplifies the innovative strategies employed by CFINS to make networked systems education both practical and impactful.

Abstract

In the author’s words:

Networked systems have become pervasive in the past two decades in modern societies. Engineering applications can be found from smart buildings to smart cities. It is important to educate the students to be ready for designing, analyzing, and improving networked systems. 

But this is becoming more and more challenging due to the conflict between the growing knowledge and the limited time in the curriculum. In this work we consider this important problem and provide a case study to address these challenges. 

A group of courses have been developed by the Center for Intelligent and Networked Systems, department of Automation, Tsinghua University in the past two decades for undergraduate and graduate students. We also report the related education platform and textbook development. Wish this would be useful for the other universities.

Conclusion - Networked Systems: Autonomy Education with Duckietown

Here are the conclusions from the author of this paper:

“In this work we provided a case study on the education practice of networked systems in the center for intelligent and networked systems, department of automation, Tsinghua University. The courses mentioned in this work have been delivered for 20 years, or even more. From this education practice, the following experience is summarized. First, use research to motivate the study. 

Networked systems is a vibrant research field. The exciting applications in smart buildings, autonomous driving, smart cities serve as good examples not just to motivate the students but also to make the teaching materials concrete. Inviting world-class talks and short-courses are also good practice. Second, education platforms help to learn the knowledge better. Students have hands-on experience while working on these education platforms. 

This project-based learning provides a comprehensive experience that will get the students ready for addressing the real-world engineering problems. Third, online/offline hybrid teaching mode is new and effective. This is especially important due to the pandemic. Lotus Pond, RainClassroom, and Tencent Meeting have been well adopted in Tsinghua. Students can interact with the teachers more frequently and with more specific questions. 

They can also replay the course offline, including their answers to the quiz and questions in the classroom. We hope that this summary on the education on networked systems might help the other educators in the field.”

Project Authors

Qing-Shan Jia is a Professor at Tsinghua University, Beijing, People’s Republic of China.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Reinforcement Learning for the Control of Autonomous Robots

Project Resources

RL on Duckiebots - Project highlights

Here is a visual tour of the authors’ work on implementing reinforcement learning in Duckietown.

Why reinforcement learning for the control of Duckiebots in Duckietown?

This thesis explores the use of reinforcement learning (RL) techniques to enable autonomous navigation in the Duckietown environment. Reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment and receiving feedback through rewards or penalties. The goal is to maximize long-term rewards.

This work focuses on implementing and comparing various RL algorithms – specifically Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO) – to analyze performance in autonomous navigation. RL enables agents to learn behaviors by interacting with their environment and adapting to dynamic conditions. The PPO model was found to demonstrate smooth driving while using grayscale images for enhanced computational efficiency.
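Grayscale preprocessing of this kind is typically done with an observation wrapper around the simulator. The snippet below is a hedged sketch assuming a Gym-style environment interface; it is not the authors' code, and newer projects may use `gymnasium` instead of `gym` with the same structure.

```python
import cv2
import gym
import numpy as np

class GrayscaleObs(gym.ObservationWrapper):
    """Convert RGB camera frames to single-channel grayscale, cutting the
    observation size by two thirds (an assumption of how the preprocessing
    was done, not the authors' exact pipeline)."""
    def __init__(self, env):
        super().__init__(env)
        h, w, _ = env.observation_space.shape
        self.observation_space = gym.spaces.Box(0, 255, (h, w, 1), np.uint8)

    def observation(self, obs):
        gray = cv2.cvtColor(obs, cv2.COLOR_RGB2GRAY)
        return gray[..., None]
```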

Another feature of this project is the integration of YOLO v5, an object detection model, which allowed the Duckiebot to recognize and stop for obstacles, improving its safety capabilities. This integration of perception and RL enabled the Duckiebot not only to follow lanes but also to navigate autonomously, making ‘real-time’ adjustments based on its surroundings.
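YOLOv5 models can be loaded through `torch.hub`, which is one plausible way the detector was integrated; the confidence and box-height heuristic below are illustrative assumptions standing in for whatever stopping criterion the thesis actually used.

```python
import torch

# Community YOLOv5 via torch.hub (downloads weights on first call).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def obstacle_requires_stop(frame_rgb, min_box_height_frac=0.25):
    """Return True if any detected object is close enough to warrant a stop.

    Proximity is approximated by bounding-box height relative to the image,
    a crude stand-in for a proper distance estimate.
    """
    results = model(frame_rgb)
    h_img = frame_rgb.shape[0]
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = xyxy
        if conf > 0.5 and (y2 - y1) / h_img > min_box_height_frac:
            return True
    return False
```

In the combined system, a positive result would override the RL policy's velocity command with a stop.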

By transferring trained models from simulation to physical Duckiebots (Sim2Real), the thesis evaluates the feasibility of applying these models to real-world autonomous driving scenarios. This work showcases how reinforcement learning and object detection can be combined to advance the development of safe, autonomous navigation systems, providing insights that could eventually be adapted for full-scale vehicles.

Reinforcement learning for the control of Duckiebots in Duckietown - the challenges

Implementing reinforcement learning in this project involved a number of challenges, summarized below:

  • Transfer from Simulation to Reality (Sim2Real): Models trained in simulations often encountered difficulties when applied to real-world Duckiebots, requiring adjustments for accurate and stable performance.
  • Computational Constraints: Limited processing power on the Duckiebots made it challenging to run complex RL models and object detection algorithms simultaneously.
  • Stability and Safety of Learning Models: Guaranteeing that the Duckiebot’s actions were safe and did not lead to erratic behaviors or collisions required fine-tuning and extensive testing of the RL algorithms.
  • Obstacle Detection and Avoidance: Integrating YOLO v5 for obstacle detection posed challenges in ensuring smooth integration with RL, as both systems needed to work harmoniously for obstacle avoidance.

These challenges were addressed through algorithm optimization, iterative model testing, and adjustments to the hyperparameters.

Reinforcement learning for the control of Duckiebots in Duckietown: Results

Reinforcement learning for the control of Duckiebots in Duckietown: Authors

Bruno Fournier is currently pursuing a Master of Science in Engineering (Data Science) at the HES-SO Haute école spécialisée de Suisse occidentale, Switzerland.

Sébastien Biner is currently pursuing a Bachelor of Science in Automotive and Vehicle Technology at the Berner Fachhochschule BFH, Switzerland.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.