Adaptive Lane Following with Automatic Trim Calibration

Project Resources

Project highlights

Calibration of sensors and actuators is always important when setting up robotic systems, especially in the context of autonomous operations. Manual tweaking of calibration parameters, though, is a nuisance, albeit a necessary one when every physical instance of a robot is slightly different from the others.

In this project, the authors developed a process to automatically calibrate the trim parameter of the Duckiebot, i.e., the parameter that allows it to go straight when an equal command is provided to both wheel motors.

Adaptive lane following in Duckietown: beyond manual odometry calibration

The objective of this project is to develop a process to autonomously calibrate the wheel trim parameter of Duckiebots, eliminating, or at least improving upon, the need for manual tuning. Manual tuning of this parameter, as part of the odometry calibration procedure, is needed to account for the inevitable slight differences across Duckiebots, due to manufacturing, assembly, handling differences, etc.

Creating an automatic trim calibration procedure enhances the Duckiebot’s lane following behavior by continuously adjusting the wheel alignment based on real-time lane pose feedback. Duckiebots typically require manual odometry calibration, which introduces variability and reduces scalability in autonomous mobility experiments.

By implementing a Model-Reference Adaptive Control (MRAC) based approach, the project ensures consistent performance despite mechanical variations or external disturbances. This is desirable for large-scale Duckietown deployments where the robots need to maintain uniform behavior across different assemblies.

Adaptive control reduces dependence on predefined parameters, allowing Duckiebots to self-correct without external intervention. This enables more reproducible fleet-level performance, useful for research in autonomous navigation. This project supports experimentation in self-calibrating robotic systems through application of adaptive control research.

Model Reference Adaptive Control (MRAC) for adaptive lane following in Duckietown

The method employs a Model-Reference Adaptive Control (MRAC) framework that iteratively estimates the optimal trim value during lane following by processing lane pose feedback from the vision pipeline, and comparing expected and actual motion to compute a correction factor. An adaptation law updates the trim dynamically based on real-time error minimization.
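As a loose illustration of such an adaptation law, here is a minimal sketch (hypothetical names, gains, and sign convention; not the authors' implementation) in which the trim estimate is nudged whenever the measured motion deviates from what the reference model predicts:

```python
# Minimal sketch of an MRAC-style trim update (hypothetical names and gains,
# not the authors' exact implementation).

class TrimAdapter:
    def __init__(self, trim=0.0, gain=0.05, trim_limit=0.3):
        self.trim = trim              # current trim estimate
        self.gain = gain              # adaptation gain (tuning parameter)
        self.trim_limit = trim_limit  # keep the trim within a plausible range

    def update(self, reference_heading_rate, measured_heading_rate, dt):
        """Nudge the trim so the measured motion tracks the reference model."""
        error = measured_heading_rate - reference_heading_rate
        # gradient-style adaptation step; the sign depends on the trim convention
        self.trim -= self.gain * error * dt
        self.trim = max(-self.trim_limit, min(self.trim_limit, self.trim))
        return self.trim


adapter = TrimAdapter()
# hypothetical update with lane-pose-derived heading rates, 10 Hz loop
new_trim = adapter.update(reference_heading_rate=0.0,
                          measured_heading_rate=0.08, dt=0.1)
```

In the actual pipeline, the reference and measured quantities would come from the commanded wheel speeds and the lane filter's pose estimates, with unreliable pose readings filtered out before each update, as discussed below.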

Pose estimation relies on a vision-based lane filter, which introduces latency and noise, affecting convergence stability. The adaptive controller must maintain stability while ensuring convergence to an optimal trim value within a finite time window. 

The performance of this approach is constrained by sensor inaccuracies, requiring threshold-based filtering to exclude unreliable pose data. The algorithm operates in real-world conditions where road surface variations, lighting changes, and mechanical wear affect performance. Synchronizing lane pose data with controller updates while minimizing computation delays is a key challenge, and ensuring that the adaptive controller does not introduce oscillations or instability in the control loop requires parameter tuning.

Adaptive lane following: full report

Check out the full report here. 

Adaptive lane following in Duckietown: Authors

Pietro Griffa is currently working as a Systems and Estimation Engineer at Verity, Switzerland.

Simone Arreghini is currently pursuing his Ph.D. at IDSIA USI-SUPSI, Switzerland.

Rohit Suri was a mentor on this project and is currently working as a Senior Research Scientist at Venti Technologies, Singapore.

Aleksandar Petrov was a mentor on this project and is currently pursuing his Ph.D. at the University of Oxford, United Kingdom.

Jacopo Tani was a supervisor on this project and is currently the CEO at Duckietown.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Flexible tether control in heterogeneous marsupial systems

Project Resources

Project highlights

Wouldn’t it be great to have a base station transfer power, data, and other resources to autonomous vehicles through a tethered connection? But how do we deal with the challenges of controlling the length and tension of the tether?

Here is an overview of the authors’ results: 

Flexible tether control in Duckietown: objective and importance

Managing tethers effectively is an important challenge in autonomous robotic systems, especially in heterogeneous marsupial robot setups where multiple robots work together to achieve a task.

Tethers provide power and data connections between agents, but poor management can lead to tangling, restricted movement, or unnecessary strain.

This work implements a flexible tethering approach that balances slackness and tautness to improve system performance and reliability.

Using the Duckiebot DB21J as a test passenger agent, the study introduces a tether control system that adapts to different conditions, ensuring smoother operation and better resource sharing. By combining aspects of both taut and slacked tether models, this work contributes to making multi-robot systems more efficient and adaptable in various environments.

The method and challenges in implementing flexible tether control in Duckietown

The authors developed a custom-built spool mechanism that actively adjusts tether length using real-time sensor feedback.

To coordinate these adjustments, the system was implemented within a standard ROS-based framework, ensuring efficient data management.
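As a rough illustration of the spool control idea (a simple proportional rule with hypothetical names, gains, and units; not the authors' actual controller), the spool command could be computed from the measured slack like this:

```python
# Minimal sketch of a slack-regulating spool command (hypothetical names, gains,
# and units; not the authors' exact controller).

def spool_command(measured_slack, target_slack, kp=1.5, max_speed=1.0):
    """Proportional control of spool motor speed from tether slack.

    measured_slack: slack length estimated from sensor feedback [m]
    target_slack:   desired slack, keeping the tether neither taut nor tangled [m]
    Returns a speed command in [-max_speed, max_speed]: positive reels the
    tether in, negative pays it out.
    """
    error = measured_slack - target_slack
    return max(-max_speed, min(max_speed, kp * error))


# example: too much slack -> reel in
print(spool_command(measured_slack=0.30, target_slack=0.15))
```

Within the ROS-based framework mentioned above, such a rule would subscribe to the slack estimate and publish the resulting spool speed.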

To evaluate the system’s effectiveness, the authors tested different slackness and control gain parameters while the Duckiebot followed a predefined square path. By analyzing the spool’s reactivity and the consistency of the tether’s behavior, they assessed the system’s performance across varying conditions.

Several challenges emerged during testing. For example, maintaining the right balance of tether slackness was critical: excess slack risked entanglement, while insufficient slack could restrict mobility.

Hardware limitations affected the spool’s responsiveness, requiring careful tuning of control parameters. Additionally, environmental factors, such as potential obstacles, underscored the need for a more adaptive control mechanism in future iterations.

Flexible tether control: full report

Check out the full report here. 

Flexible tether control in heterogeneous marsupial systems in Duckietown: Authors

Carson Duffy is a computer engineer who studied at Texas A&M University, USA.

Dr. Jason O’Kane is a faculty research advisor at Texas A&M. 

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Deep Reinforcement Learning for Autonomous Lane Following

Project Resources

Project highlights

Here is a visual tour of the author’s work on implementing deep reinforcement learning for autonomous lane following in Duckietown.

Deep reinforcement learning for autonomous lane following in Duckietown: objective and importance

Would it not be great if we could train an end-to-end neural network in simulation, plug it into the physical robot, and have it drive safely on the road?

Inspired by this idea, Mickyas worked to implement deep reinforcement learning (DRL) for autonomous lane following in Duckietown, training the agent using sim-to-real transfer. 

The project focuses on training DRL agents, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC), to learn steering control using high-dimensional camera inputs. It integrates an autoencoder to compress image observations into a latent space, improving computational efficiency. 

The hope is for the trained DRL model to generalize from simulation to real-world deployment on a Duckiebot. This involves addressing domain adaptation, camera input variations, and real-time inference constraints, amongst other implementation challenges.

Autonomous lane following is a fundamental component of self-driving systems, requiring continuous adaptation to environmental changes, especially when using vision as the main sensing modality. This project identifies limitations in existing DRL algorithms when applied to real-world robotics, and explores modifications to reward functions, policy updates, and feature extraction methods, analyzing the results through real-world experimentation.

The method and challenges in implementing deep reinforcement learning in Duckietown

The method involves training a DRL agent in a simulated Duckietown environment (Gym Duckietown Simulator) using an autoencoder for feature extraction. 

The encoder compresses image data into a latent space, reducing input dimensions for policy learning. The agent receives sequential encoded frames as observations and optimizes steering actions based on reward-driven updates. The trained model is then transferred to a real Duckiebot using a ROS-based communication framework. 
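A minimal sketch of this observation pipeline might look as follows (PyTorch, with illustrative layer sizes and a 4-frame history; not the authors' exact network):

```python
import torch
import torch.nn as nn

# Minimal sketch of the observation pipeline described above (architecture sizes
# are illustrative): a convolutional encoder compresses each camera frame into a
# latent vector, and a short history of latents forms the policy's observation.

class Encoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2), nn.ReLU(),   # 64x64 -> 31x31
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),  # 31x31 -> 14x14
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 14x14 -> 6x6
            nn.Flatten(),
        )
        self.fc = nn.Linear(64 * 6 * 6, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x))


encoder = Encoder()
frames = [torch.rand(1, 3, 64, 64) for _ in range(4)]   # last 4 camera frames
latents = [encoder(f) for f in frames]
observation = torch.cat(latents, dim=1)                  # policy input: 4 x 32 = 128-dim
```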

Challenges for pulling this off include accounting for discrepancies between simulated and real-world camera inputs, which affect performance and generalization. Differences in lighting, surface textures, and image normalization require domain adaptation techniques.

Moreover, computational limitations on the Duckiebot prevent direct onboard execution, requiring a distributed processing setup.

Reward shaping influences learning stability, and improper design of the reward function leads to policy exploitation or suboptimal behavior. Debugging DRL models is complex due to interdependencies between network architecture, exploration strategies, and training dynamics. 

The project addresses these challenges by refining preprocessing, incorporating domain randomization, and modifying policy structures.

Deep reinforcement learning for autonomous lane following: full report

Deep reinforcement learning for autonomous lane following in Duckietown: Authors

Mickyas Tamiru Asfaw is currently working as an AI Robotics and Innovation Engineer at the CESI lineact laboratory, France.

David Bertoin is currently working as an ML Applied Scientist at Photoroom, France.

Valentin Guillet is currently working as a Research Engineer at IRT Saint Exupéry, France.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Visual Obstacle Detection using Inverse Perspective Mapping

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing visual obstacle detection in Duckietown.

Visual Obstacle Detection: objective and importance

This project aims to develop a visual obstacle detection system using inverse perspective mapping, with the goal of enabling autonomous systems to detect obstacles in real time using images from a monocular RGB camera. It focuses on identifying specific obstacles, such as yellow Duckies and orange cones, in Duckietown.

The system ensures safe navigation by avoiding obstacles within the vehicle’s lane or stopping when avoidance is not feasible. It does not utilize learning algorithms, prioritizing a hard-coded approach due to hardware constraints. The objective includes enhancing obstacle detection reliability under varying illumination and object properties.

It is intended to simulate realistic scenarios for autonomous driving systems. The key evaluation metrics selected were detection accuracy, false positives, and missed obstacles under diverse conditions.

The method and the challenges of visual obstacle detection using Inverse Perspective Mapping

The system processes images from a monocular RGB camera by applying inverse perspective mapping to generate a bird’s-eye view, assuming all pixels lie on the ground plane to simplify obstacle distortion detection. Obstacle detection involves HSV color filtering, image segmentation, and classification using eigenvalue analysis. The reaction strategies include trajectory planning or stopping based on the detected obstacle’s position and lane constraints.
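A minimal sketch of these steps with OpenCV (the homography and HSV thresholds below are illustrative stand-ins, not the authors' calibrated values) could look like this:

```python
import cv2
import numpy as np

# Minimal sketch of IPM plus HSV-based obstacle detection (homography and color
# thresholds are illustrative stand-ins, not the authors' calibrated values).

H = np.eye(3, dtype=np.float32)   # stand-in homography; derived from camera calibration in practice

def birds_eye_view(image, homography, out_size=(400, 400)):
    """Warp the camera image to a ground-plane (bird's-eye) view."""
    return cv2.warpPerspective(image, homography, out_size)

def detect_duckies(bev_image, min_area=50):
    """Yellow-obstacle detection: HSV filtering, then connected-component
    segmentation; each blob could be classified further (e.g., eigenvalue
    analysis of its shape) before triggering avoidance or stopping."""
    hsv = cv2.cvtColor(bev_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))   # example yellow range
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, num)          # skip background label 0
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

frame = np.zeros((480, 640, 3), dtype=np.uint8)                 # stand-in camera frame
obstacles = detect_duckies(birds_eye_view(frame, H))
```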

Computational efficiency is a significant challenge due to the hardware limitations of the Raspberry Pi, necessitating the avoidance of real-time re-computation of color corrections. Variability in lighting and motion blur impact detection reliability, while accurate calibration of camera parameters is essential for precise 3D obstacle localization. Integration of avoidance strategies faces additional challenges due to inaccuracies in pose estimation and trajectory planning.

Visual Obstacle Detection using Inverse Perspective Mapping: Full Report

Visual Obstacle Detection using Inverse Perspective Mapping: Authors

Julian Nubert is currently a Research Assistant & Doctoral Candidate at the Max Planck Institute for Intelligent Systems, Germany.

Niklas Funk is a Ph.D. student at Technische Universität Darmstadt, Germany.

Fabio Meier is currently working as the Head of Operational Data Intelligence at Sensirion Connected Solutions, Switzerland.

Fabrice Oehler is working as a Software Engineer at Sensirion, Switzerland.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Intersection Navigation in Duckietown Using 3D Image Features

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing intersection navigation using 3D image features in Duckietown.

Intersection Navigation in Duckietown: Advancing with 3D Image Features

Intersection navigation in Duckietown using 3D image features is an approach intended to improve autonomous intersection navigation, enhancing decision-making and path planning in complex Duckietown environments, i.e., those made of several road loops and intersections.

The traditional approach to intersection navigation in Duckietown is naive: (a) stop at the red line before the intersection; (b) read the AprilTag-equipped traffic signs (providing information on the shape and coordination mechanism of the intersection); (c) decide which direction to take; (d) coordinate with other vehicles at the intersection to avoid collisions; (e) navigate through the intersection. This last step is performed in an open-loop fashion, leveraging the known appearance specifications of intersections in Duckietown.

By incorporating 3D image features, extrapolated from the Duckietown road lines, into the perception pipeline, Duckiebots can maintain a representation of their pose while crossing the intersection, therefore closing the loop and improving navigation accuracy. This also facilitates the development of new strategies for intersection navigation, such as real-time path optimization.

Combining 3D image features with methods such as Bird’s Eye View (BEV) transformations allows for comprehensive representations of the intersection. The integration of these techniques improves the accuracy of stop line detection and obstacle avoidance, contributes to advancing autonomous navigation algorithms, and supports real-world deployment scenarios.

An AI-generated representation of Duckietown intersection navigation challenges.

The method and the challenges of intersection navigation using 3D features

The thesis involves implementing the MILE model (Model-based Imitation LEarning for urban driving), trained on the CARLA simulator, into the Duckietown environment to evaluate its performance in navigating unprotected intersections.

Experiments were conducted using the Gym-Duckietown simulator, where Duckiebots navigated a 4-way intersection across multiple trajectories. Metrics such as success rate, drivable area compliance, and ride comfort were used to assess performance.

The findings indicate that while the MILE model achieved state-of-the-art performance in the CARLA simulator, its generalization to the Duckietown environment without additional training was limited, as could be expected given the sim2real gap.

The BEVs generated by MILE were not sufficiently representative of the actual road surface in Duckietown, leading to suboptimal navigation performance. In contrast, the homographic BEV method, despite its assumption of a flat world plane, provided more accurate representations for intersection navigation in this context.
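For reference, a homographic BEV of this kind can be computed from four image-to-ground point correspondences under the flat-world assumption; the correspondences and scale below are purely illustrative, not the thesis' calibration values:

```python
import cv2
import numpy as np

# Minimal sketch of a homographic BEV under the flat-world assumption mentioned
# above (pixel/ground correspondences and scale are illustrative, not calibrated values).

# four image points (pixels) and their known positions on the ground plane (meters)
image_pts  = np.float32([[120, 300], [520, 300], [20, 470], [620, 470]])
ground_pts = np.float32([[0.0, 0.6], [0.6, 0.6], [0.0, 0.0], [0.6, 0.0]])

# map ground coordinates to a 300x300 top-down image (here, 1 px = 2 mm)
scale = 500.0
H = cv2.getPerspectiveTransform(image_pts, ground_pts * scale)

camera_image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a real frame
bev = cv2.warpPerspective(camera_image, H, (300, 300))
```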

As for most approaches in robotics, there are limitations and tradeoffs to analyze.

Here are some technical challenges of the proposed approach:

  • Generalization across environments: one of the challenges is ensuring that the 3D image feature representation generalizes well across different simulation environments, such as Duckietown and CARLA. The differences in scale, road structures, and dynamics between simulators can impact the performance of the navigation system.
  • Accuracy of BEV representations: the transformation of camera images into Bird’s Eye View (BEV) representations loses accuracy, especially when dealing with low-resolution or distorted input data.
  • Real-time processing: the integration of 3D image features for navigation requires substantially more computational resources than using 2D features. Achieving near real-time processing speeds for navigation tasks, such as intersection navigation, is challenging.

Intersection Navigation in Duckietown Using 3D Image Features: Full Report

Intersection Navigation in Duckietown Using 3D Image Features: Authors

Jasper Mulder is currently working as a Junior Outdoor expert at Bever, Netherlands.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Monocular Navigation in Duckietown Using LEDNet Architecture

Project Resources

Project highlights

Here is a visual tour of the authors’ work on implementing monocular navigation using LEDNet architecture in Duckietown*.

*Images from “Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers”, M. Saavedra-Ruiz, S. Morin, L. Paull. arXiv: https://arxiv.org/pdf/2203.03682

Why monocular navigation?

Image sensors are ubiquitous for their well-known sensory traits (e.g., distance measurement, robustness, accessibility, variety of form factors, etc.). Achieving autonomy with monocular vision, i.e., using only one image sensor, is desirable, and much work has gone into approaches to this task. Duckietown’s first Duckiebot, the DB17, was designed with only a camera as its sensor suite to highlight the importance of this challenge!

But images, due to the integrative nature of image sensors and the physics of the image generation process, are subject to motion blur, occlusions, and sensitivity to environmental lighting conditions, which challenge the effectiveness of “traditional” computer vision algorithms to extract information. 

In this work, the author uses “LEDNet” to mitigate some of the known limitations of image sensors for use in autonomy. LEDNet’s encoder-decoder architecture with high resolution enables lane-following and obstacle detection. The model processes images at high frame rates, allowing recognition of turns, bends, and obstacles, which are useful for timely decision-making. The resolution improves the ability to differentiate road markings from obstacles, and classification accuracy.
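To make the encoder-decoder idea concrete, here is a minimal sketch in PyTorch (layer sizes are illustrative and far smaller than LEDNet's actual architecture): the encoder downsamples the frame and the decoder upsamples back to per-pixel class scores such as road, lane marking, and obstacle.

```python
import torch
import torch.nn as nn

# Minimal sketch of an encoder-decoder segmentation network in the spirit of the
# lightweight model discussed above (illustrative layer sizes, not LEDNet itself).

class TinySegNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = TinySegNet()
frame = torch.rand(1, 3, 120, 160)      # downscaled camera frame
logits = model(frame)                    # (1, num_classes, 120, 160)
segmentation = logits.argmax(dim=1)      # per-pixel class labels
```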

LEDNet’s obstacle-avoidance algorithm can classify and detect obstacles even at higher speeds. Unlike Vision Transformer (ViT) models, LEDNet avoids missing parts of obstacles, preventing robot collisions.

The model handles small obstacles by identifying them earlier and navigating around them. In the simulated Duckietown environment, LEDNet outperforms other models in lane-following and obstacle-detection tasks.

LEDNet uses “real-time” image segmentation to provide the Duckiebot with information for steering decisions. While the study was conducted in simulation, the model’s performance suggests it would also work in real-world scenarios with consistent lighting and predictable obstacles.

The next step is to try it out!

Monocular Navigation in Duckietown Using LEDNet Architecture - the challenges

In implementing monocular navigation in this project, the author faced several challenges: 

  1. Computational demands: LEDNet’s high-resolution processing requires significant computational resources, particularly when handling real-time image segmentation and obstacle detection at high frame rates.

  2. Limited handling of complex environments: the lane-following and obstacle-avoidance algorithm used in this study does not handle crossroads or junctions, limiting the model’s ability to navigate complex road structures.

  3. Simulation vs. real-world application: The study relies on a simulated environment where lighting, obstacle behavior, and road conditions are consistent. Implementing the system in the real world introduces variability in these factors, which affects the model’s performance.

  4. Small obstacle detection: While LEDNet performs well in detecting small obstacles compared to ViT, detection quality still depends on the resolution and the segmentation accuracy.

Project Report

Project Author

Angelo Broere is currently working as an on-call employee (Oproepkracht) at Compressor Parts Service, Netherlands.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Reinforcement Learning for the Control of Autonomous Robots

Project Resources

RL on Duckiebots - Project highlights

Here is a visual tour of the authors’ work on implementing reinforcement learning in Duckietown.

Why reinforcement learning for the control of Duckiebots in Duckietown?

This thesis explores the use of reinforcement learning (RL) techniques to enable autonomous navigation in Duckietown. Reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment and receiving feedback through rewards or penalties. The goal is to maximize long-term rewards.

This work focuses on implementing and comparing various RL algorithms, specifically Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO), to analyze their performance in autonomous navigation. RL enables agents to learn behaviors by interacting with their environment and adapting to dynamic conditions. The PPO model was found to produce smooth driving while using grayscale images for enhanced computational efficiency.
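As a small illustration of that grayscale input choice (the resize target and normalization are illustrative, not necessarily the authors' settings), the observation fed to the policy could be prepared like this:

```python
import numpy as np
import cv2

# Minimal sketch of grayscale preprocessing for the policy input (illustrative
# resize target and normalization). A single channel instead of RGB cuts the
# observation size by a factor of three, reducing training and inference cost.

def preprocess(rgb_frame, size=(84, 84)):
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)
    gray = cv2.resize(gray, size)
    return gray.astype(np.float32) / 255.0          # normalized (84, 84) observation

# example with a stand-in camera frame
obs = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
```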

Another feature of this project is the integration of YOLO v5, an object detection model, which allowed the Duckiebot to recognize and stop for obstacles, improving its safety capabilities. This integration of perception and RL enabled the Duckiebot not only to follow lanes but also to navigate autonomously, making ‘real-time’ adjustments based on its surroundings.
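A minimal sketch of such an obstacle-triggered stop, loading YOLOv5 through torch.hub (the confidence threshold and the box-height stopping criterion are illustrative assumptions, not the authors' exact integration), could look like this:

```python
import torch

# Minimal sketch of obstacle-triggered stopping with YOLOv5 via torch.hub
# (confidence threshold, class handling, and stop criterion are illustrative).

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def should_stop(frame, min_confidence=0.5, min_box_height=80):
    """Stop if any sufficiently confident detection appears large (i.e., close)."""
    results = model(frame)                       # frame: HxWx3 RGB numpy array
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = box
        if conf >= min_confidence and (y2 - y1) >= min_box_height:
            return True
    return False
```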

By transferring trained models from simulation to physical Duckiebots (Sim2Real), the thesis evaluates the feasibility of applying these models to real-world autonomous driving scenarios. This work showcases how reinforcement learning and object detection can be combined to advance the development of safe, autonomous navigation systems, providing insights that could eventually be adapted for full-scale vehicles.

Reinforcement learning for the control of Duckiebots in Duckietown - the challenges

Implementing reinforcement learning in this project involved a number of challenges, summarized below:

  • Transfer from Simulation to Reality (Sim2Real): Models trained in simulations often encountered difficulties when applied to real-world Duckiebots, requiring adjustments for accurate and stable performance.
  • Computational Constraints: Limited processing power on the Duckiebots made it challenging to run complex RL models and object detection algorithms simultaneously.
  • Stability and Safety of Learning Models: Guaranteeing that the Duckiebot’s actions were safe and did not lead to erratic behaviors or collisions required fine-tuning and extensive testing of the RL algorithms.
  • Obstacle Detection and Avoidance: Integrating YOLO v5 for obstacle detection posed challenges in ensuring smooth integration with RL, as both systems needed to work harmoniously for obstacle avoidance.

These challenges were addressed through algorithm optimization, iterative model testing, and adjustments to the hyperparameters.

Reinforcement learning for the control of Duckiebots in Duckietown: Results

Reinforcement learning for the control of Duckiebots in Duckietown: Authors

Bruno Fournier is currently pursuing a Master of Science in Engineering, Data Science at the HES-SO Haute école spécialisée de Suisse occidentale, Switzerland.

Sébastien Biner is currently pursuing a Bachelor of Science in Automotive and Vehicle Technology at the Berner Fachhochschule BFH, Switzerland.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Smart Lighting: Realistic Day and Night in Duckietown

Project Resources

Project Highlights

Here is the output of the author’s work on smart lighting for autonomous driving.

Why day and night autonomous driving in Duckietown?

Autonomous driving is already inherently hard. Driving at night makes it even more challenging! This is why smart lighting is an interesting application that intersects with autonomous driving: having city infrastructure, such as traffic lights and watchtowers, generate dynamically varying light, only where and when it is needed, to make driving at night not only possible but safe. Here are some reasons why this project is interesting:

Realistic driving scenarios: autonomous driving systems must handle varying lighting conditions. Day and night cycles are just the beginning: transitions like sunrise or sunset make the spectrum of experimental corner cases more complex, making Duckietown a valuable testbed.

Robust lane-following capabilities: developing an adaptive lighting system in which the city infrastructure “collaborates” with Duckiebots to provide optimal driving scenarios reinforces driving performance and the general robustness of lane following.

Decentralized control for scalability: a decentralized approach to managing lighting implies that the system can be scalable across Duckietowns of arbitrary dimensions, making it more adaptable and resilient.

Autonomous lighting management: a responsive street lighting system, working in tandem with the Duckiebot’s onboard sensors, improves energy efficiency and ensures safety by adjusting to local lighting needs automatically.
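As a toy illustration of such a local rule (all names, thresholds, and gains here are hypothetical, not from the project), a watchtower could map its measured ambient brightness to an LED duty cycle:

```python
import numpy as np

# Minimal sketch of a decentralized lighting rule (hypothetical names, thresholds,
# and gains): each watchtower estimates local ambient brightness from its own
# camera frame and raises its LED intensity only as much as needed.

def led_intensity(frame, target_brightness=120.0, gain=0.004):
    """Return an LED duty cycle in [0, 1] from the mean brightness of a grayscale frame."""
    ambient = float(np.mean(frame))
    deficit = max(0.0, target_brightness - ambient)
    return min(1.0, gain * deficit)

# example: a dark frame yields a high duty cycle
print(led_intensity(np.full((480, 640), 20, dtype=np.uint8)))
```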

Smart Lighting: Realistic Day and Night in Duckietown - the challenges

Implementing smart lighting in Duckietown to improve autonomous driving during day and night cycles presents several challenges. Here are a few examples: 

Hardware modifications: while Duckiebots are equipped with controllable LEDs, city infrastructure does not possess lighting capabilities out of the box. The first step is integrating light sources in the design of Duckietown’s city infrastructure.

Variable lighting conditions: Duckiebots, which in this project rely solely on vision in their autonomy pipeline, must adapt to changing lighting conditions such as full darkness, sunrise, sunset, and artificial lighting, which impact camera vision and lane detection accuracy.

Decentralized control: managing street lighting in a decentralized way across Duckietown ensures that each area adapts to its local lighting needs, compensating, for example, for the presence of passing Duckiebots with their own lights on. Joint control algorithms spanning both city infrastructure and vehicle lighting intensity add complexity to the system’s design and coordination.

Scalability: the street lighting system must be scalable across the entire city, requiring a design that can be expanded without significant complications.

Safe and reliable operation: the system needs to be safe, adapting to issues such as occasional watchtower lighting source failure, while ensuring consistent lane-following performance.

Smart Lighting: Realistic Day and Night in Duckietown: Results

Smart Lighting: Realistic Day and Night in Duckietown: Authors

David Müller is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Research Engineer at Disney Research, Switzerland.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Intersection Navigation for Duckiebots Using DBSCAN

Project Resources

Why intersection navigation using DBSCAN?

Navigating intersections is obviously important when driving in Duckietown. It is less obvious that the mechanics of intersection navigation for autonomous vehicles are very different from those used for standard lane following. Typically, a finite state machine transitions the agent’s behavior from one set of algorithms, appropriate for driving down the road, to a different set of algorithms that actually solves the “intersections” problem.

The intersection problem in Duckietown has several steps: 

  1. Identifying the beginning of the intersection (identified with a horizontal red line on the road floor)
  2. Stopping at the red line, before engaging the intersection
  3. Identifying what kind of intersection it is (3-way or 4-way, according to the Duckietown appearance specifications at the time of writing)
  4. Identifying the relative position of the Duckiebot at the intersection, hence the available routes forward
  5. Choosing a route
  6. Identifying when it is appropriate to engage the intersection to avoid potentially colliding with other Duckiebots (e.g., is there a centralized coordinator – a traffic light – or not?)
  7. Engaging and navigating the intersection toward the chosen feasible route
  8. Switching the state back to lane following. 

Easier said than done, right?

Different approaches could be used for each of the points above. This project focuses on improving the baseline solutions for point 2 and, most importantly, point 7 of the list above.

The real challenge is the actual driving across the intersection (in a safe way, i.e., by “keeping your lane”), because the features that provide robust feedback control in the lane following pipeline are not present inside intersections. The baseline solution for this problem in Duckietown is open-loop control, relying on models of the Duckiebot and of Duckietown to magically tune a few parameters and get the curves just about right.

As all students of autonomy know, open-loop control is perfect in the ideal case (when all models are known exactly), but it is practically pretty useless on its own, as “all models are wrong” [learn why, e.g., in the Modeling of a Differential Drive robot class].

In this project, the authors seek to close the loop around intersection navigation, and they chose an algorithm called DBSCAN to do it.

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering algorithm that groups data points based on density, identifying clusters of varying shapes and filtering out noise. It is used to find the red stop lines at intersections without needing predefined geometric priors (colors, shapes, or fixed positions). This makes it possible to track meaningful visual features in intersections efficiently, localize with respect to them, and hence attempt to navigate along optimal precomputed trajectories depending on the chosen direction.
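For illustration, here is a minimal sketch of this clustering step with scikit-learn (the eps, min_samples, and sample pixel coordinates are illustrative, not the authors' tuned values):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch of clustering red stop-line pixels with DBSCAN
# (eps and min_samples are illustrative tuning values).

# (u, v) image coordinates of pixels that passed a red color filter
red_pixels = np.array([[310, 402], [312, 403], [315, 405],
                       [514, 399], [516, 400], [100, 50]])

clustering = DBSCAN(eps=10, min_samples=2).fit(red_pixels)

# label -1 marks noise; each remaining label is one candidate stop-line segment
for label in set(clustering.labels_) - {-1}:
    cluster = red_pixels[clustering.labels_ == label]
    centroid = cluster.mean(axis=0)
    print(f"cluster {label}: {len(cluster)} pixels, centroid {centroid}")
```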

Intersection navigation using DBSCAN: the challenges

Some of the challenges in this intersection navigation project are:

Initial position uncertainty: the Duckiebot’s starting alignment at the stop line may vary, requiring the system to handle inconsistent initial conditions.

Real-time feedback: the current system lacks real-time feedback, relying on pre-configured instructions that cannot adjust for unexpected events, such as slippage of the wheels, inconsistencies between different Duckiebots, and misalignment of road tiles (non-compliant assembly).

Processing speed: previous closed-loop solution attempts used AprilTags and Kalman filters, with implementations that ended up being too slow, suffering from low update rates and delays.

Transition to lane following: ensuring a smooth handover from intersection navigation to lane following requires precise control to avoid collisions and lane invasion.

Project Highlights

Here is a visual tour of the output of the authors’ work. Check out the GitHub repository for more details!

Intersection Navigation using DBSCAN: Results

Intersection Navigation using DBSCAN: Authors

Christian Leopoldseder is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Software Engineer at Google, Switzerland.

Matthias Wieland is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Consultant at abaQon, Switzerland.

Sebastian Nicolas Giles is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as an Autonomous Driving Systems Engineer at embotech, Switzerland.

Merlin Hosner is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Process Development Engineer at Climeworks, Switzerland. Merlin was a mentor on this project.

Amaury Camus is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Lead Robotics Engineer at Hydromea, Switzerland. Amaury was a mentor on this project.


Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

The Obstavoid Algorithm in Duckietown

Project Resources

Why obstacle avoidance?

The importance of obstacle avoidance in self-driving is self-evident, whether the obstacle is a rubber duckie-pedestrian or another Duckiebot on the road.

In this project, the authors deploy the Obstavoid Algorithm, aiming to achieve:

  • Safety: preventing collisions with obstacles and other Duckiebots, ensuring safe navigation in a dynamic environment.

  • Efficiency: maintaining smooth movement by optimizing the trajectory, avoiding unnecessary stops or delays.

  • Real-world readiness: preparing Duckietown for real-world scenarios where unexpected obstacles can appear, improving readiness.

  • Traffic management: enabling better handling of complex traffic situations, such as maneuvering around blocked paths or navigating through crowded areas.

  • Autonomous operation: enhancing the vehicle’s ability to operate autonomously, reducing the need for human intervention and improving overall reliability.

Obstacle Avoidance: the challenges

Implementing obstacle avoidance in Duckietown introduces the following challenges:

  • Dynamic obstacle prediction: accurately predicting the movement of dynamic obstacles, such as other Duckiebots, to ensure effective avoidance strategies and timely responses.
  • Computational complexity: managing the computational load of the trajectory solver in “real-time” scenarios with varying obstacle configurations, while ensuring efficient performance on limited onboard computation.
  • Cost function design: creating and fine-tuning a cost function that balances lane adherence, forward motion, and obstacle avoidance, while accommodating both static and dynamic elements in a complex environment.
  • Integration and testing: ensuring integration of the Obstavoid Algorithm with the Duckietown simulation framework and testing its performance in various scenarios to address potential failures and refine its robustness.

The Obstavoid Algorithm addresses these challenges by employing a time-dependent cost grid and Dijkstra’s algorithm for optimal trajectory planning, allowing for “real-time” obstacle avoidance.
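Here is a minimal sketch of that idea (illustrative grid, costs, and motion model; not the Obstavoid implementation itself): a time-expanded grid in which each cell's cost depends on the time step, searched with Dijkstra's algorithm.

```python
import heapq

# Minimal sketch of planning over a time-dependent cost grid with Dijkstra's
# algorithm (illustrative grid, costs, and motion model).

W, H, T = 4, 4, 10                    # grid width, height, planning horizon

def cell_cost(t, x, y):
    """Cost of occupying cell (x, y) at time t; a moving obstacle makes the
    cell it occupies at that time step expensive."""
    obstacle = (2, min(t, H - 1))     # obstacle drifting "up" one cell per step
    return 100.0 if (x, y) == obstacle else 1.0

def plan(start, goal):
    """Uniform-cost (Dijkstra) search over time-expanded states (t, x, y)."""
    frontier = [(0.0, 0, start, [start])]
    settled = {}
    while frontier:
        cost, t, (x, y), path = heapq.heappop(frontier)
        if (x, y) == goal:
            return path
        if settled.get((t, x, y), float("inf")) <= cost or t >= T - 1:
            continue
        settled[(t, x, y)] = cost
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:   # (0, 0) = wait
            nx, ny = x + dx, y + dy
            if 0 <= nx < W and 0 <= ny < H:
                heapq.heappush(frontier, (cost + cell_cost(t + 1, nx, ny),
                                          t + 1, (nx, ny), path + [(nx, ny)]))
    return None

print(plan(start=(0, 0), goal=(3, 3)))   # path avoiding the moving obstacle
```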

Read more about how Dijkstra’s algorithm is used in this student project titled “Goto-1: Planning with Dijkstra“.

The algorithm dynamically calculates and adjusts trajectories based on predicted obstacle movements, ensuring reliable navigation and integration with the simulation framework.

obstacle avoidance with cost functions in Duckietown

Project Highlights

Here is the output of the authors’ work. Check out the GitHub repository for more details!

 

Obstacle Avoidance: Results

Obstacle Avoidance: Authors

Alessandro Morra is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the CEO & Co-Founder at Ascento, Switzerland.

 
 

Dominik Mannhart is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the Co-Founder at Ascento, Switzerland.

 

Lionel Gulich is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Robotics Software Engineer at NVIDIA, Switzerland.

 
 

Victor Klemm is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently is a PhD student at Robotics Systems Lab, ETH Zurich, Switzerland.

 
 

Dženan Lapandić is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is a PhD candidate at KTH Royal Institute of Technology, Sweden.

 

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.