Monocular Navigation in Duckietown Using LEDNet Architecture

Project Resources

Project highlights

Here is a visual tour of the author's work on implementing monocular navigation using the LEDNet architecture in Duckietown*.

*Images from "Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers," M. Saavedra-Ruiz, S. Morin, L. Paull. arXiv: https://arxiv.org/pdf/2203.03682

Why monocular navigation?

Image sensors are ubiquitous thanks to their well-known sensory traits (e.g., distance measurement, robustness, accessibility, variety of form factors, etc.). Achieving autonomy with monocular vision, i.e., using only one image sensor, is desirable, and much work has gone into approaches to achieve this task. Duckietown's first Duckiebot, the DB17, was designed with only a camera as its sensor suite to highlight the importance of this challenge!

But images, due to the integrative nature of image sensors and the physics of the image generation process, are subject to motion blur, occlusions, and sensitivity to environmental lighting conditions, all of which challenge the ability of "traditional" computer vision algorithms to extract information.

In this work, the author uses "LEDNet" to mitigate some of the known limitations of image sensors for use in autonomy. LEDNet's encoder-decoder architecture produces high-resolution segmentation outputs that enable lane following and obstacle detection. The model processes images at high frame rates, allowing the timely recognition of turns, bends, and obstacles needed for decision-making. The higher output resolution improves both the ability to differentiate road markings from obstacles and classification accuracy.

LEDNet's obstacle-avoidance algorithm can classify and detect obstacles even at higher speeds. Unlike Vision Transformer (ViT) models, LEDNet is less prone to missing parts of obstacles, reducing the risk of robot collisions.

The model handles small obstacles by identifying them earlier and navigating around them. In the simulated Duckietown environment, LEDNet outperforms other models in lane-following and obstacle-detection tasks.

LEDNet uses “real-time” image segmentation to provide the Duckiebot with information for steering decisions. While the study was conducted in a simulation, the model’s performance indicates it would work in real-world scenarios with consistent lighting and predictable obstacles.
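To make the idea concrete, here is a minimal sketch (not the author's actual controller) of how a per-pixel segmentation mask could be turned into a steering command: the lateral offset of the lane pixels' centroid is fed to a simple proportional term. The mask shape, class index, and gain are illustrative assumptions.

```python
import numpy as np

LANE_CLASS = 1   # assumed class index for the ego lane in the segmentation output
K_STEER = 2.0    # illustrative proportional gain

def steering_from_mask(seg_mask: np.ndarray) -> float:
    """Compute a steering command from an (H, W) array of per-pixel class labels.

    The command is proportional to the normalized horizontal offset of the
    lane pixels' centroid from the image center: 0 means drive straight.
    """
    h, w = seg_mask.shape
    ys, xs = np.nonzero(seg_mask == LANE_CLASS)
    if xs.size == 0:
        return 0.0                                   # no lane visible: hold course
    offset = (xs.mean() - w / 2.0) / (w / 2.0)       # in [-1, 1]
    return float(-K_STEER * offset)                  # steer back toward the lane centroid

# Example with a dummy mask where the lane is shifted to the right half of the image:
mask = np.zeros((120, 160), dtype=np.uint8)
mask[60:, 90:150] = LANE_CLASS
print(steering_from_mask(mask))
```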

The next step is to try it out!

Monocular Navigation in Duckietown Using LEDNet Architecture - the challenges

In implementing monocular navigation in this project, the author faced several challenges: 

  1. Computational demands: LEDNet's high-resolution processing requires significant computational resources, particularly when handling real-time image segmentation and obstacle detection at high frame rates.

  2. Limited handling of complex environments: the lane-following and obstacle-avoidance algorithm used in this study does not handle crossroads or junctions, limiting the model’s ability to navigate complex road structures.

  3. Simulation vs. real-world application: The study relies on a simulated environment where lighting, obstacle behavior, and road conditions are consistent. Implementing the system in the real world introduces variability in these factors, which affects the model’s performance.

  4. Small obstacle detection: While LEDNet performs well in detecting small obstacles compared to ViT, the detection of small obstacles is still dependent on the resolution and segmentation quality.

Project Report

Project Author

Angelo Broere is currently working as an on-call employee (Oproepkracht) at Compressor Parts Service, Netherlands.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Reinforcement Learning for the Control of Autonomous Robots

Project Resources

RL on Duckiebots - Project highlights

Here is a visual tour of the authors’ work on implementing reinforcement learning in Duckietown.

Why reinforcement learning for the control of Duckiebots in Duckietown?

This thesis explores the use of reinforcement learning (RL) techniques to enable autonomous navigation in Duckietown. Reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment and receiving feedback through rewards or penalties. The goal is to maximize long-term rewards.

This work focuses on implementing and comparing various RL algorithms – specifically Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO) – to analyze their performance in autonomous navigation. RL enables agents to learn behaviors by interacting with their environment and adapting to dynamic conditions. The PPO model was found to demonstrate smooth driving, using grayscale images for enhanced computational efficiency.
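As a rough illustration of the RL setup described above, here is a minimal training sketch using stable-baselines3 PPO on a gym-style Duckietown simulator. The library choice, environment id, and hyperparameters are assumptions for illustration, not the thesis' exact configuration.

```python
# Minimal PPO training sketch on a gym-duckietown environment (assumed setup).
import gym
import gym_duckietown  # noqa: F401  (assumed: importing registers the Duckietown-* envs)
from stable_baselines3 import PPO

env = gym.make("Duckietown-loop_empty-v0")   # hypothetical environment id

# "CnnPolicy" consumes image observations; grayscale inputs reduce compute.
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)
model.save("ppo_duckiebot")

# Roll out the learned policy for one episode.
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```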

Another feature of this project is the integration of YOLO v5, an object detection model, which allowed the Duckiebot to recognize and stop for obstacles, improving its safety capabilities. This integration of perception and RL enabled the Duckiebot not only to follow lanes but also to navigate autonomously, making ‘real-time’ adjustments based on its surroundings.
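The following sketch shows one way such a perception-action coupling could look: a YOLOv5 detector vetoes the policy's action when an obstacle is detected. The torch.hub model, class names, and confidence threshold are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Sketch: stop for detected obstacles before executing the RL policy action.
import torch

detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
STOP_CLASSES = {"duckie", "duckiebot"}   # assumed custom classes after fine-tuning
CONF_THRESHOLD = 0.5

def safe_action(frame, policy_action):
    """Return the RL policy action, unless an obstacle is detected ahead."""
    results = detector(frame)                    # run object detection on the camera frame
    detections = results.pandas().xyxy[0]        # one row per detection
    blocked = (
        (detections["confidence"] > CONF_THRESHOLD)
        & (detections["name"].isin(STOP_CLASSES))
    ).any()
    if blocked:
        return (0.0, 0.0)                        # stop: zero linear and angular velocity
    return policy_action
```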

By transferring trained models from simulation to physical Duckiebots (Sim2Real), the thesis evaluates the feasibility of applying these models to real-world autonomous driving scenarios. This work showcases how reinforcement learning and object detection can be combined to advance the development of safe, autonomous navigation systems, providing insights that could eventually be adapted for full-scale vehicles.

Reinforcement learning for the control of Duckiebots in Duckietown - the challenges

Implementing reinforcement learning in this project involved a number of challenges, summarized below:

  • Transfer from Simulation to Reality (Sim2Real): Models trained in simulations often encountered difficulties when applied to real-world Duckiebots, requiring adjustments for accurate and stable performance.
  • Computational Constraints: Limited processing power on the Duckiebots made it challenging to run complex RL models and object detection algorithms simultaneously.
  • Stability and Safety of Learning Models: Guaranteeing that the Duckiebot’s actions were safe and did not lead to erratic behaviors or collisions required fine-tuning and extensive testing of the RL algorithms.
  • Obstacle Detection and Avoidance: Integrating YOLO v5 for obstacle detection posed challenges in ensuring smooth integration with RL, as both systems needed to work harmoniously for obstacle avoidance.

These challenges were addressed through algorithm optimization, iterative model testing, and adjustments to the hyperparameters.

Reinforcement learning for the control of Duckiebots in Duckietown: Results

Reinforcement learning for the control of Duckiebots in Duckietown: Authors

Bruno Fournier is currently pursuing a Master of Science in Engineering, Data Science at the HES-SO Haute école spécialisée de Suisse occidentale, Switzerland.

Sébastien Biner is currently pursuing a Bachelor of Science in Automotive and Vehicle Technology at the Berner Fachhochschule BFH, Switzerland.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Smart Lighting: Realistic Day and Night in Duckietown

Project Resources

Project Highlights

Here is the output of the author's work on smart lighting for autonomous driving.

Why day and night autonomous driving in Duckietown?

Autonomous driving is already inherently hard. Driving at night makes it even more challenging! This is why smart lighting is an interesting application that intersects with autonomous driving: having city infrastructure, such as traffic lights and watchtowers, generate dynamically varying light – only where and when it is needed – to make driving at night not only possible but safe. Here are some reasons why this project is interesting:

Realistic driving scenarios: autonomous driving systems must handle varying lighting conditions. Day and night cycles are just the beginning: transitions like sunrise or sunset make the spectrum of experimental corner cases more complex, making Duckietown a valuable testbed.

Robust lane-following capabilities: developing an adaptive lighting system in which the city infrastructure "collaborates" with Duckiebots to provide optimal driving conditions improves driving performance and the general robustness of lane following.

Decentralized control for scalability: a decentralized approach to managing lighting implies that the system can be scalable across Duckietowns of arbitrary dimensions, making it more adaptable and resilient.

Autonomous lighting management: a responsive street lighting system, working in tandem with the Duckiebot’s onboard sensors, improves energy efficiency and ensures safety by adjusting to local lighting needs automatically.

Smart Lighting: Realistic Day and Night in Duckietown - the challenges

Implementing smart lighting in Duckietown to improve autonomous driving during day and night cycles presents several challenges. Here are a few examples: 

Hardware modifications: while Duckiebots are equipped with controllable LEDs, city infrastructure does not possess lighting capabilities out of the box. The first step is integrating light sources in the design of Duckietown’s city infrastructure.

Variable lighting conditions: Duckiebots, which in this project rely uniquely on vision in their autonomy pipeline, must adapt to changing lighting conditions such as full darkness, sunrise, sunset, and artificial lighting, which impacts camera vision and lane detection accuracy.

Decentralized control: managing street lighting in a decentralized way across Duckietown ensures that each area adapts to its local lighting needs, compensating, for example, for the presence of passing Duckiebots with their own lights on. Joint control algorithms that include both city infrastructure and vehicle lighting intensity add complexity to the system's design and coordination.

Scalability: the street lighting system must be scalable across the entire city, requiring a design that can be expanded without significant complications.

Safe and reliable operation: the system needs to be safe, adapting to issues such as occasional watchtower lighting source failure, while ensuring consistent lane-following performance.

Smart Lighting: Realistic Day and Night in Duckietown: Results

Smart Lighting: Realistic Day and Night in Duckietown: Authors

David Müller is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Research Engineer at Disney Research, Switzerland.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Duckiebot Intersection Navigation with DBSCAN

Project Resources

Why intersection navigation using DBSCAN?

Navigating intersections is obviously important when driving in Duckietown. It is less obvious that the mechanics of intersection navigation for autonomous vehicles are very different from those used for standard lane following. There is typically a finite state machine that transitions the agent's behavior between one set of algorithms, appropriate for driving down the road, and a different set that actually solves the "intersection" problem.

The intersection problem in Duckietown has several steps: 

  1. Identifying the beginning of the intersection (marked by a horizontal red line on the road surface)
  2. Stopping at the red line, before engaging the intersection
  3. Identifying what kind of intersection it is (3-way or 4-way, according to the Duckietown appearance specifications at the time of writing)
  4. Identifying the relative position of the Duckiebot at the intersection, hence the available routes forward
  5. Choosing a route
  6. Identifying when it is appropriate to engage the intersection to avoid potentially colliding with other Duckiebots (e.g., is there a centralized coordinator – a traffic light – or not?)
  7. Engaging and navigating the intersection toward the chosen feasible route
  8. Switching the state back to lane following. 

Easier said than done, right?

For each of the points above, different approaches could be used. This project focuses on improving the baseline solutions for points 2 and, most importantly, 7 of the above.

The real challenge is the actual driving across the intersection (in a safe way, i.e., by "keeping your lane"), because the features that provide robust feedback control in the lane-following pipeline are not present inside intersections. The baseline solution for this problem in Duckietown is open-loop control, relying on models of the Duckiebot and of Duckietown to hand-tune a few parameters and get the curves just about right.

As all students of autonomy know, open-loop control is ideally perfect (when all models are known exactly), but it is practically pretty useless on its own, as “all models are wrong” [learn why, e.g., in the Modeling of a Differential Drive robot class]. 

In this project, the authors seek to close the loop around intersection navigation, and choose an algorithm called DBSCAN to do it.

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering algorithm that groups data points based on density, identifying clusters of varying shapes and filtering out noise. It is used to find the red stop lines at intersections without needing predefined geometric priors (colors, shapes, or fixed positions). This makes it possible to track meaningful visual features in intersections efficiently, localize with respect to them, and hence attempt to navigate along optimal precomputed trajectories depending on the chosen direction.
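As a minimal sketch of this idea (not the authors' tuned implementation), red pixels can be isolated with an HSV threshold and then grouped with scikit-learn's DBSCAN; the color thresholds and clustering parameters below are illustrative assumptions.

```python
# Cluster red stop-line pixels with DBSCAN (illustrative parameters).
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def find_stop_line_clusters(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges (assumed thresholds).
    mask_low = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    mask_high = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    mask = cv2.bitwise_or(mask_low, mask_high)
    points = np.column_stack(np.nonzero(mask))       # (row, col) of red pixels
    if len(points) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=20).fit_predict(points)
    # Return one centroid per cluster, ignoring noise points (label == -1).
    return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]
```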

Intersection navigation using DBSCAN: the challenges

Some of the challenges in this intersection navigation project are:

Initial position uncertainty: the Duckiebot's starting alignment at the stop line may vary, requiring the system to handle inconsistent initial conditions.

Real-time feedback: the current system lacks real-time feedback, relying on pre-configured instructions that cannot adjust for unexpected events, such as slippage of the wheels, inconsistencies between different Duckiebots, and misalignment of road tiles (non-compliant assembly).

Processing speed: previous closed-loop solution attempts used AprilTags and Kalman filters, with implementations that ended up being too slow, suffering from low update rates and delays.

Transition to lane following: ensuring a smooth handover from intersection navigation to lane following requires precise control to avoid collisions and lane invasion.

Project Highlights

Here is a visual tour of the output of the authors’ work. Check out the GitHub repository for more details!

Intersection Navigation using DBSCAN: Results

Intersection Navigation using DBSCAN: Authors

Christian Leopoldseder is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Software Engineer at Google, Switzerland.

Matthias Wieland is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Consultant at abaQon, Switzerland.

Sebastian Nicolas Giles is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as an Autonomous Driving Systems Engineer at embotech, Switzerland.

Merlin Hosner is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Process Development Engineer at Climeworks, Switzerland. Merlin was a mentor on this project.

Amaury Camus is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Lead Robotics Engineer at Hydromea, Switzerland. Amaury was a mentor on this project.


Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Obstacle Avoidance for Dynamic Navigation Using Obstavoid

Project Resources

Why obstacle avoidance?

The importance of obstacle avoidance in self-driving is self-evident, whether the obstacle is a rubber duckie-pedestrian or another Duckiebot on the road.

In this project, the authors deploy the Obstavoid Algorithm, aiming to achieve:

  • Safety: preventing collisions with obstacles and other Duckiebots, ensuring safe navigation in a dynamic environment.

  • Efficiency: maintaining smooth movement by optimizing the trajectory, avoiding unnecessary stops or delays.

  • Real-world readiness: preparing Duckietown for real-world scenarios where unexpected obstacles can appear, improving readiness.

  • Traffic management: enabling better handling of complex traffic situations, such as maneuvering around blocked paths or navigating through crowded areas.

  • Autonomous operation: enhancing the vehicle's ability to operate autonomously, reducing the need for human intervention and improving overall reliability.

Obstacle Avoidance: the challenges

Implementing obstacle avoidance in Duckietown introduces the following challenges:

  • Dynamic obstacle prediction: accurately predicting the movement of dynamic obstacles, such as other Duckiebots, to ensure effective avoidance strategies and timely responses.
  • Computational complexity: managing the computational load of the trajectory solver, in “real-time” scenarios with varying obstacle configurations, while ensuring efficient performance on limited computation.
  • Cost function design: creating and fine-tuning a cost function that balances lane adherence, forward motion, and obstacle avoidance, while accommodating both static and dynamic elements in a complex environment.
  • Integration and testing: ensuring integration of the Obstavoid Algorithm with the Duckietown simulation framework and testing its performance in various scenarios to address potential failures and refine its robustness.

The Obstavoid Algorithm addresses these challenges by employing a time-dependent cost grid and Dijkstra’s algorithm for optimal trajectory planning, allowing for “real-time” obstacle avoidance.

Read more about how Dijkstra's algorithm is used in the student project titled "Goto-1: Planning with Dijkstra".

The Obstavoid Algorithm dynamically calculates and adjusts trajectories based on predicted obstacle movements, ensuring smooth navigation and integration with the simulation framework.
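To illustrate the idea of planning over a time-dependent cost grid with Dijkstra's algorithm, here is a minimal sketch. The grid dimensions, cost terms, and the obstacle prediction are illustrative assumptions, not the authors' actual cost function.

```python
# Plan a trajectory over a time-expanded grid with Dijkstra's algorithm.
import heapq

T, H, W = 10, 15, 9            # planning horizon (steps) and grid size (rows x cols)

def predicted_obstacle(t):     # assumed oncoming-obstacle prediction in the ego lane
    return (8 - t, 4)          # (row, col) at time step t

def cell_cost(t, r, c):
    """Cost of occupying cell (r, c) at time t: stay near the lane center (col 4),
    keep moving forward, and stay away from the predicted obstacle."""
    lane_cost = abs(c - 4)
    obs_r, obs_c = predicted_obstacle(t)
    obstacle_cost = 50.0 if (abs(r - obs_r) + abs(c - obs_c)) <= 1 else 0.0
    return 1.0 + lane_cost + obstacle_cost

def plan(start=(0, 4)):
    """Dijkstra over the time-expanded grid: states are (t, row, col)."""
    start_state = (0, *start)
    dist, parent = {start_state: 0.0}, {}
    pq = [(0.0, start_state)]
    while pq:
        d, (t, r, c) = heapq.heappop(pq)
        if d > dist[(t, r, c)]:
            continue
        if t == T - 1:                              # reached the end of the horizon
            path = [(t, r, c)]
            while path[-1] in parent:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for dr, dc in [(1, -1), (1, 0), (1, 1)]:    # always advance one row per step
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nxt = (t + 1, nr, nc)
                nd = d + cell_cost(t + 1, nr, nc)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    parent[nxt] = (t, r, c)
                    heapq.heappush(pq, (nd, nxt))
    return []

print(plan())   # sequence of (t, row, col) cells that swerves around the predicted obstacle
```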


Project Highlights

Here is the output of the authors' work. Check out the GitHub repository for more details!

 

Obstacle Avoidance: Results

Obstacle Avoidance: Authors

Alessandro Morra is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the CEO & Co-Founder at Ascento, Switzerland.

 
 

Dominik Mannhart is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the Co-Founder at Ascento, Switzerland.

 

Lionel Gulich is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Robotics Software Engineer at NVIDIA, Switzerland.

 
 

Victor Klemm is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently is a PhD student at Robotics Systems Lab, ETH Zurich, Switzerland.

 
 

Dženan Lapandić is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is a PhD candidate at KTH Royal Institute of Technology, Sweden.

 

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

ProTip: Duckiebot Remote Connection

Have you ever wanted to work from home, but your robot is in the lab? Networks are notoriously the trickiest aspect of robotics, and establishing a Duckiebot remote connection can be a real challenge.

The good news is that, as long as your Duckiebot has been left powered on, it is possible to establish a Duckiebot remote connection and operate the robot as if you were on the same network.

In this guide, we will show how to access your Duckiebot from anywhere in the world using ZeroTier.

ProTips

Knowing the science does not necessarily mean being practical with the tips and tricks of the roboticist's job. "ProTips" are professional tips discussing (apparently) "small details" of the everyday life of a roboticist.

We collect these tips to create a guideline for "best practices", whether for saving time, reducing mistakes, or getting better performance from our robots. The objective is to share professional knowledge in an accessible way, to make the life of every roboticist easier!

If you would like to contribute a ProTip, reach out.

About Duckietown

Duckietown is a platform that streamlines teaching, learning, and doing research on robot autonomy by offering hardware, software, curricula, technical documentation, and an international community for learners.

Check out the links below to learn more about Duckietown and start your learning or teaching adventure.

Monocular Visual Odometry for Duckiebot Navigation

Project Resources

Why Monocular Visual Odometry?

Monocular Visual Odometry (VO) falls under the “perception” block of the traditional robot autonomy architecture. 

Perception in robot autonomy involves transforming sensor data into actionable information to accomplish a given task in the environment.

Perception is crucial because it allows robots to create a representation of themselves in the environment they are operating within, which in turn enables them to navigate and avoid static or dynamic obstacles, forming the foundation for effective autonomy.

The function of monocular visual odometry is to estimate the robot’s pose over time by analyzing the sequence of images captured by a single camera. 

VO in this project is implemented through the following steps:

  1. Image acquisition: the node receives images from the camera, which serve as the primary source of data for motion estimation.

  2. Feature extraction: key features (points of interest) are extracted from the images using methods like ORB, SURF, or SIFT, which highlight salient details in the scene.

  3. Feature matching: the extracted features from consecutive images are matched, identifying how certain points have moved from one image to the next.

  4. Outlier filtering: erroneous or mismatched features are filtered out, improving the accuracy of the feature matches. In this project, histogram fitting is used to discard outliers.

  5. Rotation estimation: the filtered feature matches are used to estimate the rotation of the Duckiebot, determining how the orientation has changed.

  6. Translation estimation: simultaneously, the node estimates the translation, i.e., how much the Duckiebot has moved in space.

  7. Camera information and kinematics inputs: additional information from the camera (e.g., intrinsic parameters) and kinematic data (e.g., velocity) help refine the translation and rotation estimations.

  8. Path and odometry outputs: the final estimated motion is used to update the Duckiebot’s odometry (evolution of pose estimate over time) and the path it follows within the environment.

Monocular visual odometry is challenging, but it provides a low-cost, camera-based solution for real-time motion estimation in dynamic environments.
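Here is a minimal sketch of one iteration of such a pipeline (feature extraction, matching, outlier rejection, rotation and translation estimation) using OpenCV's ORB and essential-matrix routines. The camera matrix below is a placeholder (on a Duckiebot it would come from intrinsic calibration), and RANSAC is shown in place of the project's histogram-based outlier filtering.

```python
# One step of a monocular VO pipeline with ORB features (illustrative parameters).
import cv2
import numpy as np

K = np.array([[305.0, 0.0, 160.0],
              [0.0, 305.0, 120.0],
              [0.0, 0.0, 1.0]])        # assumed pinhole intrinsics (fx, fy, cx, cy)

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_motion(prev_gray, curr_gray):
    """Estimate rotation R and unit-scale translation t between two frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC inside findEssentialMat rejects outlier matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    if E is None:
        return None
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    # Monocular VO recovers translation only up to scale; wheel kinematics or
    # other cues are needed to fix the metric scale.
    return R, t
```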

Monocular Visual Odometry: the challenges

Implementing monocular visual odometry involves processing images at runtime, which presents challenges that affect performance:
  • Extracting and matching visual features from consecutive images is a fundamental task in monocular VO. This process can be hindered by factors such as low-texture areas, motion blur, variations in lighting conditions, and occlusions.
  • Monocular VO systems face inherent scale ambiguity, since a single camera cannot directly measure depth. The system must infer scale from visual features, which can be error-prone and less accurate in the absence of depth cues.
  • Running VO algorithms requires significant computational resources, particularly when processing high-resolution images at a high frequency. The Raspberry Pi used in the Duckiebot has limited processing power and memory, which constrains the performance of the visual odometry pipeline (newer Duckiebots, such as the DB21J, use a Jetson Nano for computation).
  • Monocular VO systems, like all odometry systems relying on dead-reckoning models, are susceptible to long-term drift and divergence due to cumulative errors in feature tracking and pose estimation.
This project addresses these challenges by implementing robust feature extraction and matching algorithms (ORB by default) and optimizing parameters to handle dynamic environments and computational constraints. Moreover, it integrates visual odometry with the existing Duckiebot autonomy pipeline, leveraging the finite state machine for accurate pose estimation and navigation.

Project Highlights

Here is the output of the authors’ work. Check out the GitHub repository for more details!

 

Monocular Visual Odometry: Results

Monocular Visual Odometry: Authors

Gianmarco Bernasconi is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works as a Senior Research Engineer at Motional, Singapore.

 

Tomasz Firynowicz is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as a Software Engineer at Dentsply Sirona, Switzerland. Tomasz was a mentor on this project.

 

Guillem Torrente Martí is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as a Robotics Engineer at SonyAI, Japan. Guillem was a mentor on this project.

Yang Liu is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is a Doctoral Student at EPFL, Switzerland. Yang was a mentor on this project.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Goto-1: Planning with Dijkstra

Project Resources

Why planning with Dijkstra?

Planning is one of the three main components, or “blocks”, in a traditional robotics architecture for autonomy: “to see, to plan, to act” (perception, planning, and control). 

The function of the planning "block" is to provide the autonomous decision-making part of the robot's mind, i.e., the controller, with a reference path to follow.

In the context of Duckietown, planning is applied at different hierarchical levels, from lane following to city navigation.

This project aimed to build upon the vision-based lane-following pipeline, introducing a deterministic planning algorithm to allow a Duckiebot to go from any location (or tile) on a compliant Duckietown map to a specific target tile (hence the name: Goto-1).

Dijkstra's algorithm is a graph-based method to determine, in a computationally efficient manner, the shortest path between two nodes in a graph.
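For illustration, here is a minimal sketch of Dijkstra's algorithm on a small road graph, with tiles as nodes and drivable connections as weighted edges. The example map is made up; in Goto-1 the graph would be built from the Duckietown map and its traffic-sign IDs.

```python
# Dijkstra's shortest path on a tiny, illustrative tile graph.
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the shortest path from start to goal."""
    pq = [(0, start, [start])]          # (cost so far, node, path taken)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Made-up city graph: node -> list of (neighbor, cost).
city = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 1), ("D", 5)],
    "C": [("A", 4), ("B", 1), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
print(dijkstra(city, "A", "D"))   # -> (3, ['A', 'B', 'C', 'D'])
```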


Autonomous Navigation: the challenges

The new planning capabilities of Duckiebots enable autonomous navigation building on pre-existing functionalities, such as "lane following", "intersection detection and identification", and "intersection navigation" (we are operating in a scenario with only one agent on the map, so coordination and obstacle avoidance are not central to this project).

Lane following in Duckietown is mainly vision-based, and as such suffers from the typical challenges of vision in robotics: motion blur, occlusions, sensitivity to environmental lighting conditions and “slow” sampling.

Intersection detection in Duckietown relies on the identification of the red lines on the road layer. Identification of the type of intersection, and relative location of the Duckiebot with respect to it, is instead achieved through the detection and interpretation of fiducial markers, appropriately specified and located on the map. In the case of Duckietown, April Tags (ATs) are used. Each AT, in addition to providing the necessary information regarding the type of intersection (3- or 4-way) and the position of the Duckiebot with respect to the intersection, is mapped to a unique ID in the Duckietown traffic sign database. 

These traffic sign IDs can be used to unambiguously define the graph of the city roads. Based on this, and leveraging the lane-following pipeline state estimator, it is possible to estimate the location (with tile accuracy) of the Duckiebot with respect to a global map reference frame, hence providing the agent with sufficient information to know when to stop.

After stopping at an intersection, detecting and identifying it, Duckiebots are ready to choose which direction to go next. This is where the Dijkstra planning algorithm comes into play. After the planner communicates the desired turn to take, the Duckiebot drives through the intersection, before switching back to lane-following behavior after completing the crossing. In Duckietown, we refer to the combined operation of these states as "indefinite navigation".

Switching between different "states" of the robot's mind (lane following, intersection detection and identification, intersection navigation, and then back to lane following) requires the careful design and implementation of a "finite state machine" which, triggered by specific events, allows the Duckiebot to transition between these states.
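Here is a minimal, illustrative sketch of such a state machine: a table of allowed transitions keyed by (state, event). The state and event names are invented for the example; the actual Duckietown FSM has more states and is configured within the ROS stack.

```python
# A tiny finite state machine for the navigation modes described above.
TRANSITIONS = {
    ("LANE_FOLLOWING", "stop_line_detected"): "INTERSECTION_WAIT",
    ("INTERSECTION_WAIT", "turn_planned"): "INTERSECTION_NAVIGATION",
    ("INTERSECTION_NAVIGATION", "intersection_done"): "LANE_FOLLOWING",
}

class NavigationFSM:
    def __init__(self, initial="LANE_FOLLOWING"):
        self.state = initial

    def on_event(self, event: str) -> str:
        """Transition to the next state if (state, event) is allowed; otherwise stay."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = NavigationFSM()
print(fsm.on_event("stop_line_detected"))   # -> INTERSECTION_WAIT
print(fsm.on_event("turn_planned"))         # -> INTERSECTION_NAVIGATION
print(fsm.on_event("intersection_done"))    # -> LANE_FOLLOWING
```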

Integrating a new package within the existing indefinite navigation framework can cause inconsistencies and undefined behaviors, including unreliable AT detection, lane following difficulties, and inconsistent intersection navigation.

Performance evaluation of the Goto-1 project involved testing three implementations with ten trials each, revealing variability in success rates.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Autonomous Navigation: Results

Autonomous Navigation: Authors

Johannes Boghaert is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently serves as the CEO of Superlab Suisse, Switzerland.

Merlin Hosner is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently works as Process Development Engineer at Climeworks, Switzerland. Merlin was a mentor on this project.

Gioele Zardini is a former Duckietown student and teaching assistant of the Autonomous Mobility on Demand class at ETH Zurich, and currently is an Assistant Professor at MIT. Gioele was a mentor on this project.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

YOLO-based Robust Object Detection in Duckietown

Project Resources

Why Robust Object Detection?

Object detection is the ability of a robot to identify a feature in its surroundings that might influence its actions. For example, if an object is laid on the road it might represent an obstacle, i.e., a region of space that the Duckiebot cannot occupy. Robust object detection becomes particularly important when operating in dynamic environmental conditions.

Obstacles can be of various shapes or colors, and they can be detected through different sensing modalities, for example, through vision or lidar scanning.

In this project, students use a purely vision-based approach for obstacle detection. Using vision is very tricky because small nuisances, such as in-class variations (think of the many different types of duckies) or environmental lighting conditions, will dramatically affect the outcome.

Robust object detection refers to the ability of a system to detect objects in a broad spectrum of operating conditions, and to do so reliably. 

Detecting objects in Duckietown is therefore important to avoid static and moving obstacles, detect traffic signs, and otherwise guarantee safe driving.

Model Performance Under Normal and Low Lighting Conditions

Robust Object Detection: the challenges

Some of the key challenges associated with vision-based object detection are the following:

Robustness across variable lighting conditions: Ensuring accurate object detection under diverse lighting is complex due to changes in object appearance (check out why in our computer vision classes). The model must handle different lighting scenarios effectively.

Balancing robustness and performance: There’s a trade-off between robustness to lighting variations and achieving high accuracy in standard operating conditions. Prioritizing one may affect the other.

Integration and real-time performance: Integrating the trained neural network (NN) model into the Duckiebot's system is required for real-time operation, avoiding the lags associated with transporting images across networks. The model's complexity must therefore align with the computational resources available. This project was executed on DB19 model Duckiebots, equipped with a Raspberry Pi 3B+ and a Coral board.

Data quality and generalization: Ensuring the model generalizes well despite potential biases in the training dataset and transfer learning challenges is crucial. Proper dataset curation and validation are essential.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Robust Object Detection: Results

Robust Object Detection: Authors

Maximilian Stölzle is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at MIT as a Visiting Researcher.

Stefan Lionar is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently an Industrial PhD student at Sea AI Lab (SAIL), Singapore.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Implementing vision-based dynamic obstacle avoidance

Project Resources

Why dynamic obstacle avoidance?

Dynamic obstacle avoidance is the process of detecting a region of space that is not navigable (an obstacle), planning a path around it, and executing that plan.

When the obstacle moves, the plan needs to account for the future positions of the object as well, making the process significantly more complicated than passing a static obstacle. 

With this aim, the authors of this project designed and implemented a robust passing algorithm for Duckiebots in Duckietown.

The approach adopted was to develop a new LED-based detection system, modify the typical Duckietown lane-following pipeline to plan around obstacles, and deploy a new controller to execute the maneuvers.

Dynamic obstacle avoidance: the challenges

Some of the key challenges associated with this project are the following:

Detection Accuracy: The Duckiebot and Duckies detection systems occasionally produce false positives. Light sources from other Duckiebots or shiny objects can interfere with the LED detection, while yellow line segments can be mistaken for Duckies. Improving the reliability of detection under varying lighting conditions is essential.

Lane Following Stability: The Duckiebots sometimes become unstable while overtaking, especially when driving in the left lane. The lane-following system struggles with large lane pose angles or rapid changes in lane position, which can cause the Duckiebot to veer off the road. Enhancing the lane-following algorithm to maintain stability during lane changes is critical.

Velocity Estimation: Estimating the speed of moving Duckiebots accurately is challenging. The current position data obtained from LED detection fluctuates too much to provide a reliable velocity measurement. Developing a more robust method for estimating the velocity of other Duckiebots is needed to ensure safe and efficient overtaking.

Variable Speed Control: Implementing variable speed control during overtaking is problematic due to instability in the lane-following pipeline when speeds are dynamically adjusted. Adjusting speed based on the detected obstacle’s speed without losing lane stability is difficult, necessitating improvements in the lane control model to handle speed changes effectively.

Project Highlights

Here is the output of their work. Check out the GitHub repository for more details!

Dynamic Obstacle Avoidance: Results

Dynamic Obstacle Avoidance: Authors

Nikolaj Witting is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, and currently works at Trackman as an Algorithm Developer.

Fidel Esquivel Estay is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the Co-Founder at UpCircle.

Johannes Lienhart is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, currently serving as the CTO at Tethys Robotics.

Paula Wulkop is a former Duckietown student of class Autonomous Mobility on Demand at ETH Zurich, where she is currently pursuing her Ph.D.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.