Visual Control for Autonomous Navigation in Duckietown


General Information


This research presents a visual control framework for autonomous navigation in Duckietown using only onboard camera feedback. The system models the Duckiebot as a unicycle with constant driving velocity and uses steering velocity as the control input. Virtual guidelines are extracted from the lane boundaries to compute two visual features: the middle point and the vanishing point on the image plane.
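For reference, the unicycle kinematics mentioned above can be written as x_dot = v cos(theta), y_dot = v sin(theta), theta_dot = omega. Below is a minimal Euler-integration sketch of this model; the velocity and time-step values are illustrative assumptions, not the authors' parameters.

```python
import math

def unicycle_step(x, y, theta, omega, v=0.2, dt=0.05):
    """Advance a unicycle model by one Euler step.

    x, y  : position in the plane [m]
    theta : heading [rad]
    omega : steering (angular) velocity, the control input [rad/s]
    v     : constant driving velocity [m/s] (assumed value)
    dt    : integration step [s] (assumed value)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```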

The controller drives these features to the image center using a mathematically derived control law. The visual features are obtained from the camera feed using a multi-stage image processing pipeline implemented in OpenCV. The pipeline includes frame denoising, grayscale conversion, edge detection using the Canny edge detection algorithm, region of interest masking, and line detection via the Probabilistic Hough Line Transform. This setup provides robust detection of the white and yellow lane markings under varying conditions. 
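To make the pipeline concrete, here is a hedged OpenCV sketch of the stages listed above; the thresholds and the region-of-interest polygon are illustrative assumptions, not the authors' tuned values.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Denoise -> grayscale -> Canny -> ROI mask -> Probabilistic Hough."""
    denoised = cv2.GaussianBlur(frame, (5, 5), 0)        # frame denoising
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)    # grayscale conversion
    edges = cv2.Canny(gray, 50, 150)                     # Canny edge detection

    # Keep only the lower half of the image, where the lane markings lie
    # (the polygon below is an illustrative assumption).
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (0, h // 2), (w, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough Line Transform returns segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180,
                            threshold=30, minLineLength=20, maxLineGap=10)
    return lines
```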

A scenario-driven transition system detects red lines marking intersections and activates artificial guidelines to execute controlled turns. The visual control implementation runs as a single ROS node following a publisher-subscriber architecture, deployed both in the Duckietown Simulator (gym) and in Duckietown.
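For readers unfamiliar with the publisher-subscriber pattern, the sketch below shows a single rospy node that subscribes to camera frames and publishes steering commands. The topic names and message types are illustrative assumptions rather than the authors' actual interfaces (Duckietown defines its own message types).

```python
import rospy
from sensor_msgs.msg import CompressedImage
from geometry_msgs.msg import Twist

class VisualControlNode:
    def __init__(self):
        rospy.init_node("visual_control_node")
        # Publish steering commands; subscribe to the onboard camera feed.
        self.cmd_pub = rospy.Publisher("~car_cmd", Twist, queue_size=1)
        self.img_sub = rospy.Subscriber("~image/compressed", CompressedImage,
                                        self.on_image, queue_size=1)

    def on_image(self, msg):
        # Placeholder: extract the visual features and compute the steering velocity.
        omega = 0.0
        cmd = Twist()
        cmd.linear.x = 0.2   # constant driving velocity (assumed value)
        cmd.angular.z = omega
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    VisualControlNode()
    rospy.spin()
```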

Highlights - Visual Control for Autonomous Navigation in Duckietown

Here is a visual tour of the implementation of visual control for autonomous navigation by the authors. For all the details, check out the full paper.

Abstract

Here is the abstract of the work, directly in the words of the authors:

This paper presents a vision-based control framework for the autonomous navigation of wheeled mobile robots in city-like environments, including both straight roads and turns. The approach leverages Computer Vision techniques and OpenCV to extract lane line features and utilizes a previously established control law to compute the necessary steering commands. 

The proposed method enables the robot to accurately follow the lanes and seamlessly handle complex maneuvers such as consecutive turns. The framework has been rigorously validated through extensive simulations and real-world experiments using physical robots equipped with the ROS framework. Experimental evaluations were conducted at the DIAG Robotics Lab at Sapienza University of Rome, Italy, demonstrating the practicality of the proposed solution in realistic settings. 

This work bridges the gap between theoretical control strategies and their practical application, offering insights into vision-based navigation systems for autonomous robotics. A video demonstration of the experiments is available at https://youtu.be/tDvpwSj8X28.

Conclusion - Visual Control for Autonomous Navigation in Duckietown

Here is the conclusion according to the authors of this paper:

This paper proposed a vision-based control framework for lane-following tasks in wheeled mobile robots, validated through both simulations and real-world experiments. The approach effectively maintains the robot position at the center of lanes and enables safe left and right turns by relying solely on visual feedback from onboard camera, without requiring external localization systems or pre-mapped environments. 

The system’s modular design and simplicity allow for seamless integration with other robotic systems, making it versatile for diverse urban navigation scenarios. Future research will focus on enhancing the framework to handle complex scenarios, such as autonomous lane corrections, and incorporating obstacle detection and avoidance mechanisms for improved performance in dynamic, real-world environments. 

These advancements will expand the applicability of the proposed method, confirming its potential as a robust solution for autonomous navigation.

Did this work spark your curiosity?

Project Authors

Shima Akbari is a PhD student in the Italian National Program in Autonomous Systems at the University of Rome Tor Vergata, Italy.

Nima Akbari is a PhD student at the University of Basel, Switzerland, working on privacy technologies for the Internet of Things.

Giuseppe Oriolo is a Full Professor of Automatic Control and Robotics at Sapienza University of Rome.

Sergio Galeani is a full professor at the University of Rome Tor Vergata, Italy.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.


New Software Release – Ente Changelog


The Duckietown platform has been evolving since its creation back at MIT in 2016. The main code base has undergone four major revisions, with the current release named daffy (d: fourth letter of the alphabet). 

We are now happy to announce the new major Duckietown software release: ente

Why ente?

First things first: why is it called ente?

Among the various meanings of this word in different languages, Ente is the German word for “duck”. We chose this name as a tribute from Duckietown to ETH Zürich and the German-speaking part of Switzerland, for their influence on Duckietown’s evolution over the last years.

But why did we need ente?

We built ente to streamline the code base, especially the autonomy code running on Duckietown robots, to make the development process quicker and more efficient, and to prime the platform for easier updates, maintenance, and future improvements.

The Duckietown codebase had evolved, historically, from a classroom experience, resulting in an autonomy stack with room for improvement. The ente initiative grew to include infrastructural upgrades, e.g., the introduction of the Duckietown Postal Service (DTPS), to better support reproducible robotics learning experiences in light of new developments in the fields of robotics and AI, e.g., the release of ROS2.

What is new in ente?

Here is a non-exhaustive list of changes introduced by ente into Duckietown.

The Duckiematrix virtual environment

With ente comes the Duckiematrix, a photorealistic Unity-based virtual environment supporting virtual Duckietown robots. 

The Duckiematrix allows simulating the physics and aesthetics of a physical Duckietown environment, as well as the sensing and acting capabilities of virtual Duckietown robots within that environment.

The Duckiematrix is programmable, lightweight, ROS compatible, and supports “multiplayer” features, where multiple learners can join the same city with their Duckiebots and learn & practice together.

Virtual Duckiebots: digital twins for Duckietown robots

Virtual Duckietown robots allow a Duckietown robot’s full software stack to run on a local machine in its own Docker environment, enabling the full simulation of any aspect of that robot within the Duckiematrix, simplifying testing and improving portability to real-world Duckiebots.

Code refactoring for faster development

The code in the autonomy stack has been refactored so that the key algorithms are moved into libraries. This facilitates the creation of notebooks for experimentation and learning, as well as enabling the code to be more portable and disentangled from the ROS infrastructure, setting the stage for using other middleware (e.g., ROS2).

The Duckietown Manual: all information in a single place

All documentation and information have been consolidated in the Duckietown Manual, a single, authoritative, and searchable source. 

The new Duckietown Manual is a great place to get started, as it contains step-by-step instructions on how to set up your computer and assemble, calibrate, and operate a Duckiebot, along with troubleshooting tips. It also includes information for advanced users who wish to develop using Duckietown, pointers to code documentation, as well as an instructor manual with pedagogical insights for teachers.

Duckietown Postal Service (DTPS) and new development workflow

The Duckietown Postal Service (DTPS) is an HTTP/2-compatible message-passing system that bridges the Duckietown robots and their environment, whether physical or digital. DTPS enables upgrading from ROS to ROS2, or the use of any other similar middleware, and improves Duckietown’s compatibility across operating systems.

In addition, a new development workflow has been implemented. The API for working with learning experiences (dts code) has been significantly improved over the previous version. 

Duckiebot UI improvements

 

A few actuator and sensor interfaces were updated for improved usability and robot management.

Where are we going from here?

Coming soon: Self-Driving Cars with Duckietown 2025

A new edition of Self-Driving Cars with Duckietown, the world’s first robot autonomy massive open online course (MOOC) with hardware, will soon be announced. This new edition will be ente-based, support the Duckiematrix, and be instructor-paced.

ROS 2 autonomy baseline and Python SDK interface

With DTPS enabling support for any middleware, translating the current ROS lane following pipeline into a ROS2 one is now a fun project. Coming out soon!

A Python SDK to interface Duckietown robots and the Duckiematrix is in the works as well.

Duckiematrix updates in development

Duckiematrix map editor

An app for creating and editing maps for the Duckiematrix.

Duckiematrix Gym

The integration of the Duckiematrix with Gymnasium.

Duckiedrone support for the Duckiematrix

The addition of Virtual Duckiedrones and the integration of Duckiedrones with the Duckiematrix.

DD24 Duckiedrone holding position in the Duckiematrix

How to get started with Duckietown?

While the legacy daffy version of Duckietown will stay up and be supported for the time being, it will not receive further updates. To upgrade your environment and your Duckiebots to the new ente version and start experiencing all the new features for free, see our guide here.

About Duckietown

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Sim2Real Lane Segmentation via Domain Adaptation


General Information


This embodied AI work investigates Sim2Real transfer, the process of applying ML agents trained in simulation to real-world environments, for semantic lane segmentation in mobile robotics using domain adaptation techniques.

The study addresses the distributional shift between synthetic (simulated) and real-world data using unsupervised and semi-supervised learning approaches that minimize the need for manual annotation by learning from unlabeled data or limited labeled samples.

A convolutional neural network (CNN) with an encoder-decoder architecture is trained on labeled synthetic data generated in the Duckietown Gym and adapted to unlabeled real-world images captured in the physical Duckietown setup.

The method integrates:

  • Feature-level and pixel-level adaptation, aligning internal representations and input appearance between domains to ensure consistent segmentation.

  • Adversarial training, where a discriminator encourages the CNN to learn domain-invariant features.

  • Cycle-consistent generative adversarial networks (CycleGANs), which perform image-to-image translation to make synthetic images visually similar to real ones while preserving semantic structure.

  • Evaluation using mean Intersection over Union (mIoU) and pixel accuracy, both standard metrics for assessing segmentation quality (see the sketch below).
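As a reference for the evaluation metrics above, here is a minimal sketch of how mIoU and pixel accuracy can be computed from integer label maps; it is an illustration, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Mean IoU and pixel accuracy for integer label maps of equal shape."""
    pred = pred.ravel()
    target = target.ravel()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    miou = float(np.mean(ious)) if ious else 0.0
    pixel_acc = float((pred == target).mean())
    return miou, pixel_acc
```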

The results demonstrate that domain adaptation enables effective Sim2Real transfer for lane detection in Duckietown with minimal supervision, advancing the deployment of robust, label-efficient perception systems in embedded robotics and autonomous navigation.

Highlights - Sim2Real lane segmentation via domain adaptation

Here is a visual tour of the implementation of lane segmentation via domain adaptation by the authors. For all the details, check out the full paper.

Abstract

Here is the abstract of the work, directly in the words of the authors:

As the cost of labelling and collecting real world data remains an issue for companies, simulator training and transfer learning slowly evolved to be the foundation of many state-of-the-art projects. In this paper these methods are applied in the Duckietown setup where self-driving agents can be developed and tested.

Our aim was to train a selected artificial neural network for right lane segmentation on simulator generated stream of images as a comparison baseline, then use domain adaptation to be more precise and stable in the real environment. We have tested and compared four knowledge transfer methods that included domain transformation using CycleGAN and semi-supervised domain adaptation via Minimax Entropy.

As the latter was previously untested in semantic segmentation according to our best knowledge, we have contributed to showing it is indeed possible and produces promising results. Finally we have shown that it could also create a model that fulfills our performance requirements of stability and accuracy. We show that the selected methods are equally eligible for the simulation to real transfer learning problem, and that the simplest method delivers the best performance.

Conclusion - Sim2Real lane segmentation via domain adaptation

Here is the conclusion according to the authors of this paper:

Our goal was to create a stable and accurate right lane segmentation network by means of simulator data and domain adaptation techniques. We have tested and compared four knowledge transfer methods that included domain transformation using CycleGAN and semi-supervised domain adaptation via Minimax Entropy. We have shown that in the given scenario simulator-trained models have relatively good performance on real images, though their stability is a key weakness.

Our findings demonstrate that domain transformation using CycleGAN has limited applicability in segmentation tasks due to its distorting effect on road geometry, however the similarity between training and testing domains did result in increased stability.

Unfortunately, histogram matching failed in our case to improve on the baseline solution, producing similar results to CycleGAN.

We have observed that one of the simplest domain adaptation methods, source and target combined domain training helped to produce the best performing model according to numerical evaluation.

We implemented and demonstrated how semi-supervised domain adaptation via Minimax Entropy, a complex, entropy-based adversarial method, is applicable for segmentation tasks.

In the end, all the existing results were compared and evaluated with the conclusion that source and target combined domain training produced the best results of all investigated methods tied with SSDA via Minimax Entropy. Thereby, the usability of the latter method in segmentation tasks has also been proven.

Did this work spark your curiosity?

Project Authors

Márton Tim is currently working as a deep learning engineer at Continental, Hungary.

Robert Moni is currently working as a Senior Machine Learning Engineer at Continental, Hungary.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.


Duckiebot Localization with Sensor Fusion in Duckietown


Project Resources

Localization with Sensor Fusion in Duckietown - the objectives

The advantage of having multiple sensors on a Duckiebot is that their data can be combined to increase precision and reduce uncertainty in derived results. This process is generally referred to as sensor fusion, and a typical example is localization, i.e., the problem of finding the pose of the Duckiebot over time with respect to some reference frame. And if the data is redundant? No problem, just discard it.

In this project, the objective is to implement sensor fusion-based localization and lane-following on a DB21 Duckiebot, integrating odometry (using data from wheel encoders) with visual AprilTag detection for improved positional accuracy. 

This process addresses limitations of odometry, i.e., the open-loop reconstruction of the robot’s trajectory using only wheel encoder data in a mathematical approach known as “dead reckoning”, by incorporating AprilTags as global reference landmarks, thereby enhancing spatial awareness in environments where dead reckoning alone is insufficient.
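To make the dead-reckoning step concrete, the sketch below updates a pose estimate from wheel encoder increments for a differential-drive robot; the wheel radius, baseline, and encoder resolution are assumed values for illustration, not the project's calibration.

```python
import math

# Assumed robot parameters (illustrative, not the project's calibration values)
WHEEL_RADIUS = 0.0318      # [m]
BASELINE = 0.1             # distance between the wheels [m]
TICKS_PER_REV = 135        # encoder resolution [ticks per wheel revolution]

def dead_reckoning_step(x, y, theta, d_ticks_left, d_ticks_right):
    """Update the pose estimate from the encoder tick increments of each wheel."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0          # distance travelled by the robot
    d_theta = (d_right - d_left) / BASELINE      # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

Because this integration is open loop, small errors accumulate over time; that accumulated drift is exactly what the AprilTag observations are used to correct.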

Technical concepts include AprilTag-based localization, PID control for lane following, transform tree management in ROS (tf2), and coordinate frame transformations for pose estimation.

Sensor fusion - visual project highlights

The technical approach and challenges

This approach, at the technical level, involves:

  • extending ROS-based packages to implement AprilTag detection using the dt-apriltags library,
  • configuring static transformations for landmark localization in a unified world frame, and
  • correcting odometry drift by broadcasting transforms from estimated AprilTag poses to the Duckiebot’s base frame.

A full PID controller was moreover implemented, with tunable gains for lateral and heading deviation, and derivative terms were conditionally initialized for stability.
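A minimal sketch of such a controller is shown below; the gains and the exact error definitions are illustrative assumptions, not the project's tuned values.

```python
class LaneFollowPID:
    """PID on lateral offset d and heading error phi; returns a steering velocity."""

    def __init__(self, kp=3.0, ki=0.1, kd=0.2, kp_phi=1.5):
        self.kp, self.ki, self.kd, self.kp_phi = kp, ki, kd, kp_phi
        self.integral = 0.0
        self.prev_d = None   # derivative term initialized only once a sample exists

    def step(self, d_err, phi_err, dt):
        self.integral += d_err * dt
        # Conditional initialization of the derivative term avoids a spurious
        # kick on the first control cycle.
        derivative = 0.0 if self.prev_d is None else (d_err - self.prev_d) / dt
        self.prev_d = d_err
        return (self.kp * d_err + self.ki * self.integral +
                self.kd * derivative + self.kp_phi * phi_err)
```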

Challenges included:

  • remapping ROS topics for motor command propagation,
  • resolving frame connectivity in tf trees,
  • configuring accurate static transforms for AprilTag landmarks,
  • debugging quaternion misrepresentation during pose updates, and
  • correctly applying transform compositions using lookup_transform_full to compute odometry corrections (see the sketch below).
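The sketch below illustrates the kind of lookup_transform_full call involved; the frame names are assumptions, and this is not the project's code.

```python
import rospy
import tf2_ros

rospy.init_node("apriltag_correction_example")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Compose transforms through a fixed "world" frame to estimate the odometry
# correction implied by an AprilTag observation (frame names are assumptions).
correction = tf_buffer.lookup_transform_full(
    target_frame="odom", target_time=rospy.Time(0),
    source_frame="apriltag_estimate", source_time=rospy.Time(0),
    fixed_frame="world", timeout=rospy.Duration(1.0))
```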
Looking for similar projects?

Localization with Sensor Fusion in Duckietown: Authors

Samuel Neumann is a Ph.D. student at the University of Alberta, Canada.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Interpretable Reinforcement Learning for Visual Policies


General Information


Reinforcement Learning (RL) has enabled solving complex problems, especially in relation to visual perception in robotics. An outstanding challenge is allowing humans to make sense of the decision-making process, so as to enable deployment in safety-critical applications such as autonomous driving. This work focuses on the problem of interpretable reinforcement learning in vision-based agents.

In particular, this research introduces a self-supervised framework for interpretable reinforcement learning in vision-based agents. The focus lies in enhancing policy interpretability by generating precise attention maps through Self-Supervised Attention Mechanisms (SSAM). 

The method does not rely on external labels and works using data generated by a pretrained RL agent. A self-supervised interpretable network (SSINet) is deployed to identify task-relevant visual features. The approach is evaluated across multiple environments, including Atari and Duckietown. 

Key components of the method include:

  • A two-stage training process using pretrained policies and frozen encoders
  • Attention masks optimized using behavior resemblance and sparsity constraints
  • Quantitative evaluation using FOR and BER metrics for attention quality
  • Comparative analysis with gradient and perturbation-based saliency methods
  • Application across various architectures and RL algorithms including PPO, SAC, and TD3

The proposed approach isolates relevant decision-making cues, offering insight into agent reasoning. In Duckietown, the framework demonstrates how visual interpretability can aid in diagnosing performance bottlenecks and agent failures, offering a scalable model for interpretable reinforcement learning in autonomous navigation systems.

Highlights - interpretable reinforcement learning for visual policies

Here is a visual tour of the implementation of interpretable reinforcement learning for visual policies by the authors. For all the details, check out the full paper.

Abstract

Here is the abstract of the work, directly in the words of the authors:

Deep reinforcement learning (RL) has recently led to many breakthroughs on a range of complex control tasks. However, the agent’s decision-making process is generally not transparent. The lack of interpretability hinders the applicability of RL in safety-critical scenarios. While several methods have attempted to interpret vision-based RL, most come without detailed explanation for the agent’s behavior. In this paper, we propose a self-supervised interpretable framework, which can discover interpretable features to enable easy understanding of RL agents even for non-experts. Specifically, a self-supervised interpretable network (SSINet) is employed to produce fine-grained attention masks for highlighting task-relevant information, which constitutes most evidence for the agent’s decisions. We verify and evaluate our method on several Atari 2600 games as well as Duckietown, which is a challenging self-driving car simulator environment. The results show that our method renders empirical evidences about how the agent makes decisions and why the agent performs well or badly, especially when transferred to novel scenes. Overall, our method provides valuable insight into the internal decision-making process of vision-based RL. In addition, our method does not use any external labelled data, and thus demonstrates the possibility to learn high-quality mask through a self-supervised manner, which may shed light on new paradigms for label-free vision learning such as self-supervised segmentation and detection.

Conclusion - interpretable reinforcement learning for visual policies

Here is the conclusion according to the authors of this paper:

In this paper, we addressed the growing demand for human-interpretable vision-based RL from a fresh perspective. To that end, we proposed a general self-supervised interpretable framework, which can discover interpretable features for easily understanding the agent’s decision-making process. Concretely, a self-supervised interpretable network (SSINet) was employed to produce high-resolution and sharp attention masks for highlighting task-relevant information, which constitutes most evidence for the agent’s decisions. Then, our method was applied to render empirical evidences about how the agent makes decisions and why the agent performs well or badly, especially when transferred to novel scenes. Overall, our work takes a significant step towards interpretable vision-based RL. Moreover, our method exhibits several appealing benefits. First, our interpretable framework is applicable to any RL model taking as input visual images. Second, our method does not use any external labelled data. Finally, we emphasize that our method demonstrates the possibility to learn high-quality mask through a self-supervised manner, which provides an exciting avenue for applying RL to self automatically labelling and label-free vision learning such as self-supervised segmentation and detection.

Did this work spark your curiosity?

Project Authors

Wenjie Shi received the BS degree from the School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan, China, in 2016. He is currently working toward the Ph.D. degree in control science and engineering from the Department of Automation, Institute of Industrial Intelligence and Systems, Tsinghua University, Beijing, China.

Gao Huang (Member, IEEE) received the B.S. degree in automation from Beihang University, Beijing, China, in 2009, and the Ph.D. degree in automation from Tsinghua University, Beijing, in 2015. He is currently an Associate Professor with the Department of Automation, Tsinghua University.

Shiji Song (Senior Member, IEEE) received the Ph.D. degree in mathematics from the Department of Mathematics, Harbin Institute of Technology, Harbin, China, in 1996. He is currently a Professor at the Department of Automation, Tsinghua University, Beijing, China.

Zhuoyuan Wang (IEEE) is currently a Ph.D. student at Carnegie Mellon University, and holds a B.S. degree in control science and engineering from the Department of Automation, Tsinghua University, Beijing, China.

Tingyu Lin received the B.S. degree and the Ph.D. degree in control system from the School of Automation Science and Electrical Engineering at Beihang University in 2007 and 2014, respectively. He is now a Member of China Simulation Federation (CSF).

Cheng Wu received the M.Sc. degree in electrical engineering from Tsinghua University, Beijing, China, in 1966. He is currently a Professor with the Department of Automation, Tsinghua University.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.


Features for Efficient Autonomous Navigation in Duckietown


Project Resources

Project highlights

Visual Feedback for Autonomous Navigation in Duckietown - the objectives

This project from students at TUM (Technische Universität München) builds on the preexisting Duckietown autonomy stack to add, reintegrate, and improve much-needed autonomous navigation features: improved control (pure pursuit instead of PID), red stop line detection, AprilTag detection, intersection navigation, and obstacle detection (using YOLOv3), making Duckietowns more complex and interesting!

The resulting agent includes modules for lane following, stop line detection, and intersection handling using AprilTags, following the legacy infrastructure of Duckietown.

The autonomy pipeline relies heavily on vision as the primary means of perception: lane edges are projected from image space to the ground plane using inverse perspective mapping, obtained from a camera calibration procedure.

The Duckiebot then estimates a dynamic target point by offsetting yellow or white lane markers depending on visibility. The curvature is computed based on the geometric relation between the Duckiebot and the goal point, and the steering command is derived from this curvature.
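In the standard pure pursuit geometry, the curvature follows from the lateral offset of the goal point and the lookahead distance; the sketch below illustrates this relation as a generic formulation, not necessarily the authors' exact implementation.

```python
def pure_pursuit_omega(goal_x, goal_y, v):
    """Steering velocity toward a goal point given in the robot frame.

    goal_x, goal_y : goal point coordinates in the robot frame [m]
                     (x forward, y to the left)
    v              : current linear velocity [m/s]
    """
    lookahead_sq = goal_x ** 2 + goal_y ** 2        # squared lookahead distance L^2
    curvature = 2.0 * goal_y / lookahead_sq         # kappa = 2 * y / L^2
    return v * curvature                            # omega = v * kappa
```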

The Duckiebot velocity and angular velocity are then modulated using a second-degree polynomial function based on detected path geometry.

Visual input from an onboard monocular camera is processed through a lane filter with adaptive Gaussian variance scaling relative to frame timing.

When approaching an intersection, stop lines are detected using HSV color segmentation. AprilTag detection determines intersection decisions, with tag IDs mapped to turn directions.
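As an illustration of HSV-based stop line detection, the sketch below thresholds the two red hue ranges and checks the detected area; the thresholds and minimum area are assumptions, not the project's calibrated values.

```python
import cv2

def detect_stop_line(frame_bgr, min_area=300):
    """Return True if a sufficiently large red region is visible in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two threshold ranges.
    lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    upper = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    return int(cv2.countNonZero(mask)) > min_area
```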

Every module is implemented as an independent ROS package with dedicated launch files, coordinated via a central launch file. A YOLOv3 object detection model, trained on a custom Duckietown dataset, provides real-time obstacle recognition.

The challenges and approach

One major hurdle was integrating object detection models like Single-Shot Detector (SSD) and YOLO with the Duckiebot’s ROS-based camera system.

While the SSD model was trained on a custom Duckietown dataset, ROS publisher-subscriber mismatches prevented live inference. Transitioning to the YOLO model involved adapting annotation formats and re-training for compatibility with the YOLO architecture. In lane following, the default controller from Duckietown demos showed high deviation, prompting the implementation of a modified pure pursuit approach. 

Additional challenges arose from limited computational resources on the Duckiebot, with CPU overuse causing processing delays when running all modules concurrently. The approach focused on modular development, isolating lane following, stop line detection, and intersection navigation into separate ROS packages with fine-tuned parameters. The pure pursuit algorithm was adapted for ground-projected lane estimation, dynamic speed control, and target point calculation based on visible lane markers. Integration of AprilTag-based intersection logic and LED signaling provided directional control at intersections.

This structured, iterative methodology enabled real-time, vision-guided behavior while operating within the constraints.

Project Report

Did this work spark your curiosity?

Visual Feedback for Autonomous Navigation in Duckietown: Authors

Servesh Khandwe is currently working as a Software Engineer at Porsche Digital, Germany.

Ayush Kumar is currently working as a Research Assistant at Fraunhofer IIS, Germany.

Parth Karkar is currently working as an Analytical Consultant at Mutares SE & Co. KGaA, Germany.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.


Visual Feedback for Autonomous Lane Tracking in Duckietown

General Information


How can vehicle autonomy be achieved by relying only on visual feedback from the onboard camera?

This work presents an implementation of lane following for the Duckiebot (DB17) using visual feedback from the onboard camera as the only sensing modality. The approach relies on real-time lane detection and pose estimation, eliminating the need for wheel encoders.

The onboard computation is provided by a Raspberry Pi, which performs low-level motor control, while high-level image processing and decision-making are offloaded to an external ROS-enabled computer.

The key technical aspects of the implemented autonomy pipeline include:

  • Camera calibration to correct fisheye lens distortion;

  • HSV-based image segmentation for lane line detection;

  • Aerial perspective transformation for geometric consistency;

  • Histogram-based color separation of continuous and dashed lines;

  • Piecewise polynomial fitting for path curvature estimation (see the sketch after this list);

  • Closed-loop motion control based on computed linear and angular velocities.
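As a reference for the perspective transformation and polynomial fitting steps above, here is a minimal OpenCV/NumPy sketch; the warp points are placeholders, and a single second-degree fit stands in for the authors' piecewise fitting.

```python
import cv2
import numpy as np

def birdseye_warp(image, src_pts, dst_pts):
    """Warp the camera view to an aerial (bird's-eye) perspective."""
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, M, (image.shape[1], image.shape[0]))

def fit_lane_polynomial(lane_mask):
    """Fit x = a*y^2 + b*y + c to the lane pixels of a binary bird's-eye mask."""
    ys, xs = np.nonzero(lane_mask)
    coeffs = np.polyfit(ys, xs, 2)     # second-degree polynomial fit
    return coeffs                      # used to estimate the path curvature
```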

The methodology demonstrates the feasibility of using camera-based perception to control robot motion in structured environments. By using Duckiebot and Duckietown as the development platform, this work is another example of how to bridge the gap between real-world testing and cost-effective prototyping, making vehicle autonomy research more accessible in educational and research contexts.

Highlights - visual feedback for lane tracking in Duckietown

Here is a visual tour of the implementation of vehicle autonomy by the authors. For all the details, check out the full paper.

Abstract

Here is the abstract of the work, directly in the words of the authors:

The autonomy of a vehicle can be achieved by a proper use of the information acquired with the sensors. Real-sized autonomous vehicles are expensive to acquire and to test on; however, the main algorithms that are used in those cases are similar to the ones that can be used for smaller prototypes. Due to these budget constraints, this work uses the Duckiebot as a testbed to try different algorithms as a first step to achieve full autonomy. This paper presents a methodology to properly use visual feedback, with the information of the robot camera, in order to detect the lane of a circuit and to drive the robot accordingly.

Conclusion - visual feedback for lane tracking in Duckietown

Here is the conclusion according to the authors of this paper:

Autonomous cars are currently a vast research area. Due to this increase in the interest of these vehicles, having a cost-effective way to implement algorithms, new applications, and to test them in a controlled environment will further help to develop this technology. In this sense, this paper has presented a methodology for following a lane using a cost-effective robot, called the Duckiebot, using visual feedback as a guide for the motion. Although the whole system was capable of detecting the lane that needs to be followed, it is still sensitive to illumination conditions. Therefore, in places with a lot of lighting and brightness variations, the lane recognition algorithm can affect the autonomy of the vehicle.
As future work, machine learning, and particularly convolutional neural networks, is devised as a means to develop robust lane detectors that are not sensitive to brightness variation. Moreover, more than one Duckiebot is intended to drive simultaneously in the Duckietown.

Did this work spark your curiosity?

Project Authors

Oscar Castro is currently working at Blume, Peru.

Axel Eliam Céspedes Duran is currently working as a Laboratory Professor of the Industrial Instrumentation course at the UTEC – Universidad de Ingeniería y Tecnología, Peru.

Roosevelt Jhans Ubaldo Chavez is currently working as a Laboratory Professor of the Industrial Instrumentation course at the UTEC – Universidad de Ingeniería y Tecnología, Peru.

Oscar E. Ramos is currently working toward the Ph.D. degree in robotics with the Laboratory for Analysis and Architecture of Systems, Centre National de la Recherche Scientifique, University of Toulouse, Toulouse, France.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.


Making robotics in Peru more accessible


Nicolas Figueroa, CEO of NFM Robotics and Robotics Lab, shares his vision of making robotics in Peru and Latin America accessible.

Lima, Peru, June 2025: Dr. Nicolas Figueroa talks with us about his goal to make teaching and learning robotics in Peru and Latin America more accessible and efficient, and especially about his mission to strengthen Peruvian national industry through robotics.

Bringing cutting edge robotics in Peru

Good morning and thank you for your time. Could you introduce yourself please?

Sure. My name is Nícolas Figueroa. I’m the general manager of NFM Robotics, and I also run a nonprofit initiative called Robotics Lab.  I recently defended my thesis, so now I’m officially a doctor! 

Through Robotics Lab, we work with universities to promote robotics and robot autonomy education in Latin America, where there is still a significant gap in access to advanced robotics knowledge. I believe Duckietown offers an efficient and accessible way to help bridge this gap.

What can you tell us about your work?

My goal is to build a strong robotics community in Peru, and eventually throughout South America. 

I work closely with university student leadership. For example, students form directive committees (presidents, vice presidents, chairs) and organize conferences, workshops, and talks to promote robotics and robot autonomy knowledge. I maintain close contact with engineering schools in the fields of mechatronics, industrial robotics, and electronics.

This connection allows me to support their efforts more effectively, even as an external partner. With NFM Robotics, we are seeing that Peruvian industry is beginning to explore robotics, but adoption isn’t widespread yet. There’s a big opportunity to offer high-level solutions, but we need more people trained in this technology.

Duckietown helps us train teams in ROS and autonomous robotics. These teams can then support industry projects.

So how is Duckietown useful for your work?

Considering that our targets are both academic institutions for education and industry for practical applications, I found Duckietown to be an incredible tool for introducing autonomous robotics. Its hands-on, accessible approach is key to closing the knowledge gap concerning robotics in Peru. When I first looked for platforms to teach autonomous robotics, I found that many options were either too expensive, had limited access, or didn’t support community engagement.

Duckietown stood out as different: it empowers learners and prioritizes impact. That’s why I knew it was the right platform to support our mission at Robotics Lab.



What is your current focus?

Right now, we are focusing on developing robotics in Peru as a pilot project. We’ve established a presence in five Peruvian universities. But by the end of this year and early next year, we plan to expand to other countries. For example, in May, we hosted a virtual lecture series with speakers from Germany, Italy, Spain, and Estonia. It was our first step in bringing our initiative to a broader international context.



Did Duckietown satisfy your needs?

Duckietown has become a valuable partner in our region. We’re working to bring this platform to more universities and training centers so more people can explore cutting-edge technology, reduce knowledge gaps, and prepare for Industry 4.0 challenges. We’re proud to be part of the Duckietown ecosystem and to contribute to its growth in Latin America. We hope to foster even more collaboration and opportunity for the next generation of roboticists.
Thank you very much for your time, any final comment?

The idea is to form a group within Robotics Lab to begin introducing autonomous robots and learning more deeply about robotic autonomy. We’re currently in discussions with some university faculties about establishing Duckietown-based laboratories, and we hope to promote our partnership with Duckietown even further.


Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!


Pure Pursuit Lane Following with Obstacle Avoidance


Project Resources

Project highlights

Pure Pursuit Controller with Dynamic Speed and Turn Handling
Pure Pursuit with Image Processing-Based Obstacle Detection
Duckiebots Avoiding Obstacles with Pure Pursuit Control

Pure Pursuit Lane Following with Obstacle Avoidance - the objectives

Pure pursuit is a geometric path tracking algorithm used in autonomous vehicle control systems. It calculates the curvature of the road ahead by determining a target point on the trajectory and computing the required angular velocity to reach that point based on the vehicle’s kinematics.

Unlike proportional integral derivative (PID) control, which adjusts control outputs based on continuous error correction, pure pursuit uses a lookahead point to guide the vehicle along a trajectory, enabling stable convergence to the path without oscillations. This method avoids direct dependency on derivative or integral feedback, reducing complexity in environments with sparse or noisy error signals.

This project aims to implement a pure pursuit-based lane following system integrated with obstacle avoidance for autonomous Duckiebot navigation. The goal is to enable real-time tracking of lane centerlines while maintaining safety through detection and response to dynamic obstacles such as other Duckiebots or cones.

The pipeline includes a modified ground projection system, an adaptive pure pursuit controller for path tracking, and both image processing and deep learning-based object detection modules for obstacle recognition and avoidance.

The challenges and approach

The primary challenges in this project include robust target point estimation under variable lighting and environmental conditions, real-time object detection with limited computational resources, and smooth trajectory control in the presence of dynamic obstacles.

The approach involves modular integration of perception, planning, and control subsystems.

For perception, the system uses both classical image processing methods and a trained deep learning model for object detection, enabling redundancy and simulation compatibility.

For planning and control, the pure pursuit controller dynamically adjusts speed and steering based on the estimated target point and obstacle proximity. Target point estimation is achieved through ground projection, a transformation that maps image coordinates to real-world planar coordinates using a calibrated camera model. Real-time parameter tuning and feedback mechanisms are included to handle variations in frame rate and sensor noise.
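Ground projection amounts to applying a homography, obtained from the camera calibration, to pixel coordinates. Below is a hedged sketch in which the homography matrix is a placeholder rather than a real calibration.

```python
import numpy as np

def ground_project(pixel_uv, H):
    """Map an image point (u, v) to ground-plane coordinates (x, y) via homography H."""
    u, v = pixel_uv
    p = H @ np.array([u, v, 1.0])      # homogeneous transform
    return p[0] / p[2], p[1] / p[2]    # normalize to obtain metric ground coordinates

# Example with a placeholder homography (a real one comes from calibration).
H_placeholder = np.eye(3)
x, y = ground_project((320, 420), H_placeholder)
```

The quality of this mapping depends directly on the camera calibration mentioned above.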

Obstacle positions are also ground-projected and used to trigger stop conditions within a defined safety zone, ensuring collision avoidance through reactive control.

Looking for similar projects?

Pure Pursuit Lane Following with Obstacle Avoidance: Authors

Soroush Saryazdi is currently leading the Neural Networks team at Matic, supervised by Navneet Dalal.

Dhaivat Bhatt is currently working as a Machine learning research engineer at Samsung AI centre, Toronto.

Learn more

Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.

Reproducible Sim-to-Real Traffic Signal Control Environment


General Information


As urban environments become increasingly populated and automobile traffic soars, with US citizens spending on average 54 hours a year stuck on the roads, active traffic control management promises to mitigate traffic jams while maintaining (or improving) safety. 

LibSignal++ is a Duckietown-based testbed for reproducible and low-cost sim-to-real evaluation of traffic signal control (TSC) algorithms. Using Duckietown enables consistent, small-scale deployment of both rule-based and learning-based TSC models.

LibSignal++ integrates visual control through camera-based sensing and object detection via the YOLO-v5 model. It features modular components, including Duckiebots, signal controllers, and an indoor positioning system for accurate vehicle trajectory tracking. The testbed supports dynamic scenario replication by enabling both manual and automated manipulation of sensor inputs and road layouts.
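As an illustration of querying a YOLO-v5 detector on camera frames, the sketch below uses the public ultralytics/yolov5 release through torch.hub; it is an assumption-laden stand-in, not the testbed's actual integration code.

```python
import numpy as np
import torch

# Load a pretrained small YOLOv5 model from the public ultralytics/yolov5 repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_objects(frame_bgr):
    """Run detection on one BGR camera frame; rows are (x1, y1, x2, y2, conf, class)."""
    frame_rgb = np.ascontiguousarray(frame_bgr[..., ::-1])  # the model expects RGB
    results = model(frame_rgb)
    return results.xyxy[0].cpu().numpy()
```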

Key aspects of the research include:

  • Sim-to-real pipeline for Reinforcement Learning (RL)-based traffic signal control training and deployment
  • Multi-simulator training support with SUMO, CityFlow, and CARLA
  • Reproducibility through standardized and controllable physical components
  • Integration of real-world sensors and visual control systems
  • Comparative evaluation using rule-based policies on 3-way and 4-way intersections

The work concludes with plans to extend to Machine Learning (ML)-based TSC models and further sim-to-real adaptation.

Highlights - Reproducible Sim-to-Real Traffic Signal Control Environment

Here is a visual tour of the sim-to-real work of the authors. For all the details, check out the full paper.

Abstract

Here is the abstract of the work, directly in the words of the authors:

This paper presents a unique sim-to-real assessment environment for traffic signal control (TSC), LibSignal++, featuring a 14-ft by 14-ft scaled-down physical replica of a real-world urban roadway equipped with realistic traffic sensors such as cameras, and actual traffic signal controllers. Besides, it is supported by a precise indoor positioning system to track the actual trajectories of vehicles. To generate various plausible physical conditions that are difficult to replicate with computer simulations, this system supports automatic sensor manipulation to mimic observation changes and also supports manual adjustment of physical traffic network settings to reflect the influence of dynamic changes on vehicle behaviors. This system will enable the assessment of traffic policies that are otherwise extremely difficult to simulate or infeasible for full-scale physical tests, providing a reproducible and low-cost environment for sim-to-real transfer research on traffic signal control problems.

Results

Three traffic control policies were tested over repeated experiments, each time evaluating traffic throughput, average vehicle waiting time, and vehicle battery consumption. Standard deviations for all policies were within acceptable ranges, leading the authors to confirm the testbed’s ability to deliver reproducible results in controlled environments.

Did this work spark your curiosity?

Project Authors

Yiran Zhang is associated with the Arizona State University, USA.

Khoa Vo is associated with the Arizona State University, USA.

Longchao Da is pursuing his Ph.D. at the Arizona State University, USA.

Tiejin Chen is pursuing his Ph.D. at the Arizona State University, USA.

Xiaoou Liu is pursuing her Ph.D. at the Arizona State University, USA.

Hua Wei is an Assistant Professor at the School of Computing and Augmented Intelligence, Arizona State University, USA.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.