
Proxy Domains for Evaluation and Learning

General Information


Running robotics experiments in the real world is often costly in terms of time, money, and effort. For this reason, robotics development and testing rely on proxy domains (e.g., simulations) before real-world deployment. But how can we gauge how useful a proxy domain actually is in the development process, and are all domains equally useful?

Intuitively, the answer to the above questions will depend on the type of robot, the task it has to achieve, and the environment in which it operates. Evaluating a proxy domain’s usefulness for a specific combination of these circumstances, specifically for the training of autonomous agents, is tackled in this work by establishing quantification metrics and assessing them in Duckietown.

The key aspects of this work are:

  • Proxy Usefulness Metrics: introduction of Proxy Relative Predictivity Value (PRPV) and Proxy Learning Value (PLV) to measure a proxy’s ability to predict real-world performance and aid agent learning. PRPV helps identify simulations that accurately predict real-world results, while PLV measures their effectiveness in training agents.

  • Prediction vs. Learning: differentiation of proxies used for accurate performance prediction from those for data generation in training.

  • Experiments: demonstration of how tuning proxy domain parameters (e.g., sensor delays, camera angle) affects predictivity and learning efficiency.

These metrics improve proxy selection and tuning for robotics research and education, and Duckietown enables rapid prototyping of these ideas for mobile autonomous vehicles. 
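To make the prediction-vs-learning distinction concrete, here is a toy Python sketch of a predictivity-style check: it scores a proxy by how well agent rankings in the proxy match rankings in the target domain. This is an illustrative stand-in only, not the paper's PRPV or PLV definitions; all scores are made up.

```python
import numpy as np

def toy_predictivity(proxy_scores, target_scores):
    """Toy stand-in for a predictivity metric: Spearman rank
    correlation between agent scores measured in the proxy and in
    the target domain (1.0 = proxy ranks agents exactly as the
    real world does). NOT the paper's PRPV formula."""
    px = np.argsort(np.argsort(proxy_scores)).astype(float)
    tx = np.argsort(np.argsort(target_scores)).astype(float)
    px -= px.mean()
    tx -= tx.mean()
    return float((px @ tx) / np.sqrt((px @ px) * (tx @ tx)))

# Three candidate agents evaluated in two proxies and the target:
target = [0.9, 0.6, 0.3]
sim_a = [0.8, 0.5, 0.2]   # preserves the ranking -> highly predictive
sim_b = [0.4, 0.7, 0.9]   # inverts the ranking   -> misleading proxy
print(toy_predictivity(sim_a, target))  # 1.0
print(toy_predictivity(sim_b, target))  # -1.0
```

A proxy that preserves agent rankings is useful for evaluation even if its absolute scores differ from the real world; a learning-oriented metric like PLV would instead ask how much training data from the proxy helps the agent in the target domain.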

Highlights - Proxy Domains for Evaluation and Learning in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

In many situations it is either impossible or impractical to develop and evaluate agents entirely on the target domain on which they will be deployed. This is particularly true in robotics, where doing experiments on hardware is much more arduous than in simulation. This has become arguably more so in the case of learning-based agents. To this end, considerable recent effort has been devoted to developing increasingly realistic and higher fidelity simulators. However, we lack any principled way to evaluate how good a “proxy domain” is, specifically in terms of how useful it is in helping us achieve our end objective of building an agent that performs well in the target domain. In this work, we investigate methods to address this need. We begin by clearly separating two uses of proxy domains that are often conflated: 1) their ability to be a faithful predictor of agent performance and 2) their ability to be a useful tool for learning. In this paper, we attempt to clarify the role of proxy domains and establish new proxy usefulness (PU) metrics to compare the usefulness of different proxy domains. We propose the relative predictive PU to assess the predictive ability of a proxy domain and the learning PU to quantify the usefulness of a proxy as a tool to generate learning data. Furthermore, we argue that the value of a proxy is conditioned on the task that it is being used to help solve. We demonstrate how these new metrics can be used to optimize parameters of the proxy domain for which obtaining ground truth via system identification is not trivial.

Conclusion - Proxy Domains for Evaluation and Learning in Duckietown

Here are the conclusions from the authors of this paper:

“We introduce new metrics to assess the usefulness of proxy domains for agent learning. In a robotics setting it is common to use simulators for development and evaluation to reduce the need to deploy on real hardware. We argue that it is necessary to take into account the specific task when evaluating the usefulness of the proxy. We establish novel metrics for two specific uses of a proxy. When the proxy domain is used to predict performance in the target domain, we offer the PRPV to assess the usefulness of the proxy as a predictor, and we argue that the task needs to be imposed but not the agent. When a proxy is used to generate training data for a learning algorithm, we propose the PLV as a metric to assess usefulness of the source domain, which is dependent on a specific task and a learning algorithm. We demonstrated the use of these measures for predicting parameters in the Duckietown environment. Future work will involve more rigorous treatment of the optimization problems posed to find optimal parameters, possibly in connection with differentiable simulation environments.”

Project Authors

Anthony Courchesne is currently working as an MLOps Engineer at Maneva, Canada.

Andrea Censi is currently the Deputy Director of the Chair of Dynamic Systems and Control at ETH Zurich, Switzerland.

Liam Paull is an Associate Professor at the Université de Montréal, Canada, and also serves as the Chief Education Officer at Duckietown.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.


Deep Reinforcement and Transfer Learning for Robot Autonomy

General Information


Developing autonomous robotic systems is challenging. When using machine learning-based approaches, one of the main challenges is the high cost and complexity of real-world training. Running real-world experiments is time-consuming and, depending on the application, can be expensive as well.

This work uses Deep Reinforcement Learning (DRL) and tackles this challenge through Transfer Learning (TL). DRL enables robots to learn optimal behaviors through trial-and-error, guided by reward-based feedback. Transfer Learning then addresses the high cost of generating training data by leveraging simulation environments.

Running experiments in simulation is time- and cost-efficient; the trained agent can then be deployed on a physical robot in a process known as Sim2Real transfer. Ideally, this approach significantly reduces training costs and accelerates real-world deployment.

In this work, training occurs in a simulated Duckietown environment using Deep Deterministic Policy Gradient (DDPG) and TL techniques to mitigate the expected difference between simulated and real-world environments. The resulting agent is then deployed on a custom-built robot in a physical Duckietown city for evaluation.
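As a small illustration of the DDPG machinery mentioned above, the sketch below shows the soft target-network update DDPG uses to stabilize training (θ_target ← τ·θ + (1−τ)·θ_target). The weights here are toy numpy arrays, not the networks from this work.

```python
import numpy as np

def soft_update(target, online, tau=0.005):
    """Blend online-network weights into the target network, the
    slowly-moving copy DDPG uses to compute stable TD targets."""
    return {k: tau * online[k] + (1.0 - tau) * target[k] for k in target}

# Toy "networks": a single weight array each (illustrative only).
online = {"w": np.ones(3)}
target = {"w": np.zeros(3)}

for _ in range(1000):          # the target slowly tracks the online net
    target = soft_update(target, online)

print(target["w"])             # close to 1 after many updates
```

A small τ keeps the target network lagging behind the online one, which damps the feedback loop between the critic's targets and its own changing estimates.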

Results show that the DRL-based model successfully learns autonomous lane-following and navigation behaviors in simulation, and a performance comparison with real-world experiments is provided.

Highlights - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

Real robots have different constraints, such as battery capacity limit, hardware cost, etc., which make it harder to train models and conduct experiments on physical robots. Transfer learning can be used to omit those constraints by training a self-driving system in a simulated environment, with a goal of running it later in the real world. The simulated environment should resemble a real one as much as possible to enhance the transfer process. This paper proposes a specification of an autonomous robotic system using an agent-based approach. It is modular and consists of various types of components (agents), which vary in functionality and purpose.

Thanks to the system’s general structure, it may be transferred to other environments with minimal adjustments to the agents’ modules. The autonomous robotic system is implemented and trained in simulation, then transferred to a real robot and evaluated on a model of a city. A two-wheeled robot uses a single camera to get observations of the environment in which it operates. Those images are then processed and given as input to a deep neural network that predicts the appropriate action in the current state. Additionally, the simulator provides a reward for each action, which is used by the reinforcement learning algorithm to optimize the weights of the neural network in order to improve overall performance.

Conclusion - Deep Reinforcement Learning for Agent-Based Autonomous Robot

Here are the conclusions from the authors of this paper:

“After several breakthroughs in the field of Deep Reinforcement Learning, it became one of the most popular researched topics in Machine Learning and a common approach to the problem of autonomous driving. This paper presents the process of training an autonomous robotic system using a popular actor-critic algorithm in the simulator, which may then also be run on a real robot. It was possible to train an agent in real-time using a trial-and-error approach without the need to collect vast amounts of labeled data. The neural network learned how to control the robot and how to follow the lanes, without any explicit guidelines. Only a few functions have been used to transform the data sent between the environment and the agent, in order to make the learning process smoother and faster.

For evaluation purposes, a real robot and a small city model have been built, based on the Duckietown platform specification. This hardware has been used to evaluate in the real world the performance of the system trained in the simulator. Additional Transfer Learning techniques were also used to adjust the observations and actions on the real robot, due to the differences with the simulated environment. Although the performance in the real environment was worse than in the simulator, certain trained models were still able to guide the robot around a simple road loop, which shows the potential of such an approach. As a result, the use of the simulator greatly reduced the time and effort needed to train the system, and transfer methods were used to deploy it in the real world.

The Duckietown platform provides a baseline, which was modified and refactored to follow the system structure. The simulator and its components are thoroughly documented; detailed instructions explain how to train and run the robot both in simulation and in the real world and how to evaluate the results. Duckietown provides complete sets of parts necessary to build the robot and a small city; however, it was decided to build a custom robot according to the guidelines. The robot uses a single camera to get observations of the surrounding environment.

The reinforcement learning algorithm was used to learn a policy, which tries to choose optimal actions based on those observations with the help of a reward function that provides feedback for previous decisions. It was possible to significantly reduce the effort required to train a model, thanks to the simulator, as the process does not require constant human supervision and involvement. Such an approach proves to be very promising, as the agent learned how to do the lane-following task without any explicit labels, and has shown good performance in the simulated environment. There is still room for improvement, however, when it comes to transferring the model to the real world, which requires various adaptations and adjustments for the robot to properly execute maneuvers and show stability in its actions.”

Project Authors

Vladyslav Kyryk is currently working as a Data Scientist at Finitec, Warsaw, Poland.

Maksym Figat is working as an Assistant Professor at Warsaw University of Technology, Poland.

Maryan Kyryk is currently serving as the Co-Founder & CEO at Maxitech, Ukraine.



PID and Convolutional Neural Networks (CNN) in Duckietown

General Information


Ever wondered how the legendary PID controller compares to a more “modern” convolutional neural network (CNN) design in controlling a Duckiebot driving in Duckietown?

This work analyzes the performance differences between classical control techniques and machine learning-based approaches for autonomous navigation. The Duckiebot follows a designated path using image-based feedback, where the PID controller corrects deviations through proportional, integral, and derivative adjustments. The CNN-based method leverages image feature extraction to generate control commands, reducing reliance on predefined system models. 
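The PID correction described above can be sketched in a few lines. The gains, time step, and toy lateral-offset model below are illustrative values, not the tuned parameters from the paper.

```python
class PID:
    """Discrete PID controller: output combines proportional,
    integral, and derivative corrections of the tracking error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy 1-D lateral offset back toward the lane center (0.0).
# Gains and the plant model are made up for illustration.
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.05)
offset = 0.3                              # initial deviation, meters
for _ in range(200):
    steering = pid.step(offset)
    offset += -steering * 0.05            # steering reduces the offset
print(offset)   # magnitude far below the initial 0.3 deviation
```

In the lane-following setup, the error fed to `step` would come from image-based estimation of the robot's offset from the line, and the output would be mapped to the wheel-speed difference of the differential drive.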

Key aspects covered include differential drive mechanics, real-time image processing, and ROS-based implementation. The study also outlines the impact of training data selection on CNN performance. Comparative analysis highlights the strengths and limitations of both approaches. The conclusions emphasize the applicability of PID and CNN techniques in Duckietown, demonstrating their role in advancing robotic autonomy.

Highlights - PID and Convolutional Neural Network (CNN) in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

The paper presents the design and practical implementation by students of a control system using a classic PID controller and a controller using artificial neural networks. The control object is a Duckiebot robot, and the task it is to perform is to drive the robot along a designated line (line follower). 

The purpose of the proposed activities is to familiarize students with the advantages and disadvantages of the two controllers used and for them to acquire the ability to implement control systems in practice. The article briefly describes how the two controllers work, how to practically implement them, and how to practically implement the exercise.

Conclusion - PID and Convolutional Neural Network (CNN) in Duckietown

Here are the conclusions from the authors of this paper:

“The PID controller is used successfully in many control systems, and its implementation is relatively simple. There are also a number of methods and algorithms for adjusting controller parameters for this type of controller. 

PID controllers, on the other hand, are not free of disadvantages. One of them is the requirement of prior knowledge of, even roughly, the model of the process one wants to control. Thus, it is necessary to identify both the structure of the process model and its parameters. Identification tasks are complex tasks, requiring a great deal of knowledge about the nature of the process itself. There are also methods for identifying process models based on the results of practical experiments, however sometimes it may not be possible to conduct such experiments. When using a PID controller, one should also be aware that it was developed for processes, operation of which can be described by linear models. Unfortunately, the behavior of the vast majority of dynamic systems is described by non-linear models. 

The consequence of this fact is that, in such cases, the PID controller works using linear approximations of nonlinear systems, which can lead to various errors, inaccuracies, etc. Unlike the classic PID controller, controllers using artificial neural networks do not need to know the mathematical model of the process they control and its parameters. 

The ability to design different neural network architectures, such as convolutional, recurrent, or deep neural networks, makes it possible to adapt the neural regulator to the specific process it is supposed to control. On the other hand, the multiplicity of neural network architectures and their design means that we can never be sure whether a given neural network structure is optimal.

 The selection of neural controller parameters is done automatically using appropriate network training algorithms. The key element influencing the accuracy of neural regulator operation is the data used for training the neural network. The disadvantage of regulators using neural networks is the inability to demonstrate the stability of operation of the systems they control.

In case of the PID regulator, despite the use of approximate models of the process, it is very often possible to prove that a closed control system will operate stably in any or a certain range of values of variables. Unfortunately, such an analysis cannot be carried out in the case of neural regulators. In summary, the implementation of two different controllers to perform the same task provides an opportunity to learn the advantages and disadvantages of each.”

Project Authors

Marek Długosz is a Professor at the Akademia Górniczo-Hutnicza (AGH) – University of Science and Technology, Poland.

Paweł Skruch is currently working as the Manager and Principal Engineer AI at Aptiv, Switzerland.

Marcin Szelest is currently affiliated with the AGH University of Krakow, Poland.

Artur Morys-Magiera is a PhD candidate at AGH University of Krakow, Poland.



Visual monitoring of automated guided vehicles in Duckietown

General Information


The increasing use of robotics in industrial automation has created a need for systems that ensure safety and efficiency when monitoring automated guided vehicles (AGVs). This research proposes a camera-based system for monitoring the trajectory and behavior of AGVs in industrial environments.

The system utilizes a network of cameras mounted on towers to detect, identify, and track AGVs. The visual data is transmitted to a central server, where the robots’ trajectories are evaluated and compared against predefined ideal paths. The system operates independently of specific hardware or software configurations, offering flexibility in its deployment.

Duckietown was used as the test environment for this system, allowing for controlled experiments with simulated robotic fleets. A prototype of the system demonstrated its capability to track AGVs using Aruco tags and evaluate rectilinear trajectories.

Key aspects and concepts:

  • Use of camera towers for visual control of AGVs;
  • Transmission of visual data to a central server for trajectory evaluation;
  • Compatibility with multiple robot types and operating systems;
  • Integration of Aruco tags for robot identification;
  • Modular architecture enabling future expansions;
  • Testing in Duckietown for controlled evaluation.

This research demonstrates a modular approach to monitoring AGVs using a visual control system tested in the Duckietown platform. Future work will extend the system’s capability to handle more complex trajectories such as turns and arcs, further leveraging Duckietown as a scalable research and testing environment.
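As a sketch of the rectilinear-trajectory evaluation idea, the snippet below measures how far tracked positions (e.g., from Aruco-tag detections) stray from an ideal straight path. The coordinates and tolerance are made up for illustration and do not reproduce the system's actual algorithm.

```python
import numpy as np

def max_deviation(points, line_a, line_b):
    """Largest perpendicular distance from tracked points to the
    ideal straight line through line_a and line_b."""
    a, b = np.asarray(line_a, float), np.asarray(line_b, float)
    d = (b - a) / np.linalg.norm(b - a)        # unit direction of path
    rel = np.asarray(points, float) - a
    # perpendicular component = rel - (rel . d) d
    perp = rel - np.outer(rel @ d, d)
    return float(np.linalg.norm(perp, axis=1).max())

# Noisy detections of a robot expected to drive along the x-axis:
track = [(0.0, 0.02), (1.0, -0.03), (2.0, 0.01)]
dev = max_deviation(track, (0, 0), (3, 0))
print(dev)           # ~0.03
print(dev < 0.05)    # True -> trajectory accepted within tolerance
```

A server-side evaluator could run such a check per AGV and raise an alert when the deviation exceeds the tolerance; turns and arcs would need curve models rather than a single line, which matches the future work described above.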

Highlights - Visual monitoring of automated guided vehicles in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

With the increasing automation of industry and the introduction of robotics in every step of the production chain, the problem of safety has become acute. The article proposes a solution to the problem of safety in production using a visual control system for the fleet of loading automated guided vehicles (AGV). The visual control system is built as towers equipped with cameras. This approach allows to be independent of equipment vendors and allows flexible reconfiguration of the AGV fleet. The cameras detect the appearance of a loading robot, identify it and track its trajectory. Data about the robots’ movements is collected and analyzed on a server. A prototype of the visual control system was tested with the Duckietown project.

Conclusion - Visual monitoring of automated guided vehicles in Duckietown

Here are the conclusions from the authors of this paper:

“In the course of this work, a prototype visual evaluation system for the Duckietown project was implemented. The system supports flexible, seamless integration of third-party detection algorithms and trajectory evaluation algorithms. The visual control system was tested with a client imitator module, which does not require the presence of a real robot on the field. At this stage of the work, the prototype is able to recognize rectilinear trajectories of motion. In the future, we plan to develop evaluation algorithms for other types of trajectories: 90 degree turns, large angle turns, arc movement, etc. Another promising area of research is the integration of the system with cloud-based integrated development environments (IDEs) for industrial control algorithms.”

Project Authors

Anastasia Kravchenko is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Alexey Sychev is currently affiliated with the Department of Cyber-Physical Systems, Institute of Automation and Electrometry SB RAS, Novosibirsk, Russia.

Vladimir Zyubin is currently working as an Associate Professor at the Institute of Automation and Electrometry, Russia.



Embedded Out-of-Distribution Detection in Duckietown

General Information


The project “Embedded Out-of-Distribution (OOD) Detection on an Autonomous Robot Platform” focuses on safety in Duckietown by implementing real-time OOD detection on Duckiebots. The concept involves using a machine learning-based OOD detector, specifically a β-Variational Autoencoder (β-VAE), to identify test inputs that deviate from the training data’s distribution. Such inputs can lead to unreliable behavior in machine learning systems, a critical concern for safety in autonomous platforms like the Duckiebot.

Key aspects of the project include:

  • Integration: The β-VAE OOD detector is integrated with the Duckiebot’s ROS-based architecture, alongside lane-following and motor control modules.
  • Emergency Braking: An emergency braking mechanism halts the Duckiebot when OOD inputs are detected, ensuring safety during operation.
  • Evaluation: Performance was evaluated in scenarios where the Duckiebot navigated a track and avoided obstacles. The system achieved an 87.5% success rate in emergency stops.

This work demonstrates a method to mitigate safety risks in autonomous robotics. By providing a framework for OOD detection on low-cost platforms, the project contributes to the broader applicability of safe machine learning in cyber-physical systems.
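The detect-then-brake logic can be illustrated with a toy stand-in for the OOD detector: instead of a β-VAE reconstruction score, the sketch below scores inputs by squared distance from the training-data mean and triggers an emergency stop above a threshold. All feature values and the threshold are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))   # toy in-distribution data
mu = train.mean(axis=0)

def ood_score(x):
    """Toy OOD score: squared distance from the training mean.
    A beta-VAE detector would instead use a reconstruction- or
    latent-based score on camera frames."""
    return float(np.sum((np.asarray(x, float) - mu) ** 2))

def should_brake(x, threshold=16.0):
    """True -> input looks out-of-distribution, halt the robot."""
    return ood_score(x) > threshold

print(should_brake([0.1, -0.2, 0.3, 0.0]))   # False: familiar input
print(should_brake([5.0, 5.0, 5.0, 5.0]))    # True: far from training data
```

The threshold trade-off discussed in the conclusions shows up directly here: a smaller threshold brakes earlier (longer stopping distance from the obstacle) but flags more in-distribution inputs as false positives.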

Highlights - Embedded Out-of-Distribution Detection in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the authors’ words:

Machine learning (ML) is actively finding its way into modern cyber-physical systems (CPS), many of which are safety-critical real-time systems. It is well known that ML outputs are not reliable when testing data are novel with regards to model training and validation data, i.e., out-of-distribution (OOD) test data. We implement an unsupervised deep neural network-based OOD detector on a real-time embedded autonomous Duckiebot and evaluate detection performance. Our OOD detector produces a success rate of 87.5% for emergency stopping a Duckiebot on a braking test bed we designed. We also provide case analysis on computing resource challenges specific to the Robot Operating System (ROS) middleware on the Duckiebot.

Conclusion - Embedded Out-of-Distribution Detection in Duckietown

Here are the conclusions from the authors of this paper:

“We successfully demonstrated that the 𝛽-VAE OOD detection algorithm could run on an embedded platform and provides a safety check on the control of an autonomous robot. We also showed that performance is dependent on real-time performance of the embedded system, particularly the OOD detector execution time. Lastly, we showed that there is a trade-off involved in choosing an OOD detection threshold; a smaller threshold value increases the average stopping distance from an obstacle, but leads to an increase in false positives.

This work also generates new questions that we hope to investigate in the future. The system architecture demonstrated in this paper was not utilizing a real-time OS and did not take advantage of technologies such as GPUs or TPUs, which are now becoming common on embedded systems. There is still much work that can be done to optimize process scheduling and resource utilization while maintaining the goal of using low-cost, off-the-shelf hardware and open-source software. Understanding what quality of service can be provided by a system with these constraints and whether it suffices for reliable operations of OOD detection algorithms is an ongoing research theme.

From the OOD detection perspective, we would like to run additional OOD detection algorithms on the same architecture and compare performance in terms of accuracy and computational efficiency. We would also like to develop a more comprehensive set of test scenarios to serve as a benchmark for OOD detection on embedded systems. These should include dynamic as well as static obstacles, operation in various environments and lighting conditions, and OOD scenarios that occur while the robot is performing more complex tasks like navigating corners, intersections, or merging with other traffic.

Demonstrating OOD detection on the Duckietown platform opens the door for more embedded applications of OOD detectors. This will serve to better evaluate their usefulness as a tool to enhance the safety of ML systems deployed as part of critical CPS.”

Project Authors

Michael Yuhas is currently working as a Research Assistant and pursuing his PhD at the Nanyang Technological University, Singapore.

Yeli Feng is currently working as a Lead Data Scientist at Amplify Health, Singapore.

Daniel Jun Xian Ng is currently working as a Mobile Robot Software Engineer at the Hyundai Motor Group Innovation Center Singapore (HMGICS), Singapore.

Zahra Rahiminasab is currently working as a Postdoctoral Researcher at Aalto University, Finland.

Arvind Easwaran is currently working as an Associate Professor at the Nanyang Technological University, Singapore.



Variational Autoencoder for autonomous driving in Duckietown

General Information


This project explored the use of reinforcement learning (RL) and a Variational Autoencoder (VAE) to train an autonomous agent for lane following in the Duckietown Gym simulator. The VAE encodes high-dimensional raw images into a low-dimensional latent space, reducing the complexity of the input to the RL algorithm (Deep Deterministic Policy Gradient, DDPG). The goal was to evaluate whether this dimensionality reduction improved training efficiency and agent performance.

The agent successfully learned to follow straight lanes using both raw images and VAE-encoded representations. However, training with raw images performed similarly to training on the encoded inputs, likely because the task was simple and road configurations had limited variability.

The agent also displayed discrete control behaviors, such as extreme steering, in a task requiring continuous actions. These issues were attributed to the network architecture and limited reward function design.

While the VAE reduced training time slightly, it did not significantly improve performance. The project highlighted the complexity of RL applications, emphasizing the need for robust reward functions and network designs. 
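The dimensionality-reduction idea can be sketched with PCA standing in for the VAE encoder (PCA is mentioned in the author's conclusions as a possible alternative). The data and sizes below are illustrative, not the project's actual observation dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
images = rng.normal(size=(200, 64))   # 200 flattened toy "frames"

def fit_pca(data, k):
    """Fit a k-component PCA: principal directions are the top
    right singular vectors of the centered data matrix."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]

def encode(x, mean, components):
    """Project an observation onto the k principal directions,
    playing the role of the VAE encoder's latent vector."""
    return (np.asarray(x) - mean) @ components.T

mean, comps = fit_pca(images, k=8)
latent = encode(images[0], mean, comps)
print(latent.shape)   # (8,): a 64-D frame reduced to 8 latent values
```

The policy network then consumes the 8-value latent vector instead of the full frame, which is the same compression step the VAE performs before DDPG in this project.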

Highlights - Variational Autoencoder and RL for Duckietown Lane Following

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

The use of deep reinforcement learning (RL) for following the center of a lane has been studied for this project. Lane following with RL is a push towards general artificial intelligence (AI) which eliminates the use for hand crafted rules, features, and sensors. 

A project called Duckietown has created the Artificial Intelligence Driving Olympics, which aims to promote AI education and embodied AI tasks. The AIDO team has released an open-sourced simulator which was used as an environment for this study. This approach uses the Deep Deterministic Policy Gradient (DDPG) with raw images as input to learn a policy for driving in the middle of a lane for two experiments. A comparison was also done with using an encoded version of the state as input using a Variational Autoencoder (VAE) on one experiment. 

A variety of reward functions were tested to achieve the desired behavior of the agent. The agent was able to learn how to drive in a straight line, but was unable to learn how to drive on curves. It was shown that the VAE did not perform better than the raw image variant for driving in the straight line for these experiments. Further exploration of reward functions should be considered for optimal results and other improvements are suggested in the concluding statements.

Conclusion - Variational Autoencoder and RL for Duckietown Lane Following

Here are the conclusions from the author of this paper:

“After the completion of this project, I have gained insight on how difficult it is to get RL applications to work well. Most of my time was spent trying to tune the reward function. I have a list of improvements that are suggested as future work. 

  • Different network architectures – I used fully connected networks for all the architectures. I would think CNN architectures may be better at creating features for state representations. 
  • Tuning Networks – Since most of my time was spent on the reward exploration, I did not change any parameters at all. I followed the original DDPG paper [4]. A hyperparameter search may prove to be beneficial to find parameters that work best for my problem instead of all the problems in the paper. 
  • More training images for VAE 
  • Different Algorithm – Maybe an algorithm like PPO may be able to learn a better policy? 
  • Linear Function Approximation – Deep reinforcement learning has proven to be difficult to tune and work well. Maybe I could receive similar or better results using a different function approximator than a neural network. Wayve explains the use of prioritized experience replay [7], which is a method to improve on randomly sampled tuples of experiences during RL training and is based on sorting the tuples. This may improve performance of both of my algorithms. 
  • Exploring different Ornstein-Uhlenbeck process parameters to encourage, discourage more/less exploration 
  • Other dimensionality reducing methods instead of VAE. Maybe something like PCA? 

As for the AIDO competition, I have made the decision not to submit this work. It became apparent to me as I progressed through the project how difficult it is to get a perfectly working model using reinforcement learning. If I were to continue with this work for the submission, I think I would rather go towards the track of imitation learning. While this would introduce a wide range of new problems, I think intuitively it makes more sense to ”show” the robot how it should drive on the road rather than having it learn from scratch. I even think classical control methods may work just as well as, or better than, any machine learning based algorithm. Although I will not submit to this competition, I am glad I got to express two interests of mine in reinforcement learning and variational autoencoders. 

The supplementary documents for this report include the training set for the VAE, a video of experiment 1 working properly for both DDPG+Raw and DDPG+VAE, and a video of experiment 2 not working properly. The code has been posted to GitHub (Click for link).”
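One of the future directions listed above is tuning the Ornstein-Uhlenbeck exploration noise that DDPG commonly adds to actions. The process simply drifts back toward a mean while being perturbed by Gaussian noise; a minimal discretized sketch (generic, not tied to this project's code):

```python
import random

def ou_step(x, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, rng=random):
    """One Euler step of an Ornstein-Uhlenbeck process: drift back
    toward mu at rate theta, plus Gaussian noise scaled by sigma."""
    return x + theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)

# Larger theta pulls samples back to the mean faster (less exploration);
# larger sigma injects more noise (more exploration).
x = 1.0
for _ in range(1000):
    x = ou_step(x, sigma=0.05)
```

Setting `sigma = 0` makes the decay toward `mu` deterministic, which is a convenient sanity check when tuning these parameters.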

Project Authors

Bryon Kucharski is currently working as a Lead Data Scientist at Gartner, United States.

Learn more

Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Networked Systems: Autonomy Education with Duckietown

Autonomy Education: Teaching Networked Systems

General Information

Autonomy Education: Teaching Networked Systems

In this work, Prof. Qing-Shan Jia from Tsinghua University in China explores the challenges and innovations in teaching networked systems, a domain with applications ranging from smart buildings to autonomous systems.

The study reviews curriculum structures and introduces practical solutions developed by the Tsinghua University Center for Intelligent and Networked Systems (CFINS).

Over the past two decades, CFINS has designed courses, developed educational platforms, and authored textbooks to bridge the gap between theoretical knowledge and practical application.

They feature Duckietown as part of an educational platform for autonomous driving. Duckietown offers a low-cost, do-it-yourself (DIY) framework for students to construct and program Duckiebots – autonomous mobile robotic vehicles. Duckietown allows learners to apply theoretical concepts in areas related to robot autonomy, like signal processing, machine learning, reinforcement learning, and control systems.

Duckietown enables students to gain hands-on experience in systems engineering: calibrating sensors, programming navigation algorithms, and developing cooperative behaviors in multi-robot settings. This approach allows for the creation of complex cyber-physical systems using state-of-the-art science and technology, not only democratizing access to autonomy education but also fostering understanding, even in remote learning scenarios. 

The integration of Duckietown into the curriculum exemplifies the innovative strategies employed by CFINS to make networked systems education both practical and impactful.

Abstract

In the author’s words:

Networked systems have become pervasive in the past two decades in modern societies. Engineering applications can be found from smart buildings to smart cities. It is important to educate the students to be ready for designing, analyzing, and improving networked systems. 

But this is becoming more and more challenging due to the conflict between the growing knowledge and the limited time in the curriculum. In this work we consider this important problem and provide a case study to address these challenges. 

A group of courses has been developed by the Center for Intelligent and Networked Systems, Department of Automation, Tsinghua University over the past two decades for undergraduate and graduate students. We also report the related education platform and textbook development. We hope this will be useful for other universities.

Conclusion - Networked Systems: Autonomy Education with Duckietown

Here are the conclusions from the author of this paper:

“In this work we provided a case study on the education practice of networked systems in the center for intelligent and networked systems, department of automation, Tsinghua University. The courses mentioned in this work have been delivered for 20 years, or even more. From this education practice, the following experience is summarized. First, use research to motivate the study. 

Networked systems is a vibrant research field. The exciting applications in smart buildings, autonomous driving, and smart cities serve as good examples, not just to motivate the students but also to make the teaching materials concrete. Inviting world-class talks and short courses is also good practice. Second, education platforms help to learn the knowledge better. Students gain hands-on experience while working on these education platforms. 

This project-based learning provides a comprehensive experience that will get the students ready for addressing the real-world engineering problems. Third, online/offline hybrid teaching mode is new and effective. This is especially important due to the pandemic. Lotus Pond, RainClassroom, and Tencent Meeting have been well adopted in Tsinghua. Students can interact with the teachers more frequently and with more specific questions. 

They can also replay the course offline, including their answers to the quiz and questions in the classroom. We hope that this summary on the education on networked systems might help the other educators in the field.”

Project Authors

Qing-Shan Jia is a Professor at the Tsinghua University, Beijing, People’s Republic of China.


Autonomous Calibration - Wheels & Camera in Duckietown

Autonomous Calibration – Wheels and Camera in Duckietown

General Information

Autonomous Calibration – Wheels and Camera in Duckietown

In robotics, accurate calibration of components like cameras and wheels is essential for precise operation. This research focuses on developing an autonomous calibration system for the Duckiebots' image sensors and odometry.

Traditional calibration methods require manual intervention, often taking time and relying on human accuracy, which can introduce variability. The paper presents a fully autonomous approach to calibration, enabling Duckiebots to perform self-calibration without human guidance. This enables users to calibrate multiple robots simultaneously, maximizing efficiency and reducing downtime.

Fiducial markers (AprilTags) are utilized in pre-marked environments. Although the method showed slightly reduced calibration precision compared to typical alternatives, the process still yields sufficient performance for Duckiebots to navigate autonomously in Duckietown.
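The wheel-calibration idea, inferring a left/right wheel imbalance from the curvature of the path actually driven under "straight" commands, can be sketched with an algebraic circle fit over positions observed on the marked floor (a hypothetical illustration, not the authors' exact method):

```python
import numpy as np

def fit_circle(points):
    """Kasa algebraic circle fit: solve x^2 + y^2 = a*x + b*y + c
    by linear least squares; returns (center, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    a_, b_, c_ = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([a_ / 2.0, b_ / 2.0])
    radius = np.sqrt(c_ + center @ center)
    return center, radius

# Positions sampled along the arc the robot drove; the fitted radius
# gives the trajectory curvature (1 / radius), from which a trim
# correction for the wheels can be derived.
theta = np.linspace(0.0, np.pi / 2, 20)
arc = np.column_stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)])
center, radius = fit_circle(arc)
```

A perfectly calibrated robot would trace a near-infinite radius (zero curvature); a finite fitted radius quantifies the imbalance to compensate.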

Highlights - Autonomous Calibration - Wheels and Camera in Duckietown

Here is a visual tour of the work of the authors. For all the details, check out the full paper.

Abstract

In the author’s words:

After assembling the robot, it is necessary to calibrate its components, such as the camera and the wheels. This requires human participation and depends on human factors. The article describes an approach to fully automatic calibration of the camera and the wheels of the robot. 

It consists in placing the robot in an imprecise position, but within a pre-marked area, and using data from the camera, information about the configuration of the environment, and the ability to move, to perform calibration without the participation of external observers. There are two stages: camera calibration and wheel calibration. 

Camera calibration collects the necessary set of images by automatically moving the robot in front of the fiducial marker template; wheel calibration moves the robot on the marked floor while estimating the curvature of the trajectory. The proposed approach was experimentally tested on the Duckietown project base.

Conclusion - Autonomous Calibration - Wheels and Camera in Duckietown

Here are the conclusions from the authors of this paper:

“As a result, a solution was developed that allows fully automatic calibration of the camera and robot wheels in the Duckietown project. The main feature is the autonomy of the process, which allows one person to run in parallel the calibration of an arbitrary number of robots and not be blocked during their calibration. 

The limitation is the number of physically labeled sites. According to the results of comparing the developed solution with the initial one, a slight deterioration in accuracy can be noted, which is primarily associated with the accuracy of the camera calibration, however, the result obtained is nevertheless sufficient for the initial calibration of the robot and is comparable to manual calibration. 

As for the planned improvements, which should increase the accuracy of the camera calibration, a larger number of chessboards located at different angles and a greater distance of movement during wheel calibration will be used.”

Project Authors

Kirill Krinkin is an Adjunct Professor at Constructor University, Germany.

Konstantin Chaika is an Educational Content Manager, Tutor at JetBrains, Czech Republic.

Anton Filatov is currently affiliated with the Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, Russia.

Artyom Filatov is currently affiliated with the Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, Russia.


Multi-camera multi-robot visual localization system

Visual localization using multi-camera multi-robot system

General Information

Visual localization using multi-camera multi-robot system

Visual robot localization is a crucial problem in robotics: estimating an agent's position using vision.

A common approach to solving it is through Simultaneous Localization and Mapping (SLAM) algorithms, using onboard sensors to map and estimate robot positions.

This work introduces a new algorithm for robot localization using AprilTag fiducial markers. It works on a rectangular map with four corner tags, requiring minimal configuration and offering flexibility in camera positions.

Unlike prior methods, this algorithm automatically stitches images from cameras, regardless of angle, and converts them into a top-down view for robot localization.

The approach promises flexibility, making it easier to adapt to dynamic camera setups without reconfiguration.

This solution offers automated robot localization with minimal setup, leveraging computer vision and AprilTags for more efficient mapping. The only constraint is the rectangular shape of the map and properly oriented corner markers, making it an ideal fit for scalable, adaptive robot environments.
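The conversion to a top-down view rests on estimating a homography from the four corner tags. A minimal direct-linear-transform (DLT) sketch, assuming the pixel coordinates of the corner tags and the known metric dimensions of the map (illustrative values, not from the paper):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (4+ point
    correspondences) via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)          # null-space vector of A
    return H / H[2, 2]

def warp_point(H, p):
    """Apply homography H to a 2D point (with homogeneous division)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four corner tags seen in image pixels, mapped to metric map corners:
pixels = [(100, 80), (520, 90), (510, 400), (90, 390)]
meters = [(0, 0), (3, 0), (3, 2), (0, 2)]
H = homography_from_points(pixels, meters)
```

Once `H` is known for each camera, every detected robot can be warped into the common top-down map frame, which is what makes stitching views from freely placed cameras possible.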

Learn about robot autonomy, including perception, localization, and SLAM, starting from the link below!

Abstract

In the author’s words:

The article presents a general framework for detecting the boundaries of, stitching, adjusting perspective and finally localizing robot positions and azimuth angles for any rectangular map designated with AprilTag markers in the corners and possibly in the interior area. 

At the same time, the focus of the researchers was to minimize the configuration required for the algorithm to operate – here limited to just the orientation and data of markers, dimensions of the map, markers and robots. 

The location of cameras can be freely changed without the need to reconfigure anything or restart the program. This approach was tested on the Duckietown project and turned out to be especially helpful for working with it.


Highlights - Visual localization using multi-camera multi-robot system

Here is a visual tour of the work of the authors. For more details, check out the full paper.

Conclusion - Visual localization using multi-camera multi-robot system

Here are the conclusions from the authors of this paper:

“The primary contribution and aim of this work is to provide a universal framework for stitching views of the same map from multiple cameras that can be freely moved and laid out around the map, with minimal required configuration. 

The requirements for the placement of codes are also loose: only the orientation with respect to the map frame is constrained and configuration of the corner codes is required; moreover, the lower limit of visible common markers across two images to be processed is 1, with no need for any corner marker to be present in both images at the same time. 

The algorithm's efficiency, however, depends on the quality of the homography matrices used in it, which implies that the more detections and corner detections, the better the result. In some cases the stitched / extrapolated coordinates may be off ’ground truth’, or stitching might even fail, resulting in malformed output. 

The authors provided experiments on two cameras, yet the algorithm may be run sequentially with images from more cameras. The algorithm may be improved in the future by applying more sophisticated methods of aggregating values of multiple detections of a given robot, such as a weighted combination of the position based on the quality of each detection.”
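The weighted combination the authors propose as future work is straightforward to sketch: fuse each robot's detected positions, weighting each detection by a quality score (a hypothetical sketch, not their implementation):

```python
import numpy as np

def aggregate_detections(positions, qualities):
    """Fuse multiple (x, y) detections of the same robot into one
    estimate, weighting each by its detection-quality score."""
    pos = np.asarray(positions, dtype=float)
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()                       # normalize weights
    return w @ pos                        # weighted average position

# Two cameras see the same robot; the higher-quality detection dominates.
est = aggregate_detections([(1.0, 2.0), (1.2, 2.4)], [0.9, 0.1])
```

The quality score itself could come, for instance, from the marker detection confidence or the reprojection error of the homography, both of which are available in a pipeline like this one.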

Project Authors

Artur Morys-Magiera is a PhD candidate at AGH University of Krakow, Poland.

Marek Długosz is a graduate and faculty member of the Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering at the AGH University of Science and Technology in Krakow, Poland.


Analysis of Object Detection Models on Duckietown Robot Based on YOLOv5 Architectures

Object Detection on Duckiebots Using YOLOv5 Models

General Information

Object Detection on Duckiebots Using YOLOv5 Models

Obstacle detection is about having autonomous vehicles perceive their surroundings, identify objects, and determine whether these might interfere with the accomplishment of the robot's task, e.g., navigating to reach a goal position.

Amongst the many applications of AI, object detection from images is arguably the one that has seen the greatest performance gains compared to “traditional” approaches such as color or blob detection. 

From the point of view of a machine, images are nothing but (several) “tables” of numbers, where each number represents the intensity of light at that location, in a given channel (e.g., R, G, B for color images). 

Giving meaning to a cluster of numbers is not as easy as it is for a human to identify a potential obstacle on the path. Machine learning-driven approaches have quickly outperformed traditional computer vision approaches at this task, thanks to the abundant and cheap training data made available by public datasets and general imagery on the internet.
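The "tables of numbers" view can be made concrete with a tiny example: an RGB image is just a height x width x 3 array of intensities:

```python
import numpy as np

# A tiny 2x3 "image": height=2, width=3, three channels (R, G, B),
# each value the light intensity (0-255) at that pixel and channel.
image = np.array([
    [[255, 0, 0], [0, 255, 0], [0, 0, 255]],
    [[10, 10, 10], [128, 128, 128], [250, 250, 0]],
], dtype=np.uint8)

red_channel = image[:, :, 0]       # the "table" of red intensities
top_left = image[0, 0]             # [255, 0, 0]: a pure red pixel
```

An object detector receives nothing more than arrays like this one, only hundreds of pixels wide, and must map them to labeled bounding boxes.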

Successive approaches (networks) for object detection have rapidly outperformed one another, with YOLO models standing out for their balance of computational efficiency and detection accuracy.  

Learn about robot autonomy, and the difference between traditional and machine learning approaches, from the links below!

Abstract

In the author’s words:

Object detection technology is an essential aspect of the development of autonomous vehicles. The crucial first step of any autonomous driving system is to understand the surrounding environment. 

In this study, we present an analysis of object detection models on the Duckietown robot based on You Only Look Once version 5 (YOLOv5) architectures. The YOLO model is commonly used in neural network training to enhance the performance of object detection models. 

In a case study of Duckietown, the duckies and cones present hazardous obstacles that vehicles must not drive into. This study implements the popular autonomous vehicle learning platform Duckietown's data architecture and classification dataset to analyze object detection models using different YOLOv5 architectures. Moreover, the performances of different optimizers are also evaluated and optimized for object detection. 

The experimental results show that the pre-trained, large-size YOLOv5 model using Stochastic Gradient Descent (SGD) achieves the best accuracy, with a mean average precision (mAP) of 97.78%. The testing results can provide objective modeling references for relevant object detection studies.
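The mAP figure quoted above is built on intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch of that underlying computation (generic, not from the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 2x2 boxes overlapping in a 1x1 square: intersection 1, union 7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); precision-recall curves over such matches yield the mAP.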


Highlights - Object Detection on Duckiebots Using YOLOv5 Models

Here is a visual tour of the work of the authors. For more details, check out the full paper.


Conclusion - Object Detection on Duckiebots Using YOLOv5 Models

Here are the conclusions from the authors of this paper:

“This paper presents an analysis of object detection models on the Duckietown robot based on YOLOv5 architectures. The YOLOv5 model has been successfully used to recognize the duckies and cones on the Duckietown. Moreover, the performances of different YOLOv5 architectures are analyzed and compared. 

The results indicate that using the pre-trained YOLOv5 model with the SGD optimizer provides excellent accuracy for object detection. High accuracy can also be obtained even with the medium-size YOLOv5 model, which accelerates the computation of the system. 

Furthermore, once the object detection model is optimized, it is integrated into ROS on the Duckietown robot. In future works, it is worth investigating YOLOv5 with the Layer-wise Adaptive Moments Based (LAMB) optimizer instead of SGD, applying repeated augmentation with Binary Cross-Entropy (BCE), and using domain adaptation techniques.”

Project Authors

Toan-Khoa Nguyen is currently working as an AI engineer at FPT Software AI Center, Vietnam.

Lien T. Vu is with the Faculty of Mechanical Engineering and Mechatronics, Phenikaa University, Vietnam.

Viet Q. Vu is with the Faculty of International Training, Thai Nguyen University of Technology, Vietnam.

Tien-Dat Hoang is with the Faculty of International Training, Thai Nguyen University of Technology, Vietnam.

Shu-Hao Liang is with the Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taiwan.

Minh-Quang Tran is with the Industry 4.0 Implementation Center, Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taiwan, and also with the Department of Mechanical Engineering, Thai Nguyen University of Technology, Vietnam.
