Join the AI Driving Olympics, 6th edition, starting now!

The 2021 AI Driving Olympics

Compete in the 2021 edition of the Artificial Intelligence Driving Olympics (AI-DO 6)!

The AI-DO serves to benchmark the state of the art of artificial intelligence in autonomous driving by providing standardized simulation and hardware environments for tasks related to multi-sensory perception and embodied AI.

Duckietown traditionally hosts AI-DO competitions biannually, with finals events held at machine learning and robotics conferences such as the International Conference on Robotics and Automation (ICRA) and the Conference on Neural Information Processing Systems (NeurIPS).

AI-DO 6 will be held in conjunction with NeurIPS 2021 and will have three leagues: urban driving, advanced perception, and racing. The winter champions will be announced during NeurIPS 2021, on December 10, 2021!

Urban driving league

The urban driving league uses the Duckietown platform and presents several challenges, each of increasing complexity.

The goal in each challenge is to develop a robotic agent for driving Duckiebots “well”. Baseline implementations are provided to test different approaches. There are no constraints on how your agents are designed.
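To make the task concrete, here is a minimal sketch of what an agent boils down to: a function from camera observations to wheel commands. The class and method names below are purely illustrative and are not the actual AI-DO template API.

    import numpy as np

    class MyLaneFollowingAgent:
        """Illustrative agent: maps a camera image to wheel commands."""

        def reset(self) -> None:
            # Hypothetical hook called at the start of each episode.
            pass

        def compute_action(self, observation: np.ndarray) -> tuple:
            """observation: an HxWx3 RGB camera image.
            Returns (left, right) wheel duty cycles in [-1, 1]."""
            # Trivial placeholder policy: drive straight at moderate speed.
            return 0.4, 0.4

The provided baselines wrap a loop of this kind, so contestants can focus on the policy itself.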

Each challenge adds a layer of complexity: intersections, other vehicles, pedestrians, etc. You can check out the existing challenges on the Duckietown challenges server.

AI-DO 2021 features four challenges: lane following (LF), lane following with intersections (LFI), lane following with vehicles (LFV) and lane following with vehicles and intersections, multi-body, with full information (LFVI-multi-full).

All challenges have both a simulation (💻) and a hardware (🚙) component, except for LFVI-multi-full, which is simulation-only (💻).

The first phase (until Nov. 7) is for practice; results do not count towards the leaderboards.

The second phase (Nov. 8-30) is the live competition, and results count towards the official leaderboards.

Selected submissions (those that perform well enough in simulation) will be evaluated on hardware in Autolabs. The best-scoring submissions in the Autolabs will advance to the finals.

During the finals (Dec. 1-8), one additional submission per challenge is possible for each finalist.

Winners (top 3) of the resulting leaderboard will be declared AI-DO 2021 winter champions and celebrated live during NeurIPS 2021. We require champions to submit a short video (2 mins) introducing themselves and describing their submission.

Winners are invited (but not required) to join the NeurIPS event on December 10th, 2021, starting at 11:25 GMT (a Zoom link will follow).

Overview
🎯Goal: develop robotic agents for challenges of increasing complexity
🚙Robot: Duckiebot (DB21M/J)
👀Sensors: camera, wheel encoders
Schedule
🏖️Practice: Nov. 1-7
🚙Competition: Nov. 8-30
🏘️Finals: Dec. 1 – 8
🏆Winners: Dec. 10
Rules
🏖️Practice: unlimited non-competing submissions
🚙Competition: best in sim are evaluated on hardware in Autolabs
🏘️Finals: one additional submission for Autolabs
🏆Winners: 2-minute video describing the submission for the NeurIPS 2021 event.

The challenges

Lane following 🚙 💻

LF – The most traditional of AI-DO challenges: have a Duckiebot navigate a road loop with no intersections, pedestrians (duckies), or other vehicles. The objective is to travel the longest path in a given time while staying in the lane, i.e., without committing driving infractions.

Current AI-DO leaderboards: LF-sim-validation, LF-sim-testing.

Previous AI-DO leaderboards: sim-validation, sim-testing, real-validation.

A DB21 Duckiebot in a Duckietown equipped with Autolab infrastructure.

Lane following with intersections 🚙 💻

LFI – This challenge builds upon LF by increasing the complexity of the road network, which now features 3- and 4-way intersections defined according to the Duckietown appearance specifications. Traffic lights are not present on the map. The objective is to drive the longest distance while respecting the rules of the road, now more complex due to the presence of traffic signs.

Current AI-DO leaderboards: LFI-sim-validation, LFI-sim-testing.

Previous AI-DO leaderboards: sim-validation, sim-testing.

Duckiebot facing a lane following with intersections (LFI) challenge

Lane following with vehicles 🚙 💻

LFV – In this traditional AI-DO challenge, contestants seek to travel the longest path in a city without intersections or pedestrians, but with other vehicles on the road. Non-playing vehicles (i.e., those not running the user’s submitted agent) can be in the same and/or opposite lanes and have variable speed.

Current AI-DO leaderboards: LFV-sim-validation, LFV-sim-testing.

Previous AI-DO leaderboards: (LFV-multi variant): sim-validation, sim-testing, real-validation.

Lane following with vehicles and intersections (stateful) 💻

LFVI-multi-full – this debuting challenge brings together roads with intersections and other vehicles. The submitted agent is deployed on all Duckiebots on the map (-multi), and is provided with full information, i.e., the state of the other vehicles on the map (-full). This challenge is in simulation only.

Getting started

All you need to get started and participate in the AI-DO is a computer, a good internet connection, and the ambition to challenge your skills against the international community!  

We provide webinars, operation manuals, and baselines to get started.

May the duck be with you! 

Thank you to our generous sponsors!

Join the AI Driving Olympics, 5th edition, starting now!

Compete in the 5th AI Driving Olympics (AI-DO)

The 5th edition of the Artificial Intelligence Driving Olympics (AI-DO 5) has officially started!

The AI-DO serves to benchmark the state of the art of artificial intelligence in autonomous driving by providing standardized simulation and hardware environments for tasks related to multi-sensory perception and embodied AI.

Duckietown hosts AI-DO competitions biannually, with finals events held at machine learning and robotics conferences such as the International Conference on Robotics and Automation (ICRA) and the Conference on Neural Information Processing Systems (NeurIPS).

AI-DO 5 will be held in conjunction with NeurIPS 2020 and will have two leagues: Urban Driving and Advanced Perception.

Urban driving league challenges

This year’s Urban League includes a traditional AI-DO challenge (LF) and introduces two new ones (LFP, LFVM).

Lane Following (LF)

The most traditional of AI-DO challenges: have a Duckiebot navigate a road loop with no intersections, pedestrians (duckies), or other vehicles. The objective is to travel the longest path in a given time while staying in the lane.

Lane Following with Pedestrians (LFP)

The LFP challenge is new to AI-DO. It builds upon LF by introducing static obstacles (duckies) on the road. The objectives are the same as for lane following, but do not hit the duckies! 

Lane Following with Vehicles, multi-body (LFVM)

In this traditional AI-DO challenge, contestants seek to travel the longest path in a city without intersections or pedestrians, but with other vehicles on the road. This year, however, there is a twist: in the novel multi-body variant, all vehicles on the road are controlled by the submission.

Getting started: the webinars

We offer a short webinar series to guide contestants through the steps for participating: from running our baselines in simulation to deploying them on hardware. All webinars are at 9 am EST and free!

Introduction

Learn about the Duckietown project and the Artificial Intelligence Driving Olympics.

ROS baseline

How to run and build upon the “traditional” Robot Operating System (ROS) baseline.

Local development

An overview of the workflow for developing and deploying code to Duckiebots for hardware-based testing.

RL baseline

Learn how to use the PyTorch template for reinforcement learning approaches.

IL baseline

An introduction to the TensorFlow template and the use of logs and the simulator for imitation learning.

Advanced sensing league challenges

Previous AI-DO editions featured detection, tracking, and prediction challenges based on the nuScenes dataset.

For the 5th iteration of AI-DO we have a brand new lidar segmentation challenge.

The challenge is based on the recently released lidar segmentation annotations for nuScenes and features an astonishing 1,400,000,000 lidar points annotated with one of 32 labels.

We hope that this new benchmark will help to push the boundaries in lidar segmentation. Please see https://www.nuscenes.org/lidar-segmentation for more details.
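For readers who want to explore the data, the sketch below shows one way to load the per-point labels with the nuscenes-devkit; it assumes the v1.0 lidarseg release is installed under the dataroot, and the field names should be double-checked against the official tutorial linked above.

    import os
    import numpy as np
    from nuscenes.nuscenes import NuScenes

    # The mini split is enough to inspect the lidarseg format.
    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

    # Take the top-lidar sweep of the first annotated sample.
    lidar_token = nusc.sample[0]['data']['LIDAR_TOP']

    # Each lidarseg record points to a .bin file with one uint8 label per lidar point.
    record = next(r for r in nusc.lidarseg if r['sample_data_token'] == lidar_token)
    labels = np.fromfile(os.path.join(nusc.dataroot, record['filename']), dtype=np.uint8)

    print(labels.size, 'points;', len(np.unique(labels)), 'classes present in this sweep')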

Furthermore, due to popular demand, we will organize the 3rd iteration of the nuScenes 3d detection challenge. Please see https://www.nuscenes.org/object-detection for more details.

AI-DO 5 Finals event

The AI-DO finals will be streamed LIVE during the 2020 edition of the Neural Information Processing Systems (NeurIPS 2020) conference in December.

Learn more about the AI-DO here.

Thank you to our generous sponsors!

The Duckietown Foundation is grateful to its sponsors for supporting this fifth edition of the AI Driving Olympics!

Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents

Why is this important?

As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be reproducible.

Compared to other sciences, there are specific challenges to benchmarking autonomy, such as the complexity of the software stacks, the variability of the hardware and the reliance on data-driven techniques, amongst others.

We describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained by design from the beginning of the research/development processes.

We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet.

The Duckietown Automated Laboratories (Autolabs)

One of the central components of this setup is the Duckietown Autolab (DTA), a remotely accessible standardized setup that is itself also relatively low-cost and reproducible.

DTAs include an off-the-shelf camera-based localization system. Remote access to this hardware testing environment enables experimental benchmarking that can be performed on a network of DTAs in different geographical locations.

The DUCKIENet

When evaluating agents, careful definition of interfaces allows users to choose among local versus remote evaluation using simulation, logs, or remote automated hardware setups. The Decentralized Urban Collaborative Benchmarking Environment Network (DUCKIENet) is an instantiation of this design based on the Duckietown platform that provides an accessible and reproducible framework focused on autonomous vehicle fleets operating in model urban environments. 

The DUCKIENet enables users to develop and test a wide variety of different algorithms using available resources (simulator, logs, cloud evaluations, etc.), and then deploy their algorithms locally in simulation, locally on a robot, in a cloud-based simulation, or on a real robot in a remote lab. In each case, the submitter receives feedback and scores based on well-defined metrics.

Validation

We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs. We built DTAs at the Swiss Federal Institute of Technology in Zurich (ETHZ) and at the Toyota Technological Institute at Chicago (TTIC).

Conclusions

Our contention is that there is a need for stronger efforts towards reproducible research in robotics, and that to achieve this we need to consider evaluation on equal terms with the algorithms themselves. In this fashion, we can obtain reproducibility by design throughout the research and development processes. Achieving this at a large scale will contribute to a more systematic evaluation of robotics research and, in turn, increase the pace of development.


Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World

We asked Róbert Moni to tell us more about his recent work. Enjoy the read!

The author's perspective

Most of us, proud members of the nerd community, experience driving for the first time through the discrete actions taken on our keyboards. We believe that the harder we push the forward arrow (or the W-key), the faster the car in the game will accelerate (sooo true 😊). Few of us believe that this task can be solved with machine learning, and even fewer believe that it can be done accurately and robustly with a basic Deep Reinforcement Learning (DRL) method known as Deep Q-Networks (DQN).

It turned out to be true in the case of a Duckiebot; moreover, with some added computer vision techniques, the agent was able to perform well both in simulation (where training was carried out) and in the real world.

The pipeline

The complete training pipeline, carried out in the Duckietown Gym environment, is visualized in the figure above and works as follows. First, the camera images go through several preprocessing steps (a rough sketch of these steps follows the list):

  • resizing to a smaller resolution (60×80) for faster processing;
  • cropping the upper part of the image, which does not contain information useful for navigation;
  • segmenting the important parts of the image (the lane markings) based on their color;
  • normalizing the image;
  • finally, forming a sequence from the last 5 camera images, which becomes the input of the Convolutional Neural Network (CNN) policy network (the agent itself).
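In Python, and assuming OpenCV-style image handling, the preprocessing could look roughly like the sketch below. The exact crop region, color thresholds, and normalization used in the paper are not reproduced here, so the specific values are illustrative only.

    import collections
    import cv2
    import numpy as np

    FRAME_STACK = 5
    frames = collections.deque(maxlen=FRAME_STACK)

    def preprocess(rgb_image: np.ndarray) -> np.ndarray:
        """Turn one camera frame into a (40, 80, 3) segmented, normalized image."""
        small = cv2.resize(rgb_image, (80, 60))      # width x height = 80 x 60
        cropped = small[20:, :, :]                   # drop the upper rows (sky/horizon) -> 40 x 80
        hsv = cv2.cvtColor(cropped, cv2.COLOR_RGB2HSV)
        # Keep only pixels that look like white or yellow lane markings (illustrative thresholds).
        white = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))
        yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
        mask = cv2.bitwise_or(white, yellow)
        segmented = cv2.bitwise_and(cropped, cropped, mask=mask)
        return segmented.astype(np.float32) / 255.0  # normalize to [0, 1]

    def observe(rgb_image: np.ndarray) -> np.ndarray:
        """Stack the last 5 preprocessed frames into the (40, 80, 15) network input."""
        frames.append(preprocess(rgb_image))
        while len(frames) < FRAME_STACK:             # pad at the start of an episode
            frames.append(frames[-1])
        return np.concatenate(list(frames), axis=-1)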

The agent is trained in the simulator with the DQN algorithm based on a reward function that describes how accurately the robot follows the optimal curve. The output of the network is mapped to wheel speed commands.
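The exact reward formula is not given in this summary; a lane-following reward of this kind typically rewards speed along the lane and penalizes lateral offset and heading error relative to the lane center line, for example (illustrative coefficients only):

    def lane_following_reward(speed: float, dist_from_center: float,
                              angle_from_lane: float) -> float:
        """Illustrative reward: driving fast along the optimal curve is rewarded,
        lateral offset and heading error are penalized."""
        return speed * (1.0 - 2.0 * abs(dist_from_center) - 0.5 * abs(angle_from_lane))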

The workings

The CNN was trained on the preprocessed images. The network was designed so that inference can be performed in real time on a computer with limited resources (i.e., without a dedicated GPU). The input of the network is a tensor of shape (40, 80, 15), obtained by stacking five RGB images. The network consists of three convolutional layers, each followed by a ReLU (nonlinearity) and a MaxPool (dimension reduction) operation.

The convolutional layers use 32, 32, and 64 filters of size 3 × 3, respectively, and the MaxPool layers use 2 × 2 filters. The convolutional layers are followed by fully connected layers with 128 and 3 outputs. The output of the last layer corresponds to the selected action (one of three), which is mapped to wheel speed commands; the actions correspond to turning left, turning right, or going straight.
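A minimal PyTorch sketch of a network with this shape is given below; the layer sizes follow the description above, while details such as padding, the fully connected input size, and the exact wheel-speed mapping are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class PolicyCNN(nn.Module):
        """Three conv blocks (32, 32, 64 filters, 3x3) with ReLU and 2x2 max-pooling,
        followed by fully connected layers with 128 and 3 outputs."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(15, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.LazyLinear(128), nn.ReLU(),   # input size inferred from the conv output
                nn.Linear(128, 3),               # Q-values for left / straight / right
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 15, 40, 80) -- five stacked RGB frames, channels first
            return self.head(self.features(x))

    # Assumed mapping from the greedy action to (left, right) wheel speeds.
    ACTION_TO_WHEELS = {0: (0.2, 0.6),   # turn left
                        1: (0.5, 0.5),   # go straight
                        2: (0.6, 0.2)}   # turn right

    q_values = PolicyCNN()(torch.zeros(1, 15, 40, 80))
    left, right = ACTION_TO_WHEELS[int(q_values.argmax(dim=1))]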

Learn more

Our work was acknowledged and presented at the IEEE World Congress on Computational Intelligence 2020 conference. We plan to publish the source code after the AI-DO 5 competition. Our paper is available on ieeexplore.ieee.org, deepai.org, and arxiv.org.

Check out our sim and real demo on YouTube, performed at the Duckietown Robotarium put together at the Budapest University of Technology and Economics.

Community Spotlight: Arian Houshmand – Control Algorithms for Traffic

Boston University, March 7, 2019: No one likes sitting in traffic: it is a waste of time and damaging to the environment. Thankfully, researcher Arian Houshmand from the Boston University CODES lab is on the case, and he is using Duckietown to help solve the problem.

Control algorithms to improve traffic

Traffic congestion around the world is worsening, according to transport data firm INRIX. In the U.S. alone, Americans wasted an average of 97 hours in traffic in 2018 – that’s two precious weekends’ worth of time. Captivity in traffic also cost them nearly $87 billion in 2018, an average of $1,348 per driver. Clearly, the need for smart transportation is reaching a fever pitch, not only to alleviate the mental and financial state of drivers, but to address the significant economic toll on affected cities. Fortunately, the development of intelligent mobility technologies is advancing. In an ongoing research project funded by the U.S. Department of Energy’s (DOE) Advanced Research Projects Agency-Energy (ARPA-E) NEXTCAR program, BU researchers, in collaboration with researchers from the University of Delaware, the University of Michigan, Oak Ridge National Lab, and Bosch, are developing technologies for Connected and Automated Vehicles (CAVs) to increase their fuel efficiency and, as a by-product, reduce traffic congestion.

The goal

The goal of this project is to design control and optimization technologies that enable a plug-in hybrid electric vehicle (PHEV) to communicate with other cars and city infrastructure and act on that information. By providing cars with situational self-awareness, they will be able to efficiently calculate the best possible route, accelerate and decelerate as needed, and manage their powertrain. This is an important task toward advancing the vision to create an ‘Internet of Cars,’ in which connected and self-driving cars operate seamlessly with each other and traffic infrastructure, improving fuel efficiency and safety, and reducing traffic congestion and pollution.

Today’s commercially available self-driving cars rely on costly sensors, specifically radar, cameras, and LIDAR (light detection and ranging), to operate semi-autonomously. In the NEXTCAR project, BU researchers and project collaborators are looking to go beyond that by developing decision-making algorithms that improve the autonomous operation of a single hybrid vehicle, as well as algorithms for communication between vehicles and their environment, enabling self-driving cars to cooperate and interact within their socio-cyber-physical environment.

Several different functions have been developed throughout this project including:

●      Eco-routing: The procedure of finding the route for a vehicle to travel between two points that uses the least amount of energy.

●      Eco-AND (Economical Arrival and Departure): An optimal control framework for approaching a traffic light without stopping at the intersection, by using traffic light cycle time information.

●      CACC (Cooperative Adaptive Cruise Control): An extension of adaptive cruise control (ACC) that, by benefiting from vehicle-to-vehicle (V2V) communication, increases safety and energy efficiency by reducing headway.
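As a rough illustration of the idea behind CACC (and not the project’s actual controller), a simple gap-and-relative-speed feedback law could look like this:

    def cacc_acceleration(gap: float, ego_speed: float, lead_speed: float,
                          desired_time_gap: float = 0.6, standstill_gap: float = 2.0,
                          k_gap: float = 0.45, k_speed: float = 0.25) -> float:
        """Toy CACC law: accelerate to close the spacing error and to match the
        leader's speed, which is known thanks to V2V communication."""
        desired_gap = standstill_gap + desired_time_gap * ego_speed
        spacing_error = gap - desired_gap
        return k_gap * spacing_error + k_speed * (lead_speed - ego_speed)

    # Example: 8 m behind a leader doing 14 m/s while the ego vehicle does 13 m/s.
    print(cacc_acceleration(gap=8.0, ego_speed=13.0, lead_speed=14.0))

Smaller headways than with radar-only ACC become possible because the V2V link gives the follower earlier information about the leader’s speed.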

In order to validate and test the developed technologies, researchers first use simulation environments to test the algorithms. After verifying through simulation, they implement the algorithms on Duckietown, and finally deploy them on real cars (Audi A3 e-tron) at the University of Michigan’s M-city (test track for self-driving cars).

We use Duckietown to train students on how to implement their algorithms on embedded systems and also as a means to demonstrate our developed technologies in action and in a live setting. Since most of our research focuses on Connected and Automated Vehicles (CAVs), we need to establish connections between individual Duckiebots and traffic lights. As a result, we created a platform for exchanging information and control commands between all the cars and traffic lights.

Online localization of Duckiebots is a challenging task, and is missing from the current framework. We relied on our external motion capture sensors (OptiTrack) to localize the robots.

Duckietown is a nice platform for performing experiments on autonomous robots, since it is relatively simple to set up the town and the Duckiebots. Moreover, the built-in perception and lane-keeping capabilities are very useful for kicking off experiments quickly. Traffic lights and signs are also helpful for creating different scenarios for testing algorithms in city-like settings.

What would make Duckietown even more useful in our application are feedback sensors for determining wheel rotational speed/position, since it is difficult to correct for rotational speed errors of the wheels, as well as a ROS node for exchanging information between robots and traffic lights for testing collaborative control algorithms.
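As an illustration of why such wheel feedback helps, a simple proportional correction of the commanded wheel speed from an encoder measurement could look like the sketch below (hypothetical names and gains):

    def corrected_command(desired_speed: float, measured_speed: float,
                          nominal_command: float, k_p: float = 0.1) -> float:
        """Nudge the open-loop wheel command using the encoder-measured speed,
        compensating systematic differences between the two motors."""
        speed_error = desired_speed - measured_speed
        return nominal_command + k_p * speed_error

    # Example: the left wheel runs slow (0.42 m/s measured vs 0.5 m/s desired),
    # so its command is increased slightly.
    left_cmd = corrected_command(desired_speed=0.5, measured_speed=0.42, nominal_command=0.6)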

Learn more about Duckietown

The Duckietown platform enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell? Reach out to us!

Interactive Learning with Corrective Feedback for Policies based on Deep Neural Networks

Deep Reinforcement Learning (DRL) has become a powerful strategy for solving complex decision-making problems based on Deep Neural Networks (DNNs). However, it is highly data demanding, and therefore unfeasible in physical systems for most applications. In this work, we approach an alternative Interactive Machine Learning (IML) strategy for training DNN policies based on human corrective feedback, with a method called Deep COACH (D-COACH). This approach not only takes advantage of the knowledge and insights of human teachers as well as the power of DNNs, but also has no need of a reward function (which sometimes implies the need of external perception for computing rewards). We combine Deep Learning with the COrrective Advice Communicated by Humans (COACH) framework, in which non-expert humans shape policies by correcting the agent’s actions during execution. The D-COACH framework has the potential to solve complex problems without much data or time required.

Experimental results validated the efficiency of the framework in three different problems (two simulated, one with a real robot), with state spaces of low and high dimensions, showing the capacity to successfully learn policies for continuous action spaces, as in the Car Racing and Cart-Pole problems, faster than with DRL.

Introduction

Deep Reinforcement Learning (DRL) has obtained unprecedented results in decision-making problems, such as playing Atari games [1] or beating the world champion in Go [2].

Nevertheless, in robotic problems, DRL is still limited in applications with real-world systems [3]. Most of the tasks that have been successfully addressed with DRL have two common characteristics: 1) they have well-specified reward functions, and 2) they require large amounts of trials, which means long training periods (or powerful computers) to obtain a satisfying behavior. These two characteristics can be problematic in cases where 1) the goals of the tasks are poorly defined or hard to specify/model (reward function does not exist), 2) the execution of many trials is not feasible (real systems case) and/or not much computational power or time is available, and 3) sometimes additional external perception is necessary for computing the reward/cost function.

On the other hand, Machine Learning methods that rely on the transfer of human knowledge, Interactive Machine Learning (IML) methods, have been shown to be time-efficient for obtaining good-performance policies and may not require a well-specified reward function; moreover, some methods do not need expert human teachers for training high-performance agents [4–6]. In previous years, IML techniques were limited to working with low-dimensional state spaces and to the use of function approximation such as linear models of basis functions (choosing the right basis function set was crucial for successful learning), in the same way as RL. But, as DRL has shown, by approximating policies with Deep Neural Networks (DNNs) it is possible to solve problems with high-dimensional state spaces, without the need for feature engineering to preprocess the states. If the same approach is used in IML, the DRL shortcomings mentioned before can be addressed with the support of human users who participate in the learning process of the agent.

This work proposes to extend the use of human corrective feedback during task execution to learn policies with state spaces of low and high dimensionality in continuous-action problems (which is the case for most problems in robotics) using deep neural networks.

We combine Deep Learning (DL) with the corrective-advice-based learning framework called COrrective Advice Communicated by Humans (COACH) [6], thus creating the Deep COACH (D-COACH) framework. In this approach, no reward functions are needed and the number of learning episodes is significantly reduced in comparison to alternative approaches. D-COACH is validated in three different tasks, two in simulation and one in the real world.

Conclusions

This work presented D-COACH, an algorithm for training policies modeled with DNNs interactively with corrective advice. The method was validated in a problem of low dimensionality, along with problems of high-dimensional state spaces like raw pixel observations, with a simulated and a real robot environment, and also using both simulated and real human teachers.

The use of the experience replay buffer (which has been well tested for DRL) was re-validated for this different kind of learning approach, since this is a feature not included in the original COACH. The comparisons showed that the use of memory resulted in an important boost in the learning speed of the agents, which were able to converge with less feedback, and to perform better even in cases with a significant amount of erroneous signals.

The results of the experiments show that teachers advising corrections can train policies in fewer time steps than a DRL method like DDPG. It was therefore possible to train real robot tasks based on human corrections during task execution, in an environment with a raw-pixel-level state space. The comparison of D-COACH with respect to DDPG shows how this interactive method makes it more feasible to learn policies represented with DNNs within the constraints of physical systems, whereas DDPG needs to accumulate millions of time steps of experience.

Did you find this interesting?

Read more Duckietown-based papers here.