


The Workshop on Benchmarking Progress in Autonomous Driving at IROS 2020
Duckietown also has a science mission: to help develop technologies for reproducible benchmarking in robotics.
The IROS 2020 Workshop on Benchmarking Autonomous Driving provides a platform to investigate and discuss the methods by which progress in autonomous driving is evaluated, benchmarked, and verified.
It is free to attend.
The workshop is structured into four panels, one for each of the following themes:
- Assessing Progress for the Field of Autonomous Driving
- How to evaluate AV risk from the perspective of real-world deployment (public acceptance, insurance, liability, …)?
- Best practices for AV benchmarking
- Algorithms and Paradigms
The workshop will take place on Oct. 25, 2020 starting at 10am EDT.
Invited Panelists
We have a list of excellent invited panelists from academia, industry, and regulatory organizations. These include:
- Emilio Frazzoli (ETH Zürich / Motional)
- Alex Kendall (Wayve)
- Jane Lappin (National Academy of Sciences)
- Bryant Walker Smith (USC Faculty of Law)
- Luigi Di Lillo (Swiss Re Insurance)
- John Leonard (MIT)
- Fabio Bonsignorio (Heron Robots)
- Michael Milford (QUT)
- Oscar Beijbom (Motional)
- Raquel Urtasun (University of Toronto / Uber ATG)
Please join us...
Please join us on October 25, 2020 starting at 10am EDT for what should be a very engaging conversation about the difficult issues around benchmarking progress in autonomous vehicles.
For full details about the event, please see here.

Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World
- Title: Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World
- Authors: Péter Almási, Róbert Moni, Bálint Gyires-Tóth
- Published: 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020, pp. 1-8, doi: 10.1109/IJCNN48605.2020.9207497
We asked Róbert Moni to tell us more about his recent work. Enjoy the read!
The author's perspective
Most of us, proud members of the nerd community, experienced driving for the first time through the discrete actions taken on our keyboards. We believe that the harder we push the forward arrow (or the W-key), the faster the car in the game will accelerate (sooo true 😊). Few of us believe that this task can be solved with machine learning. Even fewer of us believe that it can be done accurately and robustly with a basic Deep Reinforcement Learning (DRL) method known as Deep Q-Networks (DQN).
It turned out to be true in the case of a Duckiebot; even more, with some added computer vision techniques, the agent was able to perform well both in simulation (where the training was carried out) and in the real world.

The pipeline
The complete training pipeline carried out in the Duckietown-gym environment is visualized in the figure above and works as follows. First, the camera images go through several preprocessing steps:
- resizing to a smaller resolution (60×80) for faster processing;
- cropping the upper part of the image, which doesn’t contain useful information for navigation;
- segmenting important parts of the image based on their color (lane markings);
- and normalizing the image;
- finally a sequence is formed from the last 5 camera images, which will be the input of the Convolutional Neural Network (CNN) policy network (the agent itself).
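As a rough illustration, such a preprocessing chain could look like the sketch below (OpenCV/NumPy; the crop fraction, the HSV thresholds, and the omission of red segmentation are simplifying assumptions, not the paper's exact values):

```python
from collections import deque

import cv2
import numpy as np

SEQ_LEN = 5  # number of stacked frames fed to the policy network
frame_buffer = deque(maxlen=SEQ_LEN)

def preprocess(frame_bgr):
    """Resize, crop, color-segment, and normalize one camera frame."""
    # Resize to a small resolution for faster processing.
    small = cv2.resize(frame_bgr, (80, 60))   # (width, height)
    # Crop away the upper part, which carries no lane information.
    cropped = small[20:, :]                   # -> 40 x 80
    # Keep only lane markings: white and yellow. (Red is omitted here
    # for brevity; its hue wraps around 0/180 and needs two ranges.)
    hsv = cv2.cvtColor(cropped, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)
    segmented = cv2.bitwise_and(cropped, cropped, mask=mask)
    # Normalize to [0, 1].
    return segmented.astype(np.float32) / 255.0

def observation(frame_bgr):
    """Return the stacked (40, 80, 15) observation; the first frame is
    repeated until the buffer fills."""
    frame_buffer.append(preprocess(frame_bgr))
    while len(frame_buffer) < SEQ_LEN:
        frame_buffer.append(frame_buffer[-1])
    return np.concatenate(list(frame_buffer), axis=-1)  # 5 x 3 = 15 channels
```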
The agent is trained in the simulator with the DQN algorithm based on a reward function that describes how accurately the robot follows the optimal curve. The output of the network is mapped to wheel speed commands.
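The paper defines its own reward; purely as an illustration of the idea, a lane-following reward often combines forward progress with penalties for deviating from the optimal curve, e.g.:

```python
def reward(speed, dist_from_center, heading_error):
    """Hypothetical lane-following shaping: reward speed along the lane,
    penalize lateral offset and heading error (all values normalized)."""
    return speed * (1.0 - abs(dist_from_center)) - 0.5 * abs(heading_error)
```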
The workings
The CNN was trained on the preprocessed images. The network was designed so that inference can be performed in real time on a computer with limited resources (i.e., one without a dedicated GPU). The input of the network is a tensor with the shape (40, 80, 15), the result of stacking five RGB images. The network consists of three convolutional layers, each followed by ReLU (nonlinearity) and MaxPool (dimension reduction) operations.
The convolutional layers use 32, 32, and 64 filters of size 3 × 3, and the MaxPool layers use 2 × 2 filters. The convolutional layers are followed by fully connected layers with 128 and 3 outputs. The output of the last layer selects one of three actions: turning left, turning right, or going straight; the selected action is then mapped to wheel speed commands.
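A minimal PyTorch sketch of a network matching this description (the padding scheme and the wheel-speed values are assumptions, not the paper's exact choices):

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Three conv layers (32, 32, 64 filters of 3x3), each followed by ReLU
    and 2x2 MaxPool, then fully connected layers with 128 and 3 outputs."""

    def __init__(self, in_channels=15, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # With 40x80 inputs and "same" padding, three 2x2 pools leave a 5x10 map.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 10, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):  # x: (batch, 15, 40, 80)
        return self.head(self.features(x))

# The argmax over the three Q-values picks the action, which is then mapped
# to fixed (left_wheel, right_wheel) speed commands; the values are hypothetical.
WHEEL_COMMANDS = {0: (0.3, 0.6),   # turn left
                  1: (0.6, 0.6),   # go straight
                  2: (0.6, 0.3)}   # turn right

q_values = PolicyNet()(torch.zeros(1, 15, 40, 80))
action = int(q_values.argmax(dim=1))
left, right = WHEEL_COMMANDS[action]
```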
Learn more
Our work was acknowledged and presented at the IEEE World Congress on Computational Intelligence 2020 conference. We plan to publish the source code after the AI-DO 5 competition. Our paper is available on ieeexplore.ieee.org, deepai.org and arxiv.org.
Check out our sim and real demo on YouTube, performed at our Duckietown Robotarium put together at the Budapest University of Technology and Economics.



Duckietown and NVIDIA work together for accessible AI and robotics education: Meet the NVIDIA powered Duckiebot


NVIDIA GTC, October 6, 2020: Duckietown and NVIDIA align efforts to push the boundaries of accessible, state-of-the-art higher-education in robotics and AI. The tangible outcome is a brand new “Founder’s edition” Duckiebot, which will be broadly available from January 2021, powered by the new NVIDIA Jetson Nano 2GB platform.
Read the full NVIDIA announcement here.
Meet the NVIDIA powered Duckiebot
Autonomy is already changing the world. Duckietown and NVIDIA recognize the importance of hands-on education in robotics and AI to empower everyone to understand and design the next generations of autonomy.
The result of this collaboration is a new NVIDIA-powered Duckiebot built around the new Jetson Nano 2GB board, which enables local execution of machine learning agents in the Duckietown ecosystem.
To celebrate this special occasion, the Duckiebot has been redesigned to include: new sensors (time of flight, IMU, encoders), a new custom-designed battery providing real time diagnostics (state of charge, remaining autonomy and other health metrics), and fun accessories like a screen to visualize key metrics. All of this while keeping the price accessible for anyone willing to experience the challenges of a real-life robotic ecosystem.

A great team
“The new NVIDIA Jetson Nano 2GB is the ultimate starter AI computer for educators and students to teach and learn AI at an incredibly affordable price.” said Deepu Talla, Vice President and General Manager of Edge Computing at NVIDIA. “Duckietown and its edX MOOC are leveraging Jetson to take hands-on experimentation and understanding of AI and autonomous machines to the next level.”
Learn more
To know more about the technical specifications of the new NVIDIA powered Duckiebot, or to pre-order yours, visit the Duckietown project shop here.
The new Duckiebot will also be used in the “Self-driving Cars with Duckietown” Massive Open Online Course (MOOC) that will be held in March 2021 on edX. You can find more information about the MOOC here.

Community Spotlight: Arian Houshmand – Control Algorithms for Traffic
Control algorithms to improve traffic



The goal
The goal of this project is to design control and optimization technologies that enable a plug-in hybrid electric vehicle (PHEV) to communicate with other cars and city infrastructure and act on that information. By providing cars with situational self-awareness, they will be able to efficiently calculate the best possible route, accelerate and decelerate as needed, and manage their powertrain. This is an important task toward advancing the vision to create an ‘Internet of Cars,’ in which connected and self-driving cars operate seamlessly with each other and traffic infrastructure, improving fuel efficiency and safety, and reducing traffic congestion and pollution.
Today’s commercially available self-driving cars rely on costly sensors, specifically radar, camera, and LIDAR (light detection and ranging), to operate semi-autonomously. In the NEXTCAR project, BU researchers and their collaborators are looking to go beyond that by developing decision-making algorithms that improve the autonomous operation of a single hybrid vehicle, as well as algorithms for communication between vehicles and their environment, enabling self-driving cars to cooperate and interact within their socio-cyber-physical environment.
Several different functions have been developed throughout this project including:
● Eco-routing: finding the optimal route for a vehicle to travel between two points, i.e. the route that consumes the least energy (see the sketch after this list).
● Eco-AND (Economical Arrival and Departure): an optimal control framework for approaching a traffic light without stopping at the intersection, exploiting traffic light cycle timing information.
● CACC (Cooperative Adaptive Cruise Control): an extension of adaptive cruise control (ACC) that leverages vehicle-to-vehicle (V2V) communication to increase safety and energy efficiency by reducing headway.
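Eco-routing is essentially a shortest-path search in which edge weights are energy costs rather than distances. A minimal sketch under that reading (plain Dijkstra over a toy road graph; the project's actual energy model and optimization are certainly more involved):

```python
import heapq

def eco_route(graph, start, goal):
    """Dijkstra's algorithm with energy (e.g. kWh) as the edge weight.
    graph: {node: [(neighbor, energy_cost), ...]}"""
    pq = [(0.0, start, [start])]
    best = {start: 0.0}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        for nxt, e in graph.get(node, []):
            new_cost = cost + e
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(pq, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

# Toy road graph: the direct edge is shorter but climbs a hill,
# so the detour wins on energy.
roads = {"A": [("B", 0.9), ("C", 0.4)], "C": [("B", 0.3)]}
print(eco_route(roads, "A", "B"))   # -> (0.7, ['A', 'C', 'B'])
```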
To validate the developed technologies, researchers first test the algorithms in simulation. After verifying them in simulation, they implement the algorithms on Duckietown, and finally deploy them on real cars (Audi A3 e-tron) at the University of Michigan’s Mcity test track for self-driving cars.

We use Duckietown to train students on how to implement their algorithms on embedded systems and also as a means to demonstrate our developed technologies in action and in a live setting. Since most of our research focuses on Connected and Automated Vehicles (CAVs), we need to establish connections between individual Duckiebots and traffic lights. As a result, we created a platform for exchanging information and control commands between all the cars and traffic lights.
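As a rough illustration of what such a platform involves, here is a minimal rospy sketch that relays a traffic light's phase to a vehicle; the topic names and the use of std_msgs/String are hypothetical, not the project's actual interface:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

class V2INode(object):
    """Relays a traffic light's phase to a subscribed Duckiebot."""

    def __init__(self):
        rospy.init_node("v2i_bridge")
        # Hypothetical topics; each Duckiebot and traffic light would
        # live in its own namespace.
        self.pub = rospy.Publisher("/duckiebot1/traffic_light_phase",
                                   String, queue_size=1)
        rospy.Subscriber("/traffic_light4/phase", String, self.on_phase)

    def on_phase(self, msg):
        rospy.loginfo("light phase: %s", msg.data)
        self.pub.publish(msg)  # forward to the vehicle's controller

if __name__ == "__main__":
    V2INode()
    rospy.spin()
```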
Online localization of Duckiebots is a challenging task, and is missing from the current framework. We relied on our external motion capture sensors (OptiTrack) to localize the robots.
Duckietown is a nice platform for performing experiments on autonomous robots, since it is relatively simple to set up the town and the Duckiebots. Moreover, the built-in perception and lane-keeping capabilities are very useful for kicking off experiments quickly. Traffic lights and signs also help create different scenarios for testing algorithms in city-like settings.
Two additions would make Duckietown even more useful for our application: feedback sensors for measuring wheel rotational speed/position, since it is difficult to correct for wheel-speed errors without them, and a ROS node for exchanging information between robots and traffic lights, for testing collaborative control algorithms.
Learn more about Duckietown
The Duckietown platform enables state-of-the-art robotics and AI learning experiences.
It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.
Tell us your story
Are you an instructor, learner, researcher or professional with a Duckietown story to tell? Reach out to us!

AI-DO 3 – Urban Event Winners
In case you missed it, AI-DO 3 has come and gone. Interested in reliving the competition? Here’s the video.
We had a great time at NeurIPS hosting the Third Edition of the AI Driving Olympics. As usual the sound of Duckies attracted an engaging and supportive crowd.
Racing Event
The competition began with the Racing Event, hosted by AWS DeepRacer. They ran their top 10 submissions and selected as winner the one that completed the fastest lap.
Racing Event Winner
Ayrat Baykov, with a lap time of 8.08 seconds


Advanced Perception Event
The winners of the Advanced Perception Event, hosted by APTIV and the nuScenes dataset, were announced. Luckily, a member of the winning team was present to accept the award.
Rank 3
CenterTrack – Open and Vision
Rank 2
VV_Team
Rank 1
StanfordIPRL-TRI


Urban Event
The competition culminated with Duckietown’s own Urban Driving Event, where we ran the top submissions for each of the three challenges on our competition tracks.
Winners
Lane Following
JBRRussia1: Konstantin Chaika, Nikita Sazanovich, Kirill Krinkin, Max Kuzmin

Lane Following with Vehicles
phmarm

Lane Following with Vehicles and Intersections
frank_qcd_qk

Final Scoreboard

A few pictures from the event
Congratulations to all the winners and thanks for participating in the competition. We look forward to seeing you for AI-DO 4!

STEM Intensive Learning with Prof. Krinkin
In the world of engineering education, there are many excellent courses, but often the curriculum has one serious drawback – the lack of good connectivity between different topics. Over in Saint Petersburg, Russia, Kirill Krinkin from SPbETU and JetBrains Research has been using Duckietown to address this problem through an intensive STEM winter course.

STEM Intensive Learning Approach
by Kirill Krinkin
The first part of the school program was a week of classes in the base topic areas which were chosen to complement each other and help students see the connection between seemingly different things – mathematics, electronics and programming.
Of course, the main goal of the program was to give students the opportunity to put their new found knowledge into practice themselves.

Duckietown was the perfect fit for our course because it offered a hands-on learning experience for all of our main topic areas, and once we covered those subjects in the first lessons, we challenged the students with much more complex tasks – in the form of projects – in the second half of the course. It made for an exciting and engaging curriculum because students could address a problem, write a program to solve it, and then immediately launch it on a real robot.
The main advantage of Duckietown compared to many other platforms is that there is a very small learning curve: people who knew nothing about programming and robotics started working on projects after only a few days!
Overview of the course
Part 1 – Main Topic Areas
Subject 1: Linear Algebra
Students spent one day studying vectors and matrices, systems of linear equations, etc. Practical tasks were run interactively: students solved the proposed tasks individually, while the teacher and the other students offered comments and tips.
Subject 2: Electricity and Simple Circuits
Students studied the basics of electrodynamics: voltage, current, resistance, Ohm’s law, and Kirchhoff’s laws. Practical tasks were partially done in an electric-circuit simulator or worked out on the board, but more time was devoted to building real circuits, such as logic circuits, oscillatory circuits, etc.
Subject 3: Computer Architecture
In a sense, this subject is a bridge connecting physics and programming. Students studied fundamentals whose significance is more theoretical than practical. As practice, they independently designed arithmetic-logic circuits in a simulator.



Subject 4: Programming
Python 2 was chosen as the programming language, since it is the language used for programming under ROS. After we taught the material and gave examples of solving problems, students were challenged with problems of their own to solve, which we then evaluated.
Subject 5: ROS
Here the students started programming robots. Throughout the school day, students sat at computers, running the program code that the teacher talked about. They were able to independently launch the basic units of ROS, and also get acquainted with the Duckietown project. At the end of this day, students were ready to begin the design part of the course – solving practical problems.
Part 2 – Projects

1. Calibration of colors
Duckiebots need to recalibrate the camera when lighting conditions change, so this project focused on the task of automatic calibration. The problem is that color ranges are very sensitive to light. Participants implemented a utility that highlights the desired colors (red, white, and yellow) in the frame and builds ranges for each color in HSV format.
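One simple way to build such ranges, shown in the sketch below, is to sample pixels the user has labeled and take per-channel percentiles in HSV space; this is an assumed approach for illustration, not necessarily how the participants' utility works:

```python
import cv2
import numpy as np

def hsv_range(frame_bgr, mask, lo_pct=2, hi_pct=98):
    """Compute an HSV (lower, upper) bound from pixels selected by a boolean
    mask, e.g. pixels the user marked as 'yellow lane marking'. Note that
    red needs two ranges, since its hue wraps around 0/180 in OpenCV."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    samples = hsv[mask]  # (N, 3) array of labeled pixels
    lower = np.percentile(samples, lo_pct, axis=0).astype(np.uint8)
    upper = np.percentile(samples, hi_pct, axis=0).astype(np.uint8)
    return lower, upper

# Re-segmenting with the learned range:
# lower, upper = hsv_range(frame, yellow_mask)
# binary = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), lower, upper)
```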

2. Duck Taxi
The idea of this project was that a Duckiebot could stop near some object, pick it up, and then continue along a certain route. Of course, a bright yellow Duckie was the chosen passenger. The participants divided the task into two parts, detection and movement along the graph:
- drive while no Duckie is detected;
- the Duckie is identified as a yellow spot with an orange triangle 🙂;
- a route is built according to the road graph and the destination point.

3. Building a road map
The goal of this project was to build a road map without providing a priori environmental data to the Duckiebot, relying solely on camera data. Here’s the working scheme of the algorithm developed by the participants:

4. The patrol car
This project was invented by the students themselves. They proposed to teach one Duckiebot, the “Patrol”, to find, follow, and stop an “Intruder” Duckiebot. The students used ArUco markers to identify the Intruder on the road, since they are easy to work with and allow you to determine the orientation of and distance to the marker. Next, the team changed the state machine of the Patrol Duckiebot so that, when approaching the stop line, the bot would continue through the intersection without stopping. Finally, the team got the Patrol Duckiebot to stop the Intruder by connecting to it via SSH and turning it off. The algorithm of the patrol robot can be represented as the following scheme:
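Separately from that scheme, the marker detection and ranging step can be sketched with OpenCV's legacy aruco module (opencv-contrib-python); the dictionary choice, marker size, and camera intrinsics below are placeholder assumptions:

```python
import cv2
import numpy as np

# Calibration values are placeholders; a real Duckiebot would load its own.
camera_matrix = np.array([[320.0, 0.0, 320.0],
                          [0.0, 320.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.05  # marker side length in meters (assumed)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def find_intruder(frame):
    """Return (distance_m, rotation_vec) of the first detected marker, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, camera_matrix, dist_coeffs)
    distance = float(np.linalg.norm(tvecs[0]))  # straight-line distance
    return distance, rvecs[0]
```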

Summary
Students walked away from our STEM intensive learning program with the foundations of autonomous driving, from the theoretical math and physics behind the programming and circuitry to the complex challenges of navigating through a city. We succeeded in remaining accessible to beginners in each area while also providing materials for repetition and consolidation to experienced students. Duckietown is an excellent resource for bringing education to life.
After our course ended, students were asked about their experience: 100% of them said that the program exceeded their expectations. We can certainly say that the Duckietown platform played a pivotal role in our success.

Round 3 of the AI Driving Olympics is underway!
The AI Driving Olympics (AI-DO) is back!
We are excited to announce the launch of the AI-DO 3, which will culminate in a live competition event to be held at NeurIPS this Dec. 13-14.
The AI-DO is a global robotics competition comprising a series of events based on autonomous driving. This year there are three events: urban (Duckietown), advanced perception (nuScenes), and racing (AWS DeepRacer). The objective of the AI-DO is to engage people from around the world in friendly competition while simultaneously benchmarking and advancing the field of robotics and AI.
Check out our official press release.

- Learn more about the AI-DO competition here.


If you've already joined the competition we want to hear from you!
Duckietown Workshop at RoboCup Junior 2019
In collaboration with the RoboCup Federation, the Duckietown Foundation will be offering workshops at RoboCup 2019 in Sydney, Australia, providing a hands-on introduction to the Duckietown platform.

We will be hosting three one-day workshops as part of RoboCup 2019 from July 4-6, 2019 for teachers, students, and independent learners who are interested in finding out more about the Duckietown platform. Attendance is completely free and everyone is welcome to apply, even if you are not participating in RoboCup.
There are no formal requirements, though basic familiarity with GNU/Linux and shell usage is recommended.
If you would like to apply to attend a workshop, please complete this form.
We will have Duckiebots and Duckietowns for participants to use. However, you are more than welcome to bring your own Duckiebots, available for purchase at https://get.duckietown.com.


Congratulations to the winners of the second edition of the AI Driving Olympics!

Team JetBrains came out on top in all three challenges
It was a busy (and squeaky) few days at the International Conference on Robotics and Automation in Montreal for the organizers and competitors of the AI Driving Olympics.
The finals were kicked off by a semifinals round, in which we ran the top 5 submissions from the Lane Following in Simulation leaderboard. The finalists (JBRRussia and MYF) moved on to the more complicated challenges of Lane Following with Vehicles and Lane Following with Vehicles and Intersections.

If you couldn’t make it to the event and missed the live stream on Facebook, here’s a short video of the first run of the JetBrains Lane Following submission.
Thanks to everyone who competed, dropped in to say hello, and cheered on the finalists by sending the song of the Duckie down the corridors of the Palais des Congrès.
A few pictures from the event
Don't know much about the AI Driving Olympics?
It is an accessible and reproducible autonomous car competition designed with straightforward standardized hardware, software and interfaces.

Get Started
Step 1: Build and test your agent with our available templates and baselines

Step 2: Submit to a challenge
Check out the leaderboard

View your submission in simulation
Step 3: Run your submission on a robot in a Robotarium