AI Driving Olympics 2021: Urban League Finalists

This year’s embodied urban league challenges were lane following (LF), lane following with vehicles (LFV), and lane following with intersections (LFI). To account for differences between the real world and simulation, this edition’s finalists can make one additional submission to the real-world challenges to improve their scores. Finalists are the authors of AI-DO 2021 submissions in the top 5 ranks for each challenge. This year’s finalists are:

LF

  • András Kalapos
  • Bence Haromi
  • Sampsa Ranta
  • ETU-JBR Team
  • Giulio Vaccari

LFV

  • Sampsa Ranta
  • Adrian Brucker
  • Andras Beres
  • David Bardos

LFI

  • András Kalapos
  • Sampsa Ranta
  • Adrian Brucker
  • Andras Beres

The deadline for submitting the “final” submissions is Dec. 9th, 2 pm CET. All submissions received after this time will count towards the next edition of AI-DO.

Don’t forget to join the #aido channel on the Duckietown Slack for updates!

Congratulations to all the participants, and best of luck to the finalists!


AI-DO 5 competition leaderboard update

AI-DO 5 pre-finals update

With the finals day of the fifth edition of the AI Driving Olympics approaching, and 1326 solutions submitted by 94 competitors across three challenges, it is time to glance at the leaderboards.

Leaderboards updates

This year’s challenges are lane following (LF), lane following with pedestrians (LFP), and multibody lane following with other vehicles (LFV_multi). Learn more about the challenges here. Each submission can be sent to multiple challenges. Let’s look at some of the most promising or interesting submissions.

The Montréal menace

Raphael Jean at Mila / University of Montréal is a new entrant for this year. 

An interesting submission: submission #12962 

All of Raphael’s submissions.

The submissions from the cold

Team JetBrains from Saint Petersburg won previous editions of AI-DO, and they have been dominating the leaderboards this year as well.

Interesting submission: submission #12905

All of JetBrains’ submissions: JBRRussia1.


BME Conti

Another contender is PhD student Robert Moni (BME-Conti) from Hungary.

Interesting submission: submission #12999

All submissions: timur-BMEconti


Deadline for submissions

The deadline for submitting to the AI-DO 5 is 12am EST on Thursday, December 10th, 2020. The top three entries (more if time allows) in each simulation challenge will be evaluated on real robots and presented at the finals event at NeurIPS 2020, which happens at 5pm EST on Saturday, December 12.

The “Self-Driving Cars with Duckietown” Massive Open Online Course on edX

We are launching a massive open online course (MOOC): “Self-Driving Cars with Duckietown” on edX, and it is free to attend! 

This course is made possible thanks to the support of the Swiss Federal Institute of Technology in Zurich (ETHZ), in collaboration with the University of Montreal, the Duckietown Foundation, and the Toyota Technological Institute at Chicago.

This course combines remote and hands-on learning with real-world robots. It is offered on edX, the trusted platform for learning, and it is now open for enrollment.

Learning activities will support the use of Jetson Nano equipped Duckiebots, powered by NVIDIA.

Learning autonomy

Participants will engage in hands-on software and hardware learning experiences, with a focus on overcoming the challenges of deploying autonomous robots in the real world.

This course will explore the theory and implementation of model- and data-driven approaches for making a model self-driving car drive autonomously in an urban environment.

Pedestrian detection


Why Self-Driving Cars with Duckietown?

Teaching autonomy requires a fundamentally different approach compared to other computer science and engineering disciplines, because it is multi-disciplinary. Mastering it requires expertise in domains ranging from fundamental mathematics to practical machine-learning skills.

Robot Perception 

Robots operate in the real world, and theory and practice often do not play well together. There are many hardware platforms and software tools, each with its own strengths and weaknesses. It is not always clear which tools are worth investing time in mastering, and how these skills will generalize to different platforms.

Duckiebot Detection

Learning through challenges

Progressing through behaviors of increasing complexity, participants uncover concepts and tools that address the limitations of previous approaches. This lets participants get Duckiebots to actually do things, while gradually revisiting concepts through various technical frameworks. Simulation and real-world experiments are performed using a Python, ROS, and Docker based software stack.

Robot Planning


Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World

We asked Róbert Moni to tell us more about his recent work. Enjoy the read!

The author's perspective

Most of us, proud members of the nerd community, first experience driving through the discrete actions taken on our keyboards. We believe that the harder we push the forward arrow (or the W-key), the faster the car in the game will accelerate (sooo true 😊). Few of us believe that this task can be solved with machine learning. Even fewer of us believe that it can be done accurately and robustly with a basic Deep Reinforcement Learning (DRL) method known as Deep Q-Networks (DQN).

It turned out to be true in the case of a Duckiebot; even more, with some added computer vision techniques, it was able to perform well both in simulation (where the training process was carried out) and in the real world.

The pipeline

The complete training pipeline, carried out in the gym-duckietown environment, is visualized in the figure above and works as follows. First, the camera images go through several preprocessing steps (sketched in code after the list):

  • resizing to a smaller resolution (60×80) for faster processing;
  • cropping the upper part of the image, which contains no useful information for navigation;
  • segmenting important parts of the image (the lane markings) based on their color;
  • normalizing the image;
  • finally, forming a sequence from the last 5 camera images, which becomes the input of the Convolutional Neural Network (CNN) policy network (the agent itself).
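In Python with OpenCV, this preprocessing might look roughly like the sketch below. The HSV thresholds, exact crop size, and buffer handling are illustrative assumptions, not the authors' actual values:

```python
import numpy as np
import cv2
from collections import deque

FRAMES = deque(maxlen=5)  # rolling buffer of the last 5 processed frames

def preprocess(frame_bgr):
    img = cv2.resize(frame_bgr, (80, 60))   # downscale to 60x80 for speed
    img = img[20:, :, :]                    # crop the upper rows (no lane info)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Color-segment the lane markings; these thresholds are illustrative
    white  = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    red    = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    obs = cv2.merge([red, white, yellow]).astype(np.float32) / 255.0  # normalize
    FRAMES.append(obs)
    while len(FRAMES) < 5:                  # pad the buffer at episode start
        FRAMES.append(obs)
    return np.concatenate(list(FRAMES), axis=2)  # shape (40, 80, 15)
```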

The agent is trained in the simulator with the DQN algorithm based on a reward function that describes how accurately the robot follows the optimal curve. The output of the network is mapped to wheel speed commands.
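The post does not give the exact reward function; as a hedged illustration, a reward that scores how closely the robot tracks the optimal curve could combine lateral distance, heading error, and speed, e.g.:

```python
# Illustrative reward shaping (an assumption, not the authors' exact formula):
# encourage forward progress while penalizing lateral and heading deviation
# from the optimal curve.
def reward(speed, dist_from_curve, angle_from_tangent):
    return speed * (1.0 - abs(dist_from_curve)) - 0.5 * abs(angle_from_tangent)
```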

The workings

The CNN was trained with the preprocessed images. The network was designed such that the inference can be performed real-time on a computer with limited resources (i.e. it has no dedicated GPU). The input of the network is a tensor with the shape of (40, 80, 15), which is the result of stacking five RGB images. The network consists of three convolutional layers, each followed by ReLU (nonlinearity function) and MaxPool (dimension reduction) operations.

The convolutional layers use 32, 32, 64 filters with size 3 × 3. The MaxPool layers use 2 × 2 filters. The convolutional layers are followed by fully connected layers with 128 and 3 outputs. The output of the last layer corresponds to the selected action. The output of the neural network (one of the three actions) is mapped to wheel speed commands; these actions correspond to turning left, turning right, or going straight, respectively.
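A PyTorch-style sketch of this architecture follows; only the layer sizes come from the description above, while padding, strides, and the framework itself are assumptions:

```python
import torch
import torch.nn as nn

class PolicyCNN(nn.Module):
    """Sketch of the described policy network: three conv layers (32, 32, 64
    filters, 3x3), each with ReLU + 2x2 MaxPool, then FC layers of 128 and 3."""
    def __init__(self, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(15, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),  # flattened size inferred on first call
            nn.Linear(128, n_actions),      # one output per action
        )

    def forward(self, x):
        # x: (batch, 15, 40, 80) -- the (40, 80, 15) observation, channels-first
        return self.head(self.features(x))

# Greedy action: index 0/1/2 mapped to left / straight / right wheel commands
policy = PolicyCNN()
action = policy(torch.zeros(1, 15, 40, 80)).argmax(dim=1)
```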

Learn more

Our work was presented at the IEEE World Congress on Computational Intelligence (WCCI) 2020 conference. We plan to publish the source code after the AI-DO 5 competition. Our paper is available on ieeexplore.ieee.org, deepai.org, and arxiv.org.

Check out our sim and real demo on YouTube, performed at our Duckietown Robotarium put together at the Budapest University of Technology and Economics.

Learn robotics from the comfort of your home with Duckietown

These are difficult times for us all. 

With physical distancing directives issued across the globe and many people restricted to their homes, we want to reach out (virtually) and offer our support.

To help you beat the isolation blues, the Duckietown Foundation is … towards the next 100 orders of Duckiebots, Starter Kits, and Navigation Packs.

Remember that you can still learn about robotics without a robot!

Almost all of our resources remain open and available for your use.

Join us on Slack, peruse our library, or start training for the Urban League of the AI Driving Olympics.

Due to the closure of academic institutions, the Duckietown Autolabs are temporarily closed.

Coming soon: online demonstrations and tutorials to help you get started!

Have fun learning and stay safe!

Community Spotlight: Arian Houshmand – Control Algorithms for Traffic

Boston University, March 7, 2019: No one likes sitting in traffic: it is a waste of time and damaging to the environment. Thankfully, researcher Arian Houshmand from the Boston University CODES lab is on the case, and he’s using Duckietown to help solve the problem.

Control algorithms to improve traffic

Traffic congestion around the world is worsening, according to transport data firm INRIX. In the U.S. alone, Americans wasted an average of 97 hours in traffic in 2018 – that’s two precious weekends’ worth of time. Sitting in traffic also cost them nearly $87 billion in 2018, an average of $1,348 per driver. Clearly, the need for smart transportation is becoming urgent, not only to relieve the mental and financial strain on drivers, but to address the significant economic toll on affected cities. Fortunately, the development of intelligent mobility technologies is advancing. In an ongoing research project funded by the U.S. Department of Energy’s (DOE) Advanced Research Projects Agency-Energy (ARPA-E) NEXTCAR program, BU researchers, in collaboration with researchers from the University of Delaware, the University of Michigan, Oak Ridge National Lab, and Bosch, are developing technologies for Connected and Automated Vehicles (CAVs) to increase their fuel efficiency and, as a by-product, reduce traffic congestion.

The goal

The goal of this project is to design control and optimization technologies that enable a plug-in hybrid electric vehicle (PHEV) to communicate with other cars and city infrastructure and act on that information. By providing cars with situational self-awareness, they will be able to efficiently calculate the best possible route, accelerate and decelerate as needed, and manage their powertrain. This is an important task toward advancing the vision to create an ‘Internet of Cars,’ in which connected and self-driving cars operate seamlessly with each other and traffic infrastructure, improving fuel efficiency and safety, and reducing traffic congestion and pollution.

Today’s commercially available self-driving cars rely on costly sensors, specifically radar, cameras, and LIDAR (light detection and ranging), to operate semi-autonomously. In the NEXTCAR project, BU researchers and their collaborators are looking to go beyond that by developing decision-making algorithms that improve the autonomous operation of a single hybrid vehicle, as well as algorithms for communication between vehicles and their environment, enabling self-driving cars to cooperate and interact within their socio-cyber-physical environment.

Several functions have been developed throughout this project, including:

  • Eco-routing: finding the optimal route for a vehicle to travel between two points, i.e. the route with the lowest energy cost.
  • Eco-AND (Economical Arrival and Departure): an optimal control framework for approaching a traffic light without stopping at the intersection, using traffic light cycle time information.
  • CACC (Cooperative Adaptive Cruise Control): an extension of adaptive cruise control (ACC) that leverages vehicle-to-vehicle (V2V) communication to increase safety and energy efficiency by reducing headway.
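Of these, eco-routing is the most self-contained to sketch: it is essentially a shortest-path search where edge weights are energy costs rather than distances. A minimal sketch under that assumption (the project's actual energy models are far richer):

```python
import heapq

def eco_route(graph, start, goal):
    """Dijkstra's algorithm over a graph whose edge weights are energy costs.

    graph: {node: [(neighbor, energy_cost), ...]} -- an assumed toy format.
    Returns (total_energy, path)."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, energy in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + energy, nxt, path + [nxt]))
    return float("inf"), []
```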

In order to validate and test the developed technologies, the researchers first use simulation environments to test the algorithms. After verifying them in simulation, they implement the algorithms on Duckietown, and finally deploy them on real cars (an Audi A3 e-tron) at the University of Michigan’s Mcity (a test track for self-driving cars).

We use Duckietown to train students on how to implement their algorithms on embedded systems and also as a means to demonstrate our developed technologies in action and in a live setting. Since most of our research focuses on Connected and Automated Vehicles (CAVs), we need to establish connections between individual Duckiebots and traffic lights. As a result, we created a platform for exchanging information and control commands between all the cars and traffic lights.
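The post does not describe the platform's internals. As a rough illustration of the idea, a minimal ROS node in which every Duckiebot and traffic light broadcasts its state on a shared topic might look like this (the topic name, message type, payload, and rate are all assumptions):

```python
import rospy
from std_msgs.msg import String

class V2XNode(object):
    """Hypothetical V2X-style exchange over a shared ROS topic."""
    def __init__(self, agent_name):
        rospy.init_node(agent_name + "_v2x")
        self.agent_name = agent_name
        # every agent publishes its state and listens to everyone else's
        self.pub = rospy.Publisher("/v2x/state", String, queue_size=10)
        rospy.Subscriber("/v2x/state", String, self.on_state)

    def on_state(self, msg):
        # ignore our own broadcasts; react to the other agents' states
        if not msg.data.startswith(self.agent_name):
            rospy.loginfo("received: %s", msg.data)

    def spin(self):
        rate = rospy.Rate(10)  # broadcast at 10 Hz
        while not rospy.is_shutdown():
            self.pub.publish(self.agent_name + ": pose/intent payload")
            rate.sleep()

if __name__ == "__main__":
    V2XNode("duckiebot01").spin()
```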

Online localization of Duckiebots is a challenging task, and is missing from the current framework. We relied on our external motion capture sensors (OptiTrack) to localize the robots.

Duckietown is a nice platform for performing experiments on autonomous robots, since it is relatively simple to set up the town and the Duckiebots. Moreover, the built-in perception and lane-keeping capabilities are very useful for kicking off experiments quickly. Traffic lights and signs are also helpful for creating different scenarios to test algorithms in city-like settings.

Two things would make Duckietown even more useful for our application: feedback sensors for measuring wheel rotational speed/position, since it is difficult to correct for wheel speed errors without them, and a ROS node for exchanging information between robots and traffic lights, for testing collaborative control algorithms.

Learn more about Duckietown

The Duckietown platform enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell? Reach out to us!

AI-DO 3 – Urban Event Winners

In case you missed it, AI-DO 3 has come and gone. Interested in reliving the competition? Here’s the video.

We had a great time at NeurIPS hosting the Third Edition of the AI Driving Olympics. As usual, the sound of Duckies attracted an engaging and supportive crowd.


Racing Event

The competition began with the Racing Event, hosted by AWS DeepRacer. They ran their top 10 submissions, and the winner was whoever completed the fastest lap.

Racing Event Winner 
Ayrat Baykov at 8:08 seconds


Advanced Perception Event

The winners of the Advanced Perception Event, hosted by APTIV and the nuScenes dataset, were announced. Luckily, a member of the winning team was present to accept the award.

Rank 3
CenterTrack – Open and Vision

Rank 2
VV_Team

Rank 1
StanfordIPRL-TRI


Urban Event

The competition culminated with Duckietown’s own Urban Driving Event, where we ran the top submissions for each of the three challenges on our competition tracks.

Winners


Lane Following 

JBRRussia1: Konstantin Chaika, Nikita Sazanovich, Kirill Krinkin, Max Kuzmin

Lane Following with Vehicles

phmarm

Lane Following with Vehicles and Intersections

frank_qcd_qk


Final Scoreboard

A few pictures from the event

Congratulations to all the winners and thanks for participating in the competition. We look forward to seeing you for AI-DO 4!

Community Spotlight: Kirill Krinkin – STEM Intensive Learning Approach


In the world of engineering education, there are many excellent courses, but often the curriculum has one serious drawback – the lack of good connectivity between different topics. Over in Saint Petersburg, Russia, Kirill Krinkin from SPbETU and JetBrains Research has been using Duckietown to address this problem through an intensive STEM winter course.

STEM Intensive Learning Approach

by Kirill Krinkin

The first part of the school program was a week of classes in the base topic areas which were chosen to complement each other and help students see the connection between seemingly different things – mathematics, electronics and programming.

Of course, the main goal of the program was to give students the opportunity to put their new found knowledge into practice themselves.

Duckietown was the perfect fit for our course because it offered a hands-on learning experience for all of our main topic areas. Once we covered those subjects in the first lessons, we challenged the students with much more complex tasks – in the form of projects – in the second half of the course. It made for an exciting and engaging curriculum, because students could address a problem, write a program to solve it, and then immediately launch it on a real robot.

The main advantage of Duckietown compared to many other platforms is its very gentle learning curve: people who knew nothing about programming and robotics started working on projects after only a few days!

Overview of the course

Part 1 – Main Topic Areas

Subject 1: Linear Algebra

Students spent one day studying vectors and matrices, systems of linear equations, and related topics. Practical sessions were interactive: students solved the proposed tasks individually, while the teacher and the other students offered comments and tips.


Subject 2: Electricity and Simple Circuits

Students studied the basics of electrodynamics: voltage, current, resistance, Ohm’s law, and Kirchhoff’s laws. Practical tasks were partially done in an electric circuit simulator or worked out on the board, but more time was devoted to building real circuits, such as logic circuits and oscillatory circuits.


Subject 3: Computer Architecture

In a sense, this subject is a bridge connecting physics and programming. Students studied the fundamentals, whose significance is more theoretical than practical. For practice, students independently designed arithmetic-logic circuits in a simulator.


Subject 4: Programming

Python 2 was chosen as the programming language, as it is the one used for programming with ROS. After we taught the material and gave examples of solving problems, students were challenged with problems of their own to solve, which we then evaluated.


Subject 5: ROS

Here the students started programming robots. Throughout the school day, students sat at computers, running the program code that the teacher talked about. They were able to independently launch the basic units of ROS, and also get acquainted with the Duckietown project. At the end of this day, students were ready to begin the design part of the course – solving practical problems.

Part 2 – Projects

1. Calibration of colors

Duckiebots need to recalibrate the camera when lighting conditions change, so this project focused on automatic calibration. The problem is that color ranges are very sensitive to light. Participants implemented a utility that highlights the desired colors in the frame (red, white, and yellow) and builds ranges for each color in HSV format.
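A minimal sketch of how such a utility could derive an HSV range from user-selected pixels; the percentile margins and the selection mechanism are assumptions, not the students' actual code:

```python
import numpy as np
import cv2

def hsv_range(frame_bgr, selection_mask, lo_pct=2, hi_pct=98):
    """Given a frame and a mask of pixels the user marked as one color class
    (e.g. the yellow lane markings), return an HSV range for cv2.inRange."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    pixels = hsv[selection_mask > 0]                      # (N, 3) HSV samples
    lo = np.percentile(pixels, lo_pct, axis=0).astype(np.uint8)
    hi = np.percentile(pixels, hi_pct, axis=0).astype(np.uint8)
    return lo, hi  # usage: mask = cv2.inRange(hsv, lo, hi)
```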

2. Duck Taxi

The idea of this project was that a Duckiebot could stop near some object, pick it up, and then continue along a certain route. Of course, a bright yellow Duckie was the chosen passenger. The participants split the task into detection and movement along the road graph:

  • drive while no Duckie is detected;
  • identify the Duckie as a yellow spot with an orange triangle 🙂;
  • build a route to the destination point according to the road graph.
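The detection half could be as simple as thresholding for a large yellow blob; a sketch with assumed thresholds and area cutoff (the routing half would reuse a graph search like the eco-routing sketch earlier):

```python
import cv2

def duckie_detected(frame_bgr, min_area=500):
    """Flag a Duckie passenger as a large yellow blob in the camera frame.
    The HSV band and area cutoff are illustrative assumptions."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # yellow band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,   # OpenCV 4 API
                                   cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)
```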

3. Building a road map

The goal of this project was to build a road map without providing a priori environmental data for the Duckiebot, relying solely on camera data. Here’s the working scheme of the algorithm developed by the participants:

4. The patrol car

This project was invented by the students themselves. They proposed teaching one Duckiebot, the “patrol”, to find, follow, and stop an “intruding” Duckiebot. The students used ArUco markers to identify the intruder on the road, since they are easy to work with and allow you to determine the marker’s orientation and distance. Next, the team changed the state machine of the patrol Duckiebot so that, when approaching a stop line, the bot would continue through the intersection without stopping. Finally, the team got the patrol Duckiebot to stop the intruder bot by connecting to it via SSH and turning it off. The algorithm of the patrol robot can be represented as the following scheme:
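In lieu of the scheme figure, the patrol logic can be approximated as a small state machine; the states and transitions below are inferred from the description, not taken from the students' code:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()   # lane following, scanning for ArUco markers
    FOLLOW = auto()   # intruder marker seen: follow it, ignore stop lines
    STOP = auto()     # close enough: connect via SSH and shut it down

def step(state, marker_visible, marker_distance, stop_threshold=0.3):
    """One transition of the patrol state machine (threshold is assumed)."""
    if state is State.PATROL and marker_visible:
        return State.FOLLOW
    if state is State.FOLLOW and not marker_visible:
        return State.PATROL
    if state is State.FOLLOW and marker_distance < stop_threshold:
        return State.STOP
    return state
```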

Summary

Students walked away from our STEM intensive learning program with the foundations of autonomous driving, from the theoretical math and physics behind the programming and circuitry to the complex challenges of navigating through a city. We succeeded in remaining accessible to beginners in each area while also providing materials for review and consolidation to more experienced students. Duckietown is an excellent resource for bringing education to life.

After our course ended, students were asked about their experience. 100% of them said that the program exceeded their expectations. We can certainly say that the Duckietown platform played a pivotal role in our success.

Duckietown Workshop at RoboCup Junior 2019

In collaboration with the RoboCup Federation, the Duckietown Foundation will be offering workshops at RoboCup 2019 in Sydney, Australia, providing a hands-on introduction to the Duckietown platform.

We will be hosting three one-day workshops as part of RoboCup 2019, from July 4-6, 2019, for teachers, students, and independent learners who are interested in finding out more about the Duckietown platform. Attendance is completely free and everyone is welcome to apply, even if you are not participating in RoboCup.

There are no formal requirements, though basic familiarity with GNU/Linux and shell usage is recommended.

If you would like to apply to attend a workshop, please complete this form.

We will have Duckiebots and Duckietowns for participants to use. However, you are more than welcome to bring your own Duckiebots, available for purchase at https://get.duckietown.com.
