AI-DO 3 – Urban Event Winners

In case you missed it, AI-DO 3 has come and gone. Interested in reliving the competition? Here’s the video.

We had a great time at NeurIPS hosting the Third Edition of the AI Driving Olympics. As usual, the sound of Duckies attracted an engaging and supportive crowd.

 

Racing Event

The competition began with the Racing Event, hosted by AWS DeepRacer. They ran their top 10 submissions and selected the winner based on who completed the fastest lap.

Racing Event Winner 
Ayrat Baykov at 8.08 seconds

 

Advanced Perception Event

The winners of the Advanced Perception Event, hosted by APTIV and based on the nuScenes dataset, were also announced. Luckily, a member of the winning team was present to accept the award.

Rank 3
CenterTrack – Open and Vision

Rank 2
VV_Team

Rank 1
StanfordIPRL-TRI

 

Urban Event

The competition culminated with Duckietown’s own Urban Driving Event, where we ran the top submissions for each of the three challenges on our competition tracks.

Winners

 

Lane Following 

JBRRussia1: Konstantin Chaika, Nikita Sazanovich, Kirill Krinkin, Max Kuzmin

Lane Following with Vehicles

phmarm

Lane Following with Vehicles and Intersections

frank_qcd_qk

 

Final Scoreboard

A few pictures from the event

Congratulations to all the winners, and thank you to everyone who participated in the competition. We look forward to seeing you at AI-DO 4!

Round 3 of the AI Driving Olympics is underway!

The AI Driving Olympics (AI-DO) is back!

We are excited to announce the launch of AI-DO 3, which will culminate in a live competition event at NeurIPS this Dec. 13-14.

The AI-DO is a global robotics competition that comprises a series of events based on autonomous driving. This year there are three events: urban (Duckietown), advanced perception (nuScenes), and racing (AWS DeepRacer). The objective of the AI-DO is to engage people from around the world in friendly competition, while simultaneously benchmarking and advancing the field of robotics and AI.

Check out our official press release.

  • Learn more about the AI-DO competition here.

If you've already joined the competition, we want to hear from you!

Share your pictures on Facebook and Twitter.

AI-DO 1 at NeurIPS report. Congratulations to our winners!

The winners of AI-DO 1 at NeurIPS


There was a great turnout for the first AI Driving Olympics competition, which took place at the NeurIPS conference in Montreal, Canada, on Dec 8, 2018. In the finals, the submissions from the top five competitors were run from five different locations on the competition track.

Our top five competitors were awarded $3000 worth of AWS Credits (thank you AWS!) and a trip to one of nuTonomy’s offices for a ride in one of their self-driving cars (thanks APTIV!).


WINNER

Team Panasonic R&D Center Singapore & NUS

(Wei Gao)


Check out the submission.

The approach: We used the random template for its flexibility and created a debug framework to test the algorithm. After that, we created one Python package for our algorithm and used the random template to call it directly. The algorithm contains three parts: 1. Perception, 2. Prediction, and 3. Control. Prediction plays the most important role when the robot is in a sharp turn, where the camera cannot observe useful information.
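For readers curious what such a pipeline looks like in practice, here is a minimal sketch of a perception/prediction/control loop in Python. The class and method names, the state representation (lateral offset and heading error), and the controller gains are all illustrative assumptions, not the team's actual code.

```python
# A minimal sketch of a three-stage perception/prediction/control loop.
# All names and gains are illustrative, not the team's actual code.

class LanePerception:
    def estimate(self, image):
        """Return (lateral_offset, heading_error) from the camera image,
        or None when no useful lane markings are visible."""
        # Color filtering, edge detection and line fitting would go here.
        return None  # placeholder


class LanePredictor:
    """Keeps the last valid estimate and reuses it when the camera sees
    nothing useful, e.g. in the middle of a sharp turn."""

    def __init__(self):
        self.last = (0.0, 0.0)

    def update(self, estimate):
        if estimate is not None:
            self.last = estimate
        return self.last


class Controller:
    def __init__(self, k_offset=2.0, k_heading=1.0, speed=0.3):
        self.k_offset, self.k_heading, self.speed = k_offset, k_heading, speed

    def act(self, state):
        offset, heading = state
        omega = -self.k_offset * offset - self.k_heading * heading
        return self.speed, omega  # (linear velocity, angular velocity)


def step(image, perception, predictor, controller):
    state = predictor.update(perception.estimate(image))
    return controller.act(state)
```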

2nd Place

Jon Plante


Check out the submission.

The approach: “I tried to imitate what a human does when he follows a lane. I believe the human tries to center himself at all times in the lane, using the two lines as guides. I think the human implicitly projects the two lines into the horizon, and where they intersect is where he directs the vehicle.”
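As a rough illustration of that idea, the sketch below finds the intersection (vanishing point) of two fitted lane lines and steers proportionally to how far it sits from the image center. The line parameterization, image width, and gain are assumptions made for the example, not the competitor's code.

```python
# Illustrative sketch of "steer toward the intersection of the two lane
# lines". Line parameters and gains are assumptions, not the actual code.

def line_intersection(l1, l2):
    """Each line is (slope, intercept) in image coordinates, y = m*x + b.
    Returns the (x, y) intersection, or None for (near-)parallel lines."""
    m1, b1 = l1
    m2, b2 = l2
    if abs(m1 - m2) < 1e-6:
        return None
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1


def steering_from_vanishing_point(white_line, yellow_line,
                                  image_width=640, gain=0.005):
    """Turn proportionally to how far the lines' intersection point
    lies from the image center column."""
    vp = line_intersection(white_line, yellow_line)
    if vp is None:
        return 0.0  # keep going straight if no intersection is found
    error_px = vp[0] - image_width / 2.0
    return -gain * error_px  # angular velocity command


# Example: yellow line on the left, white line on the right.
omega = steering_from_vanishing_point(white_line=(-1.2, 900.0),
                                      yellow_line=(1.0, -50.0))
```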

 

3rd Place

Vincent Mai


Check out the submission.

The approach: “The AI-DO application I made was using the ROS lane following baseline. After running it out of the box, I noticed a couple of problems and corrected them by changing several parameters in the code.”
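The write-up does not say which parameters were adjusted, but as a hypothetical illustration, tuning a ROS-based lane follower usually comes down to overriding node parameters such as the nominal speed and steering gains. The node and parameter names below are placeholders.

```python
# Hypothetical example of tuning a ROS lane-following node by overriding
# its parameters at runtime; names and values are placeholders only.
import rospy

rospy.init_node("tuning_example")

# Drive a bit slower and steer more aggressively (illustrative values).
rospy.set_param("/duckiebot/lane_controller_node/v_bar", 0.2)
rospy.set_param("/duckiebot/lane_controller_node/k_d", -4.5)
```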

 

 

Photo: Jacopo Tani

4th Place

Team JetBrains

(Mikita Sazanovich)


Check out the submission.

The approach: “We used our framework for parallel deep reinforcement learning. Our network consisted of five convolutional layers (1st layer with 32 9×9 filters, each following layer with 32 5×5 filters), followed by two fully connected layers (with 768 and 48 neurons), and took as input the last four frames downsampled to 120 by 160 pixels and filtered for white and yellow color. We trained it with the Deep Deterministic Policy Gradient algorithm (Lillicrap et al. 2015). The training was done in three stages: first, on a full track, then on the most problematic regions, and then on a full track again.”
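To make the described architecture concrete, here is a sketch in PyTorch of an actor network with that shape: five convolutional layers (32 9×9 filters, then four layers of 32 5×5 filters) followed by fully connected layers of 768 and 48 units. The strides, activations, number of input channels, and action dimension are assumptions; the team used their own framework, not this code.

```python
# Sketch of the network shape described above; strides, activations and
# channel counts are assumptions, not the team's implementation.
import torch
import torch.nn as nn


class ActorNet(nn.Module):
    def __init__(self, in_channels=4, n_actions=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
        )
        # Work out the flattened size for a 120x160 input instead of
        # hardcoding it, since it depends on the assumed strides.
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, in_channels, 120, 160)).numel()
        self.fc = nn.Sequential(
            nn.Linear(n_flat, 768), nn.ReLU(),
            nn.Linear(768, 48), nn.ReLU(),
            nn.Linear(48, n_actions), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, frames):
        # frames: batch of the last 4 (downsampled, color-filtered) frames
        x = self.conv(frames)
        return self.fc(x.flatten(start_dim=1))
```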

5th Place

Team SAIC Moscow

(Anton Mashikhin)


Check out the submission.

The approach: Our solution is based on a reinforcement learning algorithm. We used Twin Delayed DDPG (TD3) and an Ape-X-like distributed scheme. One of the key insights was to add a PID controller as an additional explorative policy. It significantly improved learning speed and quality.
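Here is a sketch of the "PID as an explorative policy" idea: during data collection, some rollouts are driven by a classical PID lane-following controller instead of the noisy learned actor, so the replay buffer is seeded with reasonable trajectories early on. The interfaces (env.lane_offset(), actor.act()), gains, and mixing probability are assumptions for illustration, not the team's code.

```python
# Sketch of mixing a PID controller into exploration for a learned actor
# (e.g. TD3). Interfaces and mixing probability are assumptions only.
import random


class PIDPolicy:
    def __init__(self, kp=2.0, kd=0.5):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def act(self, lane_offset):
        d = lane_offset - self.prev_error
        self.prev_error = lane_offset
        return -self.kp * lane_offset - self.kd * d  # steering command


def collect_episode(env, actor, pid, pid_prob=0.3, noise=0.1):
    """Roll out one episode; with probability `pid_prob` the whole episode
    is driven by the PID controller, otherwise by the noisy actor.
    Transitions go to the replay buffer either way."""
    use_pid = random.random() < pid_prob
    obs = env.reset()
    transitions = []
    done = False
    while not done:
        if use_pid:
            action = pid.act(env.lane_offset())   # hypothetical helper
        else:
            action = actor.act(obs) + random.gauss(0.0, noise)
        next_obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
    return transitions
```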

A few photos from the day