Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously

The AIDO challenge is divided into two global stages: simulation and real-world. A single algorithm must perform well in both. One of the major problems identified early on was the simulation-to-real-world transfer.

Many algorithms trained in the simulated environment perform very poorly in the real world, and many classic control algorithms known to perform well in a real-world environment, once tuned to that environment, do not perform well in simulation. Some approaches address the simulation-to-real-world transfer through domain randomization.

We propose a novel method of training a neural network model that performs well in diverse environments, such as simulation and the real world.

Dataset Generation

To that end, we have trained our model through imitation learning on a dataset compiled from four different sources (combined into a single training set as sketched after this list):

  1. Real-world Duckietown dataset from logs.duckietown.com (REAL-DT).
  2. Simulation dataset on a simple loop map (SIM-LP).
  3. Simulation dataset on an intersection map (SIM-IS).
  4. Real-world dataset collected by us in our environment, with the car driven by a PD controller (REAL-IH).
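
For concreteness, here is a minimal sketch of compiling the four sources into one training set. It assumes each source is wrapped as a PyTorch `Dataset` yielding `(image, wheel_voltages)` pairs; the stand-in datasets and their sizes below are hypothetical placeholders, not our actual data loaders.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def fake_source(n):
    """Stand-in for one data source: (YUV image tensor, wheel-voltage pair)."""
    return TensorDataset(torch.rand(n, 3, 32, 64), torch.rand(n, 2))

# Stand-ins for REAL-DT, SIM-LP, SIM-IS, REAL-IH; sizes are arbitrary.
real_dt, sim_lp, sim_is, real_ih = (fake_source(n) for n in (100, 100, 100, 100))

# One combined training set; shuffling mixes real and simulated frames in each batch.
train_set = ConcatDataset([real_dt, sim_lp, sim_is, real_ih])
loader = DataLoader(train_set, batch_size=64, shuffle=True)
```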

We aimed to collect data covering as many situations as possible, such as twists in the road and driving in circles clockwise and counterclockwise. We also tried to diversify external factors: scene lighting, items in the room that could enter the camera's field of view, roadside objects, etc. If these conditions were kept constant, our model might overfit to them and perform poorly in a different environment. For this reason, we changed the lighting and the environment after each Duckiebot run. The lane detection was recalibrated for every lighting condition, since different lighting changes the color scheme of the camera input.

We made the following change to the standard PD algorithm: since most Duckietown turns and intersections are standard-shaped, we hard-coded the robot's motion in these situations, but we did not exclude imperfect trajectories, for example, ones that go slightly out of the lane bounds. Imperfections in the robot's actions increase the robustness of the model.
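
The following is a minimal sketch of such a controller, assuming the lane detector provides a lateral offset `d` and heading error `phi`. The gains, speeds, error combination, and turn primitive are illustrative placeholders, not the values we used.

```python
# Hypothetical gains and base speed; the actual controller was tuned per environment.
K_P, K_D = 3.0, 1.0
BASE_SPEED = 0.3

def pd_step(d, phi, prev_err, dt):
    """One PD step on the lane-pose error; returns (v_left, v_right, err)."""
    err = d + 0.5 * phi                  # combining offset and heading is an assumption
    d_err = (err - prev_err) / dt        # derivative term of the PD controller
    steer = K_P * err + K_D * d_err
    return BASE_SPEED - steer, BASE_SPEED + steer, err

def hardcoded_left_turn(steps=30):
    """Open-loop primitive for a standard-shaped turn: fixed wheel commands."""
    return [(0.15, 0.45)] * steps        # placeholder command sequence
```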

Neural network architecture and training

Original images are 640×480 RGB. As a preprocessing step, we remove the top third of the image, since it mostly contains the sky, resize it to 64×32 pixels, and convert it to the YUV colorspace.
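
A minimal sketch of this preprocessing with OpenCV; the crop, target size, and colorspace follow the text, while the interpolation mode and the [0, 1] scaling are assumptions.

```python
import cv2
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Crop the sky, downscale, and convert a 640x480 RGB frame to YUV."""
    h = frame_rgb.shape[0]
    cropped = frame_rgb[h // 3:, :, :]         # drop the top third (mostly sky)
    resized = cv2.resize(cropped, (64, 32))    # OpenCV takes (width, height)
    yuv = cv2.cvtColor(resized, cv2.COLOR_RGB2YUV)
    return yuv.astype(np.float32) / 255.0      # scaling to [0, 1] is an assumption
```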

We have used 5 convolutional layers with a small number of filters, followed by 2 fully-connected layers. We keep the network small not only because a small network is less prone to overfitting, but also because the model has to run on a single CPU on a Raspberry Pi.

We have also incorporated Independent-Component (IC) layers. These layers aim to make the activations of each layer more independent by combining two popular techniques: BatchNorm and Dropout. For convolutional layers, we substitute Dropout with spatial Dropout, which has been shown to work better with them. The model outputs two values: the voltages of the left and the right wheel drives. We use the mean squared error (MSE) as our training loss.
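
A sketch of such a network in PyTorch, with an IC block (BatchNorm followed by spatial Dropout) in front of each convolution. The filter counts, kernel sizes, strides, and dropout rate below are our assumptions; the text fixes only the layer counts, the two-value output, and the MSE loss.

```python
import torch
import torch.nn as nn

class ICConv(nn.Module):
    """IC block (BatchNorm + spatial Dropout) followed by a strided conv."""
    def __init__(self, c_in, c_out, p=0.1):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(c_in),
            nn.Dropout2d(p),  # spatial Dropout: drops whole feature maps
            nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class DrivingNet(nn.Module):
    """Five conv layers + two fully-connected layers, small enough for a
    Raspberry Pi CPU. Input: (B, 3, 32, 64) YUV image; output: two voltages."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            ICConv(3, 8), ICConv(8, 12), ICConv(12, 16),
            ICConv(16, 24), ICConv(24, 32),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 1 * 2, 32),  # 32 maps of 1x2 after five stride-2 convs
            nn.ReLU(),
            nn.Linear(32, 2),           # left and right wheel voltages
        )

    def forward(self, x):
        return self.head(self.features(x))
```

Training then minimizes `nn.MSELoss()` between the predicted and the recorded wheel voltages.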

Results

For the training evaluation, we compute the mean squared error (MSE) of the left and right wheel outputs on the validation set of each data source.
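
Concretely, for a validation set of N frames this metric can be written as follows, where v and v-hat denote recorded and predicted voltages; averaging the two wheels equally is our assumption:

```latex
\mathrm{MSE} = \frac{1}{2N} \sum_{i=1}^{N}
  \left[ \left(v^{L}_{i} - \hat{v}^{L}_{i}\right)^{2}
       + \left(v^{R}_{i} - \hat{v}^{R}_{i}\right)^{2} \right]
```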

The first table shows the results for the models trained on all data sources (HYBRID), on real-world data sources only (REAL), and on simulation data sources only (SIM). While training on a single dataset sometimes achieves a lower error on that same dataset than our hybrid approach, our method performs on par with the best single-source models, and in terms of the average error across all sources it outperforms the closest competitor tenfold. This demonstrates the high dependence of the MSE on the choice of training data and highlights the differences between the data sources.

The next table shows closed-loop performance in the Duckietown simulator for all our approaches. All methods drove for 15 seconds without major infractions, and the SIM model, trained on simulation data only, drove just 1.8 tiles more than our hybrid approach.

The third table shows closed-loop performance in the real-world environment. Comparing the number of tiles traversed, our hybrid approach drove about 3.5 tiles more than the next-best model, which was trained on real-world data only.

Conclusion

Our method follows the imitation learning approach and consists of a convolutional neural network trained on a dataset compiled from several sources: a simulation model and a real-world Duckietown vehicle driven by a PD controller tuned to various conditions, such as different map configurations and lighting.

We believe that our approach of emphasizing neuron independence and monitoring generalization performance can make control models that have to perform in diverse environments more robust. We also believe that the described approach of imitation learning on data obtained from several algorithms, each fitted to a specific environment, may yield a single algorithm that performs well in general.

 —
 JBRRussia1 team