Automatic Wheels and Camera Calibration for Monocular and Differential Mobile Robots


After assembling the robot, components such as the camera and wheels need to be calibrated. This normally requires human participation and depends on human factors. We describe an approach to fully automatic calibration of a robot’s camera and wheels.

For camera calibration, the robot collects the necessary set of images by automatically rotating in front of the chessboards; for wheel calibration, it then drives over the marked floor while its trajectory curvature is assessed. The calibration yields the wheel coefficient k, as well as the camera matrix K (which includes the focal length, the optical center, and the skew coefficient) and the distortion coefficients D.

The proposed approach has been tested on Duckiebots at the Alexander Popov International Innovation Institute for Artificial Intelligence, Cybersecurity and Communication, SPbETU “LETI”. The solution is comparable to manual calibration and is capable of replacing a human for this task.

Camera calibration process

The robot starts on a section of the floor with chessboards placed in front of it, toward which its camera is pointed; on the opposite side, the floor surface is marked with ArUco markers.

There can be any number of chessboards, limited only by the amount of free space around the robot. Calibration accuracy benefits most from frames that show the boards in different positions, e.g., boards located at different distances from the robot and at different angles. The physical size and type of all the boards around the robot must be the same.

In essence, camera calibration has the robot rotate around its axis and photograph all the viewable chessboards in turn. The procedure should allow several “passes” during shooting, keeping track of which board the robot is currently observing and in which direction it should turn next. The algorithm can thus be represented as a repeated sequence of “get a frame from the camera” and “turn a little” actions. The final algorithm comprises the following steps:

  1. Obtain a frame from the camera;
  2. Find a chessboard on the camera frame;
  3. Save information about board corners found in the image;
  4. Determine the direction of rotation according to the schedule;
  5. Make a step;
  6. Either repeat the steps described above, or complete the data
    collection and proceed with the camera calibration using OpenCV (see the sketch below).
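
This collection loop maps naturally onto OpenCV’s calibration API. Below is a minimal sketch under assumed board geometry; `get_frame()`, `turn_step()`, and `direction_from_schedule()` are hypothetical placeholders for the robot’s camera and motor interfaces, not the authors’ code.

```python
# Hedged sketch of the data-collection loop; the board geometry, the step
# budget, and the helper functions are illustrative assumptions.
import cv2
import numpy as np

BOARD_SIZE = (7, 5)      # inner corners per chessboard row/column (assumed)
SQUARE_SIZE = 0.031      # chessboard square edge in meters (assumed)
NUM_STEPS = 60           # number of small rotation steps (assumed)

# 3D template of one board, reused for every detection
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []

for step in range(NUM_STEPS):
    frame = get_frame()                                   # 1. grab a camera frame (hypothetical helper)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)   # 2. find a board
    if found:                                             # 3. store the detected corners
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
    turn_step(direction_from_schedule(step))              # 4-5. rotate a little (hypothetical helpers)

# 6. calibrate once enough views have been collected
rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```

`cv2.calibrateCamera` returns the camera matrix K and distortion coefficients D mentioned above, together with the RMS reprojection error that is used later for evaluation.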

 

Wheel calibration process

Floor markers should be oriented towards the chessboards and begin as close to the robot as possible. The distance between the markers depends on the camera’s resolution, as well as its height and angle of inclination, but it must be such that at least three recognizable markers can be in the frame simultaneously. For our experiments, the distance between markers was set to 15 cm with a marker size of 6.5 cm. The algorithm does not take into account the relative position of the markers; however, the orientation of all markers must be strictly the same.

Let us consider the first iteration of the automatic wheel calibration algorithm:

  1. The robot receives the orientation of the marker closest to it and remembers it.
  2. Next, the robot moves forward with the left and right wheel speeds equal to
    ω1 and ω2 for some fixed time t. The speeds are calculated taking into account the
    calibration coefficient k, which for the first iteration is chosen to equal 1 – that is,
    it is assumed that the real wheel speeds are equal.
  3. The robot obtains the orientation of the marker closest to it again and calculates the
    difference in angles between them.
  4. The coefficient ki for this step is calculated.
  5. The robot moves back for the same time t.

In order to reduce the influence of the error in calculating ki, coefficient k is refined only by (ki − 1)/2 after each iteration. It is important to apply this refinement after the robot moves back, because this reduces the chance of the robot leaving the marked area. If, after the next step, the modulus of the difference between ki and 1.0 becomes less than a pre-selected ε, then the correction (ki − 1)/2 is not applied at that iteration. If ki is not taken into account for three successive iterations, the wheel calibration is considered complete.
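
To make the iteration concrete, here is a hedged sketch of one calibration pass. `closest_marker_yaw()`, `drive()`, and `estimate_ki()` are hypothetical helpers; the additive update of k, which wheel the coefficient scales, and the convergence test are our reading of the description above, not the authors’ exact formulas.

```python
# Hedged sketch of the wheel-calibration loop (all constants assumed).
BASE_SPEED = 0.3   # nominal wheel speed (assumed)
T_STEP = 2.0       # forward-drive duration t in seconds (assumed)
EPS = 0.01         # convergence threshold epsilon (assumed)

k = 1.0            # calibration coefficient; iteration 1 assumes equal real wheel speeds
skipped_in_a_row = 0

while skipped_in_a_row < 3:                        # three ignored corrections -> done
    theta_before = closest_marker_yaw()            # 1. orientation of the nearest ArUco marker
    drive(v_left=BASE_SPEED * k, v_right=BASE_SPEED, duration=T_STEP)    # 2. forward for time t
    theta_after = closest_marker_yaw()             # 3. orientation after the move
    drift = theta_after - theta_before             #    heading drift caused by wheel imbalance
    k_i = estimate_ki(drift, T_STEP)               # 4. per-step coefficient (hypothetical helper)
    drive(v_left=-BASE_SPEED * k, v_right=-BASE_SPEED, duration=T_STEP)  # 5. move back
    if abs(k_i - 1.0) < EPS:                       # drift already negligible: skip the correction
        skipped_in_a_row += 1
    else:
        skipped_in_a_row = 0
        k += (k_i - 1.0) / 2.0                     # damped refinement by (k_i - 1)/2
```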

Accuracy Evaluation

To compare camera calibration errors, one needs to know how these errors are calculated. Since the calibration itself is performed with the OpenCV library, the error is also calculated by the method offered by this library – the reprojection error.
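
For reference, the reprojection error as OpenCV computes it can be reproduced by projecting the 3D board corners with the estimated parameters and comparing them with the detected 2D corners; the sketch below reuses the variable names from the calibration sketch earlier.

```python
# Reproducing the RMS reprojection error from the calibration results above.
import cv2
import numpy as np

total_sq_err, total_pts = 0.0, 0
for objp_i, imgp_i, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
    projected, _ = cv2.projectPoints(objp_i, rvec, tvec, K, D)
    total_sq_err += np.sum((imgp_i.reshape(-1, 2) - projected.reshape(-1, 2)) ** 2)
    total_pts += len(objp_i)

rms_reprojection_error = np.sqrt(total_sq_err / total_pts)
```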

As noted earlier, this kind of error estimate is not applicable to the wheel calibration coefficient. Therefore, the influence of the coefficient on the curvature of the robot’s trajectory is estimated instead. To do this, the robot was placed at a fixed distance from a straight line and oriented along it, and then driven strictly straight ahead for two meters from the start point along the axis relative to which it was oriented. The robot then stopped, and the difference between the initial and final distances to the line was calculated.

Two metrics were estimated – the reprojection error and the straight-line deviation. The first reflects the quality of the camera calibration, and the second the quality of the wheel calibration. The two plots below present the results of 10 independent tests in comparison with manual calibration.

The tests showed that, on average, the suggested solution performs only slightly worse than the classical manual procedure, both when calibrating the camera and when calibrating the wheels with an already well-calibrated camera. However, when both the wheels and the camera are calibrated automatically, the wheel calibration can be significantly affected by the quality of the camera calibration. Testing revealed a clear relationship between the reprojection error and the straight-line deviation.

Method Modifications

After integrating this approach, it became necessary to automate the last step: moving the robot to the field. Since the robot is fully prepared for running autonomous driving algorithms once calibration is complete, automating this step further reduces the operator’s time, because instead of moving the robot to the field manually, the operator can place the next robot at the starting position. In our case, the calibration field was located at the side of the road lane, so that the floor markers used to calibrate the wheels are oriented perpendicular to the road lane.

Thus, the first stage of automatically removing the robot from the calibration zone is to return its orientation to the state it had when the wheel calibration started. This is carried out using exactly the same approach described earlier: depending on the orientation of the floor marker closest to the robot, the robot rotates step by step about its axis, clockwise or counterclockwise, until the absolute value of its orientation angle falls below some preselected threshold.

At this point the robot is still on the wheel calibration field, but it is now oriented towards the lane. The last step is therefore to move the robot beyond the border of the field with markers. To do this, it is enough to command the robot to drive straight ahead until it stops observing markers, i.e., until the last marker leaves the camera’s field of view. This means the robot has left the calibration zone and can be switched into lane-following mode.

Future Work

During the robot’s operation, the wheel calibration may become outdated. It can be influenced by various factors: a change in wheel diameter due to wear of the wheel coating, a slight change in motor characteristics due to wear of the gearbox plastic, or a change in the robot’s weight distribution, e.g., laying the cables on the other side of the case after charging; any of these can cause a slight calibration mismatch. However, these factors have a rather small impact, and the robot will still have a satisfactory calibration. There is no need to repeat the whole calibration process; a small refinement of the current one is enough. To do this, a section of the road along which the robots are guaranteed to pass regularly was selected.

Markers were then placed in this lane according to the rules described earlier: the distance between markers is 15 cm and the marker size is 6.5 cm. The markers are located in the center of the lane. The spacing between markers does not have to be exact, but all markers must share the same orientation, co-directed with the direction of travel in the lane on which they are placed.

The first marker in the direction of travel must have a predefined ID. It can be any ID; the only limitation is that it must be unique in the robot’s current environment. The standard control algorithm was then modified as follows: when the robot recognizes the first marker with the predetermined ID while driving in the lane, it corrects its orientation relative to this marker and continues to move strictly straight ahead. From there the algorithm is the same as described earlier: upon recognizing the next marker, the robot can refine its wheel calibration coefficient, apply it, and re-align its orientation with that marker.

 

Conclusions

As a result, a solution was developed that allows fully automatic calibration of the camera and the Duckiebot’s wheels. Its main feature is the autonomy of the process, which allows one person to run the calibration of an arbitrary number of robots in parallel without being blocked while they calibrate. In addition, the robot is able to improve its calibration as it operates in default mode.

Comparing the developed solution with the manual one revealed a slight deterioration in accuracy, primarily associated with the accuracy of the camera calibration; however, the result obtained is sufficient for the robot’s initial calibration and is comparable to manual calibration.

Did you find this interesting?

Read more Duckietown based papers here.

Embedded out-of-distribution detection on an autonomous robot platform


Introduction

Machine learning is becoming more and more common in cyber-physical systems; many of these systems are safety critical, e.g., autonomous vehicles, UAVs, and surgical robots.  However, machine learning systems can only provide accurate outputs when their input data is similar to their training data.  For example, if an object detector in an autonomous vehicle is trained on images containing various classes of objects, but no ducks, what will it do when it encounters a duck during runtime?  One method for dealing with this challenge is to detect inputs that lie outside the training distribution of data: out-of-distribution (OOD) detection.  Many OOD detector architectures have been explored; however, the cyber-physical domain adds additional challenges: hard runtime requirements and resource-constrained systems.  In this paper, we implement a real-time OOD detector on the Duckietown framework and use it to demonstrate the challenges as well as the importance of OOD detection in cyber-physical systems.

Out-of-Distribution Detection

Machine learning systems perform best when their test data is similar to their training data.  In some applications unreliable results from a machine learning algorithm may be a mere nuisance, but in other scenarios they can be safety critical.  OOD detection is one method to ensure that machine learning systems remain safe during test time.  The goal of the OOD detector is to determine if the input sample is from a different distribution than that of the training data.  If an OOD sample is detected, the detector can raise a flag indicating that the output of the machine learning system should not be considered safe, and that the system should enter a new control regime.  In an autonomous vehicle, this may mean handing control back to the driver, or bringing the vehicle to a stop as soon as practically possible.

In this paper we consider an existing β-VAE based OOD detection architecture.  This architecture takes advantage of the information bottleneck in a variational auto-encoder (VAE) to learn the distribution of the training data.  The VAE undergoes unsupervised training with the goal of minimizing the divergence between the true prior over the latent space, p(z), and the approximate posterior produced by the encoder, q(z|x).  At test time, the Kullback-Leibler divergence between p(z) and q(z|x) is used to assign an OOD score to each input sample.  Because the training goal was to minimize the distance between these two distributions on in-distribution data, in-distribution data encountered at runtime should receive a low OOD score while OOD data should receive a higher one.
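
For a diagonal-Gaussian encoder the KL term has a closed form, so the OOD score can be computed in a few lines. The sketch below assumes a PyTorch encoder that outputs the mean and log-variance of q(z|x); the encoder interface and the threshold are illustrative, not the paper’s exact implementation.

```python
# Minimal sketch of a KL-based OOD score between q(z|x) = N(mu, diag(exp(logvar)))
# and the standard-normal prior p(z) = N(0, I).
import torch

def ood_score(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)

# Usage (hypothetical encoder producing mu and log-variance for a batch of frames):
# mu, logvar = vae.encoder(frames)
# is_ood = ood_score(mu, logvar) > THRESHOLD   # raise a flag when the score is too high
```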

Duckietown

We used Duckietown to implement our OOD detector.  Duckietown provides a natural test bed because:

  • It is modular and easy to learn: the focus of our research is about implementing an OOD detector, not building a robot from scratch
  • It is a resource constrained system: the RPi on the DB18 is powerful enough to be capable of navigation tasks, but resource constrained enough that real-time performance is not guaranteed.  It serves as a good analog for a system in which an OOD detector shares a CPU with perception, planning, and control software.
  • It is open source: this eliminates the need to purchase and manage licenses, allows us to directly check the source code when we encounter implementation issues, and allows us to contribute back to the community once our project is finished.
  • It is low-cost: we’re not made of money 🙂
 In our experiment, we used the stock DB18 robot.  Because we took advantage of the existing Duckietown framework, we only had to write three ROS nodes ourselves:
  • Lane following node: a simple OpenCV-based lane follower that navigates based on camera images.  This represents the perception and planning system for the mobile robot that we are trying to protect.  In our system the lane following node takes 640×480 RGB images and updates the planned trajectory at a rate of 5Hz.
  • OOD detection node: this node also takes images directly from the camera, but its job is to raise a flag when an OOD input appears (image with an OOD score greater than some threshold).  On the RPi with no GPU or TPU, it takes a considerable amount of time to make an inference on the VAE, so our detection node does not have a target rate, but rather uses the last available camera frame, dropping any frames that arrive while the OOD score is being computed.
  • Motor control node: during normal operation it takes the trajectory planned by the lane following node and sends it to the wheels.  However, if it receives a signal from the OOD detection node, it begins emergency braking.
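
As an illustration of how such a detection node can be wired up, here is a hedged rospy sketch; the topic names, message types, and the `compute_ood_score()` helper are assumptions rather than the paper’s actual nodes.

```python
#!/usr/bin/env python
# Hedged sketch of an OOD detection node: it keeps only the latest camera
# frame (frames arriving during inference are dropped), scores it, and
# publishes an emergency-stop flag when the score exceeds a threshold.
import rospy
from sensor_msgs.msg import CompressedImage
from std_msgs.msg import Bool

THRESHOLD = 0.5  # OOD score threshold (assumed)

def compute_ood_score(msg):
    # Hypothetical helper: decode the image and run the beta-VAE encoder,
    # returning the KL-based OOD score described above.
    raise NotImplementedError

class OODDetectionNode:
    def __init__(self):
        self.latest_frame = None
        self.pub = rospy.Publisher("~ood_alert", Bool, queue_size=1)
        rospy.Subscriber("~image/compressed", CompressedImage,
                         self.on_image, queue_size=1, buff_size=2**24)

    def on_image(self, msg):
        # Keep only the most recent frame; older frames are implicitly dropped.
        self.latest_frame = msg

    def run(self):
        while not rospy.is_shutdown():
            if self.latest_frame is None:
                rospy.sleep(0.01)
                continue
            frame, self.latest_frame = self.latest_frame, None
            if compute_ood_score(frame) > THRESHOLD:
                self.pub.publish(Bool(data=True))  # signal the motor node to brake

if __name__ == "__main__":
    rospy.init_node("ood_detection_node")
    OODDetectionNode().run()
```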

The Experiment

Our experiment considers the emergency stopping distance required for the Duckiebot when an OOD input is detected.  In our setup the Duckiebot drives forward along a straight track.  The area in front of the robot is divided into two zones: the risk zone and the safe zone.  The risk zone is an area where if an obstacle appears, it poses a risk to the Duckiebot.  The safe zone is further away and to the sides; this is a region where unknown obstacles may be present, but they do not pose an immediate threat to the robot.  An obstacle that has not appeared in the training set is placed in the safe zone in front of the robot.  As the robot drives forward along the track, the obstacle will eventually enter the risk zone.   Upon entry into the risk zone we measure how far the Duckiebot travels before the OOD detector triggers an emergency stop.

We defined the risk zone as the area 60cm directly in front of our Duckiebot.  We repeated the experiment 40 times and found that with our system architecture, the Duckiebot stopped on average 14.5cm before the obstacle.  However, in 5 iterations of the experiment, the Duckiebot collided with the stationary obstacle.

We wanted to analyze what led to the collision in those five cases.  We started by looking at the times it took for our various nodes to run.  We plotted the distribution of end-to-end stopping times, image capture to detection start times, OOD detector execution times, and detection result to motor stop times.  We observed that there was a long tail on the OOD execution times, which led us to suspect that the collisions occurred when the OOD detector took too long to produce a result.  This hypothesis was bolstered by the fact that even when a collision had occurred, the last logged OOD score was above the detection threshold; it had just been produced too late.  We also looked at the final two OOD detection times for each collision and found that in every case the final two times were above the median detector execution time.  This highlights the importance of real-time scheduling when performing OOD detection in a cyber-physical system.

We also wanted to analyze what would happen if we adjusted the OOD detection threshold.  Because we had logged the OOD score every time the detector had run, we were able to interpolate the position of the robot at every detection time and determine when the robot would have stopped for different OOD detection thresholds.  We observe that there is a tradeoff associated with moving the detection threshold.  If the threshold is lowered, the frequency of collisions can be reduced and even eliminated.  However, the mean stopping distance also moves further from the obstacle, and the robot is more likely to stop spuriously when the obstacle is outside of the risk zone.

 

Next Steps

In this paper we successfully implemented an OOD detector on a mobile robot, but our experiment leaves many more questions:

  • How does the performance of other OOD detector architectures compare with the β-VAE detector we used in this paper?
  • How can we guarantee the real-time performance of an OOD detector on a resource-constrained system, especially when sharing a CPU with other computationally intensive tasks like perception, planning, and control?
  • Does the performance vary when detecting more complex OOD scenarios: dynamic obstacles, turning corners, etc.?

Did you find this interesting?

Read more Duckietown based papers here.

AI Driving Olympics 5th edition: results

AI-DO 5: Urban league winners

This year’s challenges were lane following (LF), lane following with pedestrians (LFP) and lane following with other vehicles, multibody (LFV_multi). 

Let’s find out the results in each category:

LF

  1. Andras Beres 🇭🇺  
  2. Zoltan Lorincz 🇭🇺
  3. András Kalapos 🇭🇺

LFP

  1. Bea Baselines 🐤
  2. Melisande Teng 🇨🇦 
  3. Raphael Jean 🇨🇦

LFV_multi

  1. Robert Moni 🇭🇺
  2. Márton Tim 🇭🇺
  3. Anastasiya Nikolskay 🇷🇺

Congratulations to the Hungarian Team from the Budapest University of Technology and Economics for collecting the highest rankings in the urban league!

Here’s how the winners in each category performed both in the qualification (simulation) and in the finals running on real hardware:

Andras Beres - Lane following (LF) winner

Melisande Teng - Lane following with pedestrians (LFP) winner

Robert Moni - Lane following with other vehicles, multibody (LFV_multi) winner

AI-DO 5: Advanced Perception league winners

Great participation and results in the Advanced Perception league! Check out this year’s winners in the video below:

AI-DO 5 sponsors

Many thanks to our amazing sponsors, without which none of this would have been possible!

Stay tuned for next year’s AI Driving Olympics. Visit the AI-DO page for more information on the competition and to browse this year’s introductory webinars, or check out the Duckietown massive open online course (MOOC) and prepare for next year’s competition!

AI-DO 5 competition leaderboard update

AI-DO 5 pre-finals update

With the finals day of the fifth edition of the AI Driving Olympics approaching, and 1326 solutions submitted by 94 competitors across three challenges, it is time to glance over the leaderboards.

Leaderboards updates

This year’s challenges are lane following (LF), lane following with pedestrians (LFP) and lane following with other vehicles, multibody (LFV_multi). Learn more about the challenges here. Each submission can be sent to multiple challenges. Let’s look at some of the most promising or interesting submissions.

The Montréal menace

Raphael Jean at Mila / University of Montréal is a new entrant for this year. 

An interesting submission: submission #12962 

All of raph’s submissions.

The submissions from the cold

Team JetBrains from Saint Petersburg won previous editions of AI-DO. They have been dominating the leaderboards this year as well.

Interesting submissions: submission #12905

All of JetBrains submissions: JBRRussia1. 

 

BME Conti

PhD student Robert Moni (BME-Conti) from Hungary. 

Interesting submissions: submission #12999 

All submissions: timur-BMEconti

 

Deadline for submissions

The deadline for submitting to the AI-DO 5 is 12am EST on Thursday, December 10th, 2020. The top three entries (more if time allows) in each simulation challenge will be evaluated on real robots and presented at the finals event at NeurIPS 2020, which happens at 5pm EST on Saturday, December 12.

Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously


The AIDO challenge is divided into two global stages: simulation and real-world. A single algorithm needs to perform well in both. It was quickly identified that one of the major problems is the simulation to real-world transfer. 

Many algorithms trained in the simulated environment performed very poorly in the real world, and many classic control algorithms that are known to perform well in a real-world environment, once tuned to that environment, do not perform well in the simulation. Some approaches suggest randomizing the domain for the simulation to real-world transfer.

We propose a novel method of training a neural network model that can perform well in diverse environments, such as simulation and the real world.

Dataset Generation

To that end, we have trained our model through imitation learning on a dataset compiled from four different sources:

  1. Real-world Duckietown dataset from logs.duckietown.com (REAL-DT).
  2. Simulation dataset on a simple loop map (SIM-LP).
  3. Simulation dataset on an intersection map (SIM-IS).
  4. Real-world dataset collected by us in our environment with a car driven by a PD controller (REAL-IH).

We aimed to collect data covering as many situations as possible, such as twists in the road, driving in circles clockwise/counterclockwise, and so on. We also tried to diversify external factors such as scene lighting, items in the room that can get into the camera’s field of view, roadside objects, etc. If we kept these conditions constant, our model might overfit to them and perform poorly in a different environment. For this reason, we changed the lighting and environment after each Duckiebot run. The lane detection was calibrated for every lighting condition, since different lighting changes the color scheme of the image input.

We made the following change to the standard PD algorithm: since most Duckietown turns and intersections are standard-shaped, we hard-coded the robot’s motion in these situations, but we did not exclude imperfect trajectories, for example ones that go slightly out of the bounds of the lane. Imperfections in the robot’s actions increase the robustness of the model.

Neural network architecture and training

Original images are 640×480 RGB. As a preprocessing step, we remove the top third of the image, since it mostly contains the sky, resize the image to 64×32 pixels, and convert it into the YUV colorspace.
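
A minimal sketch of this preprocessing with OpenCV is shown below; the interpolation mode is an assumption.

```python
# Hedged sketch of the preprocessing step, assuming BGR frames from OpenCV.
import cv2

def preprocess(frame_bgr):
    h = frame_bgr.shape[0]
    cropped = frame_bgr[h // 3:, :]                                        # drop the top third (mostly sky)
    resized = cv2.resize(cropped, (64, 32), interpolation=cv2.INTER_AREA)  # dsize is (width, height)
    return cv2.cvtColor(resized, cv2.COLOR_BGR2YUV)                        # convert to YUV colorspace
```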

We have used 5 convolutional layers with a small number of filters, followed by 2 fully-connected layers. The small size of the network is motivated not only by its being less prone to overfitting, but also by the need for a model that can run on a single CPU on a Raspberry Pi.

We have also incorporated Independent-Component (IC) layers, which aim to make the activations of each layer more independent by combining two popular techniques: BatchNorm and Dropout. For convolutional layers, we substitute Dropout with Spatial Dropout, which has been shown to work better with them. The model outputs two values, the voltages of the left and right wheel drives. We use the mean square error (MSE) as our training loss.
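
A hedged PyTorch sketch of such a network is shown below. The text fixes the layer counts, the IC idea (BatchNorm plus dropout, spatial dropout for convolutions), the two outputs, and the MSE loss; the filter counts, kernel sizes, strides, and dropout rates here are illustrative assumptions.

```python
# Sketch of the described architecture under assumed hyperparameters.
import torch
import torch.nn as nn

def conv_ic(in_ch, out_ch, p=0.1):
    # IC block for conv layers: conv + BatchNorm + spatial (channel-wise) dropout
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.Dropout2d(p),
        nn.ReLU(),
    )

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(             # input: (N, 3, 32, 64) YUV image
            conv_ic(3, 8), conv_ic(8, 16), conv_ic(16, 16),
            conv_ic(16, 32), conv_ic(32, 32),      # five conv layers, few filters each
        )
        self.head = nn.Sequential(                 # two fully-connected layers
            nn.Flatten(),
            nn.Linear(32 * 1 * 2, 64), nn.BatchNorm1d(64), nn.Dropout(0.1), nn.ReLU(),
            nn.Linear(64, 2),                      # left and right wheel voltages
        )

    def forward(self, x):
        return self.head(self.features(x))

loss_fn = nn.MSELoss()                             # training loss stated in the text
```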

Results

For the training evaluation, we compute the mean square error (MSE) of the left and right wheel outputs on the validation set of each data source.

The first table shows the results for the models trained on all data sources (HYBRID), on real-world data sources only (REAL), and on simulation data sources only (SIM). While training on a single dataset sometimes achieves lower error on that same dataset than our hybrid approach, our method performs on par with the best single-source models, and in terms of the average error it outperforms the closest one tenfold. This demonstrates the strong dependence of MSE on the training method and highlights the differences between the data sources.

The next table shows simulation closed-loop performance for all our approaches using the Duckietown simulator. All methods drove for 15 seconds without major infractions, and the SIM model, trained specifically on simulation data, drove only 1.8 tiles more than our hybrid approach.

The third table shows the closed-loop performance in the real-world environment. Comparing the number of tiles, we see that our hybrid approach drove about 3.5 tiles more than the next-ranked model, which was trained on real-world data only.

Conclusion

Our method follows the imitation learning approach and consists of a convolutional neural network trained on a dataset compiled from several sources, such as a simulation model and a real-world Duckietown vehicle driven by a PD controller tuned to various conditions, such as different map configurations and lighting.

We believe that our approach of emphasizing neurons independence and monitoring generalization performance can offer more robustness to control models that have to perform in diverse environments. We also believe that the described approach of imitation learning on data obtained from several algorithms that are fitted to specific environments may yield a single algorithm that will perform well in general.

 —
 JBRRussia1 team

Join the AI Driving Olympics, 5th edition, starting now!

Compete in the 5th AI Driving Olympics (AI-DO)

The 5th edition of the Artificial Intelligence Driving Olympics (AI-DO 5) has officially started!

The AI-DO serves to benchmark the state of the art of artificial intelligence in autonomous driving by providing standardized simulation and hardware environments for tasks related to multi-sensory perception and embodied AI.

Duckietown hosts AI-DO competitions biannually, with finals events held at machine learning and robotics conferences such as the International Conference on Robotics and Automation (ICRA) and the Neural Information Processing Systems (NeurIPS). 

The AI-DO 5 will be held in conjunction with NeurIPS 2020 and will have two leagues: Urban Driving and Advanced Perception.

Urban driving league challenges

This year’s Urban League includes a traditional AI-DO challenge (LF) and introduces two new ones (LFP, LFVM).

Lane Following (LF)

The most traditional of AI-DO challenges: have a Duckiebot navigate a road loop without intersection, pedestrians (duckies) or other vehicles. The objective is traveling the longest path in a given time while staying in the lane.

Lane following with Pedestrian (LFP)

The LFP challenge is new to AI-DO. It builds upon LF by introducing static obstacles (duckies) on the road. The objectives are the same as for lane following, but do not hit the duckies! 

Lane Following with Vehicles, multi-body (LFVM)

In this traditional AI-DO challenge, contestants seek to travel the longest path in a city without intersections or pedestrians, but with other vehicles on the road. Except this year there’s a twist: in the novel multi-body variant, all vehicles on the road are controlled by the submission.

Getting started: the webinars

We offer a short webinar series to guide contestants through the steps for participating, from running our baselines in simulation to deploying them on hardware. All webinars are at 9 am EST and free!

Introduction

Learn about the Duckietown project and the Artificial Intelligence Driving Olympics.

ROS baseline

How to run and build upon the “traditional” Robot Operating System (ROS) baseline.

Local development

On the workflow for developing and deploying to Duckiebots, for hardware-based testing.

RL baseline

Learn how to use the Pytorch template for reinforcement learning approaches.

IL baseline

Introduction to the Tensorflow template, use of logs and simulator for imitation learning.

Advanced sensing league challenges

Previous AI-DO editions featured detection, tracking, and prediction challenges around the nuScenes dataset.

For the 5th iteration of AI-DO we have a brand new lidar segmentation challenge.

The challenge is based on the recently released lidar segmentation annotations for nuScenes and features an astonishing 1,400,000,000 lidar points annotated with one of 32 labels.

We hope that this new benchmark will help to push the boundaries in lidar segmentation. Please see https://www.nuscenes.org/lidar-segmentation for more details.

Furthermore, due to popular demand, we will organize the 3rd iteration of the nuScenes 3d detection challenge. Please see https://www.nuscenes.org/object-detection for more details.

AI-DO 5 Finals event

The AI-DO finals will be streamed LIVE during 2020 edition of the Neural Information Processing Systems (NeurIPS 2020) conference in December.

Learn more about the AI-DO here.

Thank you to our generous sponsors!

The Duckietown Foundation is grateful to its sponsors for supporting this fifth edition of the AI Driving Olympics!

Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents


Why is this important?

As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be reproducible.

Compared to other sciences, there are specific challenges to benchmarking autonomy, such as the complexity of the software stacks, the variability of the hardware and the reliance on data-driven techniques, amongst others.

We describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained by design from the beginning of the research/development processes.

We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet.

The Duckietown Automated Laboratories (Autolabs)

One of the central components of this setup is the Duckietown Autolab (DTA), a remotely accessible standardized setup that is itself also relatively low-cost and reproducible.

DTAs include an off-the-shelf camera-based localization system. The accessibility of the hardware testing environment enables experimental benchmarking that can be performed on a network of DTAs in different geographical locations.

The DUCKIENet

When evaluating agents, careful definition of interfaces allows users to choose among local versus remote evaluation using simulation, logs, or remote automated hardware setups. The Decentralized Urban Collaborative Benchmarking Environment Network (DUCKIENet) is an instantiation of this design based on the Duckietown platform that provides an accessible and reproducible framework focused on autonomous vehicle fleets operating in model urban environments. 

The DUCKIENet enables users to develop and test a wide variety of different algorithms using available resources (simulator, logs, cloud evaluations, etc.), and then deploy their algorithms locally in simulation, locally on a robot, in a cloud-based simulation, or on a real robot in a remote lab. In each case, the submitter receives feedback and scores based on well-defined metrics.

Validation

We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs. We built DTAs at the Swiss Federal Institute of Technology in Zurich (ETHZ) and at the Toyota Technological Institute at Chicago (TTIC).

Conclusions

Our contention is that there is a need for stronger efforts towards reproducible research for robotics, and that to achieve this we need to consider the evaluation on equal terms with the algorithms themselves. In this fashion, we can obtain reproducibility by design throughout the research and development processes. Achieving this on a large scale will contribute to a more systematic evaluation of robotics research and, in turn, accelerate its progress.

If you found this interesting, you might want to:

IROS2020: Watch The Workshop on Benchmarking Progress in Autonomous Driving

What a start for IROS 2020 with the "Benchmarking Progress in Autonomous Driving" workshop!

The 2020 edition of the International Conference on Intelligent Robots and Systems (IROS) started great with the workshop on “Benchmarking Progress in Autonomous Driving”.

The workshop was held virtually on October 25th, 2020, using an engaging and concise format of a sequence of four 1.5-hour moderated round-table discussions (including an introduction) centered around 4 themes.

The discussions on the methods by which progress in autonomous driving is evaluated, benchmarked, and verified were exciting. Many thanks to all the panelists and the organizers!  

Here are the videos of the various sessions. 

Opening remarks

Theme 1: Assessing progress for the field of autonomous vehicles (AVs)

Moderator: Andrea Censi

Invited Panelists:

Theme 2: How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?

Moderator: Jacopo Tani

Invited Panelists:

Theme 3: Best practices for AV benchmarking

Moderator: Liam Paull

Invited Panelists:

Theme 4: Do we need new paradigms for AV development?

Moderator: Matt Walter

Invited Panelists:

Closing remarks

You can find additional information about the workshop here.

The Workshop on Benchmarking Progress in Autonomous Driving at IROS 2020

The IROS 2020 Workshop on Benchmarking Autonomous Driving

Duckietown also has a science mission: to help develop technologies for reproducible benchmarking in robotics.

The IROS 2020 Workshop on Benchmarking Autonomous Driving provides a platform to investigate and discuss the methods by which progress in autonomous driving is evaluated, benchmarked, and verified.

It is free to attend.

The workshop is structured into 4 panels around four themes. 

  1. Assessing Progress for the Field of Autonomous Driving
  2. How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?
  3. Best practices for AV benchmarking
  4. Algorithms and Paradigms

The workshop will take place on Oct. 25, 2020, starting at 10 am EDT.

Invited Panelists

We have a list of excellent invited panelists from academia, industry, and regulatory organizations. These include:

  • Emilio Frazzoli (ETH Zürich / Motional)
  • Alex Kendall (Wayve)
  • Jane Lappin (National Academy of Sciences)
  • Bryant Walker Smith (USC Faculty of Law)
  • Luigi Di Lillo (Swiss Re Insurance), 
  • John Leonard (MIT)
  • Fabio Bonsignorio (Heron Robots)
  • Michael Milford (QUT)
  • Oscar Beijbom (Motional)
  • Raquel Urtasun (University of Toronto / Uber ATG). 

Please join us...

Please join us on October 25, 2020 starting at 10am EST for what should be a very engaging conversation about the difficult issues around benchmarking progress in autonomous vehicles.  

For full details about the event please see here.

Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World


We asked Róbert Moni to tell us more about his recent work. Enjoy the read!

The author's perspective

Most of us, proud members of the nerd community, experience driving for the first time through discrete actions taken on our keyboards. We believe that the harder we push the forward arrow (or the W-key), the faster the car in the game will accelerate (sooo true 😊 ). Few of us believe that we can solve this task with machine learning. Even fewer of us believe that this can be done accurately and robustly with a basic Deep Reinforcement Learning (DRL) method known as Deep Q-Learning Networks (DQN).

It turned out to be true in the case of a Duckiebot, and even more: with some added computer vision techniques it was able to perform well both in simulation (where the training process was carried out) and in the real world.

The pipeline

The complete training pipeline carried out in the Duckietown-gym environment is visualized in the figure above and works as follows. First, the camera images go through several preprocessing steps:

  • resizing to a smaller resolution (60×80) for faster processing;
  • cropping the upper part of the image, which doesn’t contain useful information for the navigation;
  • segmenting important parts of the image based on their color (lane markings);
  • and normalizing the image;
  • finally, a sequence is formed from the last 5 camera images, which will be the input of the Convolutional Neural Network (CNN) policy network (the agent itself); a sketch of these preprocessing steps follows below.
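
A hedged sketch of these preprocessing steps is given below; the HSV thresholds used to segment the lane markings and the normalization scheme are illustrative assumptions, not the authors’ exact values.

```python
# Sketch of the observation pipeline: resize, crop, color-segment, normalize, stack.
import collections
import cv2
import numpy as np

frame_buffer = collections.deque(maxlen=5)              # last 5 processed frames

def preprocess_rl(frame_bgr):
    small = cv2.resize(frame_bgr, (80, 60))             # downscale to 60x80 (H x W)
    cropped = small[20:, :]                              # drop the upper part (40x80 remain)
    hsv = cv2.cvtColor(cropped, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))    # white lane marking (assumed range)
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # yellow lane marking (assumed range)
    mask = cv2.bitwise_or(white, yellow)
    segmented = cv2.bitwise_and(cropped, cropped, mask=mask)  # keep only lane markings
    return segmented.astype(np.float32) / 255.0          # normalize to [0, 1]

def observation(frame_bgr):
    frame_buffer.append(preprocess_rl(frame_bgr))
    while len(frame_buffer) < 5:                          # pad at episode start
        frame_buffer.append(frame_buffer[-1])
    return np.concatenate(list(frame_buffer), axis=-1)    # stacked (40, 80, 15) input
```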

The agent is trained in the simulator with the DQN algorithm based on a reward function that describes how accurately the robot follows the optimal curve. The output of the network is mapped to wheel speed commands.

The workings

The CNN was trained with the preprocessed images. The network was designed such that inference can be performed in real time on a computer with limited resources (i.e., one with no dedicated GPU). The input of the network is a tensor with the shape of (40, 80, 15), which is the result of stacking five RGB images.

The convolutional layers use 32, 32, 64 filters with size 3 × 3. The MaxPool layers use 2 × 2 filters. The convolutional layers are followed by fully connected layers with 128 and 3 outputs. The output of the last layer corresponds to the selected action. The output of the neural network (one of the three actions) is mapped to wheel speed commands; these actions correspond to turning left, turning right, or going straight, respectively.
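
A PyTorch sketch matching this description is shown below; the layer sizes follow the text, while the padding scheme and the concrete wheel-speed values for the three actions are assumptions.

```python
# Sketch of the policy network: three 3x3 conv layers (32, 32, 64 filters),
# each followed by ReLU and 2x2 MaxPool, then FC layers with 128 and 3 outputs.
import torch
import torch.nn as nn

class DQNPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                                    # input: (N, 15, 40, 80)
            nn.Conv2d(15, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32 x 20 x 40
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32 x 10 x 20
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 5 x 10
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 5 * 10, 128), nn.ReLU(),
            nn.Linear(128, 3),                 # Q-values for turn left / turn right / go straight
        )

    def forward(self, x):
        return self.head(self.features(x))

# Mapping the selected action to (left, right) wheel speed commands (values assumed):
ACTION_TO_WHEELS = {0: (0.3, 0.6),   # turn left
                    1: (0.6, 0.3),   # turn right
                    2: (0.6, 0.6)}   # go straight
```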

Learn more

Our work was acknowledged and presented at the IEEE World Congress on Computational Intelligence 2020 conference. We plan to publish the source code after the AI-DO 5 competition. Our paper is available on ieeexplore.ieee.org, deepai.org and arxiv.org.

Check out our simulation and real-world demo on YouTube, performed at our Duckietown Robotarium put together at the Budapest University of Technology and Economics.