YOLO-based object detection in Duckietown at night and day

YOLO-based Robust Object Detection in Duckietown



Why Robust Object Detection?

Object detection is the ability of a robot to identify a feature in its surroundings that might influence its actions. For example, an object lying on the road might represent an obstacle, i.e., a region of space that the Duckiebot cannot occupy. Robust object detection becomes particularly important when operating in dynamic environmental conditions.

Obstacles can vary in shape and color, and they can be detected through different sensing modalities, for example vision or lidar scanning.

In this project, students use a purely vision-based approach for obstacle detection. Using vision is tricky because nuisances such as in-class variations (think of the many different types of duckies) or changes in environmental lighting can dramatically affect the outcome.

Robust object detection refers to the ability of a system to detect objects in a broad spectrum of operating conditions, and to do so reliably. 

Detecting objects in Duckietown is therefore important to avoid static and moving obstacles, detect traffic signs, and otherwise guarantee safe driving.

Model Performance Under Normal and Low Lighting Conditions

Robust Object Detection: the challenges

Some of the key challenges associated with vision-based object detection are the following:

Robustness across variable lighting conditions: Ensuring accurate object detection under diverse lighting is complex because lighting changes the appearance of objects (check out why in our computer vision classes). The model must handle different lighting scenarios effectively; see the augmentation sketch after this list.

Balancing robustness and performance: There’s a trade-off between robustness to lighting variations and achieving high accuracy in standard operating conditions. Prioritizing one may affect the other.

Integration and real-time performance: The trained neural network (NN) model must be integrated into the Duckiebot's on-board system for real-time operation, avoiding the lags associated with transporting images across the network. The model's complexity must therefore align with the computational resources available; see the inference sketch after this list. This project was executed on DB19-model Duckiebots, equipped with a Raspberry Pi 3B+ and a Coral board.

Data quality and generalization: Ensuring the model generalizes well despite potential biases in the training dataset and transfer learning challenges is crucial. Proper dataset curation and validation are essential.
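
To make the lighting challenge above concrete, the sketch below applies random photometric augmentation (brightness and contrast jitter) to training images before they are fed to a YOLO-style detector. It is a minimal illustration, not the authors' actual pipeline; the function name, parameter ranges, and file names are placeholders.

```python
import cv2
import numpy as np

def random_lighting(image, contrast_range=(0.6, 1.4), brightness_range=(-60, 60)):
    """Randomly jitter contrast and brightness to mimic day/night conditions.

    `image` is an HxWx3 uint8 BGR frame, e.g. as returned by cv2.imread or
    by the Duckiebot camera pipeline.
    """
    alpha = np.random.uniform(*contrast_range)   # contrast gain
    beta = np.random.uniform(*brightness_range)  # brightness offset
    # alpha * image + beta, saturated to the valid [0, 255] range
    return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

# Hypothetical usage: create an augmented copy of a training image.
image = cv2.imread("duckie.jpg")              # placeholder file name
augmented = random_lighting(image)
cv2.imwrite("duckie_lowlight.jpg", augmented)
```

Exposing the network to such perturbations during training is a standard way to reduce its sensitivity to the day/night gap, at the cost of a slightly harder optimization problem, which is exactly the robustness/performance trade-off listed above.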
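On the integration side, the snippet below sketches how a lightweight YOLO variant could be queried on-board and timed per frame. It assumes a YOLOv5 nano checkpoint loaded through torch.hub purely for illustration; the actual project ran on DB19 Duckiebots (Raspberry Pi 3B+ with a Coral accelerator), whose deployment path typically requires converting the network to an Edge-TPU-compatible format, a step not shown here.

```python
import time

import cv2
import torch

# Hypothetical choice: the smallest YOLOv5 model, to respect limited on-board compute.
model = torch.hub.load("ultralytics/yolov5", "yolov5n")
model.conf = 0.4  # confidence threshold

frame = cv2.imread("camera_frame.jpg")        # placeholder for a camera frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB input

start = time.perf_counter()
results = model(rgb)
latency = time.perf_counter() - start

# results.xyxy[0] is an Nx6 tensor: [x1, y1, x2, y2, confidence, class_id]
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    print(f"class {int(cls)} at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf {conf:.2f}")
print(f"inference latency: {latency * 1000:.1f} ms")
```

Measuring per-frame latency in this way makes the complexity constraint tangible: a model that is accurate offline but too slow on the Raspberry Pi still fails the real-time driving task.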

Project Highlights

Here is the output of the students' work. Check out the GitHub repository for more details!

Robust Obstacle Detection: Results

Robust Object Detection: Authors

Maximilian Stölzle is a former Duckietown student of the Autonomous Mobility on Demand class at ETH Zurich, and currently works at MIT as a Visiting Researcher.

Stefan Lionar is a former Duckietown student of the Autonomous Mobility on Demand class at ETH Zurich, and is currently an Industrial PhD student at Sea AI Lab (SAIL), Singapore.

Learn more

Duckietown is a modular, customizable and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.

It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.