Dino Claro's Duckietown journey: from project to graduate thesis

Dino Claro, a mechanical and mechatronics engineering graduate from the University of Cape Town, shares his Duckietown journey, with its challenges and results.

Cape Town, February 13th, 2024: Dino Claro, Graduate Mechanical and Mechatronics Engineer at the University of Cape Town, shares his experience with Duckietown and the project he developed using Duckiebots for his master's thesis.

Duckinator: an odometry-based pose-estimation project for the Duckiebot robotic car platform

[Image: Duckies - Abbey Road]
Hello and welcome, Mr. Dino Claro! Could you introduce yourself?

My name is Dino Claro and I’m a Graduate Mechanical and Mechatronics Engineer at the University of Cape Town.

Thanks for agreeing to share your experience with us. When did you first run into Duckietown?

During vacation work at the University of Cape Town (UCT) Mechatronic Systems Group, I was given the open-ended task of estimating the pose of a robot car. The goal of the vacation work was to solve a problem independently, free from the stresses of receiving a mark or grade. There was no expectation of novel work. In fact, the vacation work lasted only two weeks, and the expected solution would have been straightforward, probably odometry-based. Thus, Duckinator was born.

That's when you decided to use Duckietown?

With two platforms available, a basic Arduino 4WD kit and the Duckiebot, I simply could not resist the Duckies' pull. The idea of using a Linux-based platform geared toward AI was extremely exciting.

At the end of the two-week vacation work, I was still ploughing through the Duckietown documentation, the EdX "Self-Driving Cars with Duckietown" MOOC, and ROS tutorials. My pose-estimation solution seemed very far down the road. At that point, I should have realised that the DB (cute exterior aside) is nuanced, to say the least.

[Image: duckie pyramid]
Could you describe your project for us?

The early phase of my project was extremely rudimentary. I had only had a couple of weeks during the vacation work to play with the DB [Duckiebot]. I planned to continue with the EdX MOOC [Self-Driving Cars with Duckietown, 2023 edition] while researching Docker and ROS on the side for the first couple of weeks, and then begin development. A pitfall of this approach was completing a section of the MOOC or some other tutorial and believing I could implement it myself. My initial thinking was that if the MOOC could be completed in 10 weeks or so, and given that I already had a couple of weeks' head start from the vacation work, I should be able to implement my standalone autonomous solution for the DB within the 12-week time frame.

Spoiler alert: Duckinator did not rival Tesla. I came to that realisation about four weeks into the project. At that stage, I was in the Object Detection activity of the MOOC. With the world in a frenzy over AI and ML, I was itching to dip my hands in some of this mysterious ML stuff.

Dr. Pretorius obliged, and my plan from this point was to implement my own standalone Duckietown-compliant Docker image for YOLOv5. Charged with the excitement of the new project direction, I began researching ML, computer vision algorithms and YOLO itself. Implementing the YOLOv5 model was relatively smooth sailing, and I loved learning computer vision. In all honesty, my YOLOv5 model just reorganised the MOOC's Object Detection activity into a standalone Docker image, since the MOOC hides the Docker image from the student. I obtained the training data using the MOOC helper files and then trained the YOLOv5 model using a Google Colab script very similar to the one provided by the MOOC.
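For context, the torch.hub interface that the YOLOv5 project exposes makes the detection step itself quite compact. The sketch below is illustrative only: the weights file best.pt and the frame path are placeholders, not the actual project code.

```python
# Minimal YOLOv5 inference sketch (illustrative, not the project's actual code).
# Assumes a custom-trained weights file "best.pt" produced by the Colab training
# script, loaded through the public ultralytics/yolov5 torch.hub interface.
import torch

# Load the custom-trained model (fetches the yolov5 repo on first call).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run inference on a single camera frame (file path, URL, or numpy array).
results = model("camera_frame.jpg")  # placeholder frame

# Each detection row: x_min, y_min, x_max, y_max, confidence, class index.
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: conf={conf:.2f}, box={box}")
```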

I slightly extended the YOLOv5 model from the MOOC by training it to detect DBs, which proved to be sort of successful. As I only had one Duckiebot, I tested the model by parking Duckinator in front of a mirror or putting it in front of my laptop showing photos of other DBs. Due to this shabby testing, I left this extension out of my write-up. This was all completed after week 7.

With the world in a frenzy over AI and ML, I was itching to dip my hands in some of this mysterious ML stuff.

[Image: Duckiebot image detection]
Did you meet your objectives?

Completing the Object Detection model effectively meant that my revised project brief had been met, but as I still had some time, I needed to extend the model in some way.

Duckinator had eyes, but I wanted to make it move … autonomously. I had the idea of creating a safety controller where the distance of objects from the Duckiebot could be inferred using the predicted bounding box and perspective geometry.

My theory went as follows: knowing the real-world size of all the objects the DB could detect and comparing this to the dimensions of the bounding box provided by the YOLO model, it should be possible to infer the depth of the object, and this depth could then be used as the basis for autonomous controller commands. This led me to research autonomous vehicle safety architectures/controllers and modern depth-estimation algorithms. I soon realised that much more advanced autonomous architectures exist. For instance, modern autonomous vehicles fuse camera feeds, object detection models, kinematic models and various other sensors to generate vector or depth maps. The creation of these depth maps is extremely complex and a field of intense research.
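The geometry behind this theory is the pinhole camera relation: an object of real-world height H at depth Z projects to an image height of h = f·H/Z pixels, so Z = f·H/h. A minimal sketch of that inference, with made-up focal length and object sizes, might look like this:

```python
# Depth-from-bounding-box sketch using the pinhole relation Z = f * H / h.
# All numbers here are hypothetical placeholders, not calibrated values.

FOCAL_LENGTH_PX = 320.0  # focal length in pixels (assumed)
REAL_HEIGHT_M = {"duckie": 0.045, "duckiebot": 0.11}  # rough object heights

def estimate_depth(class_name: str, box_height_px: float) -> float:
    """Infer an object's depth from the height of its bounding box."""
    return FOCAL_LENGTH_PX * REAL_HEIGHT_M[class_name] / box_height_px

# Under these numbers, a 30 px tall duckie sits roughly half a metre away.
print(f"{estimate_depth('duckie', 30.0):.2f} m")
```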

[Image: imposter_marked]
What were the challenges you encountered during your project?

After coding up my perspective-projection algorithm, I obtained unexpected results: negative distances, in most cases. When I described my algorithm in more detail to Dr. Pretorius, he made it clear that a simple perspective projection would not work in this case.

I was projecting everything from the camera image onto the ground plane, but of course the duckies and any other objects do not exist solely on the ground plane. This being week 10 of the project, I had simply run out of time: I had to scrap the perspective projection and could not implement any of the more complex algorithms out there.
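To make the failure mode concrete: with a ground-plane projection, each pixel's viewing ray is intersected with the ground, and a ray that points at or above the horizon (as rays through the tops of objects often do) never hits the ground in front of the camera, yielding a negative distance. A numeric sketch with made-up camera parameters:

```python
# Why ground-plane projection breaks for points above the ground.
# Camera height, pitch, and intrinsics below are hypothetical values.
import math

CAM_HEIGHT_M = 0.1             # camera height above the ground (assumed)
PITCH_RAD = math.radians(15)   # camera tilted 15 degrees downward (assumed)
FOCAL_LENGTH_PX = 320.0
IMAGE_CENTER_ROW = 240.0       # principal point row of a 480-row image

def ground_distance(pixel_row: float) -> float:
    """Distance at which the pixel's viewing ray intersects the ground plane."""
    # Ray angle below horizontal: camera pitch plus the per-pixel offset.
    ray_angle = PITCH_RAD + math.atan2(pixel_row - IMAGE_CENTER_ROW, FOCAL_LENGTH_PX)
    return CAM_HEIGHT_M / math.tan(ray_angle)

print(ground_distance(400.0))  # pixel well below the horizon: plausible distance
print(ground_distance(100.0))  # pixel above the horizon: negative "distance"
```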

I was devastated that Duckinator was not going to move.

Upon some reflection, though, the YOLOv5 model was working quite well, and I had all this research on autonomous architectures and depth estimation. One of the architectures I had researched was the Braitenberg vehicle, possibly the simplest autonomous architecture of all. A basic Braitenberg controller was simple enough to implement and would mean that, once again, Duckinator could move.

Bounding boxes were populated onto a black image and then divided into left and right region maps. These maps were then element-wise multiplied with a weight matrix to produce a scalar value that could be used for wheel commands. Using the 'fear' Braitenberg vehicle, the DB would then steer away from any detected objects.
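Following that description, a 'fear' controller can be sketched in a few lines. The weight matrix design, gains, and wheel mapping below are assumptions chosen for illustration, not the project's actual values.

```python
# "Fear" Braitenberg controller sketch based on the description above.
# Weight design, gains, and wheel mapping are illustrative assumptions.
import numpy as np

H, W = 480, 640  # camera image dimensions

def braitenberg_commands(boxes):
    """boxes: list of (x_min, y_min, x_max, y_max) detections in pixels."""
    # Populate bounding boxes onto a black image.
    mask = np.zeros((H, W), dtype=float)
    for x0, y0, x1, y1 in boxes:
        mask[int(y0):int(y1), int(x0):int(x1)] = 1.0

    # Weight matrix: lower (nearer) pixels contribute more strongly.
    weights = np.tile(np.linspace(0.0, 1.0, H)[:, None], (1, W))

    # Element-wise multiply, then reduce each half-image to a scalar.
    left = float((mask[:, : W // 2] * weights[:, : W // 2]).sum())
    right = float((mask[:, W // 2 :] * weights[:, W // 2 :]).sum())

    # "Fear" wiring: stimulus on one side drives the wheel on the same side,
    # so an obstacle on the left speeds up the left wheel and steers us right.
    base_speed, gain = 0.3, 1e-5
    return base_speed + gain * left, base_speed + gain * right

print(braitenberg_commands([(50, 200, 200, 400)]))  # obstacle on the left
```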

Another realisation was that my project was experimental, with one of its main goals being to act as a stepping stone for future projects. At this stage (two weeks before my project was due), I was satisfied with a newly engineered aim: evaluating the viability of the Duckietown platform at the undergraduate level by implementing an ML object detection model. The key outputs were the YOLOv5 model (the Duckie Detector) as well as possible future projects and trajectories for students.

The real learning occurs when getting your hands dirty, experimenting and troubleshooting.

[Image: duckie_avoidance]
What are your final considerations?

Reflecting on my journey from a complete beginner to a slightly more competent beginner, here's my advice for those on a similar path:

Begin with a blank Duckietown-compliant Docker image and dive into coding a demo, whether it’s based on my solution or another. Ultimately, the goal is to first understand the code and then attempt to recreate it without directly copying. 
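For a sense of scale, a node in a Duckietown-compliant image is typically a small Python class built on the DTROS wrapper from the official templates. The sketch below is a hypothetical minimal node modelled on those public templates; the node name, topic, and message are chosen purely for illustration.

```python
#!/usr/bin/env python3
# Hypothetical minimal Duckietown-style node, modelled on the public
# dt ROS templates; names and topics here are illustrative assumptions.
import rospy
from std_msgs.msg import String
from duckietown.dtros import DTROS, NodeType

class HelloNode(DTROS):
    def __init__(self, node_name):
        # DTROS wraps standard rospy initialisation in Duckietown conventions.
        super(HelloNode, self).__init__(node_name=node_name,
                                        node_type=NodeType.GENERIC)
        self.pub = rospy.Publisher("~hello", String, queue_size=1)

    def run(self):
        rate = rospy.Rate(1)  # publish once per second
        while not rospy.is_shutdown():
            self.pub.publish(String(data="Hello from Duckinator!"))
            rate.sleep()

if __name__ == "__main__":
    HelloNode(node_name="hello_node").run()
```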

While documentation and EdX activities are useful in providing broad overviews and points of contact for debugging, relying solely on them may create a deceptive sense of competency. 

The real learning occurs when getting your hands dirty, experimenting and troubleshooting.

Thank you very much for taking the time; we greatly appreciated your story! Is there anything else you would like to add?

Yes, I would say that embracing the hands-on experience is key to understanding the platform and being immersed in the infectious ethos surrounding Duckietown.

On that note, I would like to express my gratitude to Dr. Pretorius for granting me the freedom to experiment and the opportunity to work with the Duckiebot. I eagerly await future projects and the growth of the Duckietown community.

[Image: DB and duckie pyramid]

Learn more about Duckietown

Duckietown enables state-of-the-art robotics and AI learning experiences.

It is designed to help teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of human knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell?

Reach out to us!