Projects for Learning Robotics and AI
This is a collection of hands-on projects for learning robotics and AI. Creating new autonomous behaviors is a great way for learners to collaborate and gain real-world skills in robotics systems and AI.
Project ideas
Reinforcement Learning for the Control of Autonomous Robots
This thesis applies Reinforcement Learning (RL) to autonomous lane-keeping and pairs it with YOLOv5-based obstacle detection in Duckietown, achieving safe navigation.
Smart Lighting: Realistic Day and Night in Duckietown
What if Duckietowns had smart lighting, so that car and street light fields would combine dynamically for optimal visual perception?
Duckiebot Intersection Navigation with DBSCAN
This project uses DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to improve Duckiebot intersection navigation.
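As a rough illustration of the clustering step only (not the project's actual code), the sketch below groups hypothetical 2D obstacle detections seen at an intersection using scikit-learn's DBSCAN; the points, eps, and min_samples values are made up.

```python
# Minimal sketch: clustering 2D obstacle detections with DBSCAN (scikit-learn).
# The point set and parameters below are illustrative, not the project's values.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (x, y) detections in the robot frame: points belonging to two
# vehicles waiting at an intersection, plus one spurious reading.
detections = np.array([
    [0.50, 0.30], [0.52, 0.31], [0.49, 0.33],     # vehicle A
    [1.10, -0.20], [1.12, -0.22], [1.09, -0.19],  # vehicle B
    [3.00, 2.00],                                 # noise
])

# eps: max neighbor distance (m); min_samples: points needed to form a core point.
labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(detections)

for label in set(labels):
    if label == -1:
        continue  # -1 marks points DBSCAN rejected as noise
    cluster = detections[labels == label]
    print(f"cluster {label}: centroid = {cluster.mean(axis=0)}")
```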
Obstacle Avoidance for Dynamic Navigation Using Obstavoid
The Obstavoid Algorithm enables real-time obstacle avoidance in Duckietown, calculating optimal paths over a 3D grid for dynamic, collision-free navigation.
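To give a flavor of the space-time cost-grid idea (this is not the Obstavoid implementation itself), the sketch below builds a small 3D grid over (time, x, y), marks the cells a hypothetical slower vehicle occupies, and scores two candidate trajectories by the cost of the cells they pass through; grid sizes, costs, and motions are made up.

```python
# Minimal sketch of a space-time cost grid: cells occupied by a (predicted)
# obstacle are expensive, and candidate trajectories are scored by the cells
# they traverse. All numbers here are illustrative.
import numpy as np

NX, NY, NT = 20, 10, 15              # grid cells in x, y and number of time steps
cost = np.zeros((NT, NX, NY))

# Hypothetical slower vehicle ahead: starts at x = 6, advances one cell every
# two time steps, and occupies lane cells y = 4..5.
for t in range(NT):
    cost[t, min(6 + t // 2, NX - 1), 4:6] = 100.0

def trajectory_cost(traj):
    """Sum the grid cost of every (t, x, y) cell a candidate trajectory occupies."""
    return sum(cost[t, x, y] for t, x, y in traj)

# The ego Duckiebot advances one cell per time step from x = 0.
stay   = [(t, min(t, NX - 1), 4) for t in range(NT)]   # keeps the lane, catches up to the vehicle
swerve = [(t, min(t, NX - 1), 7) for t in range(NT)]   # shifts laterally and overtakes

print("stay-in-lane cost:", trajectory_cost(stay))     # high: passes through occupied cells
print("swerve cost:", trajectory_cost(swerve))         # zero: collision-free
```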
ProTip: Duckiebot Remote Connection
Have you ever wanted to work from home but your Duckiebot is back at the lab? Learn how to access your Duckiebot from anywhere at any time.
Monocular Visual Odometry for Duckiebot Navigation
This project by Gianmarco Bernasconi, a former Duckietown student, estimates the Duckiebot's pose using a monocular visual odometry approach.
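For readers new to the topic, here is a minimal two-frame visual odometry sketch using OpenCV rather than the project's own pipeline; the camera intrinsics, file names, and feature-tracking parameters are placeholders.

```python
# Minimal sketch of two-frame monocular visual odometry with OpenCV.
# Intrinsics, image paths and parameters below are hypothetical placeholders.
import cv2
import numpy as np

K = np.array([[320.0,   0.0, 320.0],   # hypothetical pinhole camera intrinsics
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Track corner features from the previous frame into the current one.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

# Recover the relative camera motion (rotation R, unit-scale translation t).
E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)

print("rotation:\n", R)
print("translation direction:", t.ravel())  # monocular VO recovers translation only up to scale
```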
Goto-1: Planning with Dijkstra
This project enhances Duckiebot planning capabilities for autonomous navigation in Duckietowns using Dijkstra's algorithm.
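As a refresher on the underlying planner (not the project's code), the sketch below runs Dijkstra's algorithm over a small, made-up graph of Duckietown intersections with edge weights in tiles.

```python
# Minimal sketch of Dijkstra's algorithm on a hypothetical Duckietown road graph.
# Intersection names and edge lengths are made up for illustration.
import heapq

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def dijkstra(graph, start, goal):
    """Return (cost, path) of the shortest route from start to goal."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

print(dijkstra(graph, "A", "D"))  # -> (4, ['A', 'B', 'C', 'D'])
```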
YOLO-based Robust Object Detection in Duckietown
This project implements robust object detection in Duckietown for Duckiebots under varying lighting conditions and object clutter using a YOLO-based neural network.
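For context, the sketch below runs an off-the-shelf pretrained YOLOv5 model through the public torch.hub interface; the model choice, image path, and confidence threshold are illustrative, and the project's own network and training setup are not shown here.

```python
# Minimal sketch of running a pretrained YOLOv5 detector on a camera image
# via torch.hub. File name and threshold are illustrative placeholders.
import torch

# Downloads ultralytics/yolov5 and the small pretrained model on first use.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4  # discard low-confidence detections

results = model("duckiebot_camera_frame.jpg")   # accepts a path, URL or numpy image
results.print()                                  # summary of detected classes

# Each row: x_min, y_min, x_max, y_max, confidence, class index.
for *box, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], round(conf, 2), [round(v, 1) for v in box])
```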
Implementing vision-based dynamic obstacle avoidance
This student project implements dynamic obstacle avoidance for Duckiebots with the aim of detecting and navigating around static and moving obstacles.
Development of an Ackermann steering autonomous vehicle
This student project implements an Ackermann steering system on a Duckiebot to simulate a four-wheeled vehicle and better approximate a more complex, real-world car model.
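As background on the steering geometry involved (not the project's code), the sketch below converts a bicycle-model steering command into separate left and right front-wheel angles; the wheelbase and track width values are illustrative, not measurements of the modified Duckiebot.

```python
# Minimal sketch of Ackermann steering geometry: given a commanded steering
# angle for an equivalent bicycle model, compute individual front-wheel angles.
# Vehicle dimensions below are hypothetical.
import math

WHEELBASE = 0.20      # distance between front and rear axles (m), hypothetical
TRACK_WIDTH = 0.12    # distance between left and right wheels (m), hypothetical

def ackermann_angles(steer_angle):
    """Left/right front-wheel angles (rad) for a bicycle-model steering angle."""
    if abs(steer_angle) < 1e-6:
        return 0.0, 0.0
    radius = WHEELBASE / math.tan(steer_angle)            # turn radius at the rear axle center
    left = math.atan(WHEELBASE / (radius - TRACK_WIDTH / 2))   # inner wheel turns more
    right = math.atan(WHEELBASE / (radius + TRACK_WIDTH / 2))  # outer wheel turns less
    return left, right

left, right = ackermann_angles(math.radians(15))
print(f"left wheel: {math.degrees(left):.1f} deg, right wheel: {math.degrees(right):.1f} deg")
```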
Introducing Autonomous Parking in Duckietown Cities
This student project implements an autonomous parking solution, inclusive of parking lot design and autonomous behavior, for Duckiebots in Duckietown.
Safe Reinforcement Learning (RL) Thesis Project
“Safe Reinforcement Learning (Safe-RL)” explores using Deep Q-Learning to train Duckiebots to perform lane following. Reproduce these results with Duckietown.
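To give a flavor of the method (this is not the thesis code), the sketch below shows a single Deep Q-Learning update for a hypothetical lane-following state of lateral offset and heading error; the network size, action set, and hyperparameters are made up.

```python
# Minimal sketch of one Deep Q-Learning step for lane following, assuming a
# 2D state (lateral offset d, heading error phi) and discrete steering actions.
# All values and dimensions are illustrative.
import random
import torch
import torch.nn as nn

ACTIONS = [-0.5, 0.0, 0.5]        # hypothetical steering commands (rad)
GAMMA, EPSILON = 0.99, 0.1

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step toward the Bellman target."""
    q = q_net(torch.tensor(state))[action]
    with torch.no_grad():
        target = reward + (0.0 if done else GAMMA * q_net(torch.tensor(next_state)).max())
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example transition: slightly off-center, take an action, receive a small reward.
td_update([0.05, 0.1], select_action([0.05, 0.1]), 1.0, [0.03, 0.05], False)
```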
Anatidaephilia: centralized city-based SLAM (cSLAM)
Anatidaephilia is loving the idea that somewhere, somehow, a duck is watching you. cSLAM equips Duckietowns with the ability to localize Duckiebots.
This page is still a work in progress. Please pardon the dust as we upload more projects from our archives. Do not hesitate to contact us for additional ideas for your class!
Have Duckietown projects for learning robotics to share? Let us know!
If you would like your project to be featured on this page, let us know about it!