
Learning Autonomy with Vincenzo Polizzi

ETH Zurich, March 11, 2022: How Vincenzo discovered his true professional passion as a student using Duckietown.

Learning Autonomy in practice with Vincenzo Polizzi

Vincenzo Polizzi studied robotics, systems and control at the Swiss Federal Institute of Technology (ETH Zurich). Below, Vincenzo shares his experience with Duckietown: starting off as a student, becoming a teaching assistant, and going on to use Duckietown to power his own research as he moves from academia to industry.


Could you tell us something about yourself?

I’m Vincenzo Polizzi. I studied automation engineering at the Politecnico di Milano, and I currently study robotics, systems and control at the Swiss Federal Institute of Technology (ETH Zurich).

You use Duckietown. Could you tell us when you first came into contact with the project, and what attracted you to Duckietown?

Sure! I learned about Duckietown in my first year of the master’s program at ETH, in a course called “Autonomous mobility on demand, from car to fleet”. I saw these cars, these robots, and I asked myself, “what is this thing?” It seemed very interesting. The first thing that struck me was that it did not look theoretical, but clearly practical.

"It captures you with simplicity and then you stay for the complexity."

So the idea of a practical aspect interested you?

Yes, during the presentation it was clear that the course was based on projects the students had to carry out, where one could practice what they had learned theoretically in other classes.
I come from a scientific high school, and I studied automation engineering in Milan. In both experiences, I was used to learning concepts theoretically. For example, when you design a control system for a plant on paper at university, you don’t really face the complexity of implementing it on a real object.

I have to say that I have always been very passionate about robotics and informatics. In fact, even in high school I was building little robots, and I participated in the Rome Cup robotics competition held by Fondazione Mondo Digitale, where there were robots similar in shape to those of Duckietown, but with completely different scientific content. So in Duckietown I saw something similar to what I was doing in my free time. I wanted to see exactly what was inside, and there I discovered a whole other world, obviously much more scientific than what a normal high school student could imagine by themselves. Initially, though, I was curious to see a course where one can practice all the knowledge they have gradually acquired. It is not just about writing an equation and finding a solution, but about making things work.

What is your relationship with Duckietown, how long have you been using it? How do you interact with the Duckietown ecosystem? How do you use it, what do you do with it?

These are interesting questions, because I started as a student and then managed to see what’s behind Duckietown. I attended the course Duckietown held at ETH in 2019. The class was limited to 30 students, and I was really excited to be part of it. I met many excellent students there, some of whom I am still in touch with today.

When I started the course, I immediately told myself, “Duckietown is a great thing. If all universities used Duckietown, this would be a better world.” I liked the class a lot, and then I also had the opportunity to be a TA. The TAship was an important step because I learned more than during the course. It is one thing to live the experience as a student who has to take exams, complete various projects, and so on; to organize an activity, you need a deeper understanding. You have to take care of all the details and foresee the parts of an exercise that can be harder or simpler for the students. This experience helped me a lot. For example, I did an internship in Zurich where we had to develop a software infrastructure for a drone, and I found myself thinking, “wow, this can be done with Duckietown, we can use the same technologies.” I noticed that even in industry we often see the same technologies and tools that you can learn about thanks to Duckietown. Of course, a company may have its own customized tools, probably well optimized for its products, or use some other specific tool, but you already know more or less what these tools are about, because in Duckietown you have already seen how a robotics system should work and the pieces it is composed of. Duckietown gave me a huge boost with the internship and my Master’s thesis at NASA JPL. Consider that my thesis was on a multi-drone system, so I used, for example, Docker as a tool to simulate the different agents. With Duckietown, I acquired technical knowledge that I have used in many other projects, including at work.

Do you still use it today?

The last project I did with Duckietown is DuckVision. I know we could have thought of a better name. With one of my Duckietowner friends, Trevor Phillips, we enhanced the Duckiebot perception pipeline with another camera: a stereo camera made by Luxonis and OpenCV called the OAK-D (OpenCV AI Kit with Depth). This sensor is not just a simple camera; it also mounts a VPU (Vision Processing Unit), meaning it can analyze and make inferences on the images the camera acquires, onboard. It can perform object detection and tracking, gesture recognition, semantic segmentation, etc., and there are plenty of models freely available online that can run on the OAK-D. We integrated this sensor into the Duckietown ecosystem using an approach similar to the one used in the MOOC “Self-Driving Cars with Duckietown”, and we created a small series of tutorials where you can just plug the camera onto the robot, run our Docker container, and have fun! With this project, we passed the first phase of the OpenCV AI Competition 2021. The idea behind the project was to increase the Duckiebot’s understanding of the environment: by using depth information, the robot can have a better representation of its surroundings and so, for example, better knowledge of its position. Also, in our opinion, the OAK-D in Duckietown can boost research in autonomous vehicles and perception.
I would like to add something about the use of Duckietown. I have seen this project both as a student and from behind the scenes, and I really understood that by using this platform you learn a lot of things that are useful not only in academia but also in the workplace, practical knowledge that is often difficult to acquire during school. In this regard, given my history, I am Sicilian but I studied in Milan and then went to Zurich, I asked myself what contribution I could bring from my travels, so I thought about using Duckietown in some universities here in Sicily, at the universities of Palermo and Messina. Also, at the Politecnico di Milano, for example, they have already begun to use it: they participated in the AI-DO and placed well, ending up among the finalists, so there is a lot of interest in this project.
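For readers curious about the kind of onboard inference Vincenzo describes, here is a minimal sketch of an OAK-D object-detection pipeline using the depthai Python API. It is not the DuckVision code, and "mobilenet-ssd.blob" is a placeholder for a compiled model file you would supply.

```python
# Minimal OAK-D sketch using the depthai Python API (v2). Illustrative only:
# the blob path below is a placeholder, not a file shipped with DuckVision.
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)          # match the network's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder compiled model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

# Inference runs on the camera's VPU; the host only reads results.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in queue.get().detections:
            print(det.label, round(det.confidence, 2))
```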

Did you receive a positive response every time you proposed Duckietown?

Yes, and there is huge enthusiasm on the part of the students. I spoke with student associations first, then with the professors, etc., but when students see Duckietown for the first time, they are always really enthusiastic about using it.

"There is something that captures you in some way, and then just opens up a world when you start to actually see how all the systems are implemented. This is the nice thing in my opinion, you can decide the level of complexity you want to achieve."

The duck was a great idea!

Absolutely! The duck was a great idea, yes. I like contrasts: you see a super simple, friendly thing that hides a state-of-the-art robotics platform. I saw this reaction in the students too, because the duck is the first thing you see; it looks like a game, something to play with. That is the first impression, then when you start you get curious. It captures you with simplicity and then you stay for the complexity.

Would you suggest Duckietown to friends and colleagues?

Sure! There is something that captures you and opens up a world when you start to see how all the systems are implemented. This is the nice thing, in my opinion: you can decide the level of complexity you want to achieve. It’s a platform that looks like something to play with, a game, but in reality there is huge potential in terms of the knowledge everyone can acquire; it’s something you cannot easily find elsewhere. I also think it offers great support, such as high-quality educational material and exercises. You can learn about many different aspects of robotics: you can do control, machine learning, perception. There’s really a world to explore. You can see everything there is about robotics, but you can also focus on one aspect that you’re more passionate about. So yes, I would recommend it because you can learn a lot, and as a student myself I would recommend it to my fellow students.

Learn more about Duckietown

The Duckietown platform offers robotics and AI learning experiences.

Duckietown is modular, customizable and state-of-the-art. It is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.

Tell us your story

Are you an instructor, learner, researcher or professional with a Duckietown story to tell? Reach out to us!

Join the AI Driving Olympics, 6th edition, starting now!

The 2021 AI Driving Olympics

Compete in the 2021 edition of the Artificial Intelligence Driving Olympics (AI-DO 6)!

The AI-DO serves to benchmark the state of the art of artificial intelligence in autonomous driving by providing standardized simulation and hardware environments for tasks related to multi-sensory perception and embodied AI.

Duckietown traditionally hosts AI-DO competitions biannually, with finals events held at machine learning and robotics conferences such as the International Conference on Robotics and Automation (ICRA) and the Conference on Neural Information Processing Systems (NeurIPS).

AI-DO 6 will be held in conjunction with NeurIPS 2021 and will have three leagues: urban driving, advanced perception, and racing. The winter champions will be announced during NeurIPS 2021, on December 10, 2021!

Urban driving league

The urban driving league uses the Duckietown platform and presents several challenges, each of increasing complexity.

The goal in each challenge is to develop a robotic agent for driving Duckiebots “well”. Baseline implementations are provided to test different approaches. There are no constraints on how your agents are designed.

Each challenge adds a layer of complexity: intersections, other vehicles, pedestrians, etc. You can check out the existing challenges on the Duckietown challenges server.

AI-DO 2021 features four challenges: lane following (LF), lane following with intersections (LFI), lane following with vehicles (LFV) and lane following with vehicles and intersections, multi-body, with full information (LFVI-multi-full).

All challenges have a simulation and hardware component (🚙,💻), except for LFVI-multi-full, which is simulation (💻) only.

The first phase (until Nov. 7) is a practice one. Results do not count towards leaderboards.

The second phase (Nov. 8-30) is the live competition and results count towards official leaderboards. 

Selected submissions (those that perform well enough in simulation) will be evaluated on hardware in Autolabs. The submissions scoring best in Autolabs will advance to the finals.

During the finals (Dec. 1-8) one additional submission is possible for each finalist, per challenge.

Winners (top 3) of the resulting leaderboard will be declared AI-DO 2021 winter champions and celebrated live during NeurIPS 2021. We require champions to submit a short video (2 mins) introducing themselves and describing their submission.

Winners are invited (but not required) to join the NeurIPS event on December 10th, 2021, starting at 11:25 GMT (a Zoom link will follow).

Overview
🎯Goal: develop robotic agents for challenges of increasing complexity
🚙Robot: Duckiebot (DB21M/J)
👀Sensors: camera, wheel encoders
Schedule
🏖️Practice: Nov. 1-7
🚙Competition: Nov. 8-30
🏘️Finals: Dec. 1 – 8
🏆Winners: Dec. 10
Rules
🏖️Practice: unlimited non-competing submissions
🚙Competition: best in sim are evaluated on hardware in Autolabs
🏘️Finals: one additional submission for Autolabs
🏆Winners: 2-minute video describing the submission for the NeurIPS 2021 event

The challenges

Lane following 🚙 💻

LF – The most traditional of AI-DO challenges: have a Duckiebot navigate a road loop without intersections, pedestrians (duckies), or other vehicles. The objective is to travel the longest path in a given time while staying in the lane, i.e., without committing driving infractions.
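As a flavor of what a minimal (and certainly not competitive) lane-following agent can look like, here is a hedged sketch of the classic approach: command a steering rate proportional to the lateral offset d and heading error phi reported by some upstream lane-pose estimator (phi acts as a damping term, since d changes at a rate proportional to sin(phi)). The gains, limits, and the estimator itself are illustrative assumptions, not the provided baselines.

```python
# Hypothetical minimal lane-following controller (not an official baseline).
# Assumes an upstream perception module supplies the lane pose (d, phi).
import numpy as np

class SimpleLaneFollower:
    def __init__(self, k_d=-6.0, k_phi=-2.0, v_nominal=0.2):
        self.k_d = k_d      # gain on lateral offset d (illustrative)
        self.k_phi = k_phi  # gain on heading error phi (illustrative)
        self.v = v_nominal  # constant forward speed [m/s]

    def compute_commands(self, d, phi):
        # d: signed offset from the lane center [m]; phi: heading error [rad].
        omega = self.k_d * d + self.k_phi * phi
        return self.v, float(np.clip(omega, -4.0, 4.0))

follower = SimpleLaneFollower()
v, omega = follower.compute_commands(d=0.03, phi=-0.10)
print(f"v = {v:.2f} m/s, omega = {omega:.2f} rad/s")
```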

Current AI-DO leaderboards: LF-sim-validation, LF-sim-testing.

Previous AI-DO leaderboards: sim-validation, sim-testing, real-validation.

A DB21 Duckiebot in a Duckietown equipped with Autolab infrastructure.

Lane following with intersections 🚙 💻

LFI – This challenge builds upon LF by increasing the complexity of the road network, now featuring 3 and/or 4-way intersections, defined according to the Duckietown appearance specifications. Traffic lights will not be present on the map. The objective is to drive the longest distance while not breaking the rules of the road, now more complex due to the presence of traffic signs.

Current AI-DO leaderboards: LFI-sim-validation, LFI-sim-testing.

Previous AI-DO leaderboards: sim-validation, sim-testing.

Duckiebot facing a lane following with intersections (LFI) challenge

Lane following with vehicles 🚙 💻

LFV – In this traditional AI-DO challenge, contestants seek to travel the longest path in a city without intersections or pedestrians, but with other vehicles on the road. Non-playing vehicles (i.e., vehicles not running the user’s submitted agent) can be in the same and/or opposite lanes and have variable speed.

Current AI-DO leaderboards: LFV-sim-validation, LFV-sim-testing.

Previous AI-DO leaderboards: (LFV-multi variant): sim-validation, sim-testing, real-validation.

Lane following with vehicles and intersections (stateful) 💻

LFVI-multi-full – This debuting challenge brings together roads with intersections and other vehicles. The submitted agent is deployed on all Duckiebots on the map (-multi) and is provided with full information, i.e., the state of the other vehicles on the map (-full). This challenge is in simulation only.

Getting started

All you need to get started and participate in the AI-DO is a computer, a good internet connection, and the ambition to challenge your skills against the international community!  

We provide webinars, operation manuals, and baselines to get started.

May the duck be with you! 

Thank you to our generous sponsors!

EdTech awards 2021: Duckietown finalist in 3 categories!

Duckietown reaches the finals in the EdTech Awards 2021

The EdTech awards are the largest and most competitive recognition program in all of education technology.

The competition, led by EdTech Digest, recognizes the biggest names in edtech, and those who soon will be, by identifying, all over the world, the products, services, and people that best promote education through the use of technology, for the benefit of learners.

The 2021 edition has brought a big surprise to Duckietown, as it was nominated as a finalist in 3 different categories:

  • Cool Tool Award: as robotics (for learning, education) solution;
  • Cool Tool Award: as higher education solution;
  • Trendsetter Award: as a product or service setting a trend in education technologies.

Although reaching the finals is just a starting point, we are proud of the hard work done by the team in this particularly difficult year of pandemic and lockdowns, and grateful to you all for the incredible support, constructive feedback, and contributions!

To the future, and beyond!


AI Driving Olympics 5th edition: results

AI-DO 5: Urban league winners

This year’s challenges were lane following (LF), lane following with pedestrians (LFP) and lane following with other vehicles, multibody (LFV_multi). 

Let’s find out the results in each category:

LF

  1. Andras Beres 🇭🇺  
  2. Zoltan Lorincz 🇭🇺
  3. András Kalapos 🇭🇺

LFP

  1. Bea Baselines 🐤
  2. Melisande Teng 🇨🇦 
  3. Raphael Jean 🇨🇦

LFV_multi

  1. Robert Moni 🇭🇺
  2. Márton Tim 🇭🇺
  3. Anastasiya Nikolskay 🇷🇺

Congratulations to the Hungarian Team from the Budapest University of Technology and Economics for collecting the highest rankings in the urban league!

Here’s how the winners in each category performed both in the qualification (simulation) and in the finals running on real hardware:

Andras Beres - Lane following (LF) winner

Melisande Teng - Lane following with pedestrians (LFP) winner

Robert Moni - Lane following with other vehicles, multibody (LFV_multi) winner

AI-DO 5: Advanced Perception league winners

Great participation and results in the Advanced Perception league! Check out this year’s winners in the video below:

AI-DO 5 sponsors

Many thanks to our amazing sponsors, without which none of this would have been possible!

Stay tuned for next year’s AI Driving Olympics. Visit the AI-DO page for more information on the competition and to browse this year’s introductory webinars, or check out the Duckietown massive open online course (MOOC) and prepare for next year’s competition!

Join the AI Driving Olympics, 5th edition, starting now!

Compete in the 5th AI Driving Olympics (AI-DO)

The 5th edition of the Artificial Intelligence Driving Olympics (AI-DO 5) has officially started!

The AI-DO serves to benchmark the state of the art of artificial intelligence in autonomous driving by providing standardized simulation and hardware environments for tasks related to multi-sensory perception and embodied AI.

Duckietown hosts AI-DO competitions biannually, with finals events held at machine learning and robotics conferences such as the International Conference on Robotics and Automation (ICRA) and the Conference on Neural Information Processing Systems (NeurIPS).

AI-DO 5 will be held in conjunction with NeurIPS 2020 and will have two leagues: Urban Driving and Advanced Perception.

Urban driving league challenges

This year’s Urban League includes a traditional AI-DO challenge (LF) and introduces two new ones (LFP, LFVM).

Lane Following (LF)

The most traditional of AI-DO challenges: have a Duckiebot navigate a road loop without intersections, pedestrians (duckies), or other vehicles. The objective is to travel the longest path in a given time while staying in the lane.

Lane Following with Pedestrians (LFP)

The LFP challenge is new to AI-DO. It builds upon LF by introducing static obstacles (duckies) on the road. The objectives are the same as for lane following, but do not hit the duckies! 

Lane Following with Vehicles, multi-body (LFVM)

In this traditional AI-DO challenge, contestants seek to travel the longest path in a city without intersections nor pedestrians, but with other vehicles on the road. Except this year there’s a twist. In this year’s novel multi-body variant, all vehicles on the road are controlled by the submission.

Getting started: the webinars

We offer a short webinar series to guide contestants through the steps of participating, from running our baselines in simulation to deploying them on hardware. All webinars are at 9 am EST and free!

Introduction

Learn about the Duckietown project and the Artificial Intelligence Driving Olympics.

ROS baseline

How to run and build upon the “traditional” Robot Operating System (ROS) baseline; a hypothetical minimal node in this style is sketched below.
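The sketch subscribes to the camera stream and publishes wheel commands with the duckietown_msgs/Twist2DStamped message. The topic names and the constant command are illustrative assumptions, not the actual baseline code.

```python
#!/usr/bin/env python3
# Hypothetical minimal node in the spirit of the ROS baseline: subscribe to
# the camera, publish wheel commands. Topic names are illustrative.
import rospy
from sensor_msgs.msg import CompressedImage
from duckietown_msgs.msg import Twist2DStamped

class SimpleDriver:
    def __init__(self):
        self.pub = rospy.Publisher("~car_cmd", Twist2DStamped, queue_size=1)
        rospy.Subscriber("~image/compressed", CompressedImage,
                         self.on_image, queue_size=1)

    def on_image(self, msg):
        # A real agent would estimate the lane pose from the image here.
        cmd = Twist2DStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.v = 0.2      # forward speed [m/s]
        cmd.omega = 0.0  # angular rate [rad/s]
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_driver")
    SimpleDriver()
    rospy.spin()
```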

Local development

On the workflow for developing and deploying to Duckiebots, for hardware-based testing.

RL baseline

Learn how to use the PyTorch template for reinforcement learning approaches; a sketch of a typical policy network follows.
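As a hedged illustration (not the actual template), an RL agent for lane following might use a small convolutional policy network that maps a downscaled camera observation to the two command values (v, omega):

```python
# Illustrative policy network; architecture and sizes are assumptions,
# not the AI-DO PyTorch template.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, 2),  # outputs: (v, omega)
        )

    def forward(self, x):
        return self.head(self.conv(x))

policy = PolicyNet()
obs = torch.rand(1, 3, 60, 80)  # a downscaled RGB observation
action = policy(obs)
print(action.shape)  # torch.Size([1, 2])
```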

IL baseline

Introduction to the TensorFlow template, and the use of logs and the simulator for imitation learning; a behavior-cloning sketch follows.
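Stripped to its core, imitation learning here amounts to behavior cloning: regress logged driving commands from images. The sketch below (with random placeholder data standing in for real logs, and shapes chosen arbitrarily) shows the idea; it is not the actual template.

```python
# Behavior-cloning sketch: fit a small CNN to (image, command) pairs.
# Shapes and the random dataset are placeholders, not the AI-DO template.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 80, 3)),
    tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2),  # predict (v, omega)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for logged demonstrations.
images = np.random.rand(256, 60, 80, 3).astype("float32")
commands = np.random.rand(256, 2).astype("float32")
model.fit(images, commands, epochs=1, batch_size=32)
```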

Advanced sensing league challenges

Previous AI-DO editions featured detection, tracking, and prediction challenges built around the nuScenes dataset.

For the 5th iteration of AI-DO we have a brand new lidar segmentation challenge.

The challenge is based on the recently released lidar segmentation annotations for nuScenes and features an astonishing 1,400,000,000 lidar points annotated with one of 32 labels.

We hope that this new benchmark will help to push the boundaries in lidar segmentation. Please see https://www.nuscenes.org/lidar-segmentation for more details.

Furthermore, due to popular demand, we will organize the 3rd iteration of the nuScenes 3D detection challenge. Please see https://www.nuscenes.org/object-detection for more details.

AI-DO 5 Finals event

The AI-DO finals will be streamed LIVE during 2020 edition of the Neural Information Processing Systems (NeurIPS 2020) conference in December.

Learn more about the AI-DO here.

Thank you to our generous sponsors!

The Duckietown Foundation is grateful to its sponsors for supporting this fifth edition of the AI Driving Olympics!

IROS2020: Watch The Workshop on Benchmarking Progress in Autonomous Driving

What a start for IROS 2020 with the "Benchmarking Progress in Autonomous Driving" workshop!

The 2020 edition of the International Conference on Intelligent Robots and Systems (IROS) started great with the workshop on “Benchmarking Progress in Autonomous Driving”.

The workshop was held virtually on October 25th, 2020, using an engaging and concise format: a sequence of four 1.5-hour moderated round-table discussions (including an introduction), centered on four themes.

The discussions on the methods by which progress in autonomous driving is evaluated, benchmarked, and verified were exciting. Many thanks to all the panelists and the organizers!  

Here are the videos of the various sessions. 

Opening remarks

Theme 1: Assessing progress for the field of autonomous vehicles (AVs)

Moderator: Andrea Censi

Invited Panelists:

Theme 2: How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?

Moderator: Jacopo Tani

Invited Panelists:

Theme 3: Best practices for AV benchmarking

Moderator: Liam Paull

Invited Panelists:

Theme 4: Do we need new paradigms for AV development?

Moderator: Matt Walter

Invited Panelists:

Closing remarks

You can find additional information about the workshop here.

Duckietown and NVIDIA work together for accessible AI and robotics education: Meet the NVIDIA powered Duckiebot

Duckietown and NVIDIA partnership for accessible AI and robotics education

NVIDIA GTC, October 6, 2020: Duckietown and NVIDIA align efforts to push the boundaries of accessible, state-of-the-art higher-education in robotics and AI. The tangible outcome is a brand new “Founder’s edition” Duckiebot, which will be broadly available from January 2021, powered by the new NVIDIA Jetson Nano 2GB platform.

Read the full NVIDIA announcement here.

Meet the NVIDIA powered Duckiebot

Autonomy is already changing the world. Duckietown and NVIDIA recognize the importance of hands-on education in robotics and AI to empower everybody today to understand and design the next generations of autonomy.

The result of this collaboration is a new NVIDIA-powered Duckiebot, built on the new Jetson Nano 2GB board, which enables local execution of machine learning agents in the Duckietown ecosystem.

To celebrate this special occasion, the Duckiebot has been redesigned to include: new sensors (time-of-flight, IMU, encoders), a new custom-designed battery providing real-time diagnostics (state of charge, remaining autonomy, and other health metrics), and fun accessories like a screen to visualize key metrics. All of this while keeping the price accessible for anyone willing to experience the challenges of a real-life robotic ecosystem.

A great team

“The new NVIDIA Jetson Nano 2GB is the ultimate starter AI computer for educators and students to teach and learn AI at an incredibly affordable price.” said Deepu Talla, Vice President and General Manager of Edge Computing at NVIDIA. “Duckietown and its edX MOOC are leveraging Jetson to take hands-on experimentation and understanding of AI and autonomous machines to the next level.”

“The Duckietown educational platform provides a hands-on, scaled-down, accessible version of real-world autonomous systems,” said Emilio Frazzoli, Professor of Dynamic Systems and Control, ETH Zurich. “Integrating NVIDIA’s Jetson Nano power in Duckietown enables unprecedented access to state-of-the-art compute solutions for learning autonomy.”

Learn more

To know more about the technical specifications of the new NVIDIA powered Duckiebot, or to pre-order yours, visit the Duckietown project shop here.

The new Duckiebot will also be used in the “Self-driving Cars with Duckietown” Massive Open Online Course (MOOC) that will be held in March 2021 on edX. You can find more information about the MOOC here.

Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired – Learning from Virtual and Real Worlds

Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, the infrastructures of tactile trails or guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We propose an autonomous, trail-following robotic guide dog that is robust to variances of background textures, illuminations, and interclass trail variations. A deep convolutional neural network (CNN) is trained from both virtual and real-world environments. Our work includes two major contributions: 1) experiments verifying that the performance of our models trained in virtual worlds is comparable to that of models trained in the real world; 2) user studies with 10 blind users verifying that the proposed robotic guide dog can effectively assist them in reliably following man-made trails.

Did you find this interesting?

Read more Duckietown based papers here.

Integration of open source platform Duckietown and gesture recognition as an interactive interface for the museum robotic guide

In recent years, population aging has become a serious problem. To decrease the demand for labor when guiding visitors in museums, exhibitions, or libraries, this research designs an automatic museum robotic guide which integrates image and gesture recognition technologies to enhance the quality of visitors’ guided tours. The robot is a self-propelled vehicle developed with ROS (Robot Operating System), in which we achieve autonomous driving based on lane following via image recognition. This enables the robot to lead guests along a preplanned route to visit artworks. In conjunction with a vocal service about each artwork, the robot can convey a detailed description of the artwork to the guest. We also design a simple wearable device to perform gesture recognition. As a human-machine interface, it allows the guest to interact with the robot through hand gestures. To improve the accuracy of gesture recognition, we design a two-phase hybrid machine-learning framework. In the first phase (the training phase), the k-means algorithm is used to cluster historical data and filter outlier samples to prevent interference in the recognition phase. Then, in the second phase (the recognition phase), we apply the KNN (k-nearest neighbors) algorithm to recognize users’ hand gestures in real time. Experiments show that our method works in real time and achieves better accuracy than other methods.
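As a hedged sketch of the two-phase framework described above (the paper’s actual features and thresholds are not reproduced), scikit-learn makes the pattern compact: k-means filters outlier training samples, then KNN classifies gestures.

```python
# Two-phase sketch: phase 1 filters outlier training samples with k-means;
# phase 2 classifies gestures with KNN. Features and thresholds are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))     # placeholder gesture feature vectors
y = rng.integers(0, 3, size=300)  # placeholder gesture labels

# Phase 1 (training): cluster, then drop samples far from their centroid.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)
keep = dist < np.percentile(dist, 90)  # discard the 10% most outlying samples

# Phase 2 (recognition): KNN on the filtered training set.
knn = KNeighborsClassifier(n_neighbors=5).fit(X[keep], y[keep])
print(knn.predict(X[:5]))
```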

Did you find this interesting?

Read more Duckietown based papers here.

Hybrid control and learning with coresets for autonomous vehicles

Modern autonomous systems such as driverless vehicles need to safely operate in a wide range of conditions. A potential solution is to employ a hybrid systems approach, where safety is guaranteed in each individual mode within the system. This offsets complexity and responsibility from the individual controllers onto the complexity of determining discrete mode transitions. In this work we propose an efficient framework based on recursive neural networks and coreset data summarization to learn the transitions between an arbitrary number of controller modes that can have arbitrary complexity. Our approach allows us to efficiently gather annotation data from the large-scale datasets that are required to train such hybrid nonlinear systems to be safe under all operating conditions, favoring underexplored parts of the data. We demonstrate the construction of the embedding, and efficient detection of switching points for autonomous and non-autonomous car data. We further show how our approach enables efficient sampling of training data, to further improve either our embedding or the controllers.
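The authors’ exact construction is not reproduced here, but as an illustration of coreset-style data summarization, the sketch below implements greedy k-center selection: it picks a small subset of points that covers the dataset, which is one way to focus annotation effort on underexplored regions.

```python
# Greedy k-center coreset selection (illustrative, not the paper's method).
import numpy as np

def k_center_greedy(X, k, seed=0):
    """Return indices of k points chosen to greedily minimize the maximum
    distance from any point to its nearest selected center."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # farthest point from current centers
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(X - X[idx], axis=1))
    return selected

# Placeholder embeddings standing in for driving-state features.
X = np.random.default_rng(1).normal(size=(1000, 8))
coreset = k_center_greedy(X, k=20)
print(coreset[:5])
```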

Did you find this interesting?

Read more Duckietown based papers here.