The AI Driving Olympics at NIPS 2018

Authors:

Andrea Censi, Liam Paull, Jacopo Tani, Julian Zilly, Thomas Ackermann, Oscar Beijbom, Berabi Berkai, Gianmarco Bernasconi, Anne Kirsten Bowser, Simon Bing, Pin-Wei David Chen, Yu-Chen Chen, Maxime Chevalier-Boisvert, Breandan Considine, Andrea Daniele, Justin De Castri, Maurilio Di Cicco, Manfred Diaz, Paul Aurel Diederichs, Florian Golemo, Ruslan Hristov, Lily Hsu, Yi-Wei Daniel Huang, Chen-Hao Peter Hung, Qing-Shan Jia, Julien Kindle, Dzenan Lapandic, Cheng-Lung Lu, Sunil Mallya, Bhairav Mehta, Aurel Neff, Eryk Nice, Yang-Hung Allen Ou, Abdelhakim Qbaich, Josefine Quack, Claudio Ruch, Adam Sigal, Niklas Stolz, Alejandro Unghia, Ben Weber, Sean Wilson, Zi-Xiang Xia, Timothius Victorio Yasin, Nivethan Yogarajah, Yoshua Bengio, Tao Zhang, Hsueh-Cheng Wang, Matthew Walter, Stefano Soatto, Magnus Egerstedt, Emilio Frazzoli

Published in the RSS Workshop on New Benchmarks, Metrics, and Competitions for Robotic Learning

Link: Available here


How Duckietown inspired a 14-year-old girl to become a tech entrepreneur

We host a guest post by Valeria Cagnina, who had the luck to meet our team very early – in fact, when the first Duckietown was still being built – and she helped with the tape!

Nothing is impossible…the word itself says “I’m possible”!

I discovered robotics when I was 11 years old, with a digital plant made with Arduino that I saw at the Milan CoderDojo. I really liked robotics and decided I wanted to make my own robot.

So I searched online for a robot I could make myself. I found some videos on the web about a robot from MIT. I really loved this wonderful robot… but I was too young and didn’t have the skills necessary to build it. So I surfed online to search for other types that would be easier to build, but the dream of going to see this cool robot at MIT in Boston stayed in my mind.

After a while, following and making my own YouTube videos, I built my first robot alone at 11 years old: it could move itself around a room, avoiding obstacles thanks to its distance sensor, programmed with Arduino.

In Italy it was not so common to make a robot at 11, so I was able to share this experience at a lot of events and conferences, which brought me to speak at a TEDx at 14 years old.

By chance, at the same age, I travelled to the United States and Canada, visiting New York and Boston… at the beginning it seemed like a normal holiday…

I convinced my parents to extend our trip so we could stay more time around MIT. We went sightseeing in Boston and at MIT, but it wasn’t enough for me! I wanted to look inside this place that was so magical to me, and I especially wanted to talk with the engineers who build and program robots! Maybe I would see that same robot I had found when I was 11 years old!

The early stages of Duckietown at MIT


I left my parents to visit the rest of Boston, and I started to go around the MIT departments alone, trying to open every door I found in front of me.

While I was walking, I looked through the laboratory windows, and my attention was caught by an empty room (I mean with no humans inside 😀!), full of duckies and with a sort of track for cars on the floor.

What was this room about? What was the purpose of these duckies? I was very, very curious about it and had many questions, but there was no one in the lab!

Obviously, I never give up; I absolutely believe that nothing is impossible. So every day, until my departure for the next leg of our trip, I kept going around MIT, passing in front of THAT lab, hoping to find someone in it.

Finally, one day I saw some people inside the lab doing something. I was really excited! I watched them from the window. I absolutely wanted to know what they were doing – one of them was soldering, another was using duct tape. Suddenly they saw me and invited me into the lab! What a surprise for me!

Immediately they asked me a lot of questions: why was a 14-year-old roaming MIT alone, and why was I so excited about that lab… Then one of them (I didn’t know his name) asked if I wanted to help build “Duckietown”. He told me about the project (at that time it hadn’t started yet) and asked me about myself and the first robot I built. After an afternoon spent together, I discovered that this strange guy was Andrea Censi, one of the founders of the Duckietown project! Amazing!

Andrea proposed a challenge to me: I had to try to make my own Duckietown robot, a Duckiebot. Since it was a university project, I was able to follow the online tutorials and ask lots of questions of all the other Duckietown members on the communication forum, Slack. He had only one request: even though the robot was hard to build and program, I shouldn’t give up.

I was so happy that I immediately agreed. I was handed the robot kit, a list of various links and some Duckies ☺.

Now it was my turn! I didn’t want to disappoint Andrea, so as soon as I arrived back in Italy I got to work, but, wow, building the Duckiebot was very hard! I spent an entire afternoon trying to comprehend just 4 lines of the tutorial. I began to ask questions on Slack, and I tried, tried, and tried again.

I had never worked with Linux before, so it was a completely new world for me. I started from the beginning, with no knowledge at all, and I worked for a few months, until I received a message from Andrea: “Do you want to spend some time here, in Boston, working with us in Duckietown?” Of course I was willing; I couldn’t wait. It was an amazing proposal!

So I became a Duckietown Senior Tester at 15 years old and spent almost the whole summer inside the labs of MIT. My task was simplifying the university-level tutorial and making it accessible to high-school students (like me ☺), as well as building the Duckiebot, which had now evolved!

Thanks to the help of Andrea and Liam (the other founder), I finally succeeded in programming my robot: it was now able to drive autonomously in Duckietown. It felt like a dream come true!

Spending the summer in Duckietown at MIT allowed me to discover a completely new world: I understood that education could be playful and that learning could be fun!

 

Valeria's duckiebot (back)
Valeria's Duckiebot (side)

The AI Driving Olympics at NIPS 2018

General Information


Duckietown is a platform for creating and disseminating robotics and AI learning experiences.

It is modular, customizable and state-of-the-art, and designed to teach, learn, and do research. From exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge, Duckietown evolves with the skills of the user.

Duckiebots are ready to conquer the world!

Dear friends of Duckietown:

We are excited to bring you tremendous news about the Duckietown project.

In the past years we have had the support from many enthusiastic individuals who have donated their time and efforts to help the Duckietown project grow, and grown it has!

Duckietown started at MIT in 2016 – almost two years ago. Now Duckietown classes have been taught in 10 countries with more than 700 alumni.

The last months have been a transformative period for the project, as we prepare to jump to the next level in terms of scope and reach.

The Duckietown Foundation

We have established the Duckietown Foundation, a non-profit entity that will lead the Duckietown project.

Our mission: make the world excited about the beauty, the fun, the importance, and the challenges of robotics and artificial intelligence, through learning experiences that are tangible, accessible, and inclusive.

The Duckietown Foundation will serve as the coordination point for the development of Duckietown. As a non-profit, the foundation can accept donations from individuals and companies for the promotion of affordable and fun robotics learning programs around the world.

A Kickstarter


We are organizing a Kickstarter to make it easier for people to obtain Duckiebots and Duckietowns.

This addresses the biggest hurdle to reproducing the Duckietown experience: the lack of a one-click solution for acquiring the hardware.

Also, working with thousands of pieces allows us to drive down the price and to design our own custom boards.

See: Our Kickstarter

A donation program

As much as we aim to have affordable hardware, in certain parts of the world the only realistic price is $0.

That is why we have included a donate-a-Duckiebot and donate-a-class program through the Kickstarter.

Become a friend of Duckietown and support the distribution of low-cost and playful AI and robotics education to even more schools across the globe by backing our Kickstarter campaign.

To learn more about how to support Duckietown, reach out to [email protected]

A new website…

We’ve designed a new website that better serves users of the platform by offering support forums and more organized access to the teaching materials.

See: The new forums.

See: New “duckumentation” site docs.duckietown.com

… and 700 more new websites

We want people to share their Duckietown experiences with other Duckie-enthusiasts, whether they be far or near. That’s now possible through upwards of 700 “community” subsites, each with a blog and a forum.

For more information, see the post Communities sites launched.

The AI Driving Olympics

In addition to its role as an education platform, Duckietown is a useful research tool.

We are happy to announce that Duckietown is the official platform for the AI Driving Olympics, a machine learning competition to be held at NIPS 2018 and ICRA 2019, the two largest machine learning and robotics conferences in the world. We challenge you to put your coding skills to the test and join the competition.

That’s all for now! Thanks for listening –

The Duckietown project relies on an active and engaged community, which is why we want you to stay involved! Support robotics education and research – Sign up on our website! Back our Kickstarter! Compete in the AI Driving Olympics!

 

For any additional information, or if you would like to help us in other ways, please see here for how to help us.

Duckietown: An open, inexpensive and flexible platform for autonomy education and research


Duckietown is an open, inexpensive and flexible platform for autonomy education and research. The platform comprises small autonomous vehicles (“Duckiebots”) built from off-the-shelf components, and cities (“Duckietowns”) complete with roads, signage, traffic lights, obstacles, and citizens (duckies) in need of transportation. The Duckietown platform offers a wide range of functionalities at a low cost. Duckiebots sense the world with only one monocular camera and perform all processing onboard with a Raspberry Pi 2, yet are able to: follow lanes while avoiding obstacles, pedestrians (duckies) and other Duckiebots, localize within a global map, navigate a city, and coordinate with other Duckiebots to avoid collisions. Duckietown is a useful tool since educators and researchers can save money and time by not having to develop all of the necessary supporting infrastructure and capabilities. All materials are available as open source, and the hope is that others in the community will adopt the platform for education and research.

Did you find this interesting?

Read more Duckietown based papers here.

Learning autonomous systems — An interdisciplinary project-based experience


With the increasing influence of automation on every part of our lives, tomorrow’s engineers must be capable of working with autonomous systems. The explosion of automation and robotics has created a need for a massive increase in engineers who possess the skills necessary to work with twenty-first-century systems. Autonomous Systems (MEEM4707) is a new senior/graduate-level elective course with the goals of: 1) preparing the next generation of skilled engineers, 2) creating new opportunities for learning and well-informed career choices, 3) increasing confidence in career options upon graduation, and 4) connecting academic research to the students’ world. Presented in this paper are the developed curriculum, key concepts of the project-based approach, and resources for other educators to implement a similar course at their institutions. In the course, we cover the fundamentals of autonomous robots in a hands-on manner through the use of a low-cost mobile robot. Each student builds and programs their own robot, culminating in the operation of their autonomous mobile robot in a miniature city environment. The concepts covered in the course are scalable from middle school through graduate school. Evaluation of student learning is completed using pre/post surveys, student progress in the laboratory environment, and conceptual examinations.

Did you find this interesting?

Read more Duckietown based papers here.

Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired – Learning from Virtual and Real Worlds


Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, the infrastructure of tactile trails or guide dogs is expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We proposed an autonomous, trail-following robotic guide dog that would be robust to variances in background textures, illumination, and interclass trail variations. A deep convolutional neural network (CNN) is trained from both virtual and real-world environments. Our work included two major contributions: 1) conducting experiments to verify that the performance of our models trained in virtual worlds was comparable to that of models trained in the real world; 2) conducting user studies with 10 blind users to verify that the proposed robotic guide dog could effectively assist them in reliably following man-made trails.
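The paper’s training code is not reproduced here, but the “learning from virtual and real worlds” idea can be illustrated with a minimal sketch: a batch sampler that mixes many cheap simulated samples with a few expensive real-world ones in every training batch. The function name, batch size, and mixing ratio below are illustrative assumptions, not details from the paper.

```python
import random

def mixed_batches(virtual, real, batch_size=8, real_fraction=0.25, seed=0):
    """Yield training batches mixing simulated and real-world samples.

    Hypothetical sketch: each batch draws `real_fraction` of its samples
    from the small real-world dataset and the rest from the large
    simulated one, so the CNN sees both domains at every step.
    """
    rng = random.Random(seed)
    n_real = max(1, int(batch_size * real_fraction))   # real samples per batch
    n_virtual = batch_size - n_real                    # simulated samples per batch
    while True:  # infinite stream; the trainer decides how many batches to draw
        batch = rng.sample(real, min(n_real, len(real)))
        batch += rng.sample(virtual, min(n_virtual, len(virtual)))
        rng.shuffle(batch)
        yield batch

# Toy datasets: many cheap simulated images, few labelled real ones.
virtual = [("sim", i) for i in range(1000)]
real = [("real", i) for i in range(50)]
loader = mixed_batches(virtual, real)
first = next(loader)  # 8 samples, 2 of them real
```

Such a sampler would feed any image classifier or regressor; the paper’s actual network architecture and domain-mixing schedule may differ.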

Did you find this interesting?

Read more Duckietown based papers here.

Integration of open source platform Duckietown and gesture recognition as an interactive interface for the museum robotic guide


In recent years, population aging has become a serious problem. To decrease the demand for labor when guiding visitors in museums, exhibitions, or libraries, this research designs an automatic museum robotic guide that integrates image and gesture recognition technologies to enhance the quality of visitors’ guided tours. The robot is a self-propelled vehicle developed with ROS (Robot Operating System), in which we achieve automatic driving based on lane following via image recognition. This enables the robot to lead guests to visit artworks along a preplanned route. In conjunction with the vocal service about each artwork, the robot can convey a detailed description of the artwork to the guest. We also design a simple wearable device to perform gesture recognition. As a human-machine interface, the guest is allowed to interact with the robot through his or her hand gestures. To improve the accuracy of gesture recognition, we design a two-phase hybrid machine-learning framework. In the first phase (the training phase), the k-means algorithm is used to cluster historical data and filter outlier samples, preventing interference in the recognition phase. Then, in the second phase (the recognition phase), we apply the KNN (k-nearest neighbors) algorithm to recognize users’ hand gestures in real time. Experiments show that our method works in real time and achieves better accuracy than other methods.
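The abstract’s two-phase framework (k-means to filter outliers at training time, KNN voting at recognition time) can be sketched in a few lines of plain Python. The feature vectors, cluster count, and keep-ratio below are toy assumptions for illustration; the paper’s actual wearable-sensor features and parameters are not given here.

```python
import math
from collections import Counter

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    """Tiny k-means with deterministic init (first k points as centroids)."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # empty clusters keep their previous centroid
                centroids[i] = [sum(x) / len(cl) for x in zip(*cl)]
    return centroids

def filter_outliers(samples, labels, k=2, keep_ratio=0.9):
    """Phase 1 (training): per gesture class, run k-means and drop the
    samples farthest from their nearest centroid (the outliers)."""
    kept = []
    for cls in set(labels):
        pts = [s for s, l in zip(samples, labels) if l == cls]
        cents = kmeans(pts, min(k, len(pts)))
        scored = sorted(pts, key=lambda p: min(dist(p, c) for c in cents))
        kept += [(p, cls) for p in scored[: max(1, int(keep_ratio * len(scored)))]]
    return kept

def knn_predict(train, query, k=3):
    """Phase 2 (recognition): majority vote of the k nearest neighbours."""
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

# Toy gesture features (e.g. per-sample accelerometer statistics):
samples = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.15), (5.0, 5.0),  # "swipe" + 1 outlier
           (3.0, 3.1), (3.1, 2.9), (2.9, 3.0), (3.05, 3.0)]       # "circle"
labels = ["swipe"] * 4 + ["circle"] * 4

train = filter_outliers(samples, labels, k=1, keep_ratio=0.75)  # phase 1
print(knn_predict(train, (0.12, 0.18), k=3))  # phase 2 → "swipe"
```

The point of phase 1 is that mislabeled or noisy recordings, like the (5.0, 5.0) sample above, never enter the KNN reference set, so they cannot outvote good neighbours at recognition time.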

Did you find this interesting?

Read more Duckietown based papers here.