Duckiebots are ready to conquer the world!

Dear friends of Duckietown:

We are excited to bring you tremendous news about the Duckietown project.

In the past years we have had the support of many enthusiastic individuals who have donated their time and effort to help the Duckietown project grow, and grown it has!

Duckietown started at MIT in 2016 – almost two years ago. Now Duckietown classes have been taught in 10 countries with more than 700 alumni.

The last months have been a transformative period for the project, as we prepare to jump to the next level in terms of scope and reach.

The Duckietown Foundation

We have established the Duckietown Foundation, a non-profit entity that will lead the Duckietown project.

Our mission: make the world excited about the beauty, the fun, the importance, and the challenges of robotics and artificial intelligence, through learning experiences that are tangible, accessible, and inclusive.

The Duckietown Foundation will serve as the coordination point for the development of Duckietown. As a non-profit, the foundation can accept donations from individuals and companies for the promotion of affordable and fun robotics learning programs around the world.

A Kickstarter


We are organizing a Kickstarter to make it easier for people to obtain Duckiebots and Duckietowns.

This removes the biggest hurdle so far in reproducing the Duckietown experience: the lack of a one-click solution for acquiring the hardware.

Also, working with thousands of pieces allows us to drive down the price and to design our own custom boards.

See: Our Kickstarter

A donation program

As much as we aim to have affordable hardware, in certain parts of the world the only realistic price is $0.

That is why we have included a donate-a-Duckiebot and donate-a-class program through the Kickstarter.

Become a friend of Duckietown and support the distribution of low-cost and playful AI and robotics education to even more schools across the globe by backing our Kickstarter campaign.

To learn more about how to support Duckietown, reach out to [email protected]

A new website…

We’ve designed a new website that better serves users of the platform by offering support forums and more organized access to the teaching materials.

See: The new forums.

See: New “duckumentation” site docs.duckietown.com

… and 700 more new websites

We want people to share their Duckietown experiences with other Duckie-enthusiasts, whether they be far or near. That’s now possible through upwards of 700 “community” subsites, each with a blog and a forum.

For more information, see the post Communities sites launched.

The AI Driving Olympics

In addition to its role as an education platform, Duckietown is a useful research tool.

We are happy to announce that Duckietown is the official platform for the AI Driving Olympics, a machine learning competition to be held at NIPS 2018 and ICRA 2019, the two largest machine learning and robotics conferences in the world. We challenge you to put your coding skills to the test and join the competition.

That’s all for now! Thanks for listening –

The Duckietown project relies on an active and engaged community, which is why we want you to stay involved! Support robotics education and research – sign up on our website! Back our Kickstarter! Compete in the AI Driving Olympics!

For any additional information, or if you would like to help us in other ways, please see here for how to help us.

Duckietown: An open, inexpensive and flexible platform for autonomy education and research

Duckietown is an open, inexpensive and flexible platform for autonomy education and research. The platform comprises small autonomous vehicles (“Duckiebots”) built from off-the-shelf components, and cities (“Duckietowns”) complete with roads, signage, traffic lights, obstacles, and citizens (duckies) in need of transportation. The Duckietown platform offers a wide range of functionalities at a low cost. Duckiebots sense the world with only one monocular camera and perform all processing onboard with a Raspberry Pi 2, yet are able to: follow lanes while avoiding obstacles, pedestrians (duckies) and other Duckiebots, localize within a global map, navigate a city, and coordinate with other Duckiebots to avoid collisions. Duckietown is a useful tool since educators and researchers can save money and time by not having to develop all of the necessary supporting infrastructure and capabilities. All materials are available as open source, and the hope is that others in the community will adopt the platform for education and research.
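
To make the onboard pipeline concrete, here is a minimal sketch of the kind of monocular lane-detection step a Duckiebot performs, using HSV color segmentation and a probabilistic Hough line transform in OpenCV. The function name and all color thresholds are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (hypothetical): estimate the robot's lateral offset from the
# lane center using HSV color segmentation and a Hough line transform.
# All thresholds below are assumptions for illustration.
import cv2
import numpy as np

def detect_lane_offset(bgr_image):
    """Return a rough signed offset (pixels) of the lane center from image center."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    yellow = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))   # center line
    white = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))       # side line
    edges = cv2.Canny(cv2.bitwise_or(yellow, white), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    if lines is None:
        return None  # no lane markings detected in this frame
    # Crude lane-center estimate: mean midpoint of all detected segments.
    mid_x = np.mean([(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]])
    return mid_x - bgr_image.shape[1] / 2.0
```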

Did you find this interesting?

Read more Duckietown-based papers here.

Learning autonomous systems — An interdisciplinary project-based experience

With the increasing influence of automation on every part of our lives, tomorrow’s engineers must be capable of working with autonomous systems. The explosion of automation and robotics has created a need for a massive increase in engineers who possess the skills necessary to work with twenty-first-century systems. Autonomous Systems (MEEM4707) is a new senior/graduate-level elective course with the goals of: 1) preparing the next generation of skilled engineers, 2) creating new opportunities for learning and well-informed career choices, 3) increasing confidence in career options upon graduation, and 4) connecting academic research to the students’ world. This paper presents the developed curriculum, key concepts of the project-based approach, and resources for other educators to implement a similar course at their institution. In the course, we cover the fundamentals of autonomous robots in a hands-on manner through the use of a low-cost mobile robot. Each student builds and programs their own robot, culminating in the operation of their autonomous mobile robot in a miniature city environment. The concepts covered in the course are scalable from middle school through graduate school. Evaluation of student learning is completed using pre/post surveys, student progress in the laboratory environment, and conceptual examinations.

Did you find this interesting?

Read more Duckietown-based papers here.

Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired – Learning from Virtual and Real Worlds

Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, the infrastructure of tactile trails or guide dogs is expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We proposed an autonomous, trail-following robotic guide dog that would be robust to variations in background texture, illumination, and interclass trail appearance. A deep convolutional neural network (CNN) is trained from both virtual and real-world environments. Our work included two major contributions: 1) conducting experiments to verify that the performance of our models trained in virtual worlds was comparable to that of models trained in the real world; 2) conducting user studies with 10 blind users to verify that the proposed robotic guide dog could effectively assist them in reliably following man-made trails.
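
As an illustration of the approach, the sketch below shows a small convolutional network in PyTorch that maps a camera frame to a discrete steering command. The architecture, class layout, and training mix are assumptions for illustration; the paper's actual model and hyperparameters may differ.

```python
# Sketch (assumed architecture): a CNN mapping camera frames to one of a few
# steering commands for trail following. Not the paper's exact network.
import torch
import torch.nn as nn

class TrailFollowNet(nn.Module):
    def __init__(self, num_commands=3):   # e.g. left / straight / right (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_commands)

    def forward(self, x):                  # x: (batch, 3, H, W) camera frames
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Training would mix batches from the simulator and the real camera, e.g.
# loss = CE(model(virtual_batch), virtual_labels) + CE(model(real_batch), real_labels)
```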

Did you find this interesting?

Read more Duckietown-based papers here.

Integration of open source platform Duckietown and gesture recognition as an interactive interface for the museum robotic guide

In recent years, population aging has become a serious problem. To decrease the demand for labor when guiding visitors in museums, exhibitions, or libraries, this research designs an automatic museum robotic guide which integrates image and gesture recognition technologies to enhance the quality of guided tours. The robot is a self-propelled vehicle developed with ROS (Robot Operating System), in which we achieve automatic driving based on lane following via image recognition. This enables the robot to lead guests to artworks along a preplanned route. In conjunction with the vocal service about each artwork, the robot can convey a detailed description of the artwork to the guest. We also design a simple wearable device to perform gesture recognition. As a human-machine interface, the guest is allowed to interact with the robot through hand gestures. To improve the accuracy of gesture recognition, we design a two-phase hybrid machine-learning framework. In the first phase (the training phase), the k-means algorithm is used to cluster historical data and filter outlier samples to prevent interference in the recognition phase. Then, in the second phase (the recognition phase), we apply the KNN (k-nearest neighbors) algorithm to recognize the hand gestures of users in real time. Experiments show that our method works in real time and achieves better accuracy than other methods.
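
The two-phase scheme maps naturally onto standard library components. Below is a minimal sketch using scikit-learn: k-means filters outlier training samples in phase one, and a KNN classifier recognizes gestures in phase two. Feature dimensions, cluster counts, and the outlier threshold are illustrative assumptions.

```python
# Sketch of the two-phase scheme: phase 1 drops samples far from their k-means
# cluster center; phase 2 classifies new gestures with k-nearest neighbors.
# Shapes, cluster counts, and the keep quantile are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def filter_outliers(X, y, n_clusters=4, keep_quantile=0.9):
    """Phase 1: remove samples far from their assigned k-means cluster center."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dists <= np.quantile(dists, keep_quantile)
    return X[keep], y[keep]

# Phase 2: train a KNN classifier on the filtered data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))       # e.g. 6-axis wearable-sensor features (assumed)
y = rng.integers(0, 3, size=200)    # three gesture classes (assumed)
X_clean, y_clean = filter_outliers(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)
print(knn.predict(X[:5]))           # real-time recognition of incoming samples
```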

Did you find this interesting?

Read more Duckietown-based papers here.

Hybrid control and learning with coresets for autonomous vehicles

Modern autonomous systems such as driverless vehicles need to operate safely in a wide range of conditions. A potential solution is to employ a hybrid systems approach, where safety is guaranteed within each individual mode of the system. This shifts complexity and responsibility from the individual controllers onto the problem of determining discrete mode transitions. In this work we propose an efficient framework based on recursive neural networks and coreset data summarization to learn the transitions between an arbitrary number of controller modes of arbitrary complexity. Our approach allows us to efficiently gather annotation data from the large-scale datasets required to train such hybrid nonlinear systems to be safe under all operating conditions, favoring underexplored parts of the data. We demonstrate the construction of the embedding and the efficient detection of switching points for autonomous and non-autonomous car data. We further show how our approach enables efficient sampling of training data, to further improve either our embedding or the controllers.
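
As a rough illustration of coreset-style data summarization, the sketch below samples training points with probability proportional to their distance from k-means cluster centers, so under-represented regions of the data are favored. This is a generic importance-sampling heuristic under assumed parameters, not the paper's exact construction.

```python
# Rough sketch (assumed parameters): coreset-style importance sampling in which
# points far from their nearest k-means center are sampled more often, with
# inverse-probability weights so summary statistics approximate the full data.
import numpy as np
from sklearn.cluster import KMeans

def coreset_sample(X, m, n_clusters=8, seed=0):
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    # Sensitivity proxy: distance of each point to its assigned cluster center.
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1) + 1e-9
    probs = dists / dists.sum()
    idx = rng.choice(len(X), size=m, replace=False, p=probs)
    weights = 1.0 / (probs[idx] * m)    # reweight so estimates stay unbiased
    return X[idx], weights

X = np.random.default_rng(1).normal(size=(5000, 10))  # e.g. embedded sensor data
subset, w = coreset_sample(X, m=200)
```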

Did you find this interesting?

Read more Duckietown-based papers here.

Towards blockchain-based robonomics: autonomous agents behavior validation

The decentralized trading market approach, where both autonomous agents and people can consume and produce services to expand their own opportunities to reach goals, looks very promising as part of the Fourth Industrial Revolution. The key component of the approach is a blockchain platform that allows interaction between agents via liability smart contracts. The reliability of a service provider is usually determined by a reputation model. However, this solution only warns future customers about how far a service provider can be trusted if it has failed to execute previous liabilities correctly. On the other hand, a blockchain consensus protocol can additionally include a validation procedure that detects incorrect liability executions in order to suspend payment transactions to questionable service providers. The paper presents a validation methodology for liability execution by agent-based service providers in a decentralized trading market, using the Model Checking method based on the mathematical model of finite state automata and Temporal Logic properties of interest. To demonstrate this concept, we implemented the methodology in a Duckietown application, moving an autonomous mobile robot to achieve a mission goal and validating its behavior at the end of the completed scenario.
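
To illustrate the validation idea, here is a minimal sketch that replays a recorded state trace through a small finite-state model and checks two temporal properties: a forbidden state is never entered (a safety property) and the goal is eventually reached (a liveness property over finite traces). The states and property names are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (hypothetical states): validate an agent's recorded trace
# against two temporal properties before a payment transaction is released:
#   G(not OFF_ROAD)  -- the forbidden state is never entered
#   F(AT_GOAL)       -- the mission goal is eventually reached
from enum import Enum

class State(Enum):
    DRIVING = 0
    AT_GOAL = 1
    OFF_ROAD = 2   # forbidden state (assumed)

def validate_liability(trace):
    """Return True iff the finite trace satisfies G(not OFF_ROAD) and F(AT_GOAL)."""
    never_forbidden = all(s is not State.OFF_ROAD for s in trace)
    eventually_goal = any(s is State.AT_GOAL for s in trace)
    return never_forbidden and eventually_goal

# A consensus-level validator could run this check per completed scenario:
good = [State.DRIVING, State.DRIVING, State.AT_GOAL]
bad = [State.DRIVING, State.OFF_ROAD, State.AT_GOAL]
assert validate_liability(good) and not validate_liability(bad)
```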

Did you find this interesting?

Read more Duckietown-based papers here.