Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired – Learning from Virtual and Real Worlds

Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes are commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs help avoid physical contact with obstacles or other pedestrians. However, both tactile-trail infrastructure and guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we aim to combine the capabilities of existing navigation solutions for BVI users. We propose an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and interclass trail appearance. A deep convolutional neural network (CNN) is trained on data from both virtual and real-world environments. Our work makes two major contributions: 1) experiments verifying that the performance of models trained in virtual worlds is comparable to that of models trained in the real world; and 2) a user study with 10 blind users verifying that the proposed robotic guide dog can effectively assist them in reliably following man-made trails.
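
As a rough illustration of the kind of model involved, the sketch below defines a small convolutional network that maps a camera frame to a discrete steering class (left / straight / right) and runs one training step on a mixed batch of virtual and real frames. The architecture, image size, and labels are placeholder assumptions, not the network described in the paper.

```python
# Minimal sketch (not the authors' exact network): a small CNN mapping a
# camera frame to a discrete steering class for trail following.
import torch
import torch.nn as nn

class TrailFollowerCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a mixed batch of virtual and real-world frames.
model = TrailFollowerCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 120, 160)   # stand-in for camera frames
labels = torch.randint(0, 3, (8,))     # stand-in steering labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```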

Did you find this interesting?

Read more Duckietown based papers here.

Integration of open source platform Duckietown and gesture recognition as an interactive interface for the museum robotic guide

In recent years, population aging has become a serious problem. To reduce the demand for labor when guiding visitors in museums, exhibitions, or libraries, this research designs an automatic museum robotic guide that integrates image and gesture recognition technologies to enhance the quality of guided tours. The robot is a self-propelled vehicle developed with ROS (Robot Operating System), in which autonomous driving is achieved through image-based lane following. This enables the robot to lead guests to artworks along a preplanned route. Combined with a vocal service for each artwork, the robot can convey a detailed description of the artwork to the guest. We also design a simple wearable device for gesture recognition. Through this human-machine interface, guests can interact with the robot using hand gestures. To improve the accuracy of gesture recognition, we design a two-phase hybrid machine-learning framework. In the first (training) phase, the k-means algorithm clusters historical data and filters out outlier samples to prevent interference in the recognition phase. In the second (recognition) phase, we apply the k-nearest neighbors (KNN) algorithm to recognize users' hand gestures in real time. Experiments show that our method works in real time and achieves higher accuracy than other methods.
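
To make the two-phase idea concrete, here is a minimal scikit-learn sketch in which k-means distances are used to drop outlier training samples and a KNN classifier then recognizes new gestures. The feature dimensions, class count, and outlier threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Two-phase sketch: k-means outlier filtering (training) + KNN (recognition).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in gesture features from the wearable device (e.g. accelerometer stats).
X_train = rng.normal(size=(200, 6))
y_train = rng.integers(0, 4, size=200)   # four hypothetical gesture classes

# Phase 1 (training): cluster historical samples and drop outliers that lie
# far from their nearest cluster centre.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)
dist_to_centre = np.min(kmeans.transform(X_train), axis=1)
keep = dist_to_centre < np.percentile(dist_to_centre, 90)   # illustrative cutoff
X_clean, y_clean = X_train[keep], y_train[keep]

# Phase 2 (recognition): classify incoming gestures with KNN in real time.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)
new_sample = rng.normal(size=(1, 6))
print("predicted gesture:", knn.predict(new_sample)[0])
```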

Did you find this interesting?

Read more Duckietown based papers here.

Hybrid control and learning with coresets for autonomous vehicles

Modern autonomous systems such as driverless vehicles need to operate safely in a wide range of conditions. A potential solution is to employ a hybrid systems approach, where safety is guaranteed in each individual mode within the system. This shifts complexity and responsibility away from the individual controllers and onto the problem of determining discrete mode transitions. In this work we propose an efficient framework based on recursive neural networks and coreset data summarization to learn the transitions between an arbitrary number of controller modes that can have arbitrary complexity. Our approach allows us to efficiently gather annotation data from the large-scale datasets required to train such hybrid nonlinear systems to be safe under all operating conditions, favoring underexplored parts of the data. We demonstrate the construction of the embedding and efficient detection of switching points for autonomous and non-autonomous car data. We further show how our approach enables efficient sampling of training data, to further improve either our embedding or the controllers.
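
As an intuition for data summarization that favors underexplored parts of a dataset, the toy sketch below greedily selects points in an embedding space that are farthest from everything chosen so far, producing a small, diverse subset for annotation. This is only a hypothetical stand-in for the authors' coreset construction, not their algorithm.

```python
# Illustrative greedy farthest-point selection of a small, coreset-like subset
# of embedded driving frames, so underexplored regions are favored.
import numpy as np

def farthest_point_subset(embeddings: np.ndarray, k: int) -> list:
    """Pick k indices that greedily maximize coverage of the embedding space."""
    chosen = [0]                                    # arbitrary seed point
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))                 # most underexplored point so far
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return chosen

embeddings = np.random.default_rng(1).normal(size=(1000, 32))  # stand-in features
subset = farthest_point_subset(embeddings, k=20)
print("indices selected for annotation:", subset[:5], "...")
```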

Did you find this interesting?

Read more Duckietown based papers here.

Towards blockchain-based robonomics: autonomous agents behavior validation

The decentralized trading market approach, in which both autonomous agents and people can consume and produce services to expand their own opportunities to reach their goals, looks very promising as part of the Fourth Industrial Revolution. The key component of this approach is a blockchain platform that allows agents to interact via liability smart contracts. The reliability of a service provider is usually determined by a reputation model; however, such a solution only warns future customers about how far to trust a provider that has failed to execute previous liabilities correctly. On the other hand, a blockchain consensus protocol can additionally include a validation procedure that detects incorrect liability executions in order to suspend payment transactions to questionable service providers. This paper presents a methodology for validating the liability execution of agent-based service providers in a decentralized trading market, using the Model Checking method based on a mathematical model of finite state automata and Temporal Logic properties of interest. To demonstrate the concept, we implemented the methodology in a Duckietown application, driving an autonomous mobile robot to achieve a mission goal and validating its behavior at the end of the completed scenario.
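
A toy sketch of the validation idea follows: a small finite state automaton encodes the expected scenario behavior, and a recorded event trace is accepted or rejected before a liability payment would be released. The states, events, and traces here are invented for illustration and are not the actual Duckietown scenario model or temporal-logic specification.

```python
# Toy finite-state-automaton check of a recorded behavior trace against the
# expected mission scenario (all states/events are hypothetical).
ACCEPTING = {"goal_reached"}
TRANSITIONS = {
    ("idle", "start_mission"): "driving",
    ("driving", "lane_following_ok"): "driving",
    ("driving", "stop_at_goal"): "goal_reached",
}

def validate_liability(trace):
    """Return True iff the event trace is accepted by the behavior automaton."""
    state = "idle"
    for event in trace:
        key = (state, event)
        if key not in TRANSITIONS:
            return False          # unexpected event: liability execution rejected
        state = TRANSITIONS[key]
    return state in ACCEPTING

print(validate_liability(["start_mission", "lane_following_ok", "stop_at_goal"]))  # True
print(validate_liability(["start_mission", "stop_at_goal", "lane_following_ok"]))  # False
```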

Did you find this interesting?

Read more Duckietown based papers here.

Announcing the AI Driving Olympics (AI-DO)

Press release: The Duckietown Foundation is excited to announce the official opening of the AI Driving Olympics, a new competition focused on AI for self-driving cars. The first edition of the AI Driving Olympics will take place in December 2018 at NIPS, the premier machine learning conference, in Montréal. This is the first competition with real robots to take place at a machine learning conference. The second edition of AI-DO is already scheduled for May 2019, in conjunction with the International Conference on Robotics and Automation (ICRA) 2019.

The competition will use the Duckietown platform, a scaled-down, affordable, and accessible vision-based self-driving car platform used for autonomy education and research. This open-source project originated at MIT in 2016 and is now used by many institutions worldwide.

The AI Driving Olympics is presented in collaboration with 6 academic institutions: ETH Zurich (Switzerland), Université de Montréal (Canada), NCTU (Taiwan), TTIC (USA), Tsinghua (China) and Georgia Tech (USA), as well as two industry co-organizers: nuTonomy and Amazon Web Services (AWS).

The competition will comprise 5 challenges of increasing complexity: 1) Road following on an empty road; 2) Road following with obstacles; 3) Point-to-point navigation in a city network; 4) Point-to-point navigation in a city network with other vehicles; and 5) Fleet planning for a full autonomous mobility-on-demand system.

Competitors will have access to simulators, logs, reference implementations, and finally real environments (“Robotariums”) that will be remotely accessible for evaluation. The entries that score best in the robotariums will be run during the live event at NIPS 2018 to determine the winners.


The competition aims to direct academic research toward the hard problems of embodied AI, such as modularity of learning processes and learning in simulation while deploying in reality. It also promotes the democratization of AI/robotics research by offering a common infrastructure available to everybody through remote testing facilities.

Competitors can also build their own Duckiebots using the provided DIY instructions, or buy Duckiebots and Duckietown hardware through a Kickstarter campaign.

For rules and timeline, please see the site https://driving-olympics.ai/

Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions

This paper presents a data-driven approach for multi-robot coordination in partially observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions (MAs). Dec-POMDPs provide a general framework for cooperative sequential decision making under uncertainty, and MAs allow temporally extended and asynchronous action execution. To date, most methods assume the underlying Dec-POMDP model is known a priori or that a full simulator is available during planning. Previous methods that aim to address these issues suffer from local optimality and sensitivity to initial conditions. Additionally, few hardware demonstrations exist that involve a large team of heterogeneous robots and long planning horizons. This work addresses these gaps by proposing an iterative sampling-based Expectation-Maximization algorithm (iSEM) to learn policies using only trajectory data containing observations, MAs, and rewards. Our experiments show the algorithm achieves better solution quality than state-of-the-art learning-based methods. We implement two variants of a multi-robot Search and Rescue (SAR) domain (with and without obstacles) on hardware to demonstrate that the learned policies can effectively control a team of distributed robots to cooperate in a partially observable stochastic environment.
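
For flavor only, the toy loop below shows the general shape of an EM-style policy update from logged trajectories: weight trajectories by their return, then refit a tabular observation-to-macro-action policy from the weighted counts. This is a deliberate simplification under invented dimensions and is not the iSEM algorithm itself.

```python
# Toy EM-style loop: reweight logged trajectories by return (E-step) and
# refit a tabular observation-to-macro-action policy (M-step).
import numpy as np

rng = np.random.default_rng(2)
N_OBS, N_MA = 5, 3   # hypothetical observation and macro-action counts

# Logged trajectories: (observation, macro_action) pairs plus a scalar return.
trajectories = [
    {"steps": [(rng.integers(N_OBS), rng.integers(N_MA)) for _ in range(6)],
     "ret": float(rng.random())}
    for _ in range(50)
]

policy = np.full((N_OBS, N_MA), 1.0 / N_MA)   # uniform initial policy

for _ in range(20):
    # E-step: weight each trajectory by its exponentiated return.
    weights = np.exp([t["ret"] for t in trajectories])
    weights /= weights.sum()
    # M-step: re-estimate action probabilities from weighted visitation counts.
    counts = np.zeros((N_OBS, N_MA))
    for w, traj in zip(weights, trajectories):
        for obs, ma in traj["steps"]:
            counts[obs, ma] += w
    policy = (counts + 1e-6) / (counts + 1e-6).sum(axis=1, keepdims=True)

print("learned macro-action probabilities per observation:\n", policy.round(2))
```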

Did you find this interesting?

Read more Duckietown based papers here.