- Title: Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired - Learning from Virtual and Real Worlds
- Authors: Tzu-Kuan Chuang, Ni-Ching Lin, Jih-Shi Chen, Chen-Hao Hung, Yi-Wei Huang, Chunchih Teng, Haikun Huang, Lap-Fai Yu, Laura Giarre, and Hsueh-Cheng Wang
- Published in ICRA 2018
Navigation in pedestrian environments is critical to enabling independent mobility for blind and visually impaired (BVI) people in their daily lives. White canes are commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, tactile-trail infrastructure and guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We proposed an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and interclass trail appearance. A deep convolutional neural network (CNN) is trained on both virtual- and real-world environments. Our work makes two major contributions: 1) experiments verifying that the performance of models trained in virtual worlds is comparable to that of models trained in the real world; 2) user studies with 10 blind users verifying that the proposed robotic guide dog can effectively assist them in reliably following man-made trails.
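
The abstract frames trail following as an image-to-steering problem learned by a CNN from a mix of virtual and real-world images. As a rough illustration of that idea only (not the authors' actual architecture, labels, or training setup), the sketch below maps a camera frame to a discretized heading command; the class set, layer sizes, and input resolution are all assumptions made for this example.

```python
# Illustrative sketch, not the paper's implementation: a small CNN that maps a
# camera frame to a discretized heading command (left / straight / right),
# in the spirit of lane-following approaches. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class TrailFollowingCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),  # e.g. turn-left / go-straight / turn-right
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: mixing virtual-world and real-world frames in a single training batch,
# echoing the paper's idea of learning from both domains.
if __name__ == "__main__":
    model = TrailFollowingCNN()
    virtual_batch = torch.rand(8, 3, 120, 160)  # placeholder synthetic frames
    real_batch = torch.rand(8, 3, 120, 160)     # placeholder real camera frames
    frames = torch.cat([virtual_batch, real_batch], dim=0)
    logits = model(frames)
    print(logits.shape)  # (16, 3) heading-class scores
```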
Did you find this interesting?
Read more Duckietown-based papers here.