Deep Reinforcement Learning for Autonomous Lane Following
Project Resources
- Objective: Develop a deep reinforcement learning model for autonomous lane following with sim-to-real transfer.
- Approach: Train a reinforcement learning agent in simulation using an autoencoder-based feature extraction pipeline and deploy it on a real Duckiebot with domain adaptation techniques.
- Author: Mickyas Tamiru Asfaw
Project highlights
Here is a visual tour of the author’s work on implementing deep reinforcement learning for autonomous lane following in Duckietown.
Deep reinforcement learning for autonomous lane following in Duckietown: objective and importance
Would it not be great if we could train an end-to-end neural network in simulation, plug it into a physical robot, and have it drive safely on the road?
Inspired by this idea, Mickyas worked to implement deep reinforcement learning (DRL) for autonomous lane following in Duckietown, training the agent using sim-to-real transfer.
The project focuses on training DRL agents, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC), to learn steering control using high-dimensional camera inputs. It integrates an autoencoder to compress image observations into a latent space, improving computational efficiency.
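To give a sense of this pipeline, here is a minimal sketch of a convolutional autoencoder of the kind used for observation compression. The layer sizes, the 64×64 input resolution, and the 32-dimensional latent space are illustrative assumptions, not the project's exact architecture.

```python
# Minimal convolutional autoencoder sketch (PyTorch). Layer sizes and the
# 32-dimensional latent space are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: 3x64x64 camera frame -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 16x32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32x16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 64x8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed frame (needed only while
        # training the autoencoder; the policy consumes the latent vector)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z
```

Once the autoencoder is trained on simulator frames, only the encoder runs in the control loop, so the policy networks operate on a few dozen numbers instead of thousands of pixels.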
The hope is for the trained DRL model to generalize from simulation to real-world deployment on a Duckiebot. This involves addressing domain adaptation, camera input variations, and real-time inference constraints, amongst other implementation challenges.
Autonomous lane following is a fundamental component of self-driving systems, requiring continuous adaptation to environmental changes, especially when using vision as the main sensing modality. This project identifies limitations in existing DRL algorithms when applied to real-world robotics, and explores modifications to reward functions, policy updates, and feature extraction methods, analyzing the results through real-world experimentation.
The method and challenges in implementing deep reinforcement learning in Duckietown
The method involves training a DRL agent in a simulated Duckietown environment (Gym Duckietown Simulator) using an autoencoder for feature extraction.
The encoder compresses image data into a latent space, reducing input dimensions for policy learning. The agent receives sequential encoded frames as observations and optimizes steering actions based on reward-driven updates. The trained model is then transferred to a real Duckiebot using a ROS-based communication framework.
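One way to wire the encoder into the training loop is a Gym observation wrapper that encodes each camera frame and stacks the most recent latent vectors into the agent's observation. The sketch below assumes an `encoder` module that maps an image batch to latent vectors; the stack size of four is an illustrative choice, not the project's setting.

```python
# Sketch of a Gym observation wrapper: encode each frame into the latent
# space and stack the last few encodings. Stack size and the `encoder`
# interface (image batch -> latent batch) are assumptions.
from collections import deque

import gym
import numpy as np
import torch

class EncodedFrameStack(gym.ObservationWrapper):
    def __init__(self, env, encoder, latent_dim=32, n_frames=4):
        super().__init__(env)
        self.encoder = encoder.eval()
        self.frames = deque(maxlen=n_frames)
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(n_frames * latent_dim,),
            dtype=np.float32)

    def observation(self, obs):
        # HWC uint8 camera image -> CHW float tensor in [0, 1]
        x = torch.from_numpy(obs).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            z = self.encoder(x).squeeze(0).numpy()
        self.frames.append(z)
        while len(self.frames) < self.frames.maxlen:
            self.frames.append(z)  # pad at episode start
        return np.concatenate(self.frames).astype(np.float32)

    def reset(self, **kwargs):
        self.frames.clear()
        return super().reset(**kwargs)
```

Stacking consecutive encodings gives the otherwise memoryless policy a short temporal context, which helps it infer motion from still frames.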
Challenges in pulling this off include discrepancies between simulated and real-world camera inputs, which affect performance and generalization. Differences in lighting, surface textures, and image normalization require domain adaptation techniques.
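A first line of defense is to apply exactly the same preprocessing in simulation and on the robot, so the encoder always sees a consistent input distribution. Below is a hedged sketch of such a pipeline; the horizon crop and 64×64 target resolution are assumptions.

```python
# Illustrative preprocessing applied identically in simulation and on the
# robot. The crop fraction and target resolution are assumptions.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """BGR camera frame -> normalized 64x64 RGB float image in [0, 1]."""
    frame = frame[frame.shape[0] // 3:, :, :]   # crop sky above the horizon
    frame = cv2.resize(frame, (64, 64))         # match the training size
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return frame.astype(np.float32) / 255.0     # scale to [0, 1]
```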
Moreover, computational limitations on the Duckiebot prevent direct onboard execution, requiring a distributed processing setup.
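In such a setup, the Duckiebot streams camera images over ROS to a workstation, which runs the policy and sends velocity commands back. The node below sketches that pattern; the topic names follow common Duckietown conventions but should be checked against the actual robot configuration, and `policy` and `preprocess` are assumed to be loaded separately.

```python
# Sketch of an offboard inference node: subscribe to the Duckiebot camera
# stream, run the trained policy on the workstation, publish wheel commands.
# Topic names follow common Duckietown conventions; verify them on the robot.
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import CompressedImage
from duckietown_msgs.msg import Twist2DStamped

class OffboardPolicyNode:
    def __init__(self, policy, preprocess):
        self.policy = policy          # maps an observation to (v, omega)
        self.preprocess = preprocess  # same pipeline as used in training
        self.pub = rospy.Publisher(
            "/duckiebot/car_cmd_switch_node/cmd", Twist2DStamped,
            queue_size=1)
        rospy.Subscriber(
            "/duckiebot/camera_node/image/compressed", CompressedImage,
            self.on_image, queue_size=1, buff_size=2**22)

    def on_image(self, msg):
        # Decode the compressed JPEG payload into a BGR image
        frame = cv2.imdecode(
            np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        v, omega = self.policy(self.preprocess(frame))
        cmd = Twist2DStamped(v=float(v), omega=float(omega))
        cmd.header.stamp = rospy.Time.now()
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("offboard_policy")
    # policy and preprocess would be loaded here before starting the node
    rospy.spin()
```

Keeping the queue size at one means stale frames are dropped rather than queued, which matters for real-time control over a wireless link.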
Reward shaping influences learning stability, and an improperly designed reward function can lead to policy exploitation or suboptimal behavior. Debugging DRL models is complex due to interdependencies between network architecture, exploration strategies, and training dynamics.
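For intuition, a typical shaped reward for lane following trades off forward speed against lateral offset and heading error. The weights and clipping below are illustrative assumptions; skewing them is exactly what invites exploitation, for example an agent that crawls or spins in place to dodge offset penalties.

```python
# Illustrative lane-following reward: favor forward speed while penalizing
# lateral offset from the lane center and heading misalignment. Weights and
# the clipping range are illustrative assumptions.
import numpy as np

def lane_following_reward(speed, lateral_offset, heading_error,
                          w_speed=1.0, w_offset=2.0, w_heading=0.5):
    reward = (w_speed * speed
              - w_offset * abs(lateral_offset)
              - w_heading * abs(heading_error))
    # Clip to keep value targets bounded and training stable
    return float(np.clip(reward, -10.0, 10.0))
```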
The project addresses these challenges by refining preprocessing, incorporating domain randomization, and modifying policy structures.
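Domain randomization can be as simple as perturbing simulator frames during training so the encoder and policy never overfit to a single lighting condition; the perturbation ranges below are assumptions (the Gym Duckietown simulator also ships its own `domain_rand` option for scene-level randomization).

```python
# Sketch of image-level domain randomization: random brightness, contrast,
# and noise applied to simulator frames to mimic real-world lighting
# variation. All ranges are illustrative assumptions.
import numpy as np

def randomize_frame(frame: np.ndarray, rng=np.random) -> np.ndarray:
    """frame: float32 RGB image with values in [0, 1]."""
    brightness = rng.uniform(-0.15, 0.15)
    contrast = rng.uniform(0.8, 1.2)
    noise = rng.normal(0.0, 0.02, frame.shape).astype(np.float32)
    out = (frame - 0.5) * contrast + 0.5 + brightness + noise
    return np.clip(out, 0.0, 1.0)
```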
Deep reinforcement learning for autonomous lane following: full report
Deep reinforcement learning for autonomous lane following in Duckietown: Author
Mickyas Tamiru Asfaw is currently pursuing an M2 Master's in Mobile Autonomous and Robotic Systems (MARS) at Grenoble INP – UGA, France.
Learn more
Duckietown is a modular, customizable, and state-of-the-art platform for creating and disseminating robotics and AI learning experiences.
Duckietown is designed to teach, learn, and do research: from exploring the fundamentals of computer science and automation to pushing the boundaries of knowledge.
These spotlight projects are shared to exemplify Duckietown’s value for hands-on learning in robotics and AI, enabling students to apply theoretical concepts to practical challenges in autonomous robotics, boosting competence and job prospects.