Short-Term Trajectory Planning in TORCS using Deep Reinforcement Learning

Applications of deep reinforcement learning to racing games have so far struggled to reach performance competitive with the state of the art in the field. Previous work, mainly focused on low-level input design, shows that artificial agents can learn to stay on track starting from no driving knowledge; their final performance, however, is still far from that of competitive drivers. The scope of this work is to investigate to what extent raising the abstraction level can help the learning process. Using The Open Racing Car Simulator (TORCS) environment and the Deep Deterministic Policy Gradients (DDPG) algorithm, we develop artificial agents, based on deep neural networks, that consider both numerical and visual inputs. These agents learn to compute a target point on track and, optionally, a correction to the target maximum speed at the current position; these outputs are then provided as input to a low-level control logic. Our results show that this approach achieves fair performance, though it is extremely sensitive to the low-level logic. Further work is needed to understand how to fully exploit a high-level control design.
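The high-level action interface the abstract describes can be sketched as follows: the learned policy emits a target point on track and a speed correction, and a hand-crafted low-level logic converts them into steering and throttle/brake commands. This is a minimal illustrative sketch under assumed conventions (lateral positions normalized to [-1, 1], a simple proportional speed rule); all function names and constants are hypothetical, not the paper's implementation.

```python
def steer_toward(target_lateral, track_angle, steer_gain=0.5, max_steer=1.0):
    """Steering command that points the car at the high-level target point.

    target_lateral: desired lateral offset in [-1, 1] (fractions of track half-width).
    track_angle: car yaw relative to the track axis, in radians.
    """
    cmd = steer_gain * target_lateral - track_angle
    return max(-max_steer, min(max_steer, cmd))


def throttle_brake(current_speed, base_target_speed, speed_delta):
    """Throttle (positive) or brake (negative) toward the corrected target speed."""
    target = base_target_speed + speed_delta  # agent's correction applied here
    error = target - current_speed
    return max(-1.0, min(1.0, error / 10.0))  # illustrative proportional rule


def act(agent_action, state):
    """Map the high-level action (target point, speed correction) to controls."""
    target_lateral, speed_delta = agent_action
    steer = steer_toward(target_lateral, state["track_angle"])
    accel = throttle_brake(state["speed"], state["target_speed"], speed_delta)
    return steer, accel
```

In this framing, the policy never outputs raw steering or throttle; it only proposes *where* to go and *how fast*, which is why the final behavior depends so strongly on the low-level logic that interprets those proposals.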
