Incremental multi-step Q-learning
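The title refers to multi-step Q-learning, which combines Watkins' one-step Q-learning [1, 6] with TD(λ)-style eligibility traces [3, 14] so that a temporal-difference error updates many recently visited state-action pairs at once. As a rough illustration only — the chain environment, all parameter names, and the Watkins-style trace-cutting rule below are assumptions for this sketch, not the paper's exact algorithm — a minimal tabular version looks like:

```python
import random

def q_lambda(n_states=5, n_actions=2, episodes=200, alpha=0.1,
             gamma=0.9, lam=0.8, epsilon=0.1, seed=0):
    """Tabular Watkins-style Q(lambda) on a toy chain MDP (illustrative).

    States 0..n_states-1 form a chain; action 1 moves right, action 0
    moves left; entering the last state yields reward 1 and ends the
    episode. Environment and parameters are assumptions for this sketch.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def greedy_action(s):
        # argmax over Q[s] with random tie-breaking
        best = max(Q[s])
        return rng.choice([a for a in range(n_actions) if Q[s][a] == best])

    for _ in range(episodes):
        e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(n_actions) if rng.random() < epsilon else greedy_action(s)
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step TD error toward the greedy successor value
            delta = r + gamma * max(Q[s2]) - Q[s][a]
            was_greedy = Q[s][a] == max(Q[s])
            e[s][a] += 1.0  # accumulating trace
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    # decay traces; Watkins' variant cuts them after
                    # an exploratory (non-greedy) action
                    e[si][ai] *= gamma * lam if was_greedy else 0.0
            s = s2
    return Q
```

With the values above, the learned greedy policy moves right along the chain, and the value of the rewarding transition approaches 1. Note that the trace-cutting rule shown is Watkins' variant; Peng and Williams' version differs in how traces are handled after exploratory actions.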

[1] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 1992.

[2] Richard S. Sutton. Temporal Credit Assignment in Reinforcement Learning. PhD thesis, University of Massachusetts Amherst, 1984.

[3] Peter Dayan. The convergence of TD(λ) for general λ. Machine Learning, 1992.

[4] Tommi Jaakkola, Michael I. Jordan, and Satinder P. Singh. On the Convergence of Stochastic Iterative Dynamic Programming Algorithms. Neural Computation, 1994.

[5] Long-Ji Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, 1992.

[6] Christopher J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, University of Cambridge, 1989.

[7] Mark D. Pendrith. On Reinforcement Learning of Control Actions in Noisy and Non-Markovian Domains. 1994.

[8] Andrew W. Moore and Christopher G. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 1993.

[9] Christopher J. C. H. Watkins and Peter Dayan. Technical Note: Q-Learning. Machine Learning, 1992.

[10] Gavin A. Rummery and Mahesan Niranjan. On-line Q-learning Using Connectionist Systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Department, 1994.

[11] Leslie Pack Kaelbling, et al. On reinforcement learning for robots. IROS, 1996.

[12] Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 1983.

[13] Paul J. Werbos. Consistency of HDP applied to a simple reinforcement learning problem. Neural Networks, 1990.

[14] Richard S. Sutton. Learning to Predict by the Methods of Temporal Differences. Machine Learning, 1988.

[15] Pawel Cichosz, et al. Fast and Efficient Reinforcement Learning with Truncated Temporal Differences. ICML, 1995.

[16] Richard S. Sutton. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. ICML, 1990.

[17] Jing Peng and Ronald J. Williams. Efficient learning and planning within the Dyna framework. IEEE International Conference on Neural Networks, 1993.