The methods of temporal differences (Samuel, 1959; Sutton, 1984, 1988) allow an agent to learn accurate predictions of stationary stochastic future outcomes. The learning is effectively stochastic approximation based on samples extracted from the process generating the agent's future. Sutton (1988) proved that for a special case of temporal differences, the expected values of the predictions converge to their correct values, as larger samples are taken, and Dayan (1992) extended his proof to the general case. This article proves the stronger result that the predictions of a slightly modified form of temporal difference learning converge with probability one, and shows how to quantify the rate of convergence.

Keywords: reinforcement learning, temporal differences, Q-learning
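As a concrete illustration of the learning rule the abstract refers to, the sketch below implements tabular TD(λ) prediction with decaying per-state step sizes, the kind of Robbins-Monro schedule usually associated with convergence with probability one. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the function name `td_lambda`, the trajectory format, and the 1/visits step-size schedule are illustrative choices.

```python
import numpy as np

def td_lambda(trajectories, n_states, lam=0.9, gamma=1.0):
    """Tabular TD(lambda) prediction with decaying per-state step sizes.

    Each trajectory is a pair (states, rewards): states[t+1] is reached
    from states[t], and rewards[t] is received on that transition.  The
    final state of a trajectory is terminal and its value stays 0.
    Step sizes alpha_s = 1 / (number of visits to s) satisfy the usual
    stochastic-approximation conditions (sum alpha = inf, sum alpha^2 < inf),
    an illustrative assumption rather than a detail taken from the paper.
    """
    V = np.zeros(n_states)        # current predictions
    visits = np.zeros(n_states)   # per-state visit counts for step sizes

    for states, rewards in trajectories:
        e = np.zeros(n_states)    # eligibility traces
        for t in range(len(rewards)):
            s, s_next, r = states[t], states[t + 1], rewards[t]
            visits[s] += 1
            delta = r + gamma * V[s_next] - V[s]          # TD error
            e[s] += 1.0                                   # accumulating trace
            V += (delta * e) / np.maximum(visits, 1.0)    # per-state step size
            e *= gamma * lam
    return V
```

On a small absorbing Markov chain such as Sutton's (1988) bounded random walk, the returned predictions should approach the true expected outcomes as the number of trajectories grows, provided the step sizes decay as above.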
[1] Arthur L. Samuel. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev., 1959.
[2] Harold J. Kushner et al. Stochastic Approximation Methods for Constrained and Unconstrained Systems, 1978.
[3] Richard S. Sutton. Temporal Credit Assignment in Reinforcement Learning, 1984.
[4] W. Grassman. Approximation and Weak Convergence Methods for Random Processes with Applications to Stochastic Systems Theory (Harold J. Kushner), 1986.
[5] Elie Bienenstock et al. Neural Networks and the Bias/Variance Dilemma. Neural Computation, 1992.
[6] Ben J. A. Kröse et al. Learning from Delayed Rewards. Robotics Auton. Syst., 1995.