Policy-gradient methods have received increased attention recently as a mechanism for learning to act in partially observable environments. They have shown promise for problems admitting memoryless policies but have been less successful when memory is required. In this paper we develop several improved algorithms for learning policies with memory in an infinite-horizon setting — directly when a known model of the environment is available, and via simulation otherwise. We compare these algorithms on some large POMDPs, including noisy robot navigation and multi-agent problems.
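The abstract does not pin down the exact estimator, but the simulation-based, infinite-horizon setting it describes is naturally illustrated by a finite-state-controller policy gradient: both the controller's internal-state transitions and its action probabilities are parameterized, and a single score-function eligibility trace drives updates to both. The sketch below is an assumption-laden illustration, not the paper's algorithm or experiments; the toy aliased environment, the tabular softmax parameterization (`omega`, `theta`), and every hyperparameter are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-aliased POMDP: the hidden state s alternates 0 -> 1 -> 0 -> ...,
# the observation is always 0, and reward is 1 when the action matches s.
# A memoryless policy averages 0.5 reward; a 2-internal-state controller
# that learns to track the alternation can do better.
N_G, N_Y, N_A = 2, 1, 2          # internal states, observations, actions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Tabular logits: omega parameterizes the internal-state transition
# distribution phi(g'|g, y); theta parameterizes the action
# distribution mu(a|g, y).
omega = np.zeros((N_G, N_Y, N_G))
theta = np.zeros((N_G, N_Y, N_A))

beta, lr, T = 0.9, 0.5, 2000     # trace discount, step size, steps per estimate

for epoch in range(50):
    s, g = 0, 0
    z_omega = np.zeros_like(omega)   # eligibility traces
    z_theta = np.zeros_like(theta)
    d_omega = np.zeros_like(omega)   # gradient accumulators
    d_theta = np.zeros_like(theta)
    total_r = 0.0
    for t in range(T):
        y = 0                                   # constant (aliased) observation
        p_g = softmax(omega[g, y])              # sample next internal state
        g_next = rng.choice(N_G, p=p_g)
        p_a = softmax(theta[g, y])              # sample action from current g
        a = rng.choice(N_A, p=p_a)
        r = 1.0 if a == s else 0.0
        # Score-function terms for this step: for softmax logits,
        # grad log p(i) = onehot(i) - p. Decay the traces, then add them.
        z_omega *= beta
        z_theta *= beta
        z_omega[g, y] -= p_g
        z_omega[g, y, g_next] += 1.0
        z_theta[g, y] -= p_a
        z_theta[g, y, a] += 1.0
        d_omega += r * z_omega
        d_theta += r * z_theta
        total_r += r
        s, g = 1 - s, g_next                    # hidden state alternates
    omega += lr * d_omega / T                   # ascend the gradient estimate
    theta += lr * d_theta / T
    if epoch % 10 == 9:
        print(f"epoch {epoch + 1}: avg reward {total_r / T:.2f}")
```

Because the observation never changes, the only route to above-chance reward runs through the internal-state dynamics, which is exactly the memory-required regime the abstract targets. Like other memory-based policy-gradient setups, this toy objective has symmetric local optima, so the average reward may plateau near 0.5 on some seeds; the point of the sketch is the structure of the estimator, not tuned performance.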