Planning with predictive state representations

Predictive state representation (PSR) models for controlled dynamical systems have recently been proposed as an alternative to traditional models such as partially observable Markov decision processes (POMDPs). In this paper we develop and evaluate two general planning algorithms for PSR models. First, we show how planning algorithms for POMDPs that exploit the piecewise linear property of finite-horizon value functions can be extended to PSRs. This requires an interesting replacement of the role of hidden nominal states in POMDPs with linearly independent predictions in PSRs. Second, we show how traditional reinforcement-learning algorithms such as Q-learning can be extended to PSR models. We empirically evaluate both algorithms on a standard set of POMDP test problems.
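For concreteness, the following sketch uses the standard linear-PSR notation from the PSR literature rather than anything defined in this abstract. The prediction vector $p(Q \mid h)$ over a set $Q = \{q_1, \ldots, q_n\}$ of linearly independent core tests plays the role that the belief state plays in a POMDP: after taking action $a$ from history $h$ and observing $o$, each entry is updated by

$$
p(q_i \mid hao) \;=\; \frac{p(a o q_i \mid h)}{p(a o \mid h)} \;=\; \frac{p(Q \mid h)^{\top} m_{a o q_i}}{p(Q \mid h)^{\top} m_{a o}},
$$

where, in a linear PSR, each test $t$ has a weight vector $m_t$ such that $p(t \mid h) = p(Q \mid h)^{\top} m_t$ for all histories $h$. It is this linear dependence of every prediction on $p(Q \mid h)$ that allows the piecewise linear value-function machinery of POMDP planning to carry over.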