Dual Representations for Dynamic Programming

We propose a dual approach to dynamic programming and reinforcement learning, based on maintaining an explicit representation of visit distributions as opposed to value functions. An advantage of working in the dual is that it allows one to exploit techniques for representing, approximating, and estimating probability distributions, while also avoiding any risk of divergence. We begin by formulating a modified dual of the standard linear program that guarantees the solution is a globally normalized visit distribution. Using this alternative representation, we then derive dual forms of dynamic programming, including on-policy updating, policy improvement, and off-policy updating, and furthermore show how to incorporate function approximation. We then investigate the convergence properties of these algorithms, both theoretically and empirically, and show that the dual approach remains stable in situations where primal value function approximation diverges. Overall, the dual approach offers a viable alternative to standard dynamic programming techniques and offers new avenues for developing algorithms for sequential decision making.
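To make the dual formulation concrete, the following is a minimal sketch of solving a normalized dual linear program for a small tabular MDP with scipy.optimize.linprog. The names (P, r, mu, gamma) and the toy problem are illustrative assumptions; the paper's modified dual may differ in its exact constraints and normalization.

```python
# Hedged sketch: the dual LP over state-action visit distributions d(s, a)
# for a toy discounted MDP. Assumed setup, not the paper's exact formulation.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# Toy transition kernel P[s, a, s'] and reward r[s, a].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))
mu = np.full(n_states, 1.0 / n_states)  # initial state distribution

# Decision variable: d[s, a] >= 0, flattened row-major to a vector.
# Flow constraint for each s':
#   sum_a d(s', a) = (1 - gamma) mu(s') + gamma sum_{s,a} d(s, a) P(s'|s, a)
# Summing over s' forces sum_{s,a} d(s, a) = 1, i.e. a normalized distribution.
A_eq = np.zeros((n_states, n_states * n_actions))
for sp in range(n_states):
    for s in range(n_states):
        for a in range(n_actions):
            idx = s * n_actions + a
            A_eq[sp, idx] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
b_eq = (1 - gamma) * mu

# Maximize expected reward under the visit distribution (linprog minimizes).
res = linprog(c=-r.reshape(-1), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
d = res.x.reshape(n_states, n_actions)
print("visit distribution sums to", d.sum())  # ~1 under this normalization
print("greedy policy from d:", d.argmax(axis=1))
```

Under this normalization the optimal d is itself a probability distribution over state-action pairs, which is what allows distribution-estimation techniques to be brought to bear in place of value-function approximation.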
