[1] Edward J. Sondik,et al. The Optimal Control of Partially Observable Markov Processes over a Finite Horizon , 1973, Oper. Res..
[2] John N. Tsitsiklis,et al. The Complexity of Markov Decision Processes , 1987, Math. Oper. Res..
[3] W. Lovejoy. A survey of algorithmic methods for partially observed Markov decision processes , 1991 .
[4] Leslie Pack Kaelbling,et al. Acting Optimally in Partially Observable Stochastic Domains , 1994, AAAI.
[5] Daniel S. Weld,et al. Probabilistic Planning with Information Gathering and Contingent Execution , 1994, AIPS.
[6] Craig Boutilier,et al. Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations , 1996, AAAI/IAAI, Vol. 2.
[7] Michael L. Littman,et al. Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes , 1997, UAI.
[8] Ronen I. Brafman,et al. A Heuristic Variable Grid Solution Method for POMDPs , 1997, AAAI/IAAI.
[9] Xavier Boyen,et al. Tractable Inference for Complex Stochastic Processes , 1998, UAI.
[10] Anne Condon,et al. On the Undecidability of Probabilistic Planning and Infinite-Horizon Partially Observable Markov Decision Problems , 1999, AAAI/IAAI.
[11] Craig Boutilier,et al. Decision-Theoretic Planning: Structural Assumptions and Computational Leverage , 1999, J. Artif. Intell. Res..
[12] Craig Boutilier,et al. Value-Directed Belief State Approximation for POMDPs , 2000, UAI.
[13] P. Poupart. Approximate value-directed belief state monitoring for partially observable Markov decision processes , 2000 .
[14] Milos Hauskrecht,et al. Value-Function Approximations for Partially Observable Markov Decision Processes , 2000, J. Artif. Intell. Res..
[15] Zhengzhu Feng,et al. Dynamic Programming for POMDPs Using a Factored State Representation , 2000, AIPS.
[16] Craig Boutilier,et al. Value-Directed Sampling Methods for Monitoring POMDPs , 2001, UAI.