Solving Continuous POMDPs: Value Iteration with Incremental Learning of an Efficient Space Representation
Sebastian Brechtel | Tobias Gindele | Rüdiger Dillmann
[1] David Hsu, et al. Monte Carlo Value Iteration for Continuous-State POMDPs, 2010, WAFR.
[2] Nikos A. Vlassis, et al. Robot Planning in Partially Observable Continuous Domains, 2005, BNAIC.
[3] Geoffrey J. Gordon, et al. Finding Approximate POMDP Solutions Through Belief Compression, 2011, J. Artif. Intell. Res.
[4] David Hsu, et al. SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces, 2008, Robotics: Science and Systems.
[5] Joelle Pineau, et al. Point-based value iteration: An anytime algorithm for POMDPs, 2003, IJCAI.
[6] David J. C. MacKay, et al. Information Theory, Inference, and Learning Algorithms, 2004, IEEE Transactions on Information Theory.
[7] E. J. Sondik. The Optimal Control of Partially Observable Markov Decision Processes, 1971.
[8] Jesse Hoey, et al. Solving POMDPs with Continuous or Large Discrete Observation Spaces, 2005, IJCAI.
[9] Milos Hauskrecht. Value-Function Approximations for Partially Observable Markov Decision Processes, 2000, J. Artif. Intell. Res.
[10] Rüdiger Dillmann, et al. A probabilistic model for estimating driver behaviors and vehicle trajectories in traffic environments, 2010, 13th International IEEE Conference on Intelligent Transportation Systems.
[11] Kee-Eung Kim, et al. Closing the Gap: Improved Bounds on Optimal POMDP Solutions, 2011, ICAPS.
[12] Michael C. Fu, et al. Solving Continuous-State POMDPs via Density Projection, 2010, IEEE Transactions on Automatic Control.
[13] Craig Boutilier, et al. Value-Directed Compression of POMDPs, 2002, NIPS.
[14] Rüdiger Dillmann, et al. Probabilistic MDP-behavior planning for cars, 2011, 14th International IEEE Conference on Intelligent Transportation Systems (ITSC).
[15] R. Bellman. A Markovian Decision Process, 1957.
[16] Andrew W. Moore, et al. Variable Resolution Discretization in Optimal Control, 2002, Machine Learning.
[17] Leslie Pack Kaelbling, et al. Planning and Acting in Partially Observable Stochastic Domains, 1998, Artif. Intell.
[18] J. Ross Quinlan. C4.5: Programs for Machine Learning, 1992.
[19] Pascal Poupart, et al. Point-Based Value Iteration for Continuous POMDPs, 2006, J. Mach. Learn. Res.
[20] Reid G. Simmons, et al. Heuristic Search Value Iteration for POMDPs, 2004, UAI.
[21] Zhengzhu Feng, et al. An Approach to State Aggregation for POMDPs, 2004.
[22] Timothy J. Robinson, et al. Sequential Monte Carlo Methods in Practice, 2003.
[23] Nikos A. Vlassis, et al. Perseus: Randomized Point-based Value Iteration for POMDPs, 2005, J. Artif. Intell. Res.
[24] Ron Alterovitz, et al. Efficient Approximate Value Iteration for Continuous Gaussian POMDPs, 2012, AAAI.
[25] Alexei Makarenko, et al. Parametric POMDPs for planning in continuous state spaces, 2006, Robotics Auton. Syst.