Learning modular policies for robotics
Jan Peters | Alexandros Paraschos | Christian Daniel | Gerhard Neumann | Andras Gabor Kupcsik
[1] D. Rubin, et al. Maximum Likelihood from Incomplete Data via the EM Algorithm, 1977, Journal of the Royal Statistical Society, Series B.
[2] Geoffrey E. Hinton, et al. A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants, 1998, Learning in Graphical Models.
[3] Jun Morimoto, et al. Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning, 2000, Robotics Auton. Syst.
[4] Jun Nakanishi, et al. Learning Attractor Landscapes for Learning Motor Primitives, 2002, NIPS.
[5] Michael I. Jordan, et al. Optimal feedback control as a theory of motor coordination, 2002, Nature Neuroscience.
[6] Stefan Schaal, et al. Reinforcement Learning for Humanoid Robotics, 2003.
[7] Jun Nakanishi, et al. Learning Movement Primitives, 2005, ISRR.
[8] Sridhar Mahadevan, et al. Hierarchical Policy Gradient Algorithms, 2003, ICML.
[9] Ronald J. Williams, et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[10] Christopher K. I. Williams, et al. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning), 2006.
[11] Zoubin Ghahramani, et al. Sparse Gaussian Processes using Pseudo-inputs, 2005, NIPS.
[12] Marc Toussaint, et al. Modelling motion primitives and their timing in biologically executed movements, 2007, NIPS.
[13] Aude Billard, et al. On Learning, Representing, and Generalizing a Task in a Humanoid Robot, 2007, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).
[14] Jan Peters, et al. Fitted Q-iteration by Advantage Weighted Regression, 2008, NIPS.
[15] Stefan Schaal, et al. Reinforcement learning of motor skills with policy gradients, 2008, Neural Networks.
[16] Marc Toussaint, et al. Robot trajectory optimization using approximate inference, 2009, ICML.
[17] Marc Toussaint, et al. Learning model-free robot control by a Monte Carlo EM algorithm, 2009, Auton. Robots.
[18] Dana Kulic, et al. Online Segmentation and Clustering From Continuous Observation of Whole Body Motions, 2009, IEEE Transactions on Robotics.
[19] Stefan Schaal, et al. Reinforcement learning of motor skills in high dimensions: A path integral approach, 2010, IEEE International Conference on Robotics and Automation.
[20] Christoph H. Lampert, et al. Movement templates for learning of hitting and batting, 2010, IEEE International Conference on Robotics and Automation.
[21] Alessandro Lazaric, et al. Bayesian Multi-Task Reinforcement Learning, 2010, ICML.
[22] Darwin G. Caldwell, et al. Robot motor skill coordination with EM-based Reinforcement Learning, 2010, IEEE/RSJ International Conference on Intelligent Robots and Systems.
[23] Jan Peters, et al. Policy Search for Motor Primitives in Robotics, 2011, Machine Learning.
[24] Aude Billard, et al. Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models, 2011, IEEE Transactions on Robotics.
[25] Yasemin Altun, et al. Relative Entropy Policy Search, 2010, AAAI.
[26] Andrea d'Avella, et al. Modularity for Sensorimotor Control: Evidence and a New Prediction, 2010, Journal of Motor Behavior.
[27] Odest Chadwicke Jenkins, et al. Learning from demonstration using a multi-valued function regressor for time-series data, 2010, IEEE-RAS International Conference on Humanoid Robots.
[28] Bernhard Schölkopf, et al. Switched Latent Force Models for Movement Segmentation, 2010, NIPS.
[29] Jun Morimoto, et al. Task-Specific Generalization of Discrete and Periodic Dynamic Movement Primitives, 2010, IEEE Transactions on Robotics.
[30] Jan Peters, et al. Movement extraction by detecting dynamics switches and repetitions, 2010, NIPS.
[31] Stefan Schaal, et al. Movement segmentation using a primitive library, 2011, IEEE/RSJ International Conference on Intelligent Robots and Systems.
[32] Gerhard Neumann, et al. Variational Inference for Policy Search in changing situations, 2011, ICML.
[33] Stefan Schaal, et al. Hierarchical reinforcement learning with movement primitives, 2011, IEEE-RAS International Conference on Humanoid Robots.
[34] Jan Peters, et al. Reinforcement Learning to Adjust Robot Movements to New Situations, 2010, IJCAI.
[35] Jan Peters, et al. Learning concurrent motor skills in versatile solution spaces, 2012, IEEE/RSJ International Conference on Intelligent Robots and Systems.
[36] Jan Peters, et al. Hierarchical Relative Entropy Policy Search, 2012, AISTATS.
[37] Scott Niekum, et al. Learning and generalization of complex tasks from unstructured demonstrations, 2012, IEEE/RSJ International Conference on Intelligent Robots and Systems.
[38] Olivier Sigaud, et al. Path Integral Policy Improvement with Covariance Matrix Adaptation, 2012, ICML.
[39] Bruno Castro da Silva, et al. Learning Parameterized Skills, 2012, ICML.
[40] Jan Peters, et al. Probabilistic Movement Primitives, 2013, NIPS.
[41] Carme Torras, et al. Learning Collaborative Impedance-Based Robot Behaviors, 2013, AAAI.
[42] Oliver Kroemer, et al. Learning sequential motor tasks, 2013, IEEE International Conference on Robotics and Automation.