Estimation via Markov chain Monte Carlo

Markov chain Monte Carlo (MCMC) is a powerful means of generating random samples for computing statistical estimates, numerical integrals, and marginal and joint probabilities. The approach is especially useful in applications where one must form an estimate from a multivariate probability distribution or density function that would be hopeless to obtain analytically. In particular, MCMC provides a means of generating samples from a joint distribution by repeatedly drawing from easier conditional distributions. Over the last 10 to 15 years, the approach has had a large impact on the theory and practice of statistical modeling. By contrast, MCMC has (as yet) had relatively little impact on estimation problems in control. This paper surveys popular implementations of MCMC, focusing on the two most widely used: Metropolis-Hastings and Gibbs sampling.
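To make the two implementations named above concrete, the following is a minimal sketch (not the paper's own code): a random-walk Metropolis-Hastings sampler for a one-dimensional target specified only up to a normalizing constant, and a Gibbs sampler for a bivariate normal, where the full conditionals are exactly the normal distributions assumed in the comments. Function names, step sizes, and the example targets are illustrative choices, not part of the paper.

```python
import math
import random

def metropolis_hastings(log_target, x0, n, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings for a 1-D target given by its
    unnormalized log-density log_target."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)            # symmetric proposal
        log_alpha = log_target(y) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = y                               # accept
        out.append(x)                           # on rejection, repeat x
    return out

def gibbs_bivariate_normal(rho, n, seed=1):
    """Gibbs sampler for a zero-mean bivariate normal with unit variances
    and correlation rho, alternating draws from the exact full
    conditionals x1 | x2 ~ N(rho*x2, 1 - rho^2) and symmetrically for x2."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x1 = x2 = 0.0
    out = []
    for _ in range(n):
        x1 = rng.gauss(rho * x2, sd)
        x2 = rng.gauss(rho * x1, sd)
        out.append((x1, x2))
    return out

# Example: sample a standard normal via its unnormalized log-density,
# discarding an initial burn-in segment before computing estimates.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n=20000)
tail = chain[5000:]
```

Both samplers illustrate the point in the text: neither requires the normalizing constant of the target, and the Gibbs sampler reaches the joint distribution using only draws from the conditionals.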
