Recurrent sampling models

Hierarchical probabilistic synthesis and analysis models have recently been suggested as architectures for performing density estimation. Strict hierarchies make it easy to evaluate generative or synthetic probabilities. However, both theoretical and neurobiological considerations favour integrating lateral influences within a layer together with top-down and bottom-up influences from higher and lower layers, and this is known to be computationally difficult. We suggest a new recurrent sampling model and show that it has the appropriate structure and behaviour to act as the analysis model for linear Gaussian factor analysis. We then extend this model to the case of binary stochastic units. Finally, we comment on the more general use of this model.
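To make the linear Gaussian case concrete, here is a minimal sketch (not the paper's algorithm) of a recurrent sampler for the analysis (recognition) distribution of factor analysis: each latent unit is repeatedly resampled given a bottom-up drive from the data and lateral interactions with the other latent units, so the sweeps converge on samples from the exact posterior. All names (Lambda, Psi_diag, n_sweeps, ...) are illustrative assumptions, not taken from the paper.

```python
# Assumed generative model: x = Lambda z + noise, noise ~ N(0, diag(Psi_diag)), z ~ N(0, I).
# The posterior p(z | x) is Gaussian with precision P = I + Lambda^T Psi^{-1} Lambda
# and natural drive b = Lambda^T Psi^{-1} x; Gibbs sweeps over the latent units
# use b as the bottom-up influence and the off-diagonal of P as lateral influences.
import numpy as np

def recurrent_gibbs_fa(x, Lambda, Psi_diag, n_sweeps=50, rng=None):
    """Draw a sample z ~ p(z | x) by coordinate-wise Gibbs sweeps."""
    rng = np.random.default_rng(rng)
    k = Lambda.shape[1]
    Psi_inv = 1.0 / Psi_diag                                   # noise precisions (diagonal Psi)
    P = np.eye(k) + Lambda.T @ (Psi_inv[:, None] * Lambda)     # posterior precision matrix
    b = Lambda.T @ (Psi_inv * x)                               # bottom-up drive from the data
    z = np.zeros(k)
    for _ in range(n_sweeps):
        for i in range(k):
            # Conditional of z_i given the other units: lateral terms enter via P[i, j], j != i.
            cond_mean = (b[i] - P[i] @ z + P[i, i] * z[i]) / P[i, i]
            cond_std = 1.0 / np.sqrt(P[i, i])
            z[i] = rng.normal(cond_mean, cond_std)
    return z

# For reference, the exact posterior mean is np.linalg.solve(P, b); averaging many
# sampled z values over long runs should approach it.
```

A binary-unit variant of the same idea would replace the Gaussian conditional draw with a Bernoulli draw whose probability is a sigmoid of the same bottom-up plus lateral input; that substitution is an assumption here, offered only to indicate how the extension might look.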
