From Examples to Bayesian Inference

We show how to build an associative memory from a finite list of examples. By means of a fully worked example, we demonstrate how a probabilistic Bayesian factor graph naturally integrates the discrete information contained in the list with smooth inference.
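As a minimal sketch of the idea (not the paper's actual construction), the snippet below stores a finite list of (pattern, label) examples as co-occurrence counts and recalls a smoothed posterior over labels given a pattern; the data, the `posterior` helper, and the Dirichlet pseudo-count `alpha` are all illustrative assumptions. The smoothing is what lets the discrete example list yield graded, non-degenerate beliefs, including for patterns never seen in the list.

```python
from collections import Counter

# Hypothetical toy example list: (pattern, label) pairs.
examples = [("sun", "warm"), ("sun", "warm"), ("rain", "cold"),
            ("rain", "cold"), ("sun", "cold")]

labels = sorted({y for _, y in examples})
counts = Counter(examples)  # discrete information extracted from the list

def posterior(x, alpha=1.0):
    """P(label | pattern) with Dirichlet (Laplace) smoothing alpha."""
    weights = {y: counts[(x, y)] + alpha for y in labels}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

print(posterior("sun"))   # graded recall of stored associations
print(posterior("snow"))  # unseen pattern falls back to a uniform prior
```

With `alpha=1`, a pattern seen only with one label still assigns some mass to the others, which is the smooth behavior a hard lookup table lacks.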
