Conditional Density Approximations with Mixtures of Polynomials

Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ in how the MoP approximation of the quotient of the two densities is obtained. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to, or significantly better than, those found using the MoTBF-based method.
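To make the quotient idea concrete, the following is a minimal Python sketch, not the paper's algorithm: it stands in Gaussian kernel density estimates for the learned joint and marginal MoPs, evaluates their pointwise quotient f(x, y) / f(y) on a grid over a bounded region, and fits a bivariate polynomial to that quotient by least squares. The sampling distribution, polynomial degree, grid, and helper names are illustrative assumptions; a real MoP learner would additionally enforce nonnegativity and normalization of the result, which this sketch omits.

    # Sketch of the quotient approach to conditional density approximation:
    # approximate f(x | y) = f(x, y) / f(y) by fitting a polynomial to the
    # pointwise quotient of two density estimates (KDEs stand in for MoPs).
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Sample from a known parametric model: Y ~ N(0, 1), X | Y=y ~ N(0.5*y, 1).
    y = rng.normal(0.0, 1.0, size=5000)
    x = rng.normal(0.5 * y, 1.0)

    joint_kde = gaussian_kde(np.vstack([x, y]))  # stand-in for a learned joint MoP
    marg_kde = gaussian_kde(y)                   # stand-in for a learned marginal MoP

    # Evaluate the quotient on a grid over a bounded region (MoPs are defined
    # on hypercubes, so we restrict attention to [-3, 3] x [-3, 3]).
    gx = np.linspace(-3.0, 3.0, 40)
    gy = np.linspace(-3.0, 3.0, 40)
    X, Y = np.meshgrid(gx, gy, indexing="ij")
    quotient = joint_kde(np.vstack([X.ravel(), Y.ravel()])) / marg_kde(Y.ravel())

    # Least-squares fit of a bivariate polynomial (total degree <= 6, an
    # illustrative choice) to the quotient via a Vandermonde-style basis.
    degree = 6
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    basis = np.column_stack([X.ravel() ** i * Y.ravel() ** j for i, j in terms])
    coef, *_ = np.linalg.lstsq(basis, quotient, rcond=None)

    def conditional_poly(x_val, y_val):
        """Polynomial approximation of f(x | y) at a single point."""
        return sum(c * x_val ** i * y_val ** j for c, (i, j) in zip(coef, terms))

    # The true conditional f(x=1 | y=1) is the N(0.5, 1) density at 1, about
    # 0.352; the fitted polynomial should print a value close to that.
    print(conditional_poly(1.0, 1.0))

The same skeleton accommodates the alternative strategy discussed in the paper, namely representing the conditional directly as the quotient of the two learned approximations rather than fitting a new polynomial to it.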
