Sparse Identification of Posynomial Models

Posynomials are nonnegative combinations of monomials whose exponents may be fractional and of either sign. Posynomial models are widely used in engineering design, for instance in circuit, aerospace, and structural design, mainly because design problems cast in terms of posynomial objectives and constraints can be solved efficiently by means of a convex optimization technique known as geometric programming (GP). However, while the literature on GP-based design is vast, few contributions address the problem of identifying posynomial models from experimental data. Posynomial identification amounts to determining not only the coefficients of the combination, but also the exponents in the monomials, which makes the identification problem numerically hard. In this paper, we propose an approach to the identification of multivariate posynomial models based on an expansion over a given large-scale basis of monomials. The model is then identified by seeking the coefficients of the combination that minimize a mixed objective, composed of a term representing the fitting error and a term inducing sparsity in the representation, which results in a problem formulation of the “square-root LASSO” type with nonnegativity constraints on the variables. We propose to solve this problem via a sequential coordinate-minimization scheme, which is suitable for large-scale implementations. A numerical example is finally presented, dealing with the identification of a posynomial model for a NACA 4412 airfoil.
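To make the formulation concrete, the sketch below illustrates the kind of nonnegative square-root-LASSO fit described above: the columns of a regressor matrix collect candidate monomials evaluated at the data points, and the coefficient vector is obtained by sequential coordinate minimization. This is a minimal sketch, not the authors' implementation; the function name, the box bound c_max on each coefficient, and the use of a generic bounded scalar minimizer for the per-coordinate subproblem are assumptions made here for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def sqrt_lasso_nonneg(Phi, y, lam, n_sweeps=200, c_max=1e6):
    """Sequential coordinate minimization (illustrative sketch) for
        min_{c >= 0}  ||y - Phi c||_2 + lam * sum(c).
    Each scalar subproblem is convex, so it is solved here with a bounded
    one-dimensional minimizer; c_max is an arbitrary box used only to make
    the scalar search bounded."""
    _, p = Phi.shape
    c = np.zeros(p)
    r = y - Phi @ c                          # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            phi_j = Phi[:, j]
            r_j = r + c[j] * phi_j           # residual with coordinate j removed
            obj = lambda t: np.linalg.norm(r_j - t * phi_j) + lam * t
            t_star = minimize_scalar(obj, bounds=(0.0, c_max),
                                     method="bounded").x
            c[j] = t_star
            r = r_j - t_star * phi_j         # restore residual with new c[j]
    return c
```

In a posynomial setting, each column of Phi would contain one candidate monomial, i.e. a product of the input variables raised to a fixed candidate exponent vector, evaluated at the data points. Since the coefficients are constrained to be nonnegative, the l1 penalty reduces to the plain sum of the coefficients; the penalty then prunes most candidate monomials, leaving a sparse posynomial model.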
