H∞ Optimality of the LMS Algorithm

We show that the celebrated LMS (least-mean-squares) adaptive algorithm is H∞ optimal. The LMS algorithm has long been regarded as an approximate solution to either a stochastic or a deterministic least-squares problem, and it essentially amounts to updating the weight-vector estimates along the direction of the instantaneous gradient of a quadratic cost function. In this paper we show that LMS can be regarded as the exact solution to a minimization problem in its own right. Namely, we establish that it is a minimax filter: it minimizes the maximum energy gain from the disturbances to the predicted errors, while the closely related normalized LMS algorithm minimizes the maximum energy gain from the disturbances to the filtered errors. Moreover, since these algorithms are central H∞ filters, they minimize a certain exponential cost function and are thus also risk-sensitive optimal. We discuss the various implications of these results and show how they provide theoretical justification for the widely observed excellent robustness properties of the LMS filter.
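The two update rules the abstract refers to can be sketched as follows. This is a minimal illustration, not code from the paper: LMS steps along the instantaneous gradient of the squared a priori (predicted) error, while normalized LMS scales the step by the regressor energy. The tap count, step sizes, and regularization constant are illustrative choices.

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """LMS: w <- w + mu * e * u, with a priori error e = d[i] - u @ w."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1 : i + 1][::-1]  # regressor [x[i], x[i-1], ...]
        e = d[i] - u @ w                     # predicted (a priori) error
        w = w + mu * e * u                   # step along instantaneous gradient
    return w

def nlms(x, d, n_taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS: step size divided by the regressor energy ||u||^2."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1 : i + 1][::-1]
        e = d[i] - u @ w
        w = w + (mu / (eps + u @ u)) * e * u  # energy-normalized update
    return w
```

On noise-free data generated by a fixed FIR filter, both recursions drive the predicted error to zero and the weights to the true taps; the H∞ results in the paper bound the worst-case energy gain from disturbances to these errors.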
