Stanford Neural Network Research

The purpose of this paper is to present an overview of the learning algorithms that are used in both linear and nonlinear adaptive filters, and to describe a number of significant applications for these filters. The basic building block of adaptive filters is the adaptive linear combiner shown in Fig. 1. A set of input signals, represented at time $k$ by the vector $X_k = [x_{0k}, x_{1k}, x_{2k}, \ldots, x_{nk}]^T$, is multiplied by variable coefficients or weights, represented by the vector $W_k = [w_{0k}, w_{1k}, w_{2k}, \ldots, w_{nk}]^T$, to form the output signal $y_k = X_k^T W_k = W_k^T X_k$. The output signal is compared with a "desired response" signal $d_k$, which is supplied during the training process. The error signal is defined as the difference between the desired output and the actual output, $\epsilon_k = d_k - y_k$. When the $X$-input and the corresponding desired-response input are applied to the linear combiner during training, the weights are adjusted to minimize the mean-square error. Thus, the linear combiner learns to produce an output which is the best least-squares linear estimate of the desired response. These ideas are fundamental to learning in an adaptive filter. They are also fundamental to learning in neural networks.
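To make the training process concrete, the following Python sketch implements an adaptive linear combiner trained to minimize mean-square error. It assumes the classic LMS gradient-descent update $W_{k+1} = W_k + 2\mu\,\epsilon_k X_k$ as the weight-adjustment rule; the learning rate `mu`, the function name `lms_train`, and the example data are illustrative choices, not specifics from the paper.

```python
import numpy as np

def lms_train(X, d, mu=0.01, n_epochs=1):
    """Train an adaptive linear combiner with the LMS rule.

    X  : (n_samples, n_weights) array of input vectors X_k
    d  : (n_samples,) array of desired responses d_k
    mu : learning rate (an assumed value; must be small enough
         for the weights to converge)
    """
    w = np.zeros(X.shape[1])          # weight vector W_k, zero-initialized
    for _ in range(n_epochs):
        for x_k, d_k in zip(X, d):
            y_k = w @ x_k             # output  y_k = W_k^T X_k
            e_k = d_k - y_k           # error   e_k = d_k - y_k
            w += 2 * mu * e_k * x_k   # LMS step toward lower mean-square error
    return w

# Usage: learn to reproduce a known linear relationship d = [1, -2, 0.5] . x
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
d = X @ np.array([1.0, -2.0, 0.5])
w = lms_train(X, d, mu=0.05, n_epochs=5)
print(w)  # converges close to [1.0, -2.0, 0.5]
```

Because the desired response here is an exact linear function of the inputs, the learned weights approach the generating coefficients; with noisy data they would instead approach the best least-squares linear estimate described above.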