Generalising the nodes of the error propagation network

Summary form only given, as follows. Gradient descent has been used with much success to train connectionist models in the form of the error propagation networks of Rumelhart, Hinton, and Williams. In these networks, the output of a node is a nonlinear function of the weighted sum of the activations of other nodes. This type of node defines a hyperplane in the input space, but other types of node are possible. For example, the Kanerva model, the modified Kanerva model, networks of spherical graded units, networks of localised receptive fields, and the method of radial basis functions all use nodes that define volumes in the input space. It is shown that the error propagation algorithm can be used to train general types of node. The example of a Gaussian node is given, and this is compared with other connectionist models on the problem of recognising steady-state vowels from multiple speakers.
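
The abstract does not give the paper's exact parameterisation of the Gaussian node, so the following is a minimal illustrative sketch, not the paper's method. It assumes a node of the form y = exp(-||x - c||^2 / (2s^2)), whose centre c and width s define a volume (receptive field) in the input space, and shows how the chain rule yields gradient-descent updates for these parameters, in the same way error propagation updates the weights of a hyperplane node. The names gaussian_node, c, and s are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_node(x, c, s):
    """Output of an assumed Gaussian node: y = exp(-||x - c||^2 / (2 s^2))."""
    d2 = np.sum((x - c) ** 2)
    return np.exp(-d2 / (2.0 * s ** 2))

def gradients(x, c, s, y, err):
    """Chain-rule gradients of the squared error 0.5 * err^2 with respect to
    the node parameters, where err = y - target."""
    d2 = np.sum((x - c) ** 2)
    dy_dc = y * (x - c) / s ** 2   # dy/dc_i = y (x_i - c_i) / s^2
    dy_ds = y * d2 / s ** 3        # dy/ds   = y ||x - c||^2 / s^3
    return err * dy_dc, err * dy_ds

# Toy fit: gradient descent moves the node's centre and width so that
# its output approaches the target on a single training point.
x_train, target = np.array([1.0, -0.5]), 1.0
c, s, lr = rng.normal(size=2), 1.0, 0.5
for _ in range(200):
    y = gaussian_node(x_train, c, s)
    g_c, g_s = gradients(x_train, c, s, y, y - target)
    c -= lr * g_c
    s -= lr * g_s
print(gaussian_node(x_train, c, s))  # close to 1.0: the centre has moved onto x_train
```

Under these assumptions the only change from a standard hyperplane node is the derivative of the node's output with respect to its parameters; the rest of the error propagation machinery is unchanged, which is the sense in which the node type is generalised.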