The Back-Prop and No-Prop Training Algorithms

Back-Prop and No-Prop, two training algorithms for multi-layer neural networks, are compared in design and performance. With Back-Prop, all layers of the network receive least squares training. With No-Prop, only the output layer receives least squares training; the hidden-layer weights are chosen randomly and then fixed. No-Prop is much simpler than Back-Prop. No-Prop can deliver performance equal to that of Back-Prop when the number of training patterns is less than or equal to the number of neurons in the final hidden layer. When the number of training patterns is increased beyond this, the performance of Back-Prop can be slightly better than that of No-Prop. However, the performance of No-Prop can be made equal to or better than that of Back-Prop by increasing the number of neurons in the final hidden layer. The two algorithms are compared with respect to training time, minimum mean square error on the training patterns, and classification accuracy on the testing patterns. They are applied to pattern classification and nonlinear adaptive filtering.
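
The following is a minimal sketch of the No-Prop idea described above, assuming a single hidden layer with a tanh nonlinearity and a Moore-Penrose pseudoinverse as one direct way to obtain the least-squares output weights (an iterative LMS update would serve equally well). The function names, the tanh choice, and the omission of bias terms are illustrative assumptions, not details from the text.

    import numpy as np

    def no_prop_train(X, T, n_hidden, seed=0):
        """Sketch of No-Prop training.
        X: (n_patterns, n_inputs) training inputs
        T: (n_patterns, n_outputs) target outputs
        """
        rng = np.random.default_rng(seed)
        # Hidden-layer weights are drawn randomly and never updated.
        W_hidden = rng.standard_normal((X.shape[1], n_hidden))
        H = np.tanh(X @ W_hidden)  # fixed random hidden-layer responses
        # Only the output layer is trained: least-squares fit of W_out
        # so that H @ W_out approximates T (pseudoinverse solution).
        W_out = np.linalg.pinv(H) @ T
        return W_hidden, W_out

    def no_prop_predict(X, W_hidden, W_out):
        return np.tanh(X @ W_hidden) @ W_out

The sketch makes the structural contrast concrete: Back-Prop would also update W_hidden by propagating error gradients through the hidden layer, whereas here only W_out is fit, so training reduces to a single linear least-squares problem.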