A fast multilayer neural-network training algorithm based on the layer-by-layer optimizing procedures

A new, faster learning algorithm for adjusting the weights of a multilayer feedforward neural network is proposed. In this algorithm, the weight matrix (W(2)) of the output layer and the output vector (Y(P)) of the previous layer are treated as two sets of variables. An optimal solution pair (W(2)*, Y(P)*) is found that minimizes the sum-squared error over the input patterns. Y(P)* is then used as the desired output of the previous layer. The optimal weight matrix and layer output vector of each hidden layer in the network are found by the same method as that used for the output layer. In addition, a dynamic forgetting-factor method makes the proposed algorithm even more effective for dynamic system identification. Computer simulations show that the new algorithm outperforms other learning algorithms in both convergence speed and required computation time.
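The abstract does not spell out how the optimal pair (W(2)*, Y(P)*) is computed or how the desired hidden outputs are propagated backward; the NumPy sketch below illustrates one common reading of the layer-by-layer idea for a single-hidden-layer network, using alternating linear least squares and a pseudo-inverse to back out desired hidden outputs. All function and variable names, the sigmoid/linear layer choices, and the regularization constant are illustrative assumptions, not details taken from the paper, and the batch case without forgetting factors is shown.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def inv_sigmoid(y, eps=1e-6):
    # Invert the sigmoid to turn desired outputs into desired pre-activations.
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def lls(inputs, targets, reg=1e-6):
    # Regularized linear least squares: W minimizing ||inputs @ W - targets||^2.
    d = inputs.shape[1]
    return np.linalg.solve(inputs.T @ inputs + reg * np.eye(d), inputs.T @ targets)

def layer_by_layer_step(X, D, W1, W2, reg=1e-6):
    # One sweep: optimize output weights for the current hidden outputs,
    # back out desired hidden outputs, then optimize hidden weights for them.
    Y = sigmoid(X @ W1)                   # current hidden-layer outputs
    W2 = lls(Y, D, reg)                   # output weights optimal for fixed Y
    Y_star = D @ np.linalg.pinv(W2)       # hidden outputs optimal for fixed W2
    Y_star = np.clip(Y_star, 0.05, 0.95)  # keep targets inside the sigmoid range
    A_star = inv_sigmoid(Y_star)          # desired hidden pre-activations
    W1 = lls(X, A_star, reg)              # hidden weights optimal for those targets
    return W1, W2

# Usage example on a toy regression problem (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
D = np.sin(X.sum(axis=1, keepdims=True))  # target outputs
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
for _ in range(20):
    W1, W2 = layer_by_layer_step(X, D, W1, W2)
    err = np.sum((sigmoid(X @ W1) @ W2 - D) ** 2)
print("final sum-square error:", err)
```

Each sweep solves two linear problems instead of taking many gradient steps, which is the intuition behind the claimed speed advantage over backpropagation-style updates; the exact optimization and the forgetting-factor extension used in the paper may differ from this sketch.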