VFSR-trained artificial neural networks

Artificial neural networks are most often trained using backward error propagation (BEP), which works well for network training problems whose error function has a single minimum. Although BEP has been successful in many applications, convergence can be substantially hampered by local minima and network paralysis. We describe a method for avoiding local minima that combines very fast simulated reannealing (VFSR) with BEP. While convergence to the best training weights can be slower than with gradient-descent methods, it is faster than with other simulated annealing (SA) network training methods. More importantly, convergence to the globally optimal weight set is statistically guaranteed by the annealing schedule. We demonstrate VFSR network training on a variety of test problems, such as the exclusive-or and parity problems, and compare its performance with that of conjugate-gradient-trained backpropagation networks.
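
To make the annealing component concrete, the following Python sketch trains a small 2-2-1 network on the exclusive-or problem using the VFSR generating distribution and annealing schedule. It is an illustration under stated assumptions, not the paper's implementation: the BEP hybridization is omitted (this is pure VFSR search over the weights), the network shape, weight bounds, schedule constants (T0, c), and iteration count are illustrative choices, and out-of-range proposals are clipped rather than re-sampled. All function names (forward, error, vfsr_step) are ours.

    # Minimal VFSR-style sketch: anneal the 9 weights of a 2-2-1 sigmoid
    # network on XOR. Assumed constants throughout; not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR training set
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def forward(w, X):
        """2-2-1 network with sigmoid units; w packs all 9 weights/biases."""
        W1 = w[:4].reshape(2, 2)    # input -> hidden weights
        b1 = w[4:6]                 # hidden biases
        W2 = w[6:8]                 # hidden -> output weights
        b2 = w[8]                   # output bias
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def error(w):
        """Sum-squared error over the training set."""
        return np.sum((forward(w, X) - y) ** 2)

    D = 9                    # dimension of the weight space
    lo, hi = -10.0, 10.0     # assumed search bounds per weight
    T0, c = 1.0, 1.0         # assumed initial temperature and control constant

    def vfsr_step(x, k):
        """Propose a neighbor from the VFSR generating distribution at step k."""
        T = T0 * np.exp(-c * k ** (1.0 / D))   # VFSR annealing schedule
        u = rng.random(D)
        # Cauchy-like step: occasional long jumps let the search leave local minima
        dy = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
        # Clipping is a simplification; standard VFSR re-samples out-of-range moves
        return np.clip(x + dy * (hi - lo), lo, hi), T

    w = rng.uniform(lo, hi, D)
    e = error(w)
    best_w, best_e = w.copy(), e
    for k in range(1, 20001):
        w_new, T = vfsr_step(w, k)
        e_new = error(w_new)
        # Metropolis acceptance: always take improvements, sometimes take uphill moves
        if e_new < e or rng.random() < np.exp((e - e_new) / max(T, 1e-12)):
            w, e = w_new, e_new
            if e < best_e:
                best_w, best_e = w.copy(), e

    print("best SSE:", best_e)
    print("outputs:", forward(best_w, X).round(3))

In the hybrid method the abstract describes, BEP gradient descent would refine the weights between annealing moves; the sketch above shows only the global-search half of that combination.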