Accelerated Backpropagation Learning: Two Optimization Methods

Two methods for increasing the performance of the backpropagation learning algorithm are presented, and their results are compared with those obtained by optimizing the parameters of the standard method. The first method adapts a scalar learning rate so that the energy value decreases along the gradient direction in a close-to-optimal way. The second is derived from the conjugate gradient method with inexact line searches. The strict locality requirement is relaxed, but the parallelism of the computation is maintained, allowing efficient use of concurrent hardware. For medium-sized problems, typical speedups of one order of magnitude are obtained.
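
The abstract does not spell out the adaptation rule, but a scheme of this kind can be sketched as follows: keep a single step size, grow it after each successful (energy-decreasing) step, and shrink it after a failed one. The Python sketch below, on a toy quadratic energy, is illustrative only; the names energy, gradient, and adaptive_rate_descent and the factors rho_inc and rho_dec are assumptions, not the paper's notation.

    import numpy as np

    def energy(w):
        # Toy quadratic stand-in for the network error E(w).
        return 0.5 * w @ w

    def gradient(w):
        return w

    def adaptive_rate_descent(w, lr=0.1, rho_inc=1.1, rho_dec=0.5, steps=100):
        """Gradient descent with a scalar learning rate adapted each step.

        If a step lowers the energy, keep it and grow the rate slightly;
        otherwise undo the step and shrink the rate. This drives the step
        size toward a close-to-optimal value along the gradient direction.
        """
        e = energy(w)
        for _ in range(steps):
            g = gradient(w)
            w_new = w - lr * g
            e_new = energy(w_new)
            if e_new < e:        # accepted: move and take bolder steps
                w, e = w_new, e_new
                lr *= rho_inc
            else:                # rejected: stay put and be more cautious
                lr *= rho_dec
        return w, e

    w_final, e_final = adaptive_rate_descent(np.ones(5))
    print(e_final)  # approaches 0 on the toy problem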
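Similarly, the second method can be illustrated by a nonlinear conjugate gradient iteration in which an inexact one-dimensional search replaces an exact minimization along each direction. The paper's specific search procedure is not given in the abstract; in the sketch below a simple step-halving search stands in for it, the Polak-Ribiere formula (one common choice) supplies the conjugacy coefficient, and the toy energy and gradient from the previous sketch are reused.

    def conjugate_gradient_descent(w, steps=50, lr0=1.0, tol=1e-12):
        """Nonlinear conjugate gradient with an inexact line search."""
        g = gradient(w)
        d = -g                   # first direction: steepest descent
        e = energy(w)
        for _ in range(steps):
            if g @ g < tol:      # gradient vanished: converged
                break
            # Inexact line search: halve the step until the energy drops.
            lr = lr0
            while energy(w + lr * d) >= e and lr > tol:
                lr *= 0.5
            w = w + lr * d
            e = energy(w)
            g_new = gradient(w)
            # Polak-Ribiere coefficient, restarted to steepest descent
            # whenever it would go negative.
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
            d = -g_new + beta * d
            g = g_new
        return w, e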