A method of training multilayer networks with Heaviside characteristics using internal representations

A learning algorithm is presented that uses internal representations, treated as continuous random variables, to train multilayer networks whose neurons have Heaviside characteristics. The algorithm improves on earlier algorithms of this kind in that it applies to networks with any number of layers of variable weights and does not require 'bit flipping' of the internal representations to reduce the output error. An extension to recurrent networks is also given, together with some illustrative results.