Automatic Scaling using Gamma Learning for Feedforward Neural Networks

Standard error back-propagation requires output data that is scaled to lie within the active range of the activation function. We show that normalizing data to meet this requirement is not only time-consuming but can also introduce inaccuracies into the modelling of the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates the need to scale output data before training. We show that the use of “self-scaling” units results in faster convergence and more accurate results than the rescaled results of standard back-propagation.
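The abstract does not spell out the gamma update rule, so the following is only a minimal sketch of the general idea of a “self-scaling” output unit: rather than pre-scaling targets into the sigmoid’s (0, 1) range, each output carries a learnable gain (called `gamma` here) that is trained by gradient descent together with the ordinary weights. The parameter name, network sizes, toy data, and learning rates are all illustrative assumptions, not the paper’s method.

```python
# Hedged sketch of a self-scaling output unit (assumed form, not the paper's exact rule).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression data whose targets lie far outside the sigmoid's (0, 1) range,
# so plain back-propagation would normally require rescaling them first.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = 40.0 + 25.0 * X[:, :1] + 10.0 * X[:, 1:2] ** 2    # targets roughly in [5, 75]

n_in, n_hid, n_out = 3, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
b2 = np.zeros(n_out)
gamma = np.ones(n_out)        # learnable output gain (assumed "self-scaling" parameter)

lr = 0.005
for epoch in range(3000):
    # Forward pass: bounded sigmoid activations, then rescaled by the learned gain.
    h = sigmoid(X @ W1 + b1)
    o = sigmoid(h @ W2 + b2)                 # in (0, 1)
    y_hat = gamma * o                        # self-scaled prediction on the raw target scale
    err = y_hat - y

    # Backward pass for mean squared error; gamma gets its own gradient.
    d_gamma = np.mean(err * o, axis=0)
    d_o = err * gamma * o * (1.0 - o)        # gradient at the output pre-activation
    dW2 = h.T @ d_o / len(X)
    db2 = d_o.mean(axis=0)
    d_h = (d_o @ W2.T) * h * (1.0 - h)       # gradient at the hidden pre-activation
    dW1 = X.T @ d_h / len(X)
    db1 = d_h.mean(axis=0)

    gamma -= lr * d_gamma
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The point of the sketch is only the contrast it illustrates: the unscaled targets are fitted directly because the output gain adapts during training, whereas standard back-propagation would first map the targets into the activation function’s active range and then invert that mapping afterwards.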
