Visualizing the learning process for neural networks

In this paper we present visualization techniques that assist in understanding the iterative process of learning algorithms for neural networks. In the case of perceptron learning, we show that the algorithm can be visualized as a search on the surface of what we call the Boolean sphere. In the case of backpropagation, we show that the iteration path is not just random noise, but that under certain circumstances it exhibits an interesting structure. Examples of on-line and off-line backpropagation iteration paths show that they are fractals.
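To make the first idea concrete, here is a minimal sketch of how a perceptron's iteration path can be recorded as points on the unit sphere: since the perceptron's decision only depends on the direction of the weight vector, each update can be normalized and viewed as a step on a sphere. This is an illustrative reconstruction, not the paper's actual visualization code; the function name `perceptron_path` and the choice of the Boolean AND function as training data are assumptions made for the example.

```python
import numpy as np

def perceptron_path(X, y, lr=1.0, max_epochs=100):
    """Run perceptron learning and record the normalized weight vector
    after each update. Because only the direction of w matters for the
    decision boundary, the recorded points trace a path on the unit
    sphere (a simple stand-in for the 'Boolean sphere' picture)."""
    w = np.zeros(X.shape[1])
    path = []
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if w @ xi >= 0 else 0
            if pred != yi:
                w = w + lr * (yi - pred) * xi  # standard perceptron update
                errors += 1
                n = np.linalg.norm(w)
                if n > 0:
                    path.append(w / n)  # projected onto the unit sphere
        if errors == 0:  # converged: all training points classified correctly
            break
    return w, path

# Example: learn Boolean AND, with a constant bias input as last component.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, path = perceptron_path(X, y)
```

The returned `path` can then be handed to any 3-D plotting tool to display the search trajectory on the sphere's surface.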
