An analytical framework for optimizing neural networks

There has recently been much research interest in the use of feedback neural networks to solve combinatorial optimization problems. Although initial results were disappointing, it has since been demonstrated that modified network dynamics and better problem mappings can greatly improve solution quality. The aim of this paper is to build on this progress by presenting a new analytical framework in which problem mappings can be evaluated without recourse to purely experimental means. A linearized analysis of the Hopfield network's dynamics forms the main theory of the paper, followed by a series of experiments in which several problem mappings are investigated in the context of these dynamics. The experimental results are compatible with the linearized theory, and the observed weaknesses in the mappings are fully explained within the framework. What emerges is a largely analytical technique for evaluating candidate problem mappings, replacing the more usual trial and error.
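The linearized analysis at the heart of the paper can be illustrated with a minimal numerical sketch. The details below are assumptions for illustration only, not the paper's actual formulation: we take continuous Hopfield dynamics of the (standard) form du/dt = -u + W g(u) + b with gain g(u) = tanh(βu), iterate to a fixed point, and examine the eigenvalues of the Jacobian J = -I + W diag(g'(u*)) there, which govern the local behaviour a linearized analysis studies.

```python
import numpy as np

# Assumed continuous Hopfield dynamics (standard form, not necessarily
# the paper's exact notation):
#     du/dt = -u + W g(u) + b,   g(u) = tanh(beta * u)
# Linearizing about a fixed point u* gives the Jacobian
#     J = -I + W diag(g'(u*)),
# whose eigenvalues determine the local dynamics near u*.

beta = 1.0
g = lambda u: np.tanh(beta * u)
g_prime = lambda u: beta * (1.0 - np.tanh(beta * u) ** 2)

rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))
W = 0.5 * (W + W.T)          # symmetric weights, as in Hopfield's model
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.normal(size=n)

# Euler-integrate the dynamics to (numerical) convergence; with symmetric
# weights the network has a Lyapunov function, so it settles to a fixed point.
u = np.zeros(n)
dt = 0.05
for _ in range(20000):
    u += dt * (-u + W @ g(u) + b)

# Jacobian of the dynamics at the fixed point, and its eigenvalues.
J = -np.eye(n) + W @ np.diag(g_prime(u))
eigvals = np.linalg.eigvals(J)
print("fixed-point residual:", np.linalg.norm(-u + W @ g(u) + b))
print("Jacobian eigenvalues:", eigvals)
```

In the paper's setting the same idea is applied to the network as mapped from a combinatorial problem, so that candidate mappings can be compared through the spectrum of the linearized dynamics rather than by running the network experimentally.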
