Neural Networks With Real Weights: Analog Computational Complexity

We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. Under polynomial-time constraints, however, there are limits on their capabilities, though they remain more powerful than Turing machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in previous work [17].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower-bound constraints on what they can compute. This relationship is perhaps surprising, since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve NP-hard problems in polynomial time, as the equality "P = NP" in our model implies the almost complete collapse of the standard polynomial hierarchy. In contrast to classical computational models, the models studied here exhibit at least some robustness with respect to noise and implementation errors.
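To make the kind of fixed-structure dynamical system described above concrete, the sketch below iterates a recurrent network whose number of neurons and real-valued weights never change with the input; the saturated-linear activation and the specific update rule x(t+1) = sigma(W x(t) + V u(t) + b) are illustrative assumptions, not details taken from this abstract.

```python
import numpy as np

def saturated_linear(z):
    """Piecewise-linear activation: clamp each coordinate to [0, 1]."""
    return np.clip(z, 0.0, 1.0)

def run_net(W, V, b, inputs, steps):
    """
    Iterate a fixed-size recurrent net with real-valued weights.
    (Illustrative update rule, assumed for this sketch.)

    W : (n, n) recurrent weight matrix (real entries)
    V : (n, m) input weight matrix
    b : (n,)   bias vector
    inputs : list of (m,) input vectors, one per time step
    steps  : number of update steps to run
    """
    n = W.shape[0]
    x = np.zeros(n)                                  # state of the n "neurons"
    for t in range(steps):
        u = inputs[t] if t < len(inputs) else np.zeros(V.shape[1])
        x = saturated_linear(W @ x + V @ u + b)      # x(t+1) = sigma(W x + V u + b)
    return x

# Tiny usage example: 3 neurons, 1 input line, real-valued weights.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(3, 3))
V = rng.uniform(-1, 1, size=(3, 1))
b = rng.uniform(-1, 1, size=3)
print(run_net(W, V, b, [np.array([1.0]), np.array([0.0])], steps=5))
```

The point mirrored by this sketch is that the architecture and weights are independent of the input length; only the number of update steps grows with the allotted computation time.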
