Fixed-weight networks can learn

A theorem is proved describing how fixed-weight recurrent neural networks can approximate adaptive-weight learning algorithms. The theorem applies to most networks and learning algorithms currently in use. It is concluded from the theorem that a system which exhibits learning behavior may nevertheless exhibit no synaptic weight modifications. This idea is demonstrated by transforming a backward error propagation network into a fixed-weight system.
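To make the central idea concrete, the following is a minimal sketch, not the paper's construction: a system whose parameters never change, but whose recurrent state carries the quantities that an adaptive-weight learner would store in its synapses. The function name `fixed_update`, the delta-rule inner learner, and the learning rate are illustrative assumptions introduced here, not taken from the abstract.

```python
import numpy as np

def fixed_update(state, x, y, lr=0.1):
    """Fixed (unchanging) state-transition map.

    `state` plays the role of an inner linear model's weight vector,
    but it is carried as activations, not as modifiable synapses.
    """
    prediction = state @ x           # inner model's output
    error = y - prediction           # supervised error signal
    return state + lr * error * x    # delta-rule step, computed by fixed dynamics

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # target mapping to be learned online
state = np.zeros(3)                  # initial "virtual weights" held in the state

for step in range(200):
    x = rng.normal(size=3)
    y = true_w @ x
    state = fixed_update(state, x, y)  # the same fixed map is applied every step

print("recovered virtual weights:", np.round(state, 3))
```

In this sketch the mapping applied at each step is identical throughout; the adaptation visible in the output arises entirely from the evolution of the state, which is the sense in which a fixed-weight system can exhibit learning behavior.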