Perceptron learning with sign-constrained weights

The authors study neural network models in which the synaptic efficacies are restricted to have a prescribed set of signs. It is proved that such networks can learn a set of random patterns by a perceptron-like algorithm that respects the sign constraints at every step. In particular, it is shown that learning can take place iteratively in a network obeying Dale's rule, i.e. one in which each neuron is exclusively excitatory or inhibitory. The learning algorithm and its convergence theorem are stated in perceptron language, and the algorithm is proved to converge under the same conditions as required for an unconstrained perceptron. Numerical experiments show that these conditions can actually be met for relatively large sets of patterns. The authors then argue that, owing to a gauge invariance for random patterns, the results do not depend on the particular distribution of the signs. As a consequence, the same sets of random patterns can be learned by networks with any fixed distribution of synaptic signs, ranging from fully inhibitory to fully excitatory.
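
The abstract does not spell out the update rule itself. The following Python sketch illustrates one plausible form of a sign-constrained perceptron rule, assuming the constraint is enforced by clipping to zero any weight whose prescribed sign would be violated after a standard perceptron step; the function name, the learning rate `eta`, and the clipping projection are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_sign_constrained_perceptron(patterns, labels, signs, eta=1.0, max_epochs=1000):
    """Perceptron-like learning with sign-constrained weights (illustrative sketch).

    `signs` is a vector of +1/-1 giving the prescribed sign of each synapse.
    After every update, any weight that disagrees with its prescribed sign is
    clipped to zero, so the constraint holds at every step.
    """
    n_patterns, n_inputs = patterns.shape
    w = np.zeros(n_inputs)                        # start from an admissible point

    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(patterns, labels):
            if yi * np.dot(w, xi) <= 0:           # pattern not yet stabilised
                w += eta * yi * xi                # standard perceptron step
                # project back onto the admissible region: zero out weights
                # whose sign disagrees with the prescription
                w[np.sign(w) == -signs] = 0.0
                errors += 1
        if errors == 0:                           # all patterns learned
            break
    return w

# Example usage with random +/-1 patterns and a random sign prescription
rng = np.random.default_rng(0)
N, P = 100, 20
patterns = rng.choice([-1.0, 1.0], size=(P, N))
labels = rng.choice([-1.0, 1.0], size=P)
signs = rng.choice([-1.0, 1.0], size=N)           # e.g. all +1 for a fully excitatory network
w = train_sign_constrained_perceptron(patterns, labels, signs)
assert np.all(w * signs >= 0)                     # constraint respected throughout
```

Because the patterns are random, flipping the prescribed sign of a synapse can be absorbed by flipping the corresponding input component (the gauge invariance mentioned above), which is why the choice of sign distribution in `signs` should not affect learnability in this setting.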