A Self-Organizing Network for Principal-Component Analysis

We present a two-layered network of linear neurons that organizes itself in response to a set of presented patterns. After completion of the learning process, the net transforms the complete information contained in a pattern into mutually independent features. The synaptic weights between layers obey a Hebbian learning rule. We propose a local anti-Hebbian rule for lateral, hierarchically organized weights within the output layer. For a proper choice of the learning parameters, the rule forces the activities of the output units to become uncorrelated and the lateral weights to vanish. The weights between the two layers converge to the eigenvectors of the covariance matrix of the input patterns, i.e., the network performs a principal-component analysis of the input information.
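The following NumPy sketch illustrates the kind of dynamics described above. It is an assumption-laden illustration, not the paper's exact algorithm: the concrete update equations, the learning rates eta and mu, the unit-norm weight normalization, and the synthetic training data are all choices made here for demonstration. It does implement the qualitative scheme from the abstract: Hebbian updates for the feedforward weights W, and a local anti-Hebbian rule for strictly lower-triangular (hierarchical) lateral weights U within the output layer.

```python
# Illustrative sketch of a two-layer linear network with Hebbian feedforward
# learning and anti-Hebbian hierarchical lateral learning. All parameter
# values and the normalization step are assumptions for this demo.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 3        # layer sizes (illustrative)
eta, mu = 0.01, 0.05      # Hebbian / anti-Hebbian rates (assumed; mu > eta)

W = rng.normal(size=(n_out, n_in)) * 0.1  # feedforward weights
U = np.zeros((n_out, n_out))              # lateral weights u_ij, used for j < i

# Zero-mean training patterns with an anisotropic covariance (synthetic data).
C = np.diag([5.0, 3.0, 1.0] + [0.1] * (n_in - 3))
X = rng.multivariate_normal(np.zeros(n_in), C, size=20000)

for x in X:
    # Output activities: feedforward drive plus hierarchical lateral input;
    # unit i only receives lateral input from units j < i.
    y = np.empty(n_out)
    for i in range(n_out):
        y[i] = W[i] @ x + U[i, :i] @ y[:i]

    # Hebbian update of the feedforward weights, kept bounded by
    # renormalizing each weight vector to unit length (an assumed scheme).
    W += eta * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    # Local anti-Hebbian update of the lateral weights: correlated output
    # activities push u_ij away from zero until the correlations, and with
    # them the updates, vanish.
    for i in range(1, n_out):
        U[i, :i] -= mu * y[i] * y[:i]

# After learning, the rows of W should approximate the leading eigenvectors
# of the input covariance matrix, the output activities should be
# uncorrelated, and U should be close to zero.
Y = np.array([W @ x for x in X[-2000:]])
print("output covariance (approximately diagonal):\n", np.round(np.cov(Y.T), 2))
print("max |lateral weight|:", np.abs(U).max())
```

Under these assumptions, the anti-Hebbian rate mu is chosen larger than the Hebbian rate eta, mirroring the abstract's remark that a proper choice of learning parameters is needed for the output activities to decorrelate and the lateral weights to decay to zero.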