We present a two-layered network of linear neurons that organizes itself so as to extract the complete information contained in a set of presented patterns. The weights between layers obey a Hebbian rule. We propose a local anti-Hebbian rule for the lateral, hierarchically organized weights within the output layer. This rule forces the activities of the output units to become uncorrelated and the lateral weights to vanish. The weights between layers converge to the eigenvectors of the covariance matrix of the input patterns, i.e., the network performs a principal component analysis, yielding all principal components. As a consequence of the proposed learning scheme, the output units become detectors of orthogonal features, similar to those found in the brains of mammals.
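A minimal numerical sketch of the described scheme, in NumPy. The hierarchical lateral connections are taken to be strictly lower-triangular (unit i receives the activities of units j < i), the Hebbian interlayer update is stabilized by explicit weight normalization, and the learning rates and input distribution are illustrative assumptions, not details from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean input patterns with a known covariance structure
# (assumption: 2-D Gaussian inputs; the scheme applies in any dimension).
n_in, n_out, n_samples = 2, 2, 6000
A = np.array([[3.0, 0.0], [1.0, 1.0]])
X = rng.standard_normal((n_samples, n_in)) @ A.T
X -= X.mean(axis=0)

W = 0.1 * rng.standard_normal((n_out, n_in))  # interlayer (feedforward) weights
U = np.zeros((n_out, n_out))                  # lateral weights, strictly lower triangular

eta, mu = 0.01, 0.02  # learning rates (illustrative values)

for x in X:
    # Hierarchical output: unit i sees the input plus the activities
    # of the units below it in the hierarchy.
    y = np.zeros(n_out)
    for i in range(n_out):
        y[i] = W[i] @ x + U[i, :i] @ y[:i]
    # Hebbian rule for the interlayer weights, with normalization to
    # keep |w_i| = 1 (one common way to bound plain Hebbian growth).
    W += eta * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    # Anti-Hebbian rule for the lateral weights: each weight decreases
    # with the correlation of the two activities it connects.
    for i in range(1, n_out):
        U[i, :i] -= mu * y[i] * y[:i]

# At convergence the rows of W should align with the eigenvectors of the
# input covariance matrix and the lateral weights should be near zero.
C = np.cov(X.T)
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1]  # eigenvectors sorted by decreasing eigenvalue
for i in range(n_out):
    print(abs(W[i] @ top[:, i]))  # alignment with the i-th principal component
print(np.abs(U).max())            # residual lateral weight magnitude
```

In this sketch the anti-Hebbian lateral updates decorrelate the outputs: once unit i's activity is uncorrelated with every earlier unit's, its Hebbian feedforward update can only grow along the next unclaimed principal direction, which is why the rows of W recover all principal components rather than collapsing onto the first one.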