LOCAL LEARNING RULES AND SPARSE CODING IN NEURAL NETWORKS

In technical applications of associative neural network memories, the following choices are optimal when the neuron output is binary:

- the neuron output should be 0 or 1
- the local learning rule should be the Hebb rule
- the pattern vectors should be sparse, i.e. most components should be zero

If these conditions are fulfilled, one can achieve an asymptotic memory capacity of 1/(2 ln 2) ≈ 72% with a real-valued memory matrix, and of ln 2 ≈ 69% with a binary memory matrix.
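The combination above (binary 0/1 outputs, Hebbian learning, sparse patterns, binary memory matrix) corresponds to the classical Willshaw-type associative memory. The following is a minimal sketch of this scheme; the pattern sizes, variable names, and the one-step threshold retrieval are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256           # number of neurons
k = 8             # active units per pattern (sparse: k << n)
num_patterns = 50

# Sparse binary pattern vectors with exactly k ones each
patterns = np.zeros((num_patterns, n), dtype=np.uint8)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

# Binary Hebbian (clipped) learning: a synapse W[i, j] is set to 1
# if units i and j were ever co-active in a stored pattern
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)

def recall(cue):
    # One-step retrieval: threshold the dendritic sums at the
    # number of active units in the cue
    s = W @ cue.astype(np.int64)
    return (s >= cue.sum()).astype(np.uint8)

# Retrieve a stored pattern from a partial cue
# (half of its active units deleted)
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[: k // 2]] = 0
out = recall(cue)
```

Because every pair of units within a stored pattern is connected, all active units of the original pattern reach the threshold and are recovered; with sparse patterns the chance of spurious extra units stays small as long as the memory matrix is not overloaded.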