Networks based on Kanerva's sparse distributed memory: results showing their strengths and limitations and a new algorithm to design the location matching layer

Kanerva's sparse distributed memory model consists of a fixed nonlinear mapping, called the location mapping, followed by a single layer of adaptive dot-product links with step thresholds. Networks of this type are tested on three tasks in order to identify the circumstances under which they provide an efficient solution. The networks are most competitive when the dimensionality of the input patterns is fairly low. A location pruning technique is reported that improves the design of the location matching mappings. The resulting network is tested extensively on large pattern classification tasks to demonstrate the benefits of the algorithm. The experiments show that the main benefit of location matching networks is their ability, using the location pruning algorithm, to train in roughly 1/2 to 1/10 of the training iterations required by single- or double-layer adaptive networks on the same tasks.
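The two-stage structure described above (a fixed location-matching layer followed by a single adaptive layer) can be sketched as follows. This is a minimal illustration of the standard sparse distributed memory construction, not the specific networks or the pruning algorithm studied in the paper; the class name, the Hamming-radius activation rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    """Minimal sparse distributed memory sketch (illustrative, not the
    paper's exact networks).

    Stage 1 (fixed, nonlinear): activate every hard location whose random
    binary address lies within a Hamming radius of the input pattern.
    Stage 2 (adaptive): a single layer of counters, updated at the active
    locations and read out through a step threshold at zero.
    """

    def __init__(self, n_locations, address_dim, data_dim, radius):
        # Fixed (non-adaptive) location mapping: random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, address_dim))
        # Adaptive layer: integer counters, one row per hard location.
        self.counters = np.zeros((n_locations, data_dim), dtype=int)
        self.radius = radius

    def _active(self, x):
        # Boolean mask of locations within the Hamming radius of x.
        dist = np.count_nonzero(self.addresses != x, axis=1)
        return dist <= self.radius

    def write(self, x, y):
        # Push counters at active locations toward the bipolar form of y.
        self.counters[self._active(x)] += 2 * y - 1

    def read(self, x):
        # Dot-product readout: sum counters over active locations,
        # then apply a step threshold at zero.
        s = self.counters[self._active(x)].sum(axis=0)
        return (s > 0).astype(int)
```

With a single stored pattern, reading back at the same address recovers the stored data word, since every active location's counters agree in sign with the bipolar target:

```python
m = SDM(n_locations=200, address_dim=10, data_dim=8, radius=3)
x = rng.integers(0, 2, 10)   # address pattern
y = rng.integers(0, 2, 8)    # data word to store
m.write(x, y)
recovered = m.read(x)        # equals y for a single noiseless write
```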