Nerve net models are designed that can learn up to γ of αⁿ possible sequences of stimuli, where γ is large but αⁿ much larger still. The models proposed store information in modifiable synapses. Their connexions need to be specified only in a general way, a large part being random. They resist destruction of a good many cells. When built with Hebb synapses (or any other class B or C synapses whose modification depends on the conjunction of activities in two cells) they demand a number of inputs to each cell that agrees well with known anatomy. The number of cells required, for performing tasks of the kind considered as well as the human brain can perform them, is only a small fraction of the number of cells in the brain. It is suggested that the models proposed are likely to be the most economical possible for their tasks, components and constructional constraints, and that any others that approach them in economy must share with them certain observable features, in particular an abundance of cells with many independent inputs and low thresholds.