A Self-Organizing Neural Network Model of the Primary Visual Cortex

This work models and analyzes the computational processes by which sensory information is learned and represented in the brain. First, a general self-organizing neural network architecture that forms efficient representations of visual inputs is presented. Two kinds of visual knowledge are stored in the cortical network: information about the principal feature dimensions of the visual world (such as line orientation and ocularity) is stored in the afferent connections, and correlations between these features are stored in the lateral connections. During visual processing, the cortical network filters out these correlations, generating a redundancy-reduced, sparse coding of the visual input. Through massively parallel computational simulations, this architecture is shown to give rise to structures similar to those in the primary visual cortex, such as (1) receptive fields, (2) topographic maps, (3) ocular dominance, orientation, and size-preference columns, and (4) patterned lateral connections between neurons. The same computational process is shown to account for many of the dynamic processes in the visual cortex, such as reorganization following retinal and cortical lesions, and perceptual shifts following dynamic receptive-field changes. These results suggest that a single self-organizing process underlies development, plasticity, and visual function in the primary visual cortex.
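The core loop described above (afferent response, recurrent lateral settling, then Hebbian adaptation of both connection types) can be sketched in a few lines. This is a minimal illustration, not the model's actual implementation: the network sizes, learning rates, random toy inputs, and the use of a single inhibitory lateral pool (rather than separate excitatory and inhibitory lateral connections) are all simplifying assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_map = 9, 16  # toy retina size and flattened cortical sheet (assumed)

# Afferent weights (retina -> map) and lateral weights (map -> map),
# initialized randomly and normalized so each neuron's weights sum to 1.
A = rng.random((n_map, n_in))
A /= A.sum(axis=1, keepdims=True)
L = rng.random((n_map, n_map))
np.fill_diagonal(L, 0.0)  # no self-connections
L /= L.sum(axis=1, keepdims=True)

def settle(x, steps=5, gamma_a=1.0, gamma_l=0.5):
    """Initial afferent response, then recurrent settling in which lateral
    input suppresses correlated activity, sparsifying the response."""
    eta = np.clip(gamma_a * (A @ x), 0.0, 1.0)
    for _ in range(steps):
        eta = np.clip(gamma_a * (A @ x) - gamma_l * (L @ eta), 0.0, 1.0)
    return eta

def hebb_update(W, pre, post, alpha):
    """Normalized Hebbian rule: strengthen connections between co-active
    units, then renormalize so total synaptic strength stays constant."""
    W += alpha * np.outer(post, pre)
    W /= W.sum(axis=1, keepdims=True)

# Self-organization: both afferent and lateral weights adapt to the
# activity patterns produced by the settling process itself.
for _ in range(200):
    x = rng.random(n_in)  # stand-in for a visual input pattern
    eta = settle(x)
    hebb_update(A, x, eta, alpha=0.1)    # learns input feature dimensions
    hebb_update(L, eta, eta, alpha=0.1)  # learns feature correlations
    np.fill_diagonal(L, 0.0)
```

Because the lateral term enters with a negative sign during settling, neurons whose inputs are strongly correlated inhibit one another, so the settled activity pattern is a decorrelated, sparse version of the raw afferent response.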