Subspace Locally Competitive Algorithms

We introduce subspace locally competitive algorithms (SLCAs), a family of novel network architectures for modeling latent representations of natural signals with group-sparse structure. SLCA first-layer neurons are derived from locally competitive algorithms, which produce responses and learn representations that are well matched to both the linear and non-linear properties observed in simple cells in layer 4 of primary visual cortex (area V1). SLCA incorporates a second layer of neurons that produce approximately invariant responses to signal variations that are linear within their corresponding subspaces, such as phase shifts, resembling a hallmark characteristic of complex cells in V1. We provide a practical analysis of training parameter settings, explore the features and invariances learned, and compare the model to single-layer sparse coding and to independent subspace analysis.
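
To make the two-layer architecture concrete, the sketch below illustrates one plausible inference loop: the first layer follows standard LCA membrane dynamics with lateral competition, sparsity is enforced by a block (group) soft threshold over contiguous subspaces of the dictionary, and the second-layer response is taken as the L2 norm of each subspace's coefficients. This is a minimal sketch under those assumptions; the function name slca_sketch, the contiguous grouping, and all parameter values are illustrative and not the paper's exact formulation.

```python
import numpy as np

def slca_sketch(x, Phi, group_size=2, lam=0.1, tau=100.0, dt=1.0, n_steps=200):
    """Illustrative subspace-LCA inference (assumed formulation, not the paper's exact rule).

    x:    input signal, shape (d,)
    Phi:  dictionary with unit-norm columns, shape (d, m); columns are assumed
          to be grouped into contiguous subspaces of size `group_size`.
    Returns first-layer coefficients `a` and second-layer subspace amplitudes `sigma`.
    """
    m = Phi.shape[1]
    n_groups = m // group_size
    b = Phi.T @ x                      # feed-forward drive
    G = Phi.T @ Phi - np.eye(m)        # lateral competition (inhibition) weights
    u = np.zeros(m)                    # membrane potentials
    a = np.zeros(m)                    # thresholded (active) coefficients

    for _ in range(n_steps):
        # Block soft threshold: shrink each subspace according to its L2 norm,
        # so units within a subspace become active or silent together.
        U = u.reshape(n_groups, group_size)
        norms = np.linalg.norm(U, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        a = (U * shrink).reshape(m)    # first-layer (simple-cell-like) responses
        # LCA dynamics: feed-forward drive minus leak minus competition from active units.
        u += (dt / tau) * (b - u - G @ a)

    # Second layer: subspace amplitude, approximately invariant to rotations
    # (e.g. phase shifts) within each learned subspace.
    sigma = np.linalg.norm(a.reshape(n_groups, group_size), axis=1)
    return a, sigma
```

In this sketch the invariance arises because the second-layer readout depends only on the norm of each subspace's coefficient vector, not on the direction within the subspace.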
