Local Unsupervised Learning for Image Analysis

Local Hebbian learning is widely believed to be inferior in performance to end-to-end training with the backpropagation algorithm. We question this belief by designing a local algorithm that can learn convolutional filters at scale on large image datasets. Combined with patch normalization and very steep non-linearities, these filters yield good classification accuracy for shallow networks trained locally rather than end-to-end. The filters learned by our algorithm contain both orientation-selective units and unoriented color units, resembling the responses of pyramidal neurons located in the cytochrome oxidase 'interblob' and 'blob' regions of the primate primary visual cortex. We show that convolutional networks with patch normalization significantly outperform standard convolutional networks at recovering the original classes when shadows are superimposed on standard CIFAR-10 images; patch normalization approximates the retinal adaptation to mean light intensity that is important for human vision. We also demonstrate successful transfer of learned representations between the CIFAR-10 and ImageNet 32x32 datasets. Taken together, these results suggest that local unsupervised training may be a powerful tool for learning general, task-agnostic representations directly from unlabeled data.
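To illustrate why patch normalization can confer robustness to superimposed shadows, here is a minimal sketch. The function name `patch_normalize` and the specific zero-mean, unit-norm form are our assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def patch_normalize(patches, eps=1e-8):
    """Subtract each patch's mean and rescale it to unit L2 norm.

    `patches` has shape (n_patches, patch_dim). This is a generic
    sketch: mean subtraction mimics adaptation to the local mean
    light intensity, and the rescaling removes overall contrast,
    which together make the representation insensitive to a shadow
    that uniformly offsets or dims a patch.
    """
    p = patches - patches.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    return p / (norms + eps)

# A patch, the same patch under an additive "shadow" offset, and the
# same patch uniformly dimmed all map to (nearly) the same
# normalized representation.
rng = np.random.default_rng(0)
x = rng.random((1, 27))  # one flattened 3x3 RGB patch
```

Under this form, `patch_normalize(x + c)` and `patch_normalize(a * x)` for any constant offset `c` and positive scale `a` agree with `patch_normalize(x)` up to the `eps` regularizer, which is the kind of invariance the shadow experiments in the abstract rely on.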
