Probing a Deep Neural Network

We report a number of experiments on a deep convolutional network aimed at better understanding the transformations that emerge from learning at its various layers. We analyze the backward flow and the reconstructed images using an adaptive masking approach, in which the pooling operations and nonlinearities at each layer are represented by data-dependent binary masks. We focus on the field of view of specific neurons, also using random parameters, in order to understand the nature of the information that flows through the activation "holes" that emerge in the multi-layer structure when an image is presented at the input. We show that the peculiarity of the multi-layer structure lies not so much in the learned parameters as in the patterns of connectivity, which are partly imposed and partly learned. Furthermore, a deep network appears to focus more on statistics, such as gradient-like transformations, than on filters matched to image patterns. Our probes may explain why classical image processing algorithms, such as the well-known SIFT, have provided robust, although limited, solutions to image recognition tasks.
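The masking idea described above can be illustrated with a minimal NumPy sketch (this is our own toy construction, not the authors' implementation; the function names and the toy feature map are ours): a ReLU and a max-pooling layer each leave behind a data-dependent binary mask recorded during the forward pass, and a backward signal is then routed only through the "holes" those masks leave open.

```python
import numpy as np

def relu_mask(x):
    """Data-dependent binary mask for a ReLU: 1 where the unit fired."""
    return (x > 0).astype(x.dtype)

def maxpool_mask(x, k=2):
    """Binary mask marking, within each k x k pooling window, the winning
    (argmax) location. x is a 2-D feature map with sides divisible by k."""
    h, w = x.shape
    mask = np.zeros_like(x)
    for i in range(0, h, k):
        for j in range(0, w, k):
            win = x[i:i + k, j:j + k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            mask[i + r, j + c] = 1.0
    return mask

# Forward pass on a toy feature map: record the masks.
x = np.array([[ 1., -2.,  3.,  0.],
              [-1.,  4., -3.,  2.],
              [ 0.,  5.,  1., -6.],
              [ 2., -1.,  0.,  7.]])
m_relu = relu_mask(x)
m_pool = maxpool_mask(x * m_relu)

# Backward flow: a signal from higher layers passes only through the
# composition of the recorded masks (the "holes").
signal = np.ones_like(x)
reconstructed = signal * m_pool * m_relu
```

In this picture, the binary masks fully determine which spatial locations of the input contribute to the reconstruction; the learned weights only scale what passes through, which is consistent with the claim that the connectivity pattern, rather than the parameter values, is the distinctive ingredient.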
