Empirical Explorations in Training Networks with Discrete Activations

We present extensive experiments on training and testing deep networks whose hidden units emit only a predefined, static number of discretized values. Such units are beneficial for real-world deployment on systems where memory and/or computation are limited, and they are particularly well suited to large recurrent models that must maintain large amounts of internal state in memory. Surprisingly, we find that reducing the number of representable output activation values from the $2^{32}$-$2^{64}$ available to standard floating-point representations down to between 64 and 256 causes little to no degradation in network performance across a variety of settings. We investigate simple classification and regression tasks, as well as memorization and compression problems, and compare the results against more standard activations such as tanh and ReLU. Unlike previous discretization studies, which often concentrate exclusively on binary units, we examine the effect of varying the number of allowed activation levels. Compared to existing discretization approaches, the approach presented here is conceptually and programmatically simple, has no stochastic component, and treats the training, testing, and deployment phases in exactly the same manner.
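
As a minimal illustrative sketch (not the paper's implementation), the NumPy code below shows one way such a unit could be realized: the forward pass squashes the pre-activation with tanh and snaps the result to one of a fixed number of evenly spaced levels, and a straight-through-style backward pass passes the gradient through the smooth tanh while ignoring the rounding step. The function names, the choice of tanh as the underlying squashing function, and the straight-through gradient are assumptions for illustration; the abstract does not specify how gradients are handled.

import numpy as np

def discretize(x, levels=64, lo=-1.0, hi=1.0):
    """Hypothetical forward pass: squash with tanh, then snap to the
    nearest of `levels` evenly spaced values in [lo, hi]."""
    y = np.tanh(x)                                # squash into (-1, 1)
    step = (hi - lo) / (levels - 1)               # spacing of the fixed grid
    return np.round((y - lo) / step) * step + lo  # snap to nearest level

def discretize_grad(x, grad_out):
    """Assumed straight-through-style backward pass: propagate the upstream
    gradient through tanh and ignore the non-differentiable rounding."""
    return grad_out * (1.0 - np.tanh(x) ** 2)     # d tanh(x) / dx

# Example: with 256 levels each activation fits in one byte of state,
# yet the output closely tracks an ordinary tanh unit.
x = np.linspace(-3.0, 3.0, 7)
print(discretize(x, levels=256))

With 64 to 256 levels the grid spacing is fine enough that, as reported in the abstract, the quantized outputs behave much like their continuous counterparts while requiring only 6 to 8 bits per unit of stored state.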
