A new way to learn acoustic events

Most speech recognition systems still use Mel-frequency cepstral coefficients (MFCCs) or perceptual linear prediction (PLP) coefficients because these representations preserve much of the information required for recognition while being far more compact than a high-resolution spectrogram. As computers get faster and methods of modeling high-dimensional data improve, however, high-resolution spectrograms and other very high-dimensional representations of the sound wave become more attractive, and they have already surpassed MFCCs on some tasks [1]. Psychologists have argued that recognition would be facilitated by finding acoustic events or landmarks that, in addition to being present or absent, have well-defined onset times, amplitudes, and rates. We introduce a new way of learning such acoustic events using a new type of autoencoder that is given both a spectrogram and a desired global transformation and learns to output the transformed spectrogram. By specifying the global transformation appropriately, we can force the autoencoder to extract acoustic events that, in addition to a probability of being present, have explicit onset times, amplitudes, and rates. This makes it much easier to compute relationships between acoustic events.
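The abstract does not spell out an architecture, but the idea follows the transforming autoencoder of [12]: each capsule recognises one event and outputs a presence probability plus explicit instantiation parameters (onset time, amplitude, rate), the desired global transformation is applied directly to those parameters, and a generative part must then reproduce the transformed spectrogram. A minimal sketch in PyTorch, assuming a pure time-shift transformation and illustrative sizes (256-dimensional spectrogram patches, 30 capsules), might look like the following; the class names, layer sizes, and the convention of adding the shift to the onset parameter are our assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcousticEventCapsule(nn.Module):
    """One capsule: detects a putative acoustic event in a spectrogram patch
    and re-generates its contribution after a known global time shift is
    added to the event's learned onset time. (Illustrative sketch only.)"""
    def __init__(self, n_input, n_hidden):
        super().__init__()
        self.recognise = nn.Sequential(nn.Linear(n_input, n_hidden), nn.ReLU())
        self.onset = nn.Linear(n_hidden, 1)      # explicit onset time
        self.amplitude = nn.Linear(n_hidden, 1)  # explicit amplitude
        self.rate = nn.Linear(n_hidden, 1)       # explicit rate
        self.presence = nn.Linear(n_hidden, 1)   # logit for "event is present"
        self.generate = nn.Sequential(nn.Linear(3, n_hidden), nn.ReLU(),
                                      nn.Linear(n_hidden, n_input))

    def forward(self, spec, shift):
        h = self.recognise(spec)
        p = torch.sigmoid(self.presence(h))
        # The desired global transformation acts directly on the explicit
        # instantiation parameters; here it only shifts the onset time.
        params = torch.cat([self.onset(h) + shift,
                            self.amplitude(h),
                            self.rate(h)], dim=1)
        return p * self.generate(params)

class TransformingAutoencoder(nn.Module):
    def __init__(self, n_input=256, n_capsules=30, n_hidden=20):
        super().__init__()
        self.capsules = nn.ModuleList(
            [AcousticEventCapsule(n_input, n_hidden) for _ in range(n_capsules)])

    def forward(self, spec, shift):
        # The reconstruction of the shifted spectrogram is the sum of the
        # capsules' presence-weighted contributions.
        return sum(c(spec, shift) for c in self.capsules)

# One illustrative training step on random data: the target is the input
# spectrogram rolled along the time axis by a known number of frames.
model = TransformingAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
spec = torch.rand(8, 256)               # flattened spectrogram patches
frames = torch.randint(-3, 4, (8,))     # known global time shifts, in frames
target = torch.stack([torch.roll(s, int(k)) for s, k in zip(spec, frames)])
optimiser.zero_grad()
loss = F.mse_loss(model(spec, frames.float().unsqueeze(1)), target)
loss.backward()
optimiser.step()
```

Because the network can only reconstruct the shifted spectrogram by routing the known time shift through the onset outputs, those outputs are forced to behave like genuine onset times, which is what makes relationships between acoustic events easy to compute afterwards.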

References

[1] L. Chistovich et al., "The 'center of gravity' effect in vowel spectra and critical distance between the formants: Psychoacoustical study of the perception of vowel-like stimuli," Hearing Research, 1979.

[2] B. Delgutte et al., "Speech coding in the auditory nerve: IV. Sounds with consonant-like dynamic characteristics," The Journal of the Acoustical Society of America, 1984.

[3] G. N. Clements, "The geometry of phonological features," Phonology Yearbook, 1985.

[4] G. E. Hinton et al., "Distributed Representations," in The Philosophy of Artificial Intelligence, 1986.

[5] Y. LeCun et al., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 1998.

[6] H.-Y. Jung et al., "Speech feature extraction using independent component analysis," in Proc. IEEE ICASSP, 2000.

[7] M. S. Lewicki, "Efficient coding of natural sounds," Nature Neuroscience, 2002.

[8] G. E. Hinton and R. Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks," Science, 2006.

[9] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A Fast Learning Algorithm for Deep Belief Nets," Neural Computation, 2006.

[10] P. Vincent et al., "Extracting and composing robust features with denoising autoencoders," in Proc. ICML, 2008.

[11] G. E. Dahl et al., "Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine," in Proc. NIPS, 2010.

[12] G. E. Hinton, A. Krizhevsky, and S. D. Wang, "Transforming Auto-Encoders," in Proc. ICANN, 2011.

[13] N. Jaitly and G. E. Hinton, "Learning a better representation of speech soundwaves using restricted Boltzmann machines," in Proc. IEEE ICASSP, 2011.

[14] A. Jansen, "Whole word discriminative point process models," in Proc. IEEE ICASSP, 2011.