Simultaneous Learning of Trees and Representations for Extreme Classification, with Application to Language Modeling

This paper addresses the problem of multi-class classification with an extremely large number of classes, where the class predictor is learned jointly with the data representation, as is the case in language modeling problems. The predictor admits a hierarchical structure, which allows for efficient handling of settings with a very large number of labels. The predictive power of the model, however, can depend heavily on the structure of the tree. We address this problem with an algorithm for tree construction and training that is based on a new objective function favoring balanced and easily-separable node partitions. We describe theoretical properties of this objective and show that it gives rise to a boosting algorithm for which we provide a bound on classification error; that is, if the objective is weakly optimized in the internal nodes of the tree, our algorithm amplifies this weak advantage to build a tree achieving any desired level of accuracy. We apply the algorithm to the task of language modeling by re-framing conditional density estimation as a variant of the hierarchical classification problem. We demonstrate empirically on text data that the proposed approach yields high-quality trees, in terms of both perplexity and computational running time, compared to its non-hierarchical counterpart.
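As an illustration of the kind of node-splitting criterion described above (a sketch only, not necessarily the paper's exact formulation), a candidate binary partition at an internal node can be scored by how balanced it is (roughly half of the probability mass routed to each child) and how well it separates classes (each class routed almost entirely to one child). The function name and the particular weighting below are assumptions made for this sketch.

```python
import numpy as np

def split_score(class_priors, p_left_given_class):
    """Score a candidate binary split of an internal tree node.

    class_priors       : shape (K,), prior probability of each class reaching
                         this node (sums to 1).
    p_left_given_class : shape (K,), probability that an example of each class
                         is sent to the left child by the node's splitter.

    Returns a value in [0, 1]; it is largest when the split is both balanced
    (about half the mass goes left) and pure (each class goes almost entirely
    to one side). Illustrative objective in the spirit of the balanced,
    easily-separable criterion described in the abstract, not the paper's
    exact definition.
    """
    p_left = np.dot(class_priors, p_left_given_class)  # overall mass sent left
    return 2.0 * np.dot(class_priors, np.abs(p_left_given_class - p_left))

# Toy example with 4 equiprobable classes: classes 0,1 mostly go left,
# classes 2,3 mostly go right.
priors = np.array([0.25, 0.25, 0.25, 0.25])
good_split = np.array([0.95, 0.90, 0.05, 0.10])  # balanced and well separated
bad_split = np.array([0.55, 0.50, 0.45, 0.50])   # neither balanced nor pure
print(split_score(priors, good_split))  # ~0.85, near the maximum of 1.0
print(split_score(priors, bad_split))   # ~0.05, near the minimum of 0.0
```

In the language modeling application, each word corresponds to a leaf of such a tree, and a word's conditional probability is the product of the binary decisions along its root-to-leaf path, so evaluating one word costs time proportional to the tree depth rather than to the full vocabulary size.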
