Decision Trees and Random Subwindows for Object Recognition

In this paper, we compare five tree-based machine learning methods within our recent generic image-classification framework based on random extraction and classification of subwindows. We evaluate them on three publicly available object-recognition datasets (COIL-100, ETH-80, and ZuBuD). Our comparison shows that this general and conceptually simple framework yields good results when combined with ensembles of decision trees, especially when using Tree Boosting or Extra-Trees. The latter is particularly attractive in terms of computational efficiency.
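The framework described above can be sketched in a few lines: extract random fixed-size subwindows from each image, train a tree ensemble on the raw pixel vectors of those subwindows, and classify a new image by aggregating the votes of its own subwindows. This is a minimal illustration only; the window size, subwindow counts, and toy two-class data below are assumptions for the sketch, not parameters from the paper, and `ExtraTreesClassifier` from scikit-learn stands in for the Extra-Trees method.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
WIN = 8  # subwindow side length; a hypothetical choice for this sketch


def extract_subwindows(image, n_windows, rng):
    """Extract n_windows random WIN x WIN patches, each flattened to a vector."""
    h, w = image.shape
    patches = []
    for _ in range(n_windows):
        y = rng.integers(0, h - WIN + 1)
        x = rng.integers(0, w - WIN + 1)
        patches.append(image[y:y + WIN, x:x + WIN].ravel())
    return np.array(patches)


def make_image(label, rng):
    """Toy data: class 0 = dark images, class 1 = bright images."""
    base = 0.2 if label == 0 else 0.8
    return np.clip(base + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)


# Training set: every subwindow inherits the label of its source image.
train_X, train_y = [], []
for label in (0, 1):
    for _ in range(10):
        sub = extract_subwindows(make_image(label, rng), 20, rng)
        train_X.append(sub)
        train_y.extend([label] * len(sub))
train_X = np.vstack(train_X)

clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
clf.fit(train_X, train_y)


def classify_image(image, n_windows=20):
    """Aggregate per-subwindow predictions into one image-level label by majority vote."""
    votes = clf.predict(extract_subwindows(image, n_windows, rng))
    return int(np.bincount(votes).argmax())
```

The key design point mirrored here is that classification is per-subwindow, with a simple vote recombining the pieces, which makes the approach robust to partial occlusion and local appearance changes.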
