Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms

This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of type I error. A fourth test, McNemar's test, is shown to have low type I error. The fifth test is a new test, 5 × 2 cv, based on five iterations of twofold cross-validation. Experiments show that this test also has acceptable type I error. The article also measures the power (ability to detect algorithm differences when they do exist) of these tests. The cross-validated t test is the most powerful. The 5 × 2 cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable type I error. For algorithms that can be executed 10 times, the 5 × 2 cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set.
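Since the abstract only names the 5 × 2 cv procedure, here is a minimal illustrative Python sketch of the paired t test it describes: five random splits into two equal halves, each half used once for training and once for testing, and a t statistic with 5 degrees of freedom. The scikit-learn-style fit/score interface and the function name five_by_two_cv_t_test are assumptions for illustration, not part of the paper.

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split

def five_by_two_cv_t_test(clf_a, clf_b, X, y, seed=0):
    """Sketch of Dietterich's 5 x 2 cv paired t test: five iterations of
    twofold cross-validation, t statistic with 5 degrees of freedom."""
    rng = np.random.RandomState(seed)
    diffs = np.zeros((5, 2))  # error-rate differences p_i^(j)
    for i in range(5):
        # One random split into two equal halves per iteration.
        X1, X2, y1, y2 = train_test_split(
            X, y, test_size=0.5, random_state=rng.randint(2**31))
        for j, (Xtr, ytr, Xte, yte) in enumerate(
                [(X1, y1, X2, y2), (X2, y2, X1, y1)]):
            # fit() returns the estimator, so score() can be chained;
            # for stateful models, cloning before each fit would be cleaner.
            err_a = 1.0 - clf_a.fit(Xtr, ytr).score(Xte, yte)
            err_b = 1.0 - clf_b.fit(Xtr, ytr).score(Xte, yte)
            diffs[i, j] = err_a - err_b
    p_bar = diffs.mean(axis=1)                        # per-iteration mean
    s2 = ((diffs - p_bar[:, None]) ** 2).sum(axis=1)  # per-iteration variance
    # Numerator is the first fold's difference, per the published statistic.
    t = diffs[0, 0] / np.sqrt(s2.mean())
    p_value = 2 * stats.t.sf(abs(t), df=5)            # two-sided, 5 d.o.f.
    return t, p_value
```

Under the null hypothesis that the two algorithms have the same error rate, the statistic is approximately t-distributed with 5 degrees of freedom, so a two-sided p-value below the chosen significance level indicates a detected difference.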
