A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers
Seven different pattern classifiers were implemented on a serial computer and compared using artificial and speech recognition tasks. Two neural network classifiers (radial basis function and high-order polynomial GMDH network) and five conventional classifiers (Gaussian mixture, linear tree, K-nearest-neighbor, KD-tree, and condensed K-nearest-neighbor) were evaluated. Classifiers were chosen to be representative of different approaches to pattern classification and to complement and extend those evaluated in a previous study (Lee and Lippmann, 1989). This and the previous study both demonstrate that classification error rates can be equivalent across different classifiers when they are powerful enough to form minimum-error decision regions, when they are properly tuned, and when sufficient training data are available. Practical characteristics such as training time, classification time, and memory requirements, however, can differ by orders of magnitude. These results suggest that the selection of a classifier for a particular task should be guided not so much by small differences in error rate as by practical considerations concerning memory usage, computational resources, ease of implementation, and restrictions on training and classification times.
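The abstract's central point, that classifiers can reach similar error rates while differing sharply in memory and classification cost, can be illustrated with a minimal sketch. The example below is not from the paper; it compares a 1-nearest-neighbor classifier (which stores every training exemplar and scans them all at classification time) against a nearest-class-mean classifier (a degenerate Gaussian classifier with identity covariance, which stores one prototype per class) on a hypothetical two-class synthetic task:

```python
import random

random.seed(0)

# Hypothetical two-class task: Gaussian blobs centered at (0,0) and (3,3).
def sample(n, cx, cy):
    return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0)) for _ in range(n)]

train = [(p, 0) for p in sample(200, 0, 0)] + [(p, 1) for p in sample(200, 3, 3)]
test  = [(p, 0) for p in sample(100, 0, 0)] + [(p, 1) for p in sample(100, 3, 3)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# 1-NN: no training phase, but every classification scans all stored exemplars.
def knn_classify(x):
    return min(train, key=lambda t: dist2(x, t[0]))[1]

# Nearest class mean: a cheap training pass compresses memory to one
# prototype per class, and classification touches only those prototypes.
means = {}
for c in (0, 1):
    pts = [p for p, lab in train if lab == c]
    means[c] = (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))

def mean_classify(x):
    return min(means, key=lambda c: dist2(x, means[c]))

knn_err  = sum(knn_classify(p) != lab for p, lab in test) / len(test)
mean_err = sum(mean_classify(p) != lab for p, lab in test) / len(test)
print(f"1-NN error: {knn_err:.3f}  ({len(train)} stored exemplars)")
print(f"mean error: {mean_err:.3f}  ({len(means)} stored prototypes)")
```

On this easily separable task both classifiers achieve comparably low error, yet the nearest-mean classifier stores 200x fewer points and does proportionally less work per classification, the kind of order-of-magnitude practical difference the study emphasizes.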
[1] R. Lippmann. "Pattern classification using neural networks," IEEE Communications Magazine, 1989.
[2] Yuchun Lee et al. "Classifiers: adaptive modules in pattern recognition systems," 1989.
[3] Richard Lippmann et al. "Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems," NIPS, 1989.
[4] Richard Lippmann et al. "Using Genetic Algorithms to Improve Pattern Classification Performance," NIPS, 1990.