Variability regularization in large-margin classification

This paper introduces a novel regularization strategy that addresses the generalization issues of large-margin classifiers from the Empirical Risk Minimization (ERM) perspective. First, by reviewing the differences between the two strategies as fundamental principles for large-margin classifier design, the ERM principle is argued to be more flexible than the Structural Risk Minimization (SRM) principle. Second, after examining large-margin classifier design under the SRM principle, a realization of the ERM principle is proposed in the form of a bias-variance criterion that replaces the conventional expected error criterion. The bias-variance criterion is shown to provide the regularization capability required by a large-margin classifier designed under the ERM principle. Finally, a mathematical programming procedure is employed to efficiently determine the best regularization policy. The new ERM-based regularization strategy is evaluated in a set of machine learning experiments, and the results clearly demonstrate its strength in achieving minimum error rate performance.
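
As a rough illustration only (the abstract does not specify the exact criterion), one might picture the bias-variance criterion as an empirical-loss term plus a penalty on the variance of the signed margins, solved as a convex program. The hinge surrogate, the margin-variance penalty, the trade-off parameter lam, and the cvxpy-based solver below are all assumptions introduced for this sketch, not the paper's actual formulation or procedure.

    import numpy as np
    import cvxpy as cp

    def fit_variability_regularized_classifier(X, y, lam=0.1):
        """Hypothetical sketch: hinge loss (bias-like term) plus the empirical
        variance of the signed margins (variability term), minimized jointly
        as a convex program. The criterion and solver are assumptions, not
        the paper's method."""
        n, d = X.shape
        w = cp.Variable(d)
        b = cp.Variable()
        margins = cp.multiply(y, X @ w + b)                # signed margins y_i (w^T x_i + b)
        empirical_risk = cp.sum(cp.pos(1 - margins)) / n   # hinge surrogate for the error term
        mean_margin = cp.sum(margins) / n
        margin_variance = cp.sum_squares(margins - mean_margin) / n
        problem = cp.Problem(cp.Minimize(empirical_risk + lam * margin_variance))
        problem.solve()
        return w.value, b.value

    # Usage example on synthetic data
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))
        w, b = fit_variability_regularized_classifier(X, y, lam=0.5)
        accuracy = np.mean(np.sign(X @ w + b) == y)
        print(f"training accuracy: {accuracy:.3f}")

In this hypothetical form, the penalty discourages solutions whose margins vary widely across the training set, which is one plausible way a variability term could act as a regularizer; the paper's own criterion and optimization procedure may differ.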