In this paper we consider the problem of novelty detection, presenting an algorithm that aims to find a minimal region in input space containing a fraction α of the probability mass underlying a data set. This algorithm—the "single-class minimax probability machine (MPM)"—is built on a distribution-free methodology that minimizes the worst-case probability of a data point falling outside of a convex set, given only the mean and covariance matrix of the distribution and making no further distributional assumptions. We present a robust approach to estimating the mean and covariance matrix within the general two-class MPM setting, and show how this approach specializes to the single-class problem. We provide empirical results comparing the single-class MPM to the single-class SVM and a two-class SVM method.
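The distribution-free bound underlying this approach can be sketched as follows. This is a simplified illustration, not the paper's full algorithm: it fixes a candidate direction `w` rather than solving the minimax optimization over all half-spaces, and uses the Chebyshev-type result that for any distribution with mean μ and covariance Σ, inf P(wᵀx ≥ b) ≥ α whenever b ≤ wᵀμ − κ√(wᵀΣw) with κ = √(α/(1−α)). The function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def single_class_threshold(X, w, alpha=0.9):
    """Threshold b such that, under the worst-case distribution matching the
    sample mean and covariance, P(w^T x >= b) >= alpha.

    Uses the distribution-free bound b = w^T mu - kappa * sqrt(w^T Sigma w),
    kappa = sqrt(alpha / (1 - alpha)). (Simplified sketch: w is fixed here,
    whereas the single-class MPM optimizes over directions.)
    """
    mu = X.mean(axis=0)                   # plug-in estimate of the mean
    Sigma = np.cov(X, rowvar=False)       # plug-in estimate of the covariance
    kappa = np.sqrt(alpha / (1.0 - alpha))
    return float(w @ mu - kappa * np.sqrt(w @ Sigma @ w))

def is_novel(x, w, b):
    # A point outside the half-space {x : w^T x >= b} is flagged as novel.
    return float(w @ x) < b

# Toy data: 200 points drawn around (5, 5); an illustrative assumption.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=1.0, size=(200, 2))
w = np.array([1.0, 1.0])
b = single_class_threshold(X, w, alpha=0.9)
print(is_novel(np.array([5.0, 5.0]), w, b))    # a typical point
print(is_novel(np.array([-3.0, -3.0]), w, b))  # a far-away outlier
```

Note that the paper's robust variant additionally accounts for estimation error in the plug-in mean and covariance; the sketch above uses the raw sample estimates.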