Unlabeled Compression Schemes for Maximum Classes

We give a compression scheme for any maximum class of VC dimension d that compresses any sample consistent with a concept in the class to at most d unlabeled points from the domain of the sample.
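The paper's scheme for general maximum classes is more involved, but the idea can be illustrated on the simplest case: threshold functions on the integers form a maximum class of VC dimension d = 1, and any consistent sample can be compressed to at most one unlabeled point (the leftmost positive example). The sketch below is only an illustration of this special case, not the paper's construction; the function names are mine.

```python
def compress(sample):
    """Compress a sample consistent with some threshold concept
    (label 1 iff x >= t) to at most one unlabeled domain point."""
    positives = [x for x, label in sample if label == 1]
    # Leftmost positive point determines the hypothesis; no positives
    # compress to the empty set (representing the all-zero concept).
    return [min(positives)] if positives else []

def reconstruct(representation):
    """Rebuild a hypothesis from the unlabeled representation."""
    if not representation:
        return lambda x: 0          # all-zero concept
    p = representation[0]
    return lambda x: 1 if x >= p else 0

# The reconstructed hypothesis is consistent with the original sample:
sample = [(1, 0), (3, 0), (5, 1), (7, 1)]
h = reconstruct(compress(sample))
consistent = all(h(x) == label for x, label in sample)
```

Consistency holds because every positive example lies at or to the right of the leftmost positive point, and every negative example lies strictly to its left; the compressed set has size at most d = 1, matching the paper's bound.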
