Sample Compression, Learnability, and the Vapnik-Chervonenkis Dimension

[1] V. Vapnik, Estimation of Dependences Based on Empirical Data, 2006.

[2] Sally A. Goldman, et al., The Power of Self-Directed Learning, 1994, Machine Learning.

[3] R. Schapire, The Strength of Weak Learnability, 1990, Machine Learning.

[4] Manfred K. Warmuth, et al., Relating Data Compression and Learnability, 2003.

[5] Manfred K. Warmuth, et al., On Weak Learning, 1995, J. Comput. Syst. Sci.

[6] David Haussler, et al., How to Use Expert Advice, 1993, STOC.

[7] John Shawe-Taylor, et al., Bounding Sample Size with the Vapnik-Chervonenkis Dimension, 1993, Discrete Applied Mathematics.

[8] Kenneth L. Clarkson, et al., Randomized Geometric Algorithms, 1992.

[9] David Haussler, et al., Efficient Learning Algorithms, 1990.

[10] Manfred K. Warmuth, et al., Learning Integer Lattices, 1990, COLT '90.

[11] Yoav Freund, et al., Boosting a Weak Learning Algorithm by Majority, 1990, COLT '90.

[12] Gerhard J. Woeginger, et al., Some New Bounds for Epsilon-Nets, 1990, SCG '90.

[13] N. Littlestone, Mistake Bounds and Logarithmic Linear-Threshold Learning Algorithms, 1990.

[14] Manfred K. Warmuth, et al., Learning Nested Differences of Intersection-Closed Concept Classes, 1989, COLT '89.

[15] Sally Floyd, et al., Space-Bounded Learning and the Vapnik-Chervonenkis Dimension, 1989, COLT '89.

[16] David Haussler, et al., Learnability and the Vapnik-Chervonenkis Dimension, 1989, JACM.

[17] Ronald L. Rivest, et al., Inferring Decision Trees Using the Minimum Description Length Principle, 1989, Inf. Comput.

[18] Anselm Blumer, et al., Learning Faster Than Promised by the Vapnik-Chervonenkis Dimension, 1989, Discret. Appl. Math.

[19] David Haussler, et al., Predicting {0,1}-Functions on Randomly Drawn Points, 1988, COLT '88.

[20] Leslie G. Valiant, et al., A General Lower Bound on the Number of Examples Needed for Learning, 1988, COLT '88.

[21] Leslie G. Valiant, et al., Computational Limitations on Learning from Examples, 1988, JACM.

[22] Dana Angluin, et al., Queries and Concept Learning, 1988, Machine Learning.

[23] David Haussler, et al., ɛ-Nets and Simplex Range Queries, 1987, Discret. Comput. Geom.

[24] N. Littlestone, Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm, 1987, 28th Annual Symposium on Foundations of Computer Science (FOCS 1987).

[25] J. Rissanen, Stochastic Complexity and Modeling, 1986.

[26] David Haussler, et al., Epsilon-Nets and Simplex Range Queries, 1986, SCG '86.

[27] Leslie G. Valiant, et al., A Theory of the Learnable, 1984, STOC '84.

[28] Temple F. Smith, Occam's Razor, 1980, Nature.

[29] Tom M. Mitchell, et al., Version Spaces: A Candidate Elimination Approach to Rule Learning, 1977, IJCAI.

[30] Norbert Sauer, et al., On the Density of Families of Sets, 1972, J. Comb. Theory, Ser. A.

[31] V. Vapnik and A. Chervonenkis, On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities, 1971.
