Some weak learning results

An algorithm is a weak learner if, with some small probability, it outputs a hypothesis with error slightly below 50%. This paper presents sufficient conditions for weak learning. Our main result requires a "consistency oracle" for the concept class F, which decides for a given set of examples whether there is a concept in F consistent with the examples. We show that such an oracle can be used to construct a computationally efficient weak learning algorithm for F if F is learnable at all. We consider consistency oracles that are allowed to give wrong answers and discuss how the number of incorrect answers affects the oracle's use in computationally efficient weak learning algorithms. We also define "weak Occam algorithms" which, when given a set of m examples, select a consistent hypothesis from some class of 2^(m-(1/p(m))) possible hypotheses. We show that these weak Occam algorithms are also weak learners. In contrast, we show that an Occam-style algorithm which selects a consistent hypothesis from a class of 2^(m+1) - 2 hypotheses is not a weak learner.
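
To make the notion of a consistency oracle concrete, the following is a minimal sketch in Python of such an oracle for one simple concept class, monotone conjunctions over n Boolean variables. The choice of concept class, the function name consistency_oracle, and the example encoding are illustrative assumptions for this sketch, not the paper's construction.

    from typing import List, Tuple

    def consistency_oracle(examples: List[Tuple[Tuple[int, ...], int]], n: int) -> bool:
        """Decide whether some monotone conjunction over n Boolean
        variables is consistent with the given labeled examples
        (a toy stand-in for the paper's oracle for a class F)."""
        positives = [x for x, y in examples if y == 1]
        negatives = [x for x, y in examples if y == 0]
        # The most specific conjunction not contradicted by the positive
        # examples uses exactly the variables that are 1 in every
        # positive example (all n variables if there are no positives).
        if positives:
            candidate = [i for i in range(n) if all(x[i] == 1 for x in positives)]
        else:
            candidate = list(range(n))
        # Some consistent concept exists iff this candidate also rejects
        # every negative example, i.e. each negative example has a 0 in
        # at least one of the candidate's variables.
        return all(any(x[i] == 0 for i in candidate) for x in negatives)

    # Example: the conjunction x1 AND x2 is consistent with these examples.
    examples = [((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 1, 1), 0)]
    print(consistency_oracle(examples, 3))  # True

The sketch relies on a standard fact about monotone conjunctions: if any concept in the class is consistent with the examples, then so is the most specific one, so testing a single candidate suffices to decide consistency.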