On-line versus Off-line Learning from Random Examples: General Results.

I propose a general model of on-line learning from random examples that, when applied to a smooth realizable stochastic rule, achieves the same asymptotic generalization error rate as optimal off-line (batch) algorithms. The approach is based on an iterative Gaussian approximation to the posterior Gibbs distribution of the rule parameters.
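To make the idea of an iterative Gaussian approximation to the posterior concrete, here is a minimal sketch for the special case of a noisy linear rule, where the Gaussian posterior update is exact and takes a Kalman-style form. All names (`w_true`, `mu`, `Sigma`, the dimension and noise level) are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a stochastic linear rule y = w_true . x + noise.
d = 5
w_true = rng.normal(size=d)
noise_var = 0.25

# Gaussian approximation to the posterior over rule parameters:
# mean `mu` and covariance `Sigma`, updated one random example at a time.
mu = np.zeros(d)
Sigma = np.eye(d)

for t in range(500):
    x = rng.normal(size=d)
    y = w_true @ x + rng.normal(scale=noise_var ** 0.5)
    # On-line update: condition the current Gaussian on the new pair (x, y).
    Sx = Sigma @ x
    k = Sx / (x @ Sx + noise_var)      # gain vector
    mu = mu + k * (y - mu @ x)         # shift mean toward the new example
    Sigma = Sigma - np.outer(k, Sx)    # shrink uncertainty

print(np.linalg.norm(mu - w_true))
```

Each example is processed once and then discarded; only the Gaussian summary (`mu`, `Sigma`) is carried forward, which is what distinguishes this from a batch algorithm that revisits the full sample.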