New bounds for correct generalization

The theory of Vapnik and Chervonenkis on the minimization of the empirical risk provides a theoretical approach to determining the number of training examples needed by a neural network architecture. We report here a new bound on the joint probability that the approximation error between the binary function learned from the input/output examples and the target binary function exceeds ε while the empirical error on the examples remains below a fixed non-null fraction of ε. The given bounds are independent of the probability distribution on the input space and improve on some existing results concerning the generalization ability of an adaptive binary function.
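To make the shape of such a statement concrete, a classical bound of this kind (the Vapnik-style relative-error bound, written here with generic constants c1, c2 rather than the sharper values obtained in this work) reads, for a binary function class F with growth function Π_F, sample size m, accuracy ε > 0, and fraction parameter 0 < γ ≤ 1:

\[
\Pr\Big\{\exists f \in \mathcal{F} :\ \mathrm{err}(f) > \varepsilon \ \text{and}\ \widehat{\mathrm{err}}_m(f) \le (1-\gamma)\,\varepsilon \Big\}
\;\le\; c_1\, \Pi_{\mathcal{F}}(2m)\, e^{-c_2\, \gamma^2 \varepsilon m},
\]

where err(f) is the true (generalization) error, êrr_m(f) is the empirical error on the m examples, and the probability is over the random draw of the sample, uniformly over all distributions on the input space. The new bounds of the paper are of this joint-probability form but with improved constants or exponents; the display above is only an illustration of the statement's structure, not the paper's result.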