Generalization performance of complex adaptive tasks.

Optimal strategies for correctly predicting the outputs of a few new random inputs, when various feedforward networks are trained on noise-free random training examples, are examined analytically and numerically. The existence of a universal strategy for various generalization tasks is discussed; the results indicate that the Bayes algorithm is not always the optimal strategy.
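
As a hedged illustration of why these two objectives can differ (this sketch and its notation, with student weights W, posterior given the training data D, and teacher output sigma_T, are introduced here for exposition and are not taken from the paper), note that for a single new input the Bayes prediction maximizes the marginal probability of a correct answer,
\[
\sigma^{B}(\mathbf{x}) \;=\; \operatorname*{arg\,max}_{\sigma}\; \Pr\!\left[\,\sigma_{T}(\mathbf{x}) = \sigma \,\middle|\, D\,\right],
\]
whereas the strategy that maximizes the probability of answering all of $M$ new inputs correctly is the joint maximizer
\[
(\sigma_{1}^{*},\dots,\sigma_{M}^{*}) \;=\; \operatorname*{arg\,max}_{\sigma_{1},\dots,\sigma_{M}}\; \Pr\!\left[\,\sigma_{T}(\mathbf{x}_{\mu}) = \sigma_{\mu}\ \text{for all } \mu \,\middle|\, D\,\right],
\]
which in general does not factorize into the single-input Bayes predictions when the posterior outputs at the $M$ inputs are correlated.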