Prediction error and consistent parameter area in neural learning

As the number of training examples increases, a learning machine behaves better. An important problem is to know how fast and how well this behavior improves. The average prediction error is one of the most widely used criteria for evaluating it. From a geometrical point of view, the training examples delimit a region of the parameter space that must contain the true parameter. This set of parameters is called the consistent area: every machine whose parameter lies in it can explain the input-output relations of the given examples. The consistent area is dual to the convex hull of the examples in the input signal space. We study the stochastic-geometrical features of this convex hull and derive upper and lower bounds on the average prediction error of the simple perceptron network.
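The abstract does not fix a concrete experimental setup, but the quantities it names can be illustrated numerically. The following Python sketch assumes a teacher-student scenario with Gaussian inputs (these assumptions, and all names in the code, are illustrative, not taken from the paper): a teacher perceptron labels t random examples, a student consistent with all of them is found by the classical perceptron rule (one point of the consistent area, not an average over it), and its prediction error on a fresh input is computed exactly as the angle between student and teacher divided by pi, which holds for isotropic Gaussian inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistent_student(X, y, max_epochs=1000):
    """Return a parameter in the consistent area via the perceptron rule.
    The rule converges here because the data are separable by the teacher."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, label in zip(X, y):
            if np.sign(w @ x) != label:  # example not yet explained
                w += label * x           # perceptron update
                mistakes += 1
        if mistakes == 0:                # all examples explained: w is consistent
            return w
    return w  # fallback; not reached for separable data with enough epochs

def prediction_error(w, w_star):
    """Exact prediction error for Gaussian inputs: angle(w, w_star) / pi."""
    cos = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

d = 10  # input dimension (illustrative choice)
for t in [20, 50, 100, 200, 400]:
    errs = []
    for _ in range(200):  # Monte Carlo trials
        w_star = rng.standard_normal(d)          # teacher parameter
        X = rng.standard_normal((t, d))          # t Gaussian training inputs
        y = np.sign(X @ w_star)                  # teacher labels
        errs.append(prediction_error(consistent_student(X, y), w_star))
    print(f"t={t:4d}  avg prediction error ~ {np.mean(errs):.4f}  (d/t = {d/t:.4f})")
```

Running the sketch shows the average prediction error of a consistent machine shrinking roughly in proportion to d/t, the kind of decay that the bounds derived in the paper are meant to pin down; the d/t column is printed only as a rough reference scale, not as the paper's bound.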