Three unfinished works on the optimal storage capacity of networks

The optimal storage properties of three different neural network models are studied. For two of these models, the architecture of the network is a perceptron with ±J interactions, whereas for the third the output can be an arbitrary function of the inputs. Analytic bounds and numerical estimates of the optimal capacities and of the minimal fraction of errors are obtained for the first two models. The third model can be solved exactly, and the exact solution is compared with the bounds and with the results of the numerical simulations used for the other two models.
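To make the storage problem for a ±J perceptron concrete, the sketch below estimates, by brute force for a very small network, the probability that P random ±1 patterns with random ±1 targets can all be stored by some coupling vector J ∈ {−1, +1}^N. This is only an illustrative toy calculation under assumed parameter choices (N, P, sample counts), not the analytic or numerical methods used in the paper.

```python
# Minimal sketch (illustrative only, not the paper's method): estimate the
# probability that a +/-J perceptron with N binary couplings can store
# P random +/-1 patterns with random +/-1 targets, by exhaustive search
# over all 2**N coupling vectors.  N, P, and the sample count are
# arbitrary assumptions chosen to keep the search feasible.
import itertools
import numpy as np

def storable(patterns, targets):
    """True if some J in {-1,+1}^N satisfies targets[mu] * (J . patterns[mu]) > 0
    for every pattern mu."""
    n = patterns.shape[1]
    for J in itertools.product((-1, 1), repeat=n):
        fields = patterns @ np.array(J)
        if np.all(targets * fields > 0):
            return True
    return False

def fraction_storable(n, p, samples=200, rng=None):
    """Monte Carlo estimate of the probability that p random patterns are
    storable by an n-input +/-J perceptron (loading ratio alpha = p / n)."""
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(samples):
        patterns = rng.choice((-1, 1), size=(p, n))
        targets = rng.choice((-1, 1), size=p)
        hits += storable(patterns, targets)
    return hits / samples

if __name__ == "__main__":
    n = 9  # odd N avoids zero local fields; search is over 2**N couplings
    for p in range(n, 2 * n + 1, 3):
        print(f"alpha = {p / n:.2f}: P(storable) ~ {fraction_storable(n, p):.2f}")
```

As the loading ratio α = P/N grows, the estimated probability of perfect storage drops from near one to near zero; locating where this crossover settles for large N is the kind of capacity question the abstract refers to.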