The Hopfield model of a neural network is extended to allow for the storage and retrieval of biased patterns $\{\xi_i^{\mu}\}$, where $N^{-1}\sum_{i=1}^{N}\xi_i^{\mu}=a$ is arbitrary. Such patterns represent levels of activity (i.e., percentage of firing neurons) equal to $\frac{1}{2}(1+a)$, $-1<a<1$. If the coupling constants (synaptic efficacies) are constructed as in the original Hopfield model, the system can retrieve at most a very small number of patterns ($p \lesssim 1+a^{-2}$). This is due to the finite correlations (overlaps) between the patterns. The model is modified by subtracting the bias $a$ from each pattern as it enters into the couplings. This modification restores the ability of the model to store a macroscopic number of patterns. Yet spurious states are found to plague the dynamics. It is then argued that the dynamics of the network should be consistent with the levels of activity of the stored patterns. This is implemented by adding a global constraint, which restricts the configuration space to states whose mean activity is in the neighborhood of $\frac{1}{2}(1+a)$. The consequences of the restricted dynamics are analyzed in the replica-symmetric mean-field theory. The global constraint suppresses spurious states and leads to the unexpected result that the storage capacity is higher than that of the unbiased network, up to very high values of the bias ($|a|\simeq 0.99$). However, the information content of such networks is shown to be a monotonically decreasing function of $a$.
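The bias-subtracted coupling rule and the retrieval dynamics described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's analysis: the network size, pattern load, bias value, and noise level are arbitrary illustrative choices, and the global activity constraint on the dynamics is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N, p, a = 500, 5, 0.4  # network size, number of patterns, bias (illustrative)

# Biased patterns xi in {-1,+1} with mean activity bias a:
# P(xi = +1) = (1 + a) / 2, so N^{-1} sum_i xi_i^mu ~ a.
xi = np.where(rng.random((p, N)) < (1 + a) / 2, 1, -1)

# Modified Hebbian couplings: the bias a is subtracted from each pattern
# as it enters the couplings, J_ij = (1/N) sum_mu (xi_i - a)(xi_j - a).
J = (xi - a).T @ (xi - a) / N
np.fill_diagonal(J, 0.0)  # no self-coupling

def retrieve(S, J, sweeps=20):
    """Asynchronous zero-temperature dynamics: each spin aligns with its
    local field, one spin at a time in random order."""
    S = S.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(S)):
            S[i] = 1 if J[i] @ S >= 0 else -1
    return S

# Start from a noisy copy of pattern 0 (10% of bits flipped) and measure
# the bias-corrected overlap m = (1/N) sum_i (xi_i - a) S_i, which is
# roughly 1 - a^2 at perfect retrieval.
S0 = xi[0] * np.where(rng.random(N) < 0.1, -1, 1)
S = retrieve(S0, J)
m = (xi[0] - a) @ S / N
print(round(m, 3))
```

At this small load ($p/N = 0.01$) the noisy initial state relaxes back toward the stored pattern; the paper's point is that without the global constraint on mean activity, spurious attractors appear as the load grows.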