Probabilistic Models of the Brain

The notion that the brain performs inference dates back at least to Helmholtz, who observed that most perceptual problems are inherently ill-posed: our sensory input vastly underdetermines the structure of the world that we perceive. The brain must therefore resort to making a guess as to what lies out in the world, based on prior knowledge of what the world is like. Helmholtz coined the term unconscious inference to refer to the myriad perceptual processes that faithfully operate beneath our awareness, and his ideas form the foundation of modern perceptual science.

Despite widespread acceptance of the idea that perception involves inference, probabilistic models of these inferences have only recently entered mainstream neuroscience and psychology. Early neural network models, for instance, were not described in the language of probability, although many of them, in retrospect, have probabilistic interpretations. But given the uncertainty present in everything the brain does, arising both from the ill-posed nature of its tasks and from internal and external sources of noise, probabilities are unavoidable. It is now standard for models of the brain to make probabilities explicit.

Probabilistic Models of the Brain, edited by Rao, Olshausen, and Lewicki, surveys this next generation of models. The book, which grew out of a 1998 NIPS workshop on the same topic, consists of 16 invited chapters written by leaders in computational vision and neuroscience. It is split into two parts: the first deals with relatively abstract models of what the brain does, and the second with models in which neurons are an explicit component, often showing how neural networks may instantiate particular computational principles. For instance, Simoncelli and colleagues show that adaptation effects in cortical neurons are predicted by contrast normalization mechanisms if the normalization parameters are set to make the resulting neural responses as independent as possible.
This provides new support for Horace Barlow's well-known proposal that one goal of cortical computation is to produce neural responses that are statistically independent. Similarly, Olshausen shows that if neural responses to natural movies are forced to be sparse and independent, the receptive fields that result resemble those found physiologically in V1. These two lines of research are paradigmatic examples of models lending support to theories for which direct experimental evidence is hard to come by; experimental tests of independence would require multielectrode recordings, which remain relatively uncommon due to their technical difficulty. Other chapters explore general mechanisms of neural computation. Topics …
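The sparse coding idea behind Olshausen's result can be sketched in a few lines. The following toy example (an illustrative sketch, not the model from the book: the dictionary size, penalty weight, and toy data here are arbitrary choices) alternates between inferring sparse coefficients by iterative soft thresholding and taking a gradient step on the dictionary, minimizing a reconstruction-plus-sparsity objective of the form ||X − DA||² + λ||A||₁:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_codes(X, D, lam=0.1, n_iter=50):
    """Infer coefficients A minimizing ||X - D A||^2 + lam * ||A||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the quadratic term's gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = A - (D.T @ (D @ A - X)) / L          # gradient step on reconstruction error
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold -> sparsity
    return A

def learn_dictionary(X, n_atoms=8, lam=0.1, n_epochs=30, lr=0.05):
    """Alternate sparse inference and a gradient step on the dictionary atoms."""
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        A = sparse_codes(X, D, lam)
        D -= lr * (D @ A - X) @ A.T              # gradient step on reconstruction error
        D /= np.linalg.norm(D, axis=0) + 1e-12   # keep atoms unit norm
    return D

# Toy "natural signals": sparse combinations of a hidden ground-truth basis.
true_D = rng.standard_normal((16, 8))
true_D /= np.linalg.norm(true_D, axis=0)
A_true = rng.standard_normal((8, 200)) * (rng.random((8, 200)) < 0.2)
X = true_D @ A_true

D = learn_dictionary(X)
A = sparse_codes(X, D)
```

On natural image patches rather than this toy data, the learned atoms come to resemble the localized, oriented receptive fields of V1 simple cells, which is the correspondence the chapter describes.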