Parallel integration of vision modules.

Computer algorithms have been developed for several early vision processes, such as edge detection, stereopsis, motion, texture, and color, each of which provides a separate cue to the distance of three-dimensional surfaces from the viewer, their shape, and their material properties. Not surprisingly, biological vision systems still greatly outperform computer vision programs. One key to the reliability, flexibility, and robustness of biological vision is the ability to integrate several visual cues. A computational technique for integrating different visual cues has now been developed and implemented, with encouraging results, on a parallel supercomputer.
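The abstract does not spell out the integration technique. As a minimal sketch only, assuming a Markov-random-field / regularization formulation of the kind developed in the literature this work draws on (stochastic relaxation over coupled fields with discontinuities), the fusion of two hypothetical cues can be illustrated in one dimension: noisy depth measurements from one module and a binary discontinuity map from an edge-detection module, combined by minimizing a weak-membrane energy in which smoothing is switched off across detected edges. All names and parameter values here are illustrative, not taken from the paper.

```python
# Hedged sketch of cue integration, assuming an MRF/regularization model:
#   E(f) = sum_i (f_i - d_i)^2  +  lam * sum_i (1 - e_i) * (f_i - f_{i+1})^2
# where d is a noisy depth cue, e_i = 1 marks a detected edge between
# sites i and i+1 (a second cue), and lam weights the smoothness prior.

def integrate_cues(depth, edges, lam=4.0, iters=200):
    """Fuse a 1-D depth cue with an edge cue by Gauss-Seidel relaxation.

    depth : list of noisy depth samples
    edges : edges[i] == 1 means a discontinuity between sites i and i+1
    Each sweep sets f[i] to the exact minimizer of E with neighbors fixed.
    """
    f = list(depth)
    n = len(f)
    for _ in range(iters):
        for i in range(n):
            num, den = depth[i], 1.0          # data-attachment term
            if i > 0 and not edges[i - 1]:    # smooth toward left neighbor
                num += lam * f[i - 1]
                den += lam
            if i < n - 1 and not edges[i]:    # smooth toward right neighbor
                num += lam * f[i + 1]
                den += lam
            f[i] = num / den                  # local closed-form update
    return f


if __name__ == "__main__":
    # Noisy step signal; the edge cue flags the discontinuity at site 9/10.
    depth = [0.1, -0.12, 0.05, 0.2, -0.08, 0.15, -0.05, 0.1, 0.02, -0.1,
             1.1, 0.88, 1.05, 1.2, 0.92, 1.15, 0.95, 1.1, 1.02, 0.9]
    edges = [0] * 19
    edges[9] = 1
    f = integrate_cues(depth, edges)
    print(f)  # smoothed within each region, step preserved at the edge
```

The point of the sketch is the coupling: the edge cue gates the smoothness term of the depth cue, so noise is averaged away within surfaces while the discontinuity between them survives. The energy here is quadratic given the edges, so simple deterministic relaxation converges; the stochastic methods cited by the article handle the harder case where the discontinuities themselves are unknown variables.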
