Psychophysical support for a two-dimensional view interpolation theory of object recognition.

Does the human brain represent objects for recognition by storing a series of two-dimensional snapshots, or are the object models, in some sense, three-dimensional analogs of the objects they represent? One way to address this question is to explore the ability of the human visual system to generalize recognition from familiar to unfamiliar views of three-dimensional objects. Three recently proposed theories of object recognition--viewpoint normalization or alignment of three-dimensional models [Ullman, S. (1989) Cognition 32, 193-254], linear combination of two-dimensional views [Ullman, S. & Basri, R. (1990) Recognition by Linear Combinations of Models (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge), A. I. Memo No. 1152], and view approximation [Poggio, T. & Edelman, S. (1990) Nature (London) 343, 263-266]--predict different patterns of generalization to unfamiliar views. We have exploited the conflicting predictions to test the three theories directly in a psychophysical experiment involving computer-generated three-dimensional objects. Our results suggest that the human visual system is better described as recognizing these objects by two-dimensional view interpolation than by alignment or other methods that rely on object-centered three-dimensional models.
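To make the contrast between the theories concrete, the following is a minimal sketch of the view-approximation idea attributed above to Poggio & Edelman (1990): recognition of a novel view by interpolation among a small set of stored two-dimensional views, implemented here with Gaussian radial basis functions. The representation of a view as a flat feature vector, the function and variable names, and the parameter values (e.g., sigma, the synthetic views) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_rbf(d2, sigma):
    """Gaussian basis function of a squared distance d2 (illustrative choice)."""
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_view_interpolator(stored_views, targets, sigma=1.0):
    """Fit weights so a sum of Gaussians centered on the stored 2-D views
    approximates the target response (e.g., 1.0 for 'this is the object')."""
    # Pairwise squared distances between the stored views
    d2 = ((stored_views[:, None, :] - stored_views[None, :, :]) ** 2).sum(-1)
    G = gaussian_rbf(d2, sigma)
    # Least-squares solve for the combination weights
    w, *_ = np.linalg.lstsq(G, targets, rcond=None)
    return w

def recognition_score(novel_view, stored_views, w, sigma=1.0):
    """Score a novel view by interpolating among the stored familiar views."""
    d2 = ((stored_views - novel_view) ** 2).sum(-1)
    return gaussian_rbf(d2, sigma) @ w

# Toy usage: five familiar views of one object, each with an 8-element
# feature vector (e.g., image coordinates of visible features).
rng = np.random.default_rng(0)
stored = rng.normal(size=(5, 8))
w = fit_view_interpolator(stored, np.ones(5), sigma=2.0)

novel = stored.mean(axis=0)   # a view lying "between" the familiar ones
print(recognition_score(novel, stored, w, sigma=2.0))
```

Under a scheme of this kind, views lying between or near the stored views receive high scores, while views far from all stored views do not; it is this graded, view-dependent pattern of generalization, as opposed to the largely view-independent generalization expected from alignment of three-dimensional models, that the psychophysical experiment was designed to detect.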