Independent component representations for face recognition

In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal component analysis (PCA), which is based on the second-order statistics of the image set and does not address high-order statistical dependencies such as relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA that separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a statistically independent basis set for the face images that can be viewed as a set of independent facial features. The second architecture provided a factorial code, in which the probability of any combination of features can be obtained from the product of their individual probabilities. Both ICA representations were superior to representations based on PCA for recognizing faces across sessions and changes in expression.
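As a concrete illustration of the learning rule described above, the following is a minimal sketch (assuming NumPy; the function and parameter names learn_ica, n_steps, and lr are illustrative, not taken from the paper) of the Bell and Sejnowski infomax rule in its natural-gradient form, which drives the sigmoidal outputs toward statistical independence:

```python
# Minimal sketch of infomax ICA with a logistic nonlinearity and the
# natural-gradient update. Not the authors' implementation; the
# hyperparameters are placeholders.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def learn_ica(X, n_steps=200, lr=1e-3, seed=None):
    """Estimate an unmixing matrix W such that the rows of W @ X are
    approximately statistically independent. X holds one mixed signal
    per row and one observation per column."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = np.eye(n) + rng.normal(scale=0.01, size=(n, n))
    for _ in range(n_steps):
        U = W @ X                # current source estimates
        Y = sigmoid(U)           # sigmoidal outputs
        # Natural-gradient infomax update: dW = (I + (1 - 2Y) U^T / m) W
        dW = (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / m) @ W
        W += lr * dW
    return W
```

In terms of this sketch, the first architecture would correspond to running learn_ica with the face images (typically after projection onto their leading principal components) as the rows of X, yielding independent basis images, while the second architecture would run it on the transposed data so that the recovered coefficients form the factorial code described above.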
