Directional Principal Component Analysis for Image Matrix

In this paper, we present a novel approach, namely directional multi-mode principal component analysis, which efficiently avoids the small sample size problem and preserves the spatial information embedded among the pixels of an image by encoding the input high-dimensional image as a tensor. In the proposed scheme, the mode-k matrix of the image is re-sampled and re-arranged to form a mode-k directional image, so that the local structural information can be better exploited during training. An algorithm called mode-k directional Principal Component Analysis (PCA) is then presented to learn the multiple interrelated lower-dimensional subspaces without any iterative step. Compared with conventional and other subspace analysis algorithms, the proposed method greatly alleviates the small sample size problem, avoids the curse of dimensionality, reduces the computational cost of the learning stage by representing the data in a lower dimension, and simultaneously exploits the local structural information embedded in the high-dimensional dataset. Experimental results on the well-known AR and UMIST face databases show that the proposed method achieves higher recognition accuracy than many traditional subspace learning algorithms while using a low feature dimension.
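The abstract does not spell out implementation details, so the following is only a minimal sketch, in Python with NumPy, of the generic mode-k step it describes: unfold each training image along mode k, accumulate the resulting (small) mode-k scatter matrix, and keep its leading eigenvectors as the projection. The function names `mode_k_unfold` and `mode_k_pca` are illustrative, and the paper's specific re-sampling and re-arrangement into mode-k directional images is not reproduced here.

```python
# Sketch of a mode-k PCA step on a set of image matrices (assumptions noted above).
import numpy as np

def mode_k_unfold(tensor, k):
    """Unfold a tensor along mode k (rows of the result index mode k)."""
    return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)

def mode_k_pca(images, k, n_components):
    """Learn a mode-k projection from equally sized image arrays.

    Accumulates the mode-k scatter matrix over the training set and keeps its
    leading eigenvectors, so no high-dimensional vectorized covariance is formed.
    """
    d = images[0].shape[k]
    scatter = np.zeros((d, d))
    for img in images:
        unfolded = mode_k_unfold(img, k)                 # shape: (d, prod of other dims)
        centered = unfolded - unfolded.mean(axis=1, keepdims=True)
        scatter += centered @ centered.T                 # d x d, small vs. full image size
    eigvals, eigvecs = np.linalg.eigh(scatter)           # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_components]            # leading n_components directions

# Usage: project each image onto the learned mode-0 (row) subspace.
rng = np.random.default_rng(0)
train = [rng.standard_normal((32, 32)) for _ in range(20)]
U0 = mode_k_pca(train, k=0, n_components=8)              # 32 x 8 projection matrix
features = [U0.T @ img for img in train]                 # each feature map is 8 x 32
```

Because the scatter matrix is only d x d (here 32 x 32) rather than the size of the vectorized image covariance, this kind of mode-wise construction is one way the small sample size problem and the cost of eigen-decomposition can be kept in check, in the spirit of the non-iterative scheme described above.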
