A family of modified projective Nonnegative Matrix Factorization algorithms

We propose new variants of the Non-negative Matrix Factorization (NMF) method for learning spatially localized, sparse, part-based subspace representations of visual and other patterns. The algorithms are based on positively constrained projections and are related both to NMF and to the conventional SVD or PCA decomposition. A crucial question is how to measure the difference between the original data and its positive linear approximation, since each difference measure yields a different solution. Several iterative positive projection algorithms are suggested here: one based on minimizing the Euclidean distance, and the others on minimizing the divergence between the original data matrix and its non-negative approximation. Several divergences, such as the Kullback-Leibler, Csiszár, and Amari divergences, are considered, as well as the Hellinger and Pearson distances. Experimental results show that these versions of P-NMF derive bases that are somewhat better suited to a localized and sparse representation than those of NMF, as well as being more orthogonal.