Matrix completion based on feature vector and function approximation

In this paper, we review current matrix completion theory and propose a new matrix completion framework, Feature Vector and Function Approximation Based Matrix Completion (FVFABMC), which extends low-rank matrix completion theory. The new matrix completion problem decomposes into two learning problems: a feature vector learning problem and a synthetic function learning problem over the feature vector matrix. A globally optimal solution for the feature vectors can be obtained by assuming only that the synthetic function is locally smooth, which turns the first-order approximation of the feature vector learning problem into a convex semi-definite programming problem. To solve the large-scale feature vector learning problem, we also propose a stochastic parallel block gradient descent algorithm. For the synthetic function learning problem, under a local linearity hypothesis, the problem can be formalized as an unconstrained least squares problem over local neighboring coefficients, which avoids the difficulties of model selection and parameter learning. Numerical experiments demonstrate the feasibility of the FVFABMC method for learning feature vectors and show good prediction performance on the missing entries of the utility matrix.
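
To make the two-stage pipeline concrete, below is a minimal Python sketch of the general idea, not the paper's actual method: the convex SDP formulation of feature vector learning and the stochastic parallel block algorithm are not reproduced here. Instead, a plain bilinear gradient surrogate stands in for stage one, and stage two follows the local linearity hypothesis by solving an unconstrained least squares problem for neighboring coefficients. The names learn_features and predict_entry, the neighbor count k, and all hyperparameters are illustrative assumptions.

import numpy as np

def learn_features(M, mask, d=5, lr=0.01, epochs=300, seed=0):
    """Stage 1 (sketch): learn row/column feature vectors by gradient
    descent on the observed entries. The paper casts this stage as a
    convex semi-definite program under local smoothness; this bilinear
    factorization is only an assumed stand-in."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, d))
    V = 0.1 * rng.standard_normal((n, d))
    for _ in range(epochs):
        R = mask * (U @ V.T - M)   # residual on observed entries only
        U -= lr * (R @ V)
        V -= lr * (R.T @ U)
    return U, V

def predict_entry(M, mask, U, i, j, k=5):
    """Stage 2 (sketch): local linearity hypothesis. Express row i's
    feature vector as a least-squares combination of its k nearest
    neighbors that observed column j, then combine those neighbors'
    ratings with the same coefficients."""
    obs = np.where(mask[:, j] & (np.arange(M.shape[0]) != i))[0]
    if obs.size == 0:
        return M[mask].mean()  # fallback: global mean of observed entries
    dists = np.linalg.norm(U[obs] - U[i], axis=1)
    nb = obs[np.argsort(dists)[:k]]
    # unconstrained least squares for the local neighboring coefficients
    c, *_ = np.linalg.lstsq(U[nb].T, U[i], rcond=None)
    return float(c @ M[nb, j])

# Usage on a synthetic rank-3 utility matrix with ~60% observed entries:
rng = np.random.default_rng(1)
U0, V0 = rng.standard_normal((30, 3)), rng.standard_normal((20, 3))
M = U0 @ V0.T
mask = rng.random(M.shape) < 0.6
U, V = learn_features(M * mask, mask, d=3)
print(predict_entry(M * mask, mask, U, i=0, j=0), "vs. true", M[0, 0])

Note the design point the abstract emphasizes: because stage two is a local least squares fit, it needs no global model family for the synthetic function and no separate parameter-tuning step.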
