Fundamental tensor operations for large-scale data analysis using tensor network formats

We discuss extended definitions of linear and multilinear operations such as the Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. We then introduce effective low-rank tensor approximation techniques, including the CANDECOMP/PARAFAC (CP), Tucker, and tensor train (TT) decompositions, together with a number of mathematical and graphical representations. We also briefly review the mathematical properties of the TT decomposition as a low-rank approximation technique. With the aim of breaking the curse of dimensionality in large-scale numerical analysis, we describe basic operations on large-scale vectors, matrices, and higher-order tensors represented in the TT format. The proposed representations can be used to describe numerical methods based on the TT decomposition for solving large-scale optimization problems such as systems of linear equations and symmetric eigenvalue problems.
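To make the central technique concrete, the following is a minimal NumPy sketch of the TT-SVD algorithm: a d-way tensor is decomposed into a chain of three-way cores by sequentially reshaping and applying truncated SVDs. Function names (`tt_svd`, `tt_to_full`) and the `max_rank` truncation parameter are illustrative choices, not an interface defined in the paper.

```python
import numpy as np

def tt_svd(x, max_rank):
    """Decompose a d-way tensor into TT cores via sequential truncated SVDs.

    Returns a list of cores g_k with shape (r_{k-1}, n_k, r_k),
    where r_0 = r_d = 1 and each r_k <= max_rank.
    """
    dims = x.shape
    cores = []
    c, r = x, 1  # running remainder and current left TT rank
    for k in range(len(dims) - 1):
        # Matricize: group the current rank index with mode k.
        c = c.reshape(r * dims[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = min(max_rank, s.size)  # truncate to at most max_rank
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        c = s[:rk, None] * vt[:rk]  # carry the remainder forward
        r = rk
    cores.append(c.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for verification)."""
    full = cores[0]
    for g in cores[1:]:
        # Contract the trailing rank index with the next core's leading one.
        full = np.tensordot(full, g, axes=1)
    return full.reshape([g.shape[1] for g in cores])
```

When `max_rank` is at least the exact TT rank of the input, the reconstruction is exact up to floating-point error; smaller values give a quasi-optimal low-rank approximation. The elementary products discussed in the text map directly onto NumPy as well: `np.kron(A, B)` for the Kronecker product and `A * B` for the Hadamard product.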
