Sensitivity in Tensor Decomposition

Canonical polyadic (CP) tensor decomposition is an important task in many applications. Often, the true tensor rank is not known or the data are noisy, and in such situations different existing CP decomposition algorithms can provide very different results. In this letter, we introduce a notion of sensitivity of a CP decomposition and suggest using it as a side criterion, besides the fitting error, to evaluate different CP decomposition results. Next, we propose a novel variant of a Krylov-Levenberg-Marquardt CP decomposition algorithm that can perform CP decomposition under a constraint on the sensitivity. In simulations, we decompose order-4 tensors that come from convolutional neural networks. We show that it is useful to combine the CP decomposition algorithms with an error-preserving correction.
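
To illustrate the sensitivity notion, here is a minimal sketch, assuming sensitivity is measured as the expected squared Frobenius-norm change of the reconstructed tensor under small i.i.d. Gaussian perturbations of the factor matrices (to first order, per unit perturbation variance); the function name, rank, and dimensions below are hypothetical and the exact definition used in the letter may differ.

```python
import numpy as np

def cp_sensitivity(factors):
    """Sensitivity of a CP model given factor matrices A^(n) of shape (I_n, R).

    Under the stated assumption, the closed form is
        sum_n I_n * trace( Hadamard product over m != n of A^(m)^T A^(m) ).
    """
    grams = [A.T @ A for A in factors]          # R x R Gram matrices A^(n)^T A^(n)
    sens = 0.0
    for n, A in enumerate(factors):
        # Hadamard product of the Gram matrices of all modes except n
        H = np.ones_like(grams[0])
        for m, G in enumerate(grams):
            if m != n:
                H *= G
        sens += A.shape[0] * np.trace(H)        # dimension I_n times trace
    return sens

# Example: random order-4, rank-5 CP factors (hypothetical sizes)
rng = np.random.default_rng(0)
factors = [rng.standard_normal((d, 5)) for d in (8, 8, 8, 8)]
print(cp_sensitivity(factors))
```

Two decompositions with nearly the same fitting error can then be compared by this value; a lower sensitivity indicates factors whose small perturbations change the reconstructed tensor less.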
