Distributed Solution of Large-Scale Linear Systems Via Accelerated Projection-Based Consensus

Solving a large-scale system of linear equations is a key step at the heart of many algorithms in scientific computing, machine learning, and beyond. When the problem dimension is large, computational and/or memory constraints make it desirable, or even necessary, to perform the task in a distributed fashion. In this paper, we consider a common scenario in which a taskmaster intends to solve a large-scale system of linear equations by distributing subsets of the equations among a number of computing machines/cores. We propose a new algorithm called Accelerated Projection-based Consensus (APC) for this problem. The convergence behavior of the proposed algorithm is analyzed in detail, and its convergence rate is shown analytically to compare favorably with those of alternative distributed methods, namely distributed gradient descent, distributed versions of Nesterov's accelerated gradient descent and the heavy-ball method, the block Cimmino method, and ADMM. On randomly chosen linear systems, as well as on real-world data sets, the proposed method offers significant speed-up relative to all the aforementioned methods.
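The projection-based consensus scheme described above can be sketched as follows. This is a minimal single-process simulation under stated assumptions: the function name `apc_solve`, the row-block partitioning, and the default parameters are illustrative; each simulated worker keeps its local estimate consistent with its own equations by moving only within the nullspace of its block, while the master averages the local estimates. Setting the step sizes γ = η = 1 gives the plain, untuned consensus iteration; the paper's APC additionally tunes these parameters (momentum-style) for a faster linear rate.

```python
import numpy as np

def apc_solve(A, b, m=4, gamma=1.0, eta=1.0, iters=2000):
    """Solve a consistent linear system A x = b by projection-based
    consensus across m simulated workers.

    gamma is the local step size and eta the master's averaging
    (over-relaxation) parameter; gamma = eta = 1 is the plain,
    unaccelerated iteration (tuned values are what accelerate APC).
    """
    n = A.shape[1]
    A_blocks = np.array_split(A, m, axis=0)
    b_blocks = np.array_split(b, m)

    # Worker i starts from the min-norm solution of its own equations
    # A_i x = b_i, and builds P_i, the orthogonal projector onto
    # null(A_i): P_i = I - A_i^+ A_i.
    pinvs = [np.linalg.pinv(Ai) for Ai in A_blocks]
    x = [Pi @ bi for Pi, bi in zip(pinvs, b_blocks)]
    P = [np.eye(n) - Pi @ Ai for Pi, Ai in zip(pinvs, A_blocks)]

    xbar = sum(x) / m
    for _ in range(iters):
        # Local step: move each x_i toward the current average, but
        # only within null(A_i), so A_i x_i = b_i keeps holding exactly.
        x = [xi + gamma * Pi @ (xbar - xi) for xi, Pi in zip(x, P)]
        # Master step: average the local estimates (with optional
        # over-relaxation via eta).
        xbar = eta * (sum(x) / m) + (1.0 - eta) * xbar
    return xbar

# Demo on a random consistent system: 40 equations in 20 unknowns,
# shared among 4 workers (10 equations each).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x = apc_solve(A, b)
```

Because each local update stays inside the affine set of the worker's own equations, the only quantity the workers need to exchange with the master is the n-dimensional estimate, which is what makes the scheme attractive when the equations themselves are too large to centralize.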
