Improving Distributed Gradient Descent Using Reed-Solomon Codes

Today's massive datasets have made it necessary to perform computations on them in a distributed manner. Typically, a computational task is divided into subtasks that are distributed over a cluster and coordinated by a taskmaster. One issue faced in practice is the delay caused by slow machines, known as stragglers. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers; more recently, schemes inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adopt the framework of Tandon et al. [1] and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the smallest number of machines $f$ theoretically permissible, via an $O(f^{2})$ decoding algorithm. The scheme is based on a suitably designed Reed-Solomon code with a sparsest and balanced generator matrix. We also provide a theoretical delay model that can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.
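
To make the coding framework concrete, below is a minimal sketch of gradient coding in the sense of Tandon et al. [1]: each worker sends a fixed linear combination of partial gradients, and the taskmaster recovers the full gradient from any $f$ of the workers. The sketch uses a simple fractional-repetition encoding matrix and a generic least-squares decode in NumPy; the parameter values and this particular encoding are illustrative assumptions and do not reproduce the paper's sparsest balanced Reed-Solomon construction or its $O(f^{2})$ decoder.

```python
# Minimal gradient-coding sketch in the framework of [1].
# The fractional-repetition encoding below is an illustrative stand-in for the
# paper's Reed-Solomon generator matrix; all parameter values are assumptions.
import numpy as np

n, s = 6, 2            # n workers, tolerate up to s stragglers; here (s + 1) divides n
f = n - s              # number of machines the taskmaster waits for
k = n                  # one data partition (and partial gradient) per worker
rng = np.random.default_rng(0)
partial_grads = rng.standard_normal((k, 4))   # toy partial gradients g_1, ..., g_k

# Encoding matrix B (n x k): the support of row i is the set of partitions worker i processes.
# Fractional repetition: split the workers into groups of s + 1; all workers in a group
# sum the same block of s + 1 partial gradients.
B = np.zeros((n, k))
for g in range(n // (s + 1)):
    blk = slice(g * (s + 1), (g + 1) * (s + 1))
    B[blk, blk] = 1.0

coded = B @ partial_grads          # the vector each worker would send to the taskmaster

# Decoding: any f = n - s workers suffice.  The taskmaster finds a combining vector a
# with a^T B_S = 1^T over the surviving rows S and applies it to the received results.
survivors = sorted(rng.choice(n, size=f, replace=False))
a, *_ = np.linalg.lstsq(B[survivors].T, np.ones(k), rcond=None)
full_gradient = a @ coded[survivors]

assert np.allclose(full_gradient, partial_grads.sum(axis=0))   # gradient recovered exactly
```

The toy encoding above satisfies the decoding condition but makes no attempt to optimize the per-worker load; the point of the paper's construction is to meet the same condition with a Reed-Solomon generator matrix that is as sparse and as balanced as possible, which is what allows recovery from the minimum number of machines for a prescribed per-machine effort.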

[1] Alexandros G. Dimakis, et al. Gradient Coding From Cyclic MDS Codes and Expander Graphs, 2017, IEEE Transactions on Information Theory.

[2] Kerstin Vännman, et al. Estimators Based on Order Statistics from a Pareto Distribution, 1976.

[3] Yurii Nesterov, et al. Introductory Lectures on Convex Optimization - A Basic Course, 2014, Applied Optimization.

[4] Pulkit Grover, et al. “Short-Dot”: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products, 2017, IEEE Transactions on Information Theory.

[5] Alexandros G. Dimakis, et al. Gradient Coding, 2016, ArXiv.

[6] A. Salman Avestimehr, et al. A Fundamental Tradeoff Between Computation and Communication in Distributed Computing, 2016, IEEE Transactions on Information Theory.

[7] Samy Bengio, et al. Revisiting Distributed Synchronous SGD, 2016, ArXiv.

[8] Stephen J. Wright, et al. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.

[9] Ulas C. Kozat, et al. TOFEC: Achieving optimal throughput-delay trade-off of cloud storage using erasure codes, 2014, IEEE INFOCOM 2014 - IEEE Conference on Computer Communications.

[10] Alexander J. Smola, et al. Parallelized Stochastic Gradient Descent, 2010, NIPS.

[11] Mohammad Ali Maddah-Ali, et al. Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication, 2017, NIPS.

[12] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.

[13] I. S. Reed and G. Solomon. Polynomial Codes Over Certain Finite Fields, 1960, Journal of the Society for Industrial and Applied Mathematics.

[14] Babak Hassibi, et al. Balanced Reed-Solomon codes for all parameters, 2016, IEEE Information Theory Workshop (ITW).

[15] Kannan Ramchandran, et al. Speeding Up Distributed Machine Learning Using Codes, 2015, IEEE Transactions on Information Theory.

[16] Å. Björck, et al. Solution of Vandermonde Systems of Equations, 1970.

[17] Peter J. Haas, et al. Large-scale matrix factorization with distributed stochastic gradient descent, 2011, KDD.

[18] Mor Harchol-Balter, et al. Exploiting process lifetime distributions for dynamic load balancing, 1995, SIGMETRICS.

[19] Stephen P. Boyd, et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, 2011, Found. Trends Mach. Learn.

[20] Mor Harchol-Balter. The Effect of Heavy-Tailed Job Size Distributions on Computer System Design, 1999.

[21] Teunis J. Ott, et al. Load-balancing heuristics and process behavior, 1986, SIGMETRICS '86/PERFORMANCE '86.

[22] Babak Hassibi, et al. Balanced Reed-Solomon codes, 2016, IEEE International Symposium on Information Theory (ISIT).