92¢/MFlops/s, Ultra-Large-Scale Neural-Network Training on a PIII Cluster
Douglas Aberdeen | Jonathan Baxter | Robert Edwards
[1] V. Strassen. Gaussian elimination is not optimal, 1969.
[2] B. Greer, et al. High Performance Software on Intel Pentium Pro Processors or Micro-Ops to TeraFLOPS, 1997, ACM/IEEE SC 1997 Conference (SC'97).
[3] Jack J. Dongarra, et al. Automated empirical optimizations of software and the ATLAS project, 2001, Parallel Computing.
[4] James Demmel, et al. Using PHiPAC to speed error back-propagation learning, 1997, IEEE International Conference on Acoustics, Speech, and Signal Processing.
[5] Yuefan Deng, et al. New trends in high performance computing, 2001, Parallel Computing.
[6] Terrence L. Fine. Feedforward Neural Network Methodology, 1999, Information Science and Statistics.
[7] K. Asanović. Experimental Determination of Precision Requirements for Back-propagation Training of Artificial Neural Networks, 1991.
[8] Mithuna Thottethodi, et al. Tuning Strassen's Matrix Multiplication for Memory Efficiency, 1998, Proceedings of the IEEE/ACM SC98 Conference.
[9] Jack J. Dongarra, et al. Automatically Tuned Linear Algebra Software, 1998, Proceedings of the IEEE/ACM SC98 Conference.