Generalised matrix-matrix multiplication forms the kernel of many mathematical algorithms, so a faster matrix-matrix multiply immediately benefits those algorithms. In this paper we implement an efficient matrix multiplication routine for large matrices using Intel's floating-point SIMD (Single Instruction, Multiple Data) architecture. We describe the issues involved and our solution, paying attention to every level of the memory hierarchy. Our results demonstrate an average performance of 2.09 times faster than the leading public-domain matrix-matrix multiply routines.
Optimizing matrix multiply using PHiPAC: a portable, high-performance, ANSI C coding methodology
Greg Henry,et al.
High Performance Software on Intel Pentium Pro Processors or Micro-Ops to TeraFLOPS
Mithuna Thottethodi,et al.
Tuning Strassen's Matrix Multiplication for Memory Efficiency
Proceedings of the IEEE/ACM SC98 Conference.
Automated Empirical Optimization of Software and the ATLAS Project
Douglas Aberdeen,et al.
92¢/MFlops/s, Ultra-Large-Scale Neural-Network Training on a PIII Cluster
ACM/IEEE SC 2000 Conference (SC'00).
Yuefan Deng,et al.
New trends in high performance computing
Jack J. Dongarra,et al.
Automated empirical optimizations of software and the ATLAS project