TY - GEN
T1 - A scalable sparse matrix-vector multiplication kernel for energy-efficient sparse-blas on FPGAs
AU - Dorrance, Richard
AU - Ren, Fengbo
AU - Marković, Dejan
N1 - Copyright:
Copyright 2014 Elsevier B.V., All rights reserved.
PY - 2014
Y1 - 2014
N2 - Sparse Matrix-Vector Multiplication (SpMxV) is a widely used mathematical operation in many high-performance scientific and engineering applications. In recent years, tuned software libraries for multi-core microprocessors (CPUs) and graphics processing units (GPUs) have become the status quo for computing SpMxV. However, the computational throughput of these libraries for sparse matrices tends to be significantly lower than that of dense matrices, mostly because the compression formats required to efficiently store sparse matrices are mismatched with traditional computing architectures. This paper describes an FPGA-based SpMxV kernel that is scalable to efficiently utilize the available memory bandwidth and computing resources. Benchmarking on a Virtex-5 SX95T FPGA demonstrates an average computational efficiency of 91.85%. The kernel achieves a peak computational efficiency of 99.8%, a >50x improvement over two Intel Core i7 processors (i7-2600 and i7-4770) and a >300x improvement over two NVIDIA GPUs (GTX 660 and GTX Titan) running the MKL and cuSPARSE sparse-BLAS libraries, respectively. In addition, the SpMxV FPGA kernel achieves higher performance than its CPU and GPU counterparts while using only 64 single-precision processing elements, with an overall 38-50x improvement in energy efficiency.
AB - Sparse Matrix-Vector Multiplication (SpMxV) is a widely used mathematical operation in many high-performance scientific and engineering applications. In recent years, tuned software libraries for multi-core microprocessors (CPUs) and graphics processing units (GPUs) have become the status quo for computing SpMxV. However, the computational throughput of these libraries for sparse matrices tends to be significantly lower than that of dense matrices, mostly because the compression formats required to efficiently store sparse matrices are mismatched with traditional computing architectures. This paper describes an FPGA-based SpMxV kernel that is scalable to efficiently utilize the available memory bandwidth and computing resources. Benchmarking on a Virtex-5 SX95T FPGA demonstrates an average computational efficiency of 91.85%. The kernel achieves a peak computational efficiency of 99.8%, a >50x improvement over two Intel Core i7 processors (i7-2600 and i7-4770) and a >300x improvement over two NVIDIA GPUs (GTX 660 and GTX Titan) running the MKL and cuSPARSE sparse-BLAS libraries, respectively. In addition, the SpMxV FPGA kernel achieves higher performance than its CPU and GPU counterparts while using only 64 single-precision processing elements, with an overall 38-50x improvement in energy efficiency.
KW - Benchmarking
KW - CPU
KW - Computational efficiency
KW - Energy-efficiency
KW - FPGA
KW - GPU
KW - SpMxV
KW - Sparse-BLAS
UR - http://www.scopus.com/inward/record.url?scp=84898959963&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84898959963&partnerID=8YFLogxK
U2 - 10.1145/2554688.2554785
DO - 10.1145/2554688.2554785
M3 - Conference contribution
AN - SCOPUS:84898959963
SN - 9781450326711
T3 - ACM/SIGDA International Symposium on Field Programmable Gate Arrays - FPGA
SP - 161
EP - 169
BT - FPGA 2014 - Proceedings of the 2014 ACM/SIGDA International Symposium on Field Programmable Gate Arrays
PB - Association for Computing Machinery
T2 - 2014 ACM/SIGDA International Symposium on Field Programmable Gate Arrays, FPGA 2014
Y2 - 26 February 2014 through 28 February 2014
ER -