TY - JOUR
T1 - Automatic Compilation of Diverse CNNs Onto High-Performance FPGA Accelerators
AU - Ma, Yufei
AU - Cao, Yu
AU - Vrudhula, Sarma
AU - Seo, Jae-sun
N1 - Funding Information:
Manuscript received April 24, 2018; revised July 16, 2018 and September 27, 2018; accepted November 18, 2018. Date of publication December 4, 2018; date of current version January 18, 2020. This work was supported in part by the NSF I/UCRC Center for Embedded Systems through NSF under Grant 1230401, Grant 1237856, Grant 1701241, Grant 1361926, and Grant 1535669, in part by NSF under Grant 1652866 and Grant 1715443, in part by Intel Labs, and in part by C-BRIC, one of six centers in JUMP, an SRC program sponsored by DARPA. This paper was recommended by Associate Editor W. Zhang. (Corresponding author: Yufei Ma.) Y. Ma, Y. Cao, and J.-S. Seo are with the School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: yufeima@asu.edu; yu.cao@asu.edu; jaesun.seo@asu.edu).
Publisher Copyright:
© 2018 IEEE.
PY - 2020/2
Y1 - 2020/2
N2 - A broad range of applications are increasingly benefiting from the rapid and flourishing development of convolutional neural networks (CNNs). The FPGA-based CNN inference accelerator is gaining popularity due to its high performance and low power, as well as the FPGA's conventional advantages of reconfigurability and flexibility. Without a general compiler to automate the implementation, however, significant effort and expertise are still required to customize the design for each CNN model. In this paper, we present a register-transfer-level (RTL) CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, in order to enable fast high-level prototyping of CNNs from software to FPGA while retaining the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g., GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g., NiN, VGG, GoogLeNet, and ResNet, on two standalone Intel FPGAs, Arria 10 and Stratix 10, achieving end-to-end inference throughputs of 969 GOPS and 1604 GOPS, respectively, with a batch size of one.
AB - A broad range of applications are increasingly benefiting from the rapid and flourishing development of convolutional neural networks (CNNs). The FPGA-based CNN inference accelerator is gaining popularity due to its high performance and low power, as well as the FPGA's conventional advantages of reconfigurability and flexibility. Without a general compiler to automate the implementation, however, significant effort and expertise are still required to customize the design for each CNN model. In this paper, we present a register-transfer-level (RTL) CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, in order to enable fast high-level prototyping of CNNs from software to FPGA while retaining the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g., GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g., NiN, VGG, GoogLeNet, and ResNet, on two standalone Intel FPGAs, Arria 10 and Stratix 10, achieving end-to-end inference throughputs of 969 GOPS and 1604 GOPS, respectively, with a batch size of one.
KW - Convolutional neural networks (CNNs)
KW - FPGA
KW - neural network hardware
UR - http://www.scopus.com/inward/record.url?scp=85058091307&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058091307&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2018.2884972
DO - 10.1109/TCAD.2018.2884972
M3 - Article
AN - SCOPUS:85058091307
SN - 0278-0070
VL - 39
SP - 424
EP - 437
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 2
M1 - 8558097
ER -