Abstract

A broad range of applications is increasingly benefiting from the rapid and flourishing development of convolutional neural networks (CNNs). FPGA-based CNN inference accelerators are gaining popularity due to their high performance and low power consumption, as well as the FPGA's conventional advantages of reconfigurability and flexibility. Without a general compiler to automate the implementation, however, significant effort and expertise are still required to customize the design for each CNN model. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, enabling fast, high-level prototyping of CNNs from software to FPGA while retaining the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g., GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g., NiN, VGG, GoogLeNet, and ResNet, on two standalone Intel FPGAs, Arria 10 and Stratix 10, achieving end-to-end inference throughputs of 969 GOPS and 1,604 GOPS, respectively, with a batch size of one.
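
The abstract describes a three-part flow: a library of parameterized RTL modules, a top-level system template specialized at compile time, and an execution schedule that serializes layer-by-layer computation. As a rough illustration of that flow only (this is not the authors' tool, and every name here, e.g. Layer, RTL_TEMPLATES, compile_schedule, is invented for the sketch), the Python below shows how such a compiler might topologically order a layer graph and emit one parameterized Verilog instantiation per layer:

# Hypothetical sketch of an RTL-generating CNN compiler front end.
# Assumption: each supported operation has a matching parameterized
# Verilog module in a pre-built RTL library, as the abstract describes.
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str          # e.g. "conv1"
    op: str            # "conv", "pool", "eltwise", or "concat"
    in_ch: int         # input feature-map channels
    out_ch: int        # output feature-map channels
    kernel: int        # K, for a K x K kernel window
    stride: int
    inputs: List[str]  # predecessor layers (>1 for eltwise/concat)

# Map each operation to an (assumed) module name in the RTL library.
RTL_TEMPLATES = {
    "conv":    "conv_engine",
    "pool":    "pool_unit",
    "eltwise": "eltwise_adder",  # residual additions, as in ResNet
    "concat":  "concat_mux",     # branch merging, as in GoogLeNet
}

def emit_layer(layer: Layer) -> str:
    """Emit one parameterized Verilog instantiation for this layer."""
    module = RTL_TEMPLATES[layer.op]
    return (f"{module} #(.IN_CH({layer.in_ch}), .OUT_CH({layer.out_ch}), "
            f".K({layer.kernel}), .STRIDE({layer.stride})) u_{layer.name} (/* ports */);")

def compile_schedule(layers: List[Layer]) -> List[Layer]:
    """Topologically order the layer graph so every layer runs after all
    of its inputs: the layer-by-layer sequential execution the abstract
    describes, valid for branch-and-merge topologies (DAGs)."""
    done, order, pending = set(), [], list(layers)
    while pending:
        for layer in pending:
            if all(src in done for src in layer.inputs):
                order.append(layer)
                done.add(layer.name)
                pending.remove(layer)
                break
        else:
            raise ValueError("cycle in layer graph")
    return order

# Example: a tiny residual block in the style of ResNet.
net = [
    Layer("conv1", "conv", 64, 64, 3, 1, []),
    Layer("conv2", "conv", 64, 64, 3, 1, ["conv1"]),
    Layer("add1", "eltwise", 64, 64, 1, 1, ["conv1", "conv2"]),
]
for layer in compile_schedule(net):
    print(emit_layer(layer))

Running the sketch prints one instantiation per scheduled layer; the topological ordering is what allows irregular topologies such as GoogLeNet's branches and ResNet's shortcut connections to be serialized onto one shared set of physical modules, which is the role the paper assigns to its execution schedule.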

Fingerprint

Field programmable gate arrays (FPGA)
Neural networks
Hardware
Complex networks
Throughput
Topology

Keywords

  • Convolutional neural networks
  • FPGA
  • Neural network hardware

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering

Cite this

@article{169af1429c47417fa492e20b071c3418,
title = "Automatic Compilation of Diverse CNNs onto High-Performance FPGA Accelerators",
abstract = "A broad range of applications are increasingly benefiting from the rapid and flourishing development of convolutional neural networks (CNNs). The FPGA-based CNN inference accelerator is gaining popularity due to its high-performance and low-power as well as FPGA’s conventional advantage of reconfigurability and flexibility. Without a general compiler to automate the implementation, however, significant efforts and expertise are still required to customize the design for each CNN model. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, in order to enable high-level fast prototyping of CNNs from software to FPGA and still keep the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model different operations at each layer. The integration and dataflow of physical modules are predefined in the top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule so that even highly irregular and complex network topology, e.g. GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g. NiN, VGG, GoogLeNet and ResNet, on two standalone Intel FPGAs, Arria 10 and Stratix 10, achieving end-to-end inference throughputs of 969 GOPS and 1,604 GOPS, respectively, with batch size of one.",
keywords = "Convolutional neural networks, FPGA, Neural network hardware.",
author = "Yufei Ma and Yu Cao and Sarma Vrudhula and Jae-sun Seo",
year = "2018",
month = "1",
day = "1",
doi = "10.1109/TCAD.2018.2884972",
language = "English (US)",
journal = "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems",
issn = "0278-0070",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}

TY - JOUR

T1 - Automatic Compilation of Diverse CNNs onto High-Performance FPGA Accelerators

AU - Ma, Yufei

AU - Cao, Yu

AU - Vrudhula, Sarma

AU - Seo, Jae-sun

PY - 2018/1/1

Y1 - 2018/1/1

N2 - A broad range of applications is increasingly benefiting from the rapid and flourishing development of convolutional neural networks (CNNs). FPGA-based CNN inference accelerators are gaining popularity due to their high performance and low power consumption, as well as the FPGA's conventional advantages of reconfigurability and flexibility. Without a general compiler to automate the implementation, however, significant effort and expertise are still required to customize the design for each CNN model. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, enabling fast, high-level prototyping of CNNs from software to FPGA while retaining the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g., GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g., NiN, VGG, GoogLeNet, and ResNet, on two standalone Intel FPGAs, Arria 10 and Stratix 10, achieving end-to-end inference throughputs of 969 GOPS and 1,604 GOPS, respectively, with a batch size of one.

KW - Convolutional neural networks

KW - FPGA

KW - Neural network hardware

UR - http://www.scopus.com/inward/record.url?scp=85058091307&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85058091307&partnerID=8YFLogxK

U2 - 10.1109/TCAD.2018.2884972

DO - 10.1109/TCAD.2018.2884972

M3 - Article

JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

SN - 0278-0070

ER -