An automatic RTL compiler for high-throughput FPGA implementation of diverse deep convolutional neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

30 Citations (Scopus)

Abstract

Convolutional neural networks (CNNs) are rapidly evolving and being applied to a broad range of applications. Given a specific application, an increasing challenge is to select an appropriate CNN algorithm and efficiently map it to the target hardware. FPGA-based accelerators have the advantage of reconfigurability and flexibility, and have achieved high performance and low power. Without a general compiler to automate the implementation, however, significant effort and expertise are still required to customize the design for each CNN model. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, enabling fast high-level prototyping of CNNs from software to FPGA while keeping the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer; the implementation of each module is optimized at the RTL level. Given a CNN algorithm, its structure is abstracted to a directed acyclic graph (DAG) and then compiled with RTL modules from the library. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g., ResNet, can be compiled. The proposed methodology is demonstrated with end-to-end FPGA implementations of various CNN algorithms (e.g., NiN, VGG-16, ResNet-50, and ResNet-152) on two standalone Intel FPGAs, Stratix V and Arria 10. The performance and overhead of the automated compilation are evaluated. The compiled FPGA accelerators outperform state-of-the-art automation-based works by more than 2× across various CNNs.
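The compilation flow in the abstract (abstract a CNN to a DAG, match each layer to an optimized RTL module from a library, and emit a layer-by-layer execution schedule that handles irregular topologies such as ResNet shortcuts) can be sketched as follows. This is a minimal illustration only, not the authors' actual tool: the library contents (`RTL_LIBRARY`, `conv_engine`, `eltwise_add`, etc.) and the DAG encoding are hypothetical names chosen for the example.

```python
# Hypothetical RTL module library: layer type -> RTL template name.
# (Illustrative names; the paper's actual library is not shown here.)
RTL_LIBRARY = {
    "conv":    "conv_engine",
    "pool":    "pool_unit",
    "relu":    "relu_unit",
    "eltwise": "eltwise_add",   # element-wise add for ResNet shortcut paths
    "fc":      "fc_engine",
}

def topological_order(dag):
    """Return layer names in dependency order (Kahn's algorithm)."""
    indegree = {name: len(node["inputs"]) for name, node in dag.items()}
    ready = [n for n, d in indegree.items() if d == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        # Release every layer whose last pending input just completed.
        for m, node in dag.items():
            if n in node["inputs"]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)
    return order

def compile_schedule(dag):
    """Map each DAG node to an RTL module and emit a sequential schedule."""
    schedule = []
    for name in topological_order(dag):
        layer = dag[name]
        schedule.append((name, RTL_LIBRARY[layer["type"]], layer["inputs"]))
    return schedule

# A tiny irregular, ResNet-like topology: a conv branch plus a shortcut
# that both feed an element-wise add.
dag = {
    "conv1": {"type": "conv",    "inputs": []},
    "conv2": {"type": "conv",    "inputs": ["conv1"]},
    "add":   {"type": "eltwise", "inputs": ["conv1", "conv2"]},
    "fc":    {"type": "fc",      "inputs": ["add"]},
}

for name, module, inputs in compile_schedule(dag):
    print(f"{name}: instantiate {module}, reads {inputs or ['input image']}")
```

The key point the sketch mirrors is that the schedule is derived from the DAG rather than assumed to be a simple chain, which is what lets branching topologies with shortcut connections be compiled by the same layer-by-layer runtime control.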

Original language: English (US)
Title of host publication: 2017 27th International Conference on Field Programmable Logic and Applications, FPL 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9789090304281
DOI: 10.23919/FPL.2017.8056824
State: Published - Oct 2, 2017
Event: 27th International Conference on Field Programmable Logic and Applications, FPL 2017 - Gent, Belgium
Duration: Sep 4, 2017 - Sep 6, 2017

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
  • Hardware and Architecture
  • Software

Cite this

Ma, Y., Cao, Y., Vrudhula, S., & Seo, J. (2017). An automatic RTL compiler for high-throughput FPGA implementation of diverse deep convolutional neural networks. In 2017 27th International Conference on Field Programmable Logic and Applications, FPL 2017 (Article 8056824). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.23919/FPL.2017.8056824

