A GPU-outperforming FPGA accelerator architecture for binary convolutional neural networks

Yixing Li, Zichuan Liu, Kai Xu, Hao Yu, Fengbo Ren

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

FPGA-based hardware accelerators for convolutional neural networks (CNNs) have received attention due to their higher energy efficiency than GPUs. However, it is challenging for FPGA-based solutions to achieve higher throughput than their GPU counterparts. In this article, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized fully mapped FPGA accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline stages. A key advantage of the FPGA accelerator is that its performance is insensitive to data batch size, while the performance of GPU acceleration varies greatly with batch size. Experimental results show that the proposed accelerator architecture for binary CNNs running on a Virtex-7 FPGA is 8.3× faster and 75× more energy-efficient than a Titan X GPU for processing online individual requests in small batch sizes. For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5× higher energy efficiency.
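The bitwise convolution the abstract refers to typically reduces each multiply-accumulate to an XNOR plus a population count: with weights and activations constrained to {-1, +1} and packed into bit vectors (bit 1 encoding +1, bit 0 encoding -1), the dot product of two n-element vectors equals 2·popcount(XNOR(a, w)) − n. The sketch below is illustrative only (the function name and bit packing are assumptions, not taken from the paper):

```python
def bin_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers
    (bit 1 encodes +1, bit 0 encodes -1), via XNOR + popcount."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask        # bit set where signs agree
    matches = bin(xnor).count("1")          # popcount
    return 2 * matches - n                  # agreements minus disagreements

# Example: a = [+1, -1, +1, +1], w = [+1, +1, -1, +1], packed MSB-first
print(bin_dot(0b1011, 0b1101, 4))  # → 0 (two agreements, two disagreements)
```

On an FPGA, the XNOR and popcount of a wide bit vector map directly onto LUTs and an adder tree, which is what makes the massive spatial parallelism described in the abstract feasible.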

Original language: English (US)
Article number: 3154839
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 14
Issue number: 2
DOI: 10.1145/3154839
State: Published - Jul 1 2018

Keywords

  • Binary neural network
  • Convolutional neural network
  • Deep learning
  • Energy efficiency
  • FPGA
  • Hardware acceleration
  • High-throughput

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering

Cite this

A GPU-outperforming FPGA accelerator architecture for binary convolutional neural networks. / Li, Yixing; Liu, Zichuan; Xu, Kai; Yu, Hao; Ren, Fengbo.

In: ACM Journal on Emerging Technologies in Computing Systems, Vol. 14, No. 2, 3154839, 01.07.2018.

Research output: Contribution to journal › Article

@article{0cfdaac69f6840a2b3b9308d5748287d,
title = "A GPU-outperforming FPGA accelerator architecture for binary convolutional neural networks",
abstract = "FPGA-based hardware accelerators for convolutional neural networks (CNNs) have received attention due to their higher energy efficiency than GPUs. However, it is challenging for FPGA-based solutions to achieve higher throughput than their GPU counterparts. In this article, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized fully mapped FPGA accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline stages. A key advantage of the FPGA accelerator is that its performance is insensitive to data batch size, while the performance of GPU acceleration varies greatly with batch size. Experimental results show that the proposed accelerator architecture for binary CNNs running on a Virtex-7 FPGA is 8.3× faster and 75× more energy-efficient than a Titan X GPU for processing online individual requests in small batch sizes. For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5× higher energy efficiency.",
keywords = "Binary neural network, Convolutional neural network, Deep learning, Energy efficiency, FPGA, Hardware acceleration, High-throughput",
author = "Yixing Li and Zichuan Liu and Kai Xu and Hao Yu and Fengbo Ren",
year = "2018",
month = jul,
day = "1",
doi = "10.1145/3154839",
language = "English (US)",
volume = "14",
journal = "ACM Journal on Emerging Technologies in Computing Systems",
issn = "1550-4832",
publisher = "Association for Computing Machinery (ACM)",
number = "2",

}

TY - JOUR

T1 - A GPU-outperforming FPGA accelerator architecture for binary convolutional neural networks

AU - Li, Yixing

AU - Liu, Zichuan

AU - Xu, Kai

AU - Yu, Hao

AU - Ren, Fengbo

PY - 2018/7/1

Y1 - 2018/7/1

N2 - FPGA-based hardware accelerators for convolutional neural networks (CNNs) have received attention due to their higher energy efficiency than GPUs. However, it is challenging for FPGA-based solutions to achieve higher throughput than their GPU counterparts. In this article, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized fully mapped FPGA accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline stages. A key advantage of the FPGA accelerator is that its performance is insensitive to data batch size, while the performance of GPU acceleration varies greatly with batch size. Experimental results show that the proposed accelerator architecture for binary CNNs running on a Virtex-7 FPGA is 8.3× faster and 75× more energy-efficient than a Titan X GPU for processing online individual requests in small batch sizes. For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5× higher energy efficiency.

AB - FPGA-based hardware accelerators for convolutional neural networks (CNNs) have received attention due to their higher energy efficiency than GPUs. However, it is challenging for FPGA-based solutions to achieve higher throughput than their GPU counterparts. In this article, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized fully mapped FPGA accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline stages. A key advantage of the FPGA accelerator is that its performance is insensitive to data batch size, while the performance of GPU acceleration varies greatly with batch size. Experimental results show that the proposed accelerator architecture for binary CNNs running on a Virtex-7 FPGA is 8.3× faster and 75× more energy-efficient than a Titan X GPU for processing online individual requests in small batch sizes. For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5× higher energy efficiency.

KW - Binary neural network

KW - Convolutional neural network

KW - Deep learning

KW - Energy efficiency

KW - FPGA

KW - Hardware acceleration

KW - High-throughput

UR - http://www.scopus.com/inward/record.url?scp=85053277573&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85053277573&partnerID=8YFLogxK

U2 - 10.1145/3154839

DO - 10.1145/3154839

M3 - Article

AN - SCOPUS:85053277573

VL - 14

JO - ACM Journal on Emerging Technologies in Computing Systems

JF - ACM Journal on Emerging Technologies in Computing Systems

SN - 1550-4832

IS - 2

M1 - 3154839

ER -