A fully onchip binarized convolutional neural network FPGA implementation with accurate inference

Li Yang, Zhezhi He, Deliang Fan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

Deep convolutional neural networks have taken an important role in machine learning and have been widely used in computer vision tasks. However, their enormous model size and massive computation cost have become the main obstacles to deploying such powerful algorithms in low-power, resource-limited embedded systems such as FPGAs. Recent works have shown that binarized neural networks (BNN), which use binarized (i.e., +1 and -1) convolution kernels and binary activation functions, can significantly reduce model size and computation complexity, paving a new road for energy-efficient FPGA implementation. In this work, we first propose a new BNN algorithm, called Parallel-Convolution BNN (i.e., PC-BNN), which replaces the original binary convolution layer in a conventional BNN with two parallel binary convolution layers. PC-BNN achieves ∼86% accuracy on the CIFAR-10 dataset with only a 2.3 Mb parameter size. We then deploy our proposed PC-BNN onto the Xilinx PYNQ Z1 FPGA board, which has only 4.9 Mb of on-chip RAM. Owing to the ultra-small network parameter size, it is feasible to store the entire network parameters in on-chip RAM, which greatly reduces the energy and delay overhead of loading network parameters from off-chip memory. Meanwhile, a new data-streaming pipeline architecture is proposed for the PC-BNN FPGA implementation to further improve throughput. The experimental results show that our PC-BNN-based FPGA implementation achieves 930 frames per second, 387.5 FPS/Watt and 396×10⁻⁴ FPS/LUT, which are among the best throughput and energy-efficiency figures compared to most recent works.
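The abstract's key idea is that when both weights and activations are constrained to {+1, -1}, each convolution dot product reduces to an XNOR followed by a popcount, since the sum of elementwise products equals 2·(number of matching signs) − n. The sketch below is an illustrative NumPy model of this arithmetic identity (the `binarize` and `binary_conv2d` names are my own, not from the paper, and the hardware would of course operate on packed bit vectors rather than int8 arrays):

```python
import numpy as np

def binarize(x):
    # Map real values to {+1, -1} via the sign function (sign(0) -> +1),
    # as in typical BNN formulations.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_conv2d(activations, kernel):
    # Valid-padding 2D convolution where both operands are in {+1, -1}.
    # In hardware each dot product becomes XNOR + popcount:
    #   sum(a * k) = 2 * matches - n,  matches = popcount(xnor(a, k)).
    h, w = activations.shape
    kh, kw = kernel.shape
    n = kh * kw
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = activations[i:i + kh, j:j + kw]
            # XNOR of two +/-1 values is their product being +1, so
            # counting +1 products is exactly the popcount of the XNOR.
            matches = int(np.sum(patch * kernel == 1))
            out[i, j] = 2 * matches - n  # identical to sum(patch * kernel)
    return out
```

For example, `binary_conv2d(binarize(A), binarize(K))` returns the same result as an ordinary valid convolution of the binarized arrays, while needing no multiplications at all; this is the property that lets BNNs replace DSP-heavy MACs with cheap LUT logic on an FPGA.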

Original language: English (US)
Title of host publication: ISLPED 2018 - Proceedings of the 2018 International Symposium on Low Power Electronics and Design
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Print): 9781450357043
DOIs
State: Published - Jul 23 2018
Externally published: Yes
Event: 23rd IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2018 - Bellevue, United States
Duration: Jul 23 2018 - Jul 25 2018

Publication series

Name: Proceedings of the International Symposium on Low Power Electronics and Design
ISSN (Print): 1533-4678

Conference

Conference: 23rd IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2018
Country: United States
City: Bellevue
Period: 7/23/18 - 7/25/18

Keywords

  • Binarized convolutional neural network (BNN)
  • Convolutional neural network (CNN)
  • Field-programmable gate array (FPGA)

ASJC Scopus subject areas

  • Engineering (all)
