EVA2: Exploiting temporal redundancy in live computer vision

Mark Buckler, Philip Bedoukian, Suren Jayasuriya, Adrian Sampson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices. Current designs, however, accelerate generic CNNs; they do not exploit the unique characteristics of real-time vision. We propose to use the temporal redundancy in natural video to avoid unnecessary computation on most frames. A new algorithm, activation motion compensation, detects changes in the visual input and incrementally updates a previously-computed activation. The technique takes inspiration from video compression and applies well-known motion estimation techniques to adapt to visual changes. We use an adaptive key frame rate to control the trade-off between efficiency and vision quality as the input changes. We implement the technique in hardware as an extension to state-of-the-art CNN accelerator designs. The new unit reduces the average energy per frame by 54%, 62%, and 87% for three CNNs with less than 1% loss in vision accuracy.
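
For readers skimming the abstract, a toy sketch of the control flow it describes may help: key frames run the full network and cache an intermediate activation, while subsequent frames estimate motion against the stored key frame, warp the cached activation, and run only the remaining layers, falling back to a new key frame when the scene changes too much. Everything below is an illustrative assumption (a single global-translation search standing in for block-based motion estimation, a made-up stride and threshold, placeholder run_prefix/run_suffix callables); it is not the paper's actual algorithm or hardware design.

import numpy as np

def estimate_motion(key_frame, frame, search=4):
    # Toy exhaustive search for one global translation; the paper applies
    # block-based motion estimation borrowed from video compression.
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(key_frame, (dy, dx), axis=(0, 1))
            err = np.abs(shifted.astype(np.float32) - frame.astype(np.float32)).mean()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err

def warp_activation(activation, motion, stride=8):
    # Translate the cached activation by the pixel-space motion vector,
    # scaled by the prefix network's spatial stride (assumed value).
    dy, dx = motion
    return np.roll(activation, (round(dy / stride), round(dx / stride)), axis=(0, 1))

def run_video(frames, run_prefix, run_suffix, change_threshold=8.0):
    # run_prefix: expensive early CNN layers; run_suffix: the remaining layers.
    key_frame, key_activation = None, None
    for frame in frames:
        if key_frame is not None:
            motion, err = estimate_motion(key_frame, frame)
            if err <= change_threshold:
                # Predicted frame: reuse and warp the cached activation
                # instead of recomputing the prefix layers.
                yield run_suffix(warp_activation(key_activation, motion))
                continue
        # Key frame (first frame, or the scene changed too much): run the
        # full network and cache the intermediate activation for reuse.
        key_frame = frame
        key_activation = run_prefix(frame)
        yield run_suffix(key_activation)

In this sketch, run_prefix and run_suffix would be the two halves of a split CNN; the energy savings reported in the abstract come from skipping the prefix computation on predicted frames.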

Original language: English (US)
Title of host publication: Proceedings - 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, ISCA 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 533-546
Number of pages: 14
ISBN (Electronic): 9781538659847
DOI: 10.1109/ISCA.2018.00051
State: Published - Jul 19 2018
Event: 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018 - Los Angeles, United States
Duration: Jun 2 2018 - Jun 6 2018

Other

Other: 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018
Country: United States
City: Los Angeles
Period: 6/2/18 - 6/6/18

Keywords

  • Application specific integrated circuits
  • Computer architecture
  • Computer vision
  • Convolutional neural networks
  • Hardware acceleration
  • Video compression

ASJC Scopus subject areas

  • Hardware and Architecture

Cite this

Buckler, M., Bedoukian, P., Jayasuriya, S., & Sampson, A. (2018). EVA2: Exploiting temporal redundancy in live computer vision. In Proceedings - 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, ISCA 2018 (pp. 533-546). [8416853] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISCA.2018.00051
