DA3: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning

Li Yang, Adnan Siraj Rakin, Deliang Fan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Nowadays, one practical limitation of deep neural networks (DNNs) is their high degree of specialization to a single task or domain (e.g., one visual domain). This motivates researchers to develop algorithms that can adapt a DNN model to multiple domains sequentially while still performing well on past domains, a problem known as multi-domain learning. Almost all conventional methods focus only on improving accuracy with minimal parameter updates, while ignoring the high computing and memory cost during training, which makes it difficult to deploy multi-domain learning on increasingly common resource-limited edge devices such as mobile phones, IoT devices, and embedded systems. In our study of the multi-domain training process, we observe that the large memory used for activation storage is the bottleneck that largely limits training time and cost on edge devices. To reduce training memory usage while preserving domain adaptation accuracy, we propose Dynamic Additive Attention Adaption (DA3), a novel memory-efficient on-device multi-domain learning method. DA3 learns a novel additive attention adaptor module for each domain while freezing the weights of the pre-trained backbone model. Differentiating from prior works, our proposed DA3 module not only mitigates activation memory buffering to reduce memory usage during training, but also serves as a dynamic gating mechanism that reduces computation cost for fast inference. We validate DA3 on multiple datasets against state-of-the-art methods, showing great improvement in both accuracy and training time. Moreover, we deploy DA3 on the popular NVIDIA Jetson Nano edge GPU, where the measured experimental results show that our proposed DA3 reduces on-device training memory consumption by 5-37× and training time by 2× in comparison to the baseline methods (e.g., standard fine-tuning, Parallel and Series Residual adaptors, Piggyback, and TinyTL).
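
The abstract outlines the core mechanism: a frozen pre-trained backbone augmented with a small trainable additive attention adaptor per domain. The following minimal PyTorch sketch illustrates that general idea only; the class names AdditiveAttentionAdaptor and AdaptedBlock, and the squeeze-and-excitation-style channel gating used here, are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdditiveAttentionAdaptor(nn.Module):
    """Small trainable branch producing a channel-attention-gated
    correction term (squeeze-and-excitation style; an illustrative choice,
    not necessarily the paper's design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        gate = self.fc(self.pool(x).flatten(1)).view(b, c, 1, 1)
        return gate * x  # channel-wise gated copy of the input

class AdaptedBlock(nn.Module):
    """Frozen backbone layer plus a trainable additive adaptor, so only the
    adaptor's small parameter set receives gradients for a new domain."""
    def __init__(self, backbone_layer, channels):
        super().__init__()
        self.backbone = backbone_layer
        for p in self.backbone.parameters():
            p.requires_grad = False  # pre-trained weights stay fixed
        self.adaptor = AdditiveAttentionAdaptor(channels)

    def forward(self, x):
        y = self.backbone(x)
        return y + self.adaptor(y)  # additive combination

# Toy usage: wrap one frozen conv layer and run a forward pass.
block = AdaptedBlock(nn.Conv2d(64, 64, 3, padding=1), channels=64)
out = block(torch.randn(2, 64, 32, 32))  # shape: (2, 64, 32, 32)
```

With this structure, an optimizer built over only the parameters with requires_grad=True updates just the per-domain adaptor, which is what keeps the per-domain parameter and training-memory overhead small.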

Original language: English (US)
Title of host publication: Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
Publisher: IEEE Computer Society
Pages: 2618-2626
Number of pages: 9
ISBN (Electronic): 9781665487399
DOIs
State: Published - 2022
Event: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022 - New Orleans, United States
Duration: Jun 19 2022 - Jun 20 2022

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2022-June
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
Country/Territory: United States
City: New Orleans
Period: 6/19/22 - 6/20/22

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
