TY - GEN
T1 - Online Knowledge Acquisition with the Selective Inherited Model
AU - Du, Xiaocong
AU - Venkataramanaiah, Shreyas Kolala
AU - Li, Zheng
AU - Seo, Jae Sun
AU - Liu, Frank
AU - Cao, Yu
N1 - Funding Information:
This work was supported in part by the Semiconductor Research Corporation (SRC) and DARPA. It was also partially supported by National Science Foundation (NSF) under CCF #1715443.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
N2 - Continual learning, which updates machine learning models according to streaming data, is increasingly needed in dynamic systems. Such a scenario requires both the preservation of previous knowledge and adaptation to new observations, with high computational and memory efficiency at the edge. Previous approaches attempt to learn knowledge class by class from scratch, using either regularization-based or memory replay-based methods. However, they still suffer from a severe accuracy drop, a.k.a. catastrophic forgetting, during this incremental process. Moreover, as the entire model is involved in each update, their computation cost is too expensive for edge computing. In this work, we propose a novel brain-inspired paradigm named acquisitive learning (AL). Different from previous approaches that focus only on model adaptation, AL emphasizes the importance of both knowledge inheritance and acquisition: the model is first pre-trained and selected in the cloud (the selective inherited model) and then adapted to new knowledge (the acquisition). The quality of the inherited model is monitored by the landscape of the loss function, while the acquisition is realized by segmented training. The combination of both steps reduces accuracy drop by >10× on the CIFAR-100 dataset. Furthermore, AL benefits edge computing with a 5× reduction in latency per training image on an FPGA prototype and a 150× reduction in training FLOPs.
AB - Continual learning, which updates machine learning models according to streaming data, is increasingly needed in dynamic systems. Such a scenario requires both the preservation of previous knowledge and adaptation to new observations, with high computational and memory efficiency at the edge. Previous approaches attempt to learn knowledge class by class from scratch, using either regularization-based or memory replay-based methods. However, they still suffer from a severe accuracy drop, a.k.a. catastrophic forgetting, during this incremental process. Moreover, as the entire model is involved in each update, their computation cost is too expensive for edge computing. In this work, we propose a novel brain-inspired paradigm named acquisitive learning (AL). Different from previous approaches that focus only on model adaptation, AL emphasizes the importance of both knowledge inheritance and acquisition: the model is first pre-trained and selected in the cloud (the selective inherited model) and then adapted to new knowledge (the acquisition). The quality of the inherited model is monitored by the landscape of the loss function, while the acquisition is realized by segmented training. The combination of both steps reduces accuracy drop by >10× on the CIFAR-100 dataset. Furthermore, AL benefits edge computing with a 5× reduction in latency per training image on an FPGA prototype and a 150× reduction in training FLOPs.
KW - Continual learning
KW - acquisitive learning
KW - brain inspiration
KW - deep neural networks
KW - knowledge acquisition
KW - knowledge inheritance
KW - model adaptation
UR - http://www.scopus.com/inward/record.url?scp=85093856070&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093856070&partnerID=8YFLogxK
U2 - 10.1109/IJCNN48605.2020.9206904
DO - 10.1109/IJCNN48605.2020.9206904
M3 - Conference contribution
AN - SCOPUS:85093856070
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 International Joint Conference on Neural Networks, IJCNN 2020
Y2 - 19 July 2020 through 24 July 2020
ER -