TY - JOUR
T1 - An Actor-Critic-Based Transfer Learning Framework for Experience-Driven Networking
AU - Xu, Zhiyuan
AU - Yang, Dejun
AU - Tang, Jian
AU - Tang, Yinan
AU - Yuan, Tongtong
AU - Wang, Yanzhi
AU - Xue, Guoliang
N1 - Funding Information:
Manuscript received February 14, 2020; revised September 10, 2020; accepted October 19, 2020; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor K. Chen. Date of publication December 1, 2020; date of current version February 17, 2021. This work was supported in part by the National Science Foundation (NSF) under Grant 1704662 and Grant 1704092. (Corresponding author: Jian Tang.) Zhiyuan Xu and Jian Tang are with the Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY 13244 USA (e-mail: zxu105@syr.edu; jtang02@syr.edu).
Publisher Copyright:
© 1993-2012 IEEE.
PY - 2021/2
Y1 - 2021/2
AB - Experience-driven networking has emerged as a new and highly effective approach for resource allocation in complex communication networks. Deep Reinforcement Learning (DRL) has been shown to be a useful technique for enabling experience-driven networking. In this paper, we focus on a practical and fundamental problem for experience-driven networking: when network configurations are changed, how to train a new DRL agent to adapt to the new environment effectively and quickly. We present an Actor-Critic-based Transfer learning framework for the Traffic Engineering (TE) problem using policy distillation, which we call ACT-TE. ACT-TE effectively and quickly trains a new DRL agent to solve the TE problem in a new network environment, using both old knowledge (i.e., knowledge distilled from the existing agent) and new experience (i.e., newly collected samples). We implement ACT-TE in ns-3 and compare it with commonly-used baselines using packet-level simulations on three representative network topologies: NSFNET, ARPANET, and a random topology. The extensive simulation results show that 1) existing well-trained DRL agents do not work well in new network environments; and 2) ACT-TE significantly outperforms two straightforward methods (training from scratch and fine-tuning an existing DRL agent) as well as several widely-used traditional methods in terms of network utility, throughput, and delay.
KW - Experience-driven networking
KW - deep reinforcement learning
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85097398447&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097398447&partnerID=8YFLogxK
U2 - 10.1109/TNET.2020.3037231
DO - 10.1109/TNET.2020.3037231
M3 - Article
AN - SCOPUS:85097398447
SN - 1063-6692
VL - 29
SP - 360
EP - 371
JO - IEEE/ACM Transactions on Networking
JF - IEEE/ACM Transactions on Networking
IS - 1
M1 - 9274515
ER -