TY - GEN
T1 - Communication and Computation Reduction for Split Learning using Asynchronous Training
AU - Chen, Xing
AU - Li, Jingtao
AU - Chakrabarti, Chaitali
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Split learning is a promising privacy-preserving distributed learning scheme that has a low computation requirement at the edge device but has the disadvantage of high communication overhead between the edge device and the server. To reduce the communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and only sends/receives activations/gradients in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the computations at the client side are reduced due to the reduction in the number of client model updates. Furthermore, the privacy of the proposed communication-reduction-based split learning method is almost the same as that of traditional split learning. Simulation results on VGG11, VGG13 and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the computations in the client are reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5- and 10-client cases, the communication cost reduction is 11.9x and 11.3x on VGG11 for 0.5% loss in accuracy.
AB - Split learning is a promising privacy-preserving distributed learning scheme that has a low computation requirement at the edge device but has the disadvantage of high communication overhead between the edge device and the server. To reduce the communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and only sends/receives activations/gradients in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the computations at the client side are reduced due to the reduction in the number of client model updates. Furthermore, the privacy of the proposed communication-reduction-based split learning method is almost the same as that of traditional split learning. Simulation results on VGG11, VGG13 and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the computations in the client are reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5- and 10-client cases, the communication cost reduction is 11.9x and 11.3x on VGG11 for 0.5% loss in accuracy.
KW - Asynchronous training
KW - Communication reduction
KW - Quantization
KW - Split learning
UR - http://www.scopus.com/inward/record.url?scp=85122895766&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85122895766&partnerID=8YFLogxK
U2 - 10.1109/SiPS52927.2021.00022
DO - 10.1109/SiPS52927.2021.00022
M3 - Conference contribution
AN - SCOPUS:85122895766
T3 - IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation
SP - 76
EP - 81
BT - Proceedings - 2021 IEEE Workshop on Signal Processing Systems, SiPS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Workshop on Signal Processing Systems, SiPS 2021
Y2 - 19 October 2021 through 21 October 2021
ER -