TY - GEN
T1 - Understanding Training Efficiency of Deep Learning Recommendation Models at Scale
AU - Acun, Bilge
AU - Murphy, Matthew
AU - Wang, Xiaodong
AU - Nie, Jade
AU - Wu, Carole-Jean
AU - Hazelwood, Kim
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/2
Y1 - 2021/2
N2 - The use of GPUs has proliferated for machine learning workflows and is now considered mainstream for many deep learning models. Meanwhile, when training state-of-the-art personal recommendation models, which consume the highest number of compute cycles at our large-scale datacenters, the use of GPUs came with various challenges because these models have both compute-intensive and memory-intensive components. The GPU performance and efficiency of these recommendation models are largely affected by model architecture configurations such as dense and sparse features and MLP dimensions. Furthermore, these models often contain large embedding tables that do not fit into limited GPU memory. The goal of this paper is to explain the intricacies of using GPUs for training recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
AB - The use of GPUs has proliferated for machine learning workflows and is now considered mainstream for many deep learning models. Meanwhile, when training state-of-the-art personal recommendation models, which consume the highest number of compute cycles at our large-scale datacenters, the use of GPUs came with various challenges because these models have both compute-intensive and memory-intensive components. The GPU performance and efficiency of these recommendation models are largely affected by model architecture configurations such as dense and sparse features and MLP dimensions. Furthermore, these models often contain large embedding tables that do not fit into limited GPU memory. The goal of this paper is to explain the intricacies of using GPUs for training recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
KW - GPUs
KW - Recommendation models
KW - deep learning
KW - training efficiency
UR - http://www.scopus.com/inward/record.url?scp=85104940818&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104940818&partnerID=8YFLogxK
U2 - 10.1109/HPCA51647.2021.00072
DO - 10.1109/HPCA51647.2021.00072
M3 - Conference contribution
AN - SCOPUS:85104940818
T3 - Proceedings - International Symposium on High-Performance Computer Architecture
SP - 802
EP - 814
BT - Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
PB - IEEE Computer Society
T2 - 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Y2 - 27 February 2021 through 1 March 2021
ER -