Understanding Training Efficiency of Deep Learning Recommendation Models at Scale

Bilge Acun, Matthew Murphy, Xiaodong Wang, Jade Nie, Carole Jean Wu, Kim Hazelwood

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

36 Scopus citations

Abstract

The use of GPUs has proliferated for machine learning workflows and is now considered mainstream for many deep learning models. Meanwhile, when training state-of-the-art personal recommendation models, which consume the highest number of compute cycles at our large-scale datacenters, the use of GPUs comes with various challenges because these models have both compute-intensive and memory-intensive components. The GPU performance and efficiency of these recommendation models are largely determined by model architecture configurations such as dense and sparse features and MLP dimensions. Furthermore, these models often contain large embedding tables that do not fit into limited GPU memory. The goal of this paper is to explain the intricacies of using GPUs for training recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
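To make the abstract's distinction concrete, the sketch below shows the two kinds of components it mentions in a DLRM-style model: memory-intensive embedding-table lookups for sparse (categorical) features and a compute-intensive MLP for dense (continuous) features. All sizes and names here are hypothetical toy values chosen for illustration; production tables can hold billions of rows and exceed a single GPU's memory, which is the problem the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, tiny configuration (illustrative only).
NUM_TABLES = 3        # one embedding table per sparse (categorical) feature
ROWS_PER_TABLE = 100  # vocabulary size of each sparse feature
EMB_DIM = 8           # embedding dimension
DENSE_DIM = 4         # number of dense (continuous) features

# Memory-intensive component: one embedding table per sparse feature.
tables = [rng.standard_normal((ROWS_PER_TABLE, EMB_DIM))
          for _ in range(NUM_TABLES)]

# Compute-intensive component: a (one-layer) bottom MLP over dense features.
W1 = rng.standard_normal((DENSE_DIM, EMB_DIM))

def forward(dense_x, sparse_ids):
    """One forward pass for a single example.

    dense_x:    (DENSE_DIM,) vector of dense features
    sparse_ids: list of NUM_TABLES index lists (multi-hot lookups)
    """
    # Bottom MLP projects dense features into the embedding dimension.
    dense_emb = np.maximum(dense_x @ W1, 0.0)  # ReLU

    # Embedding lookup + sum-pooling for each sparse feature.
    pooled = [tables[t][ids].sum(axis=0) for t, ids in enumerate(sparse_ids)]

    # Feature interaction: concatenate dense and pooled sparse embeddings;
    # a top MLP (omitted here) would map this to a click probability.
    return np.concatenate([dense_emb] + pooled)

out = forward(rng.standard_normal(DENSE_DIM), [[1, 5], [42], [7, 7, 9]])
```

The lookups touch scattered rows of large tables (memory-bandwidth bound), while the matrix multiply is dense arithmetic (compute bound); the paper's efficiency analysis stems from this split.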

Original language: English (US)
Title of host publication: Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Publisher: IEEE Computer Society
Pages: 802-814
Number of pages: 13
ISBN (Electronic): 9780738123370
DOIs
State: Published - Feb 2021
Externally published: Yes
Event: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021 - Virtual, Seoul, Korea, Republic of
Duration: Feb 27 2021 - Mar 1 2021

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
Volume: 2021-February
ISSN (Print): 1530-0897

Conference

Conference: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Country/Territory: Korea, Republic of
City: Virtual, Seoul
Period: 2/27/21 - 3/1/21

Keywords

  • GPUs
  • Recommendation models
  • deep learning
  • training efficiency

ASJC Scopus subject areas

  • Hardware and Architecture
