MLPerf: An industry standard benchmark suite for machine learning performance

Peter Mattson, Hanlin Tang, Gu-Yeon Wei, Carole-Jean Wu, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David Patterson, Guenther Schmuelling

Research output: Contribution to journal › Article

Abstract

In this article, we describe the design choices behind MLPerf, a machine learning performance benchmark that has become an industry standard. The first two rounds of the MLPerf Training benchmark helped drive improvements to software-stack performance and scalability, showing a 1.3× speedup in the top 16-chip results despite higher quality targets and a 5.5× increase in system scale. The first round of MLPerf Inference received over 500 benchmark results from 14 different organizations, showing growing adoption.

Original language: English (US)
Article number: 9001257
Pages (from-to): 8-16
Number of pages: 9
Journal: IEEE Micro
Volume: 40
Issue number: 2
DOIs: https://doi.org/10.1109/MM.2020.2974843
State: Published - Mar 1 2020
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering

Cite this

Mattson, P., Tang, H., Wei, G. Y., Wu, C. J., Reddi, V. J., Cheng, C., Coleman, C., Diamos, G., Kanter, D., Micikevicius, P., Patterson, D., & Schmuelling, G. (2020). MLPerf: An industry standard benchmark suite for machine learning performance. IEEE Micro, 40(2), 8-16. [9001257]. https://doi.org/10.1109/MM.2020.2974843