The Vision Behind MLPerf

Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole Jean Wu, Brian Anderson, Maximilien Breughe, Ramesh Chukka, Cody Coleman, Itay Hubara, Thomas B. Jablin, Pankaj Kanwar, Anton Lokhmotov, Francisco Massa, Gennady Pekhimenko, Ashish Sirasao, Tom St. John, George Yuan, Dilip Sequeira

Research output: Contribution to journal › Article › peer-review

Abstract

Deep Learning has sparked a renaissance in computer systems and architecture. Despite the breakneck pace of innovation, a crucial issue concerns the research and industry communities at large: how to enable neutral and useful performance assessment for ML software frameworks, ML hardware accelerators, and ML systems comprising both the software stack and the hardware. The ML field needs systematic methods for evaluating performance that represent real-world use cases and are useful for making comparisons across different software and hardware implementations. MLPerf answers the call [11]. MLPerf is a machine learning benchmark standard driven by academia and industry (70+ organizations) [2]. Built on the expertise of these organizations, MLPerf establishes a standard benchmark suite with well-defined metrics and benchmarking methodologies that level the playing field for performance measurement across ML inference hardware, software, and services.

Original language: English (US)
Journal: IEEE Micro
DOIs
State: Accepted/In press - 2021
Externally published: Yes

Keywords

  • Benchmarks
  • Inference
  • Machine learning

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering
