TY - GEN
T1 - GEVO-ML
T2 - 2020 Genetic and Evolutionary Computation Conference, GECCO 2020
AU - Liou, Jhe-Yu
AU - Wang, Xiaodong
AU - Forrest, Stephanie
AU - Wu, Carole-Jean
N1 - Funding Information:
We thank F. Esponda, W. Weimer, and E. Schulte for many insights, code, and helpful comments. The authors gratefully acknowledge the partial support of the National Science Foundation (CCF-1618039, SHF-1652132, and CCF-1908633); DARPA (FA8750-15-C-0118); AFRL (FA8750-19-1-0501); and the Santa Fe Institute for Jhe-Yu Liou, Stephanie Forrest, and Carole-Jean Wu at ASU.
Publisher Copyright:
© 2020 ACM.
PY - 2020/7/8
Y1 - 2020/7/8
N2 - Parallel accelerators, such as GPUs, are a key enabler of large-scale Machine Learning (ML) applications. However, programmers often lack detailed knowledge of the underlying architecture and fail to fully leverage their computational power. This paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels. GEVO-ML extends earlier work on GEVO (Gpu optimization using EVOlutionary computation) by focusing directly on ML frameworks, intermediate languages, and target architectures. It retains the multi-objective evolutionary search developed for GEVO, which searches for edits to GPU code compiled to LLVM-IR and improves performance on desired criteria while retaining required functionality. In earlier work, we studied some ML workloads in GPU settings and found that GEVO could improve kernel speeds by factors ranging from 1.7X to 2.9X, even with access to only a small portion of the overall ML framework. This workshop paper examines the limitations and constraints of GEVO for ML workloads and discusses our GEVO-ML design, which we are currently implementing.
AB - Parallel accelerators, such as GPUs, are a key enabler of large-scale Machine Learning (ML) applications. However, programmers often lack detailed knowledge of the underlying architecture and fail to fully leverage their computational power. This paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels. GEVO-ML extends earlier work on GEVO (Gpu optimization using EVOlutionary computation) by focusing directly on ML frameworks, intermediate languages, and target architectures. It retains the multi-objective evolutionary search developed for GEVO, which searches for edits to GPU code compiled to LLVM-IR and improves performance on desired criteria while retaining required functionality. In earlier work, we studied some ML workloads in GPU settings and found that GEVO could improve kernel speeds by factors ranging from 1.7X to 2.9X, even with access to only a small portion of the overall ML framework. This workshop paper examines the limitations and constraints of GEVO for ML workloads and discusses our GEVO-ML design, which we are currently implementing.
KW - Genetic improvement
KW - Machine learning
KW - Multi-objective evolutionary computation
UR - http://www.scopus.com/inward/record.url?scp=85089736702&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089736702&partnerID=8YFLogxK
U2 - 10.1145/3377929.3398139
DO - 10.1145/3377929.3398139
M3 - Conference contribution
AN - SCOPUS:85089736702
T3 - GECCO 2020 Companion - Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion
SP - 1849
EP - 1856
BT - GECCO 2020 Companion - Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion
PB - Association for Computing Machinery, Inc
Y2 - 8 July 2020 through 12 July 2020
ER -