5 Citations (Scopus)

Abstract

CUDA has successfully popularized GPU computing, and GPGPU applications are now used in various embedded systems. The CUDA programming model provides a simple interface to program on GPUs, but tuning GPGPU applications for high performance is still quite challenging. Programmers need to consider numerous architectural details, and small changes in source code, especially in the memory access pattern, can affect performance significantly. This makes it very difficult to optimize CUDA programs. This article presents CuMAPz, a tool to analyze and compare the memory performance of CUDA programs. CuMAPz can help programmers explore different ways of using shared and global memories, and optimize their program for efficient memory behavior. CuMAPz models several memory-performance-related factors: data reuse, global memory access coalescing, global memory latency hiding, shared memory bank conflicts, channel skew, and branch divergence. Experimental results show that CuMAPz can accurately estimate performance with a correlation coefficient of 0.96. By using CuMAPz to explore the memory access design space, we could improve the performance of our benchmarks by 30% over the previous approach [Hong and Kim 2010].
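Two of the factors the abstract lists, shared memory bank conflicts and global memory access coalescing, can be illustrated with a toy static analysis over a warp's access pattern. This is a minimal sketch and not CuMAPz's actual model; the bank count (32, as on Fermi-class GPUs) and the 128-byte transaction segment size are assumptions chosen for the example.

```python
NUM_BANKS = 32        # shared memory banks (assumption: Fermi-class GPU)
WARP_SIZE = 32
SEGMENT_BYTES = 128   # global memory transaction granularity (assumption)

def bank_conflict_degree(word_indices):
    """Max number of threads in a warp hitting the same shared memory bank.
    1 means conflict-free; k means a k-way conflict (k serialized accesses)."""
    counts = {}
    for idx in word_indices:
        bank = idx % NUM_BANKS
        counts[bank] = counts.get(bank, 0) + 1
    return max(counts.values())

def coalesced_transactions(byte_addresses):
    """Number of distinct 128-byte segments a warp's global loads touch.
    1 segment = perfectly coalesced; more segments = more transactions."""
    segments = {addr // SEGMENT_BYTES for addr in byte_addresses}
    return len(segments)

# Unit-stride access: thread t reads 4-byte word t.
unit = list(range(WARP_SIZE))
print(bank_conflict_degree(unit))                       # 1  (conflict-free)
print(coalesced_transactions([4 * i for i in unit]))    # 1  (fully coalesced)

# Stride-32 shared memory access: every thread maps to bank 0.
strided = [32 * t for t in range(WARP_SIZE)]
print(bank_conflict_degree(strided))                    # 32 (32-way conflict)

# Stride-32 global access: each thread lands in its own 128-byte segment.
print(coalesced_transactions([128 * t for t in range(WARP_SIZE)]))  # 32
```

The contrast between the two patterns is the kind of source-level difference the abstract refers to: the same amount of data is moved, but the strided variants serialize into many more hardware transactions.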

Original language: English (US)
Article number: 21
Journal: ACM Transactions on Embedded Computing Systems
Volume: 13
Issue number: 2
DOI: 10.1145/2514641.2514648
State: Published - 2013

Fingerprint

  • Data storage equipment
  • Computer programming
  • Embedded systems
  • Interfaces (computer)
  • Tuning

Keywords

  • CUDA
  • GPGPU
  • Memory performance
  • Performance estimation
  • Program optimization

ASJC Scopus subject areas

  • Hardware and Architecture
  • Software

Cite this

Memory performance estimation of CUDA programs. / Kim, Yooseong; Shrivastava, Aviral.

In: ACM Transactions on Embedded Computing Systems, Vol. 13, No. 2, 21, 2013.

Research output: Contribution to journal › Article

@article{f7176c5a9153421c8a2958ca0700d47a,
title = "Memory performance estimation of CUDA programs",
abstract = "CUDA has successfully popularized GPU computing, and GPGPU applications are now used in various embedded systems. The CUDA programming model provides a simple interface to program on GPUs, but tuning GPGPU applications for high performance is still quite challenging. Programmers need to consider numerous architectural details, and small changes in source code, especially on the memory access pattern, can affect performance significantly. This makes it very difficult to optimize CUDA programs. This article presents CuMAPz, which is a tool to analyze and compare the memory performance of CUDA programs. CuMAPz can help programmers explore different ways of using shared and global memories, and optimize their program for efficient memory behavior. CuMAPz models several memory-performance-related factors: data reuse, global memory access coalescing, global memory latency hiding, shared memory bank conflict, channel skew, and branch divergence. Experimental results show that CuMAPz can accurately estimate performance with correlation coefficient of 0.96. By using CuMAPz to explore the memory access design space, we could improve the performance of our benchmarks by 30{\%} more than the previous approach [Hong and Kim 2010].",
keywords = "CUDA, GPGPU, Memory performance, Performance estimation, Program optimization",
author = "Yooseong Kim and Aviral Shrivastava",
year = "2013",
doi = "10.1145/2514641.2514648",
language = "English (US)",
volume = "13",
journal = "ACM Transactions on Embedded Computing Systems",
issn = "1539-9087",
publisher = "Association for Computing Machinery (ACM)",
number = "2",

}

TY - JOUR

T1 - Memory performance estimation of CUDA programs

AU - Kim, Yooseong

AU - Shrivastava, Aviral

PY - 2013

Y1 - 2013

N2 - CUDA has successfully popularized GPU computing, and GPGPU applications are now used in various embedded systems. The CUDA programming model provides a simple interface to program on GPUs, but tuning GPGPU applications for high performance is still quite challenging. Programmers need to consider numerous architectural details, and small changes in source code, especially on the memory access pattern, can affect performance significantly. This makes it very difficult to optimize CUDA programs. This article presents CuMAPz, which is a tool to analyze and compare the memory performance of CUDA programs. CuMAPz can help programmers explore different ways of using shared and global memories, and optimize their program for efficient memory behavior. CuMAPz models several memory-performance-related factors: data reuse, global memory access coalescing, global memory latency hiding, shared memory bank conflict, channel skew, and branch divergence. Experimental results show that CuMAPz can accurately estimate performance with correlation coefficient of 0.96. By using CuMAPz to explore the memory access design space, we could improve the performance of our benchmarks by 30% more than the previous approach [Hong and Kim 2010].

AB - CUDA has successfully popularized GPU computing, and GPGPU applications are now used in various embedded systems. The CUDA programming model provides a simple interface to program on GPUs, but tuning GPGPU applications for high performance is still quite challenging. Programmers need to consider numerous architectural details, and small changes in source code, especially on the memory access pattern, can affect performance significantly. This makes it very difficult to optimize CUDA programs. This article presents CuMAPz, which is a tool to analyze and compare the memory performance of CUDA programs. CuMAPz can help programmers explore different ways of using shared and global memories, and optimize their program for efficient memory behavior. CuMAPz models several memory-performance-related factors: data reuse, global memory access coalescing, global memory latency hiding, shared memory bank conflict, channel skew, and branch divergence. Experimental results show that CuMAPz can accurately estimate performance with correlation coefficient of 0.96. By using CuMAPz to explore the memory access design space, we could improve the performance of our benchmarks by 30% more than the previous approach [Hong and Kim 2010].

KW - CUDA

KW - GPGPU

KW - Memory performance

KW - Performance estimation

KW - Program optimization

UR - http://www.scopus.com/inward/record.url?scp=84885650614&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84885650614&partnerID=8YFLogxK

U2 - 10.1145/2514641.2514648

DO - 10.1145/2514641.2514648

M3 - Article

VL - 13

JO - ACM Transactions on Embedded Computing Systems

JF - ACM Transactions on Embedded Computing Systems

SN - 1539-9087

IS - 2

M1 - 21

ER -