CAWA: Coordinated warp scheduling and cache prioritization for critical warp acceleration of GPGPU workloads

Shin Ying Lee, Akhil Arunkumar, Carole-Jean Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

51 Citations (Scopus)

Abstract

The ubiquity of graphics processing unit (GPU) architectures has made them efficient alternatives to chip-multiprocessors for parallel workloads. GPUs achieve superior performance through massive multi-threading and fast context-switching, which hide pipeline stalls and memory access latency. However, recent characterization results have shown that general-purpose GPU (GPGPU) applications commonly encounter long stall latencies that cannot be easily hidden even with a large number of concurrent threads/warps. This creates significant execution-time disparity between parallel warps, hurting the overall performance of GPUs - the warp criticality problem. To tackle the warp criticality problem, we propose a coordinated solution, criticality-aware warp acceleration (CAWA), that efficiently manages compute and memory resources to accelerate critical warp execution. Specifically, we design (1) an instruction-based and stall-based criticality predictor to identify the critical warp in a thread-block, (2) a criticality-aware warp scheduler that preferentially allocates more time resources to the critical warp, and (3) a criticality-aware cache reuse predictor that assists critical warp acceleration by retaining latency-critical and useful cache blocks in the L1 data cache. CAWA aims to eliminate this execution-time disparity and thereby improve resource utilization for GPGPU workloads. Our evaluation shows that, under the proposed coordinated scheduler and cache prioritization scheme, the performance of GPGPU workloads improves by 23%, whereas the state-of-the-art GTO and 2-level schedulers improve performance by 16% and -2%, respectively.

Original language: English (US)
Title of host publication: Proceedings - International Symposium on Computer Architecture
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 515-527
Number of pages: 13
Volume: 13-17-June-2015
ISBN (Print): 9781450334020
DOI: 10.1145/2749469.2750418
State: Published - Jun 13 2015
Event: 42nd Annual International Symposium on Computer Architecture, ISCA 2015 - Portland, United States
Duration: Jun 13 2015 - Jun 17 2015



ASJC Scopus subject areas

  • Hardware and Architecture

Cite this

Lee, S. Y., Arunkumar, A., & Wu, C-J. (2015). CAWA: Coordinated warp scheduling and cache prioritization for critical warp acceleration of GPGPU workloads. In Proceedings - International Symposium on Computer Architecture (Vol. 13-17-June-2015, pp. 515-527). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/2749469.2750418

