An infrastructure for memory management on LLM multi-core architectures

Research output: Patent

Abstract

Limited Local Memory (LLM) multi-core architectures replace caches with scratchpad memories (SPMs). As a result, LLM architectures consume much less power than cache-based multi-core architectures. However, SPMs lack automatic memory management, which presents a challenge to programmers because heap data sizes can be variable and data dependent. The heap is a region of memory that is not managed automatically by the hardware: allocating and freeing heap data require explicit calls to specific functions of the programming language. Managing the heap data of tasks executing on the cores of an LLM multi-core system has therefore become an important issue.

Researchers at Arizona State University have developed a fully automatic and efficient scheme for heap data management. The scheme comprises two components: (1) an optimized runtime library and (2) a modified compiler. The compiler performs code transformations that automate heap management, support multi-level pointers, and use improved data structures to manage unlimited heap data efficiently, unburdening the programmer from inserting API functions by hand. Experimental results on several benchmarks show an average performance improvement of 43% over previous approaches.

Potential Applications

Chip manufacturers: reduces the programming overhead of adopting scratchpad-memory-based multi-core processors.
Application developers: motivates efficient use of the many multi-core processors available on the market.

Benefits and Advantages

Automated: the scheme is fully automatic and efficiently manages heap data on LLM multi-core architectures.
Unlimited heap data in local memory: the compiler can request dynamic memory allocation in global memory, so the heap data accessible from the limited local memory is unbounded.
Lower overhead: coarser management granularity lowers management overhead and reduces the number of Direct Memory Access (DMA) transfers.
Improved performance: an average improvement of 43% across all benchmarks.
Improved generality: handles multi-level heap pointers and distinguishes them from stack pointers.

For more information about the inventor and his research, please see Dr. Aviral Shrivastava's directory webpage.
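
To make the kind of code transformation described above concrete, the following is a minimal sketch in C of how a runtime library and compiler rewrite might cooperate: heap objects are allocated in global memory, and a small table of fixed-size blocks in the local scratchpad caches the blocks currently in use, fetched and written back by DMA at a coarse granularity. Every name and parameter here (llm_malloc, g2l, dma_get, dma_put, SLOT_SIZE, NUM_SLOTS, the direct-mapped replacement policy) is a hypothetical illustration under these assumptions, not the API or algorithm from the patent.

/*
 * Hypothetical sketch of a software-managed heap for an LLM core.
 * Heap objects live in shared global memory; a small, direct-mapped
 * table of fixed-size blocks in the local scratchpad holds the blocks
 * currently being accessed, moved in and out by DMA.
 * All names (llm_malloc, g2l, dma_get, dma_put) and sizes are
 * illustrative assumptions, not the patented interface.
 */
#include <stddef.h>
#include <stdint.h>

#define SLOT_SIZE 64     /* coarse management granularity: one DMA per block */
#define NUM_SLOTS 16     /* scratchpad budget reserved for heap blocks       */

typedef uint32_t gaddr_t;                        /* address in global memory */

static char    slot_buf[NUM_SLOTS][SLOT_SIZE];   /* local (SPM) block copies      */
static gaddr_t slot_tag[NUM_SLOTS];              /* global base held by each slot */
static int     slot_dirty[NUM_SLOTS];

/* DMA primitives would be supplied by the platform runtime; stubs here. */
static void dma_get(void *local, gaddr_t global, size_t n) { (void)local; (void)global; (void)n; }
static void dma_put(gaddr_t global, const void *local, size_t n) { (void)global; (void)local; (void)n; }

/* Bump allocator in global memory, rounded up to whole blocks so that
 * small objects never straddle a block boundary (larger objects are not
 * handled in this sketch). */
static gaddr_t global_malloc(size_t n)
{
    static gaddr_t next = 0x1000;
    gaddr_t a = next;
    next += (gaddr_t)(((n + SLOT_SIZE - 1) / SLOT_SIZE) * SLOT_SIZE);
    return a;
}

/* Allocation is redirected to global memory, so total heap size is not
 * limited by the scratchpad; only a handle (global address) is returned. */
gaddr_t llm_malloc(size_t n) { return global_malloc(n); }

/* g2l: translate a global heap address into a usable local pointer,
 * fetching the enclosing block by DMA if it is not already resident and
 * writing back the evicted block if it was modified. */
void *g2l(gaddr_t g)
{
    gaddr_t base = g - (g % SLOT_SIZE);
    int     s    = (int)((base / SLOT_SIZE) % NUM_SLOTS);   /* direct-mapped slot */

    if (slot_tag[s] != base) {
        if (slot_dirty[s])
            dma_put(slot_tag[s], slot_buf[s], SLOT_SIZE);   /* write back victim  */
        dma_get(slot_buf[s], base, SLOT_SIZE);              /* fetch needed block */
        slot_tag[s]   = base;
        slot_dirty[s] = 1;   /* conservatively assume the access may write */
    }
    return slot_buf[s] + (g % SLOT_SIZE);
}

With such a library, the modified compiler could mechanically rewrite a source-level access such as p->value = 42 (where p was returned by malloc) into ((node *)g2l(p))->value = 42, with p now holding a global-memory handle returned by llm_malloc, so the programmer never inserts the management calls by hand; multi-level pointers would simply apply the translation at each level of dereference.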
Original language: English (US)
State: Published - Oct 25, 2012

Fingerprint

Data storage equipment
Storage allocation (computer)
Computer programming
Application programming interfaces (API)
Computer programming languages
Information management
Program processors
Data structures
Electric power utilization
Automation

Cite this

@misc{200ea4f15c4a4170b5ded96734651115,
title = "An infrastructure for memory management on LLM multi-core architectures",
author = "Aviral Shrivastava",
year = "2012",
month = "10",
day = "25",
language = "English (US)",
type = "Patent",

}
