Driving cache replacement with ML-based LeCaR

Giuseppe Vietri, Liana V. Rodriguez, Wendy A. Martinez, Steven Lyons, Jason Liu, Raju Rangaswami, Ming Zhao, Giri Narasimhan

Research output: Contribution to conference › Paper › peer-review

62 Scopus citations

Abstract

Can machine learning (ML) be used to improve on existing cache replacement strategies? We propose a general framework called LeCaR that uses the ML technique of regret minimization to answer the question in the affirmative. We show that, using only two fundamental eviction policies, LRU and LFU, the LeCaR framework outperforms ARC by more than 18x when the cache size is small relative to the size of the working set.
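The sketch below illustrates the high-level idea stated in the abstract: treat LRU and LFU as two "experts", sample the eviction policy from a weight distribution over them, and apply a multiplicative (regret-minimization) penalty to whichever expert's past eviction caused a miss. It is not the authors' implementation; the class name, learning rate, history handling, and exact weight update are illustrative assumptions, and the published algorithm differs in detail (for example, it uses a discounted regret update).

```python
# Illustrative LeCaR-style cache: LRU and LFU experts combined by
# multiplicative-weights regret minimization. Parameters are assumptions.
import random
from collections import OrderedDict


class LeCaRSketch:
    def __init__(self, capacity, learning_rate=0.45, seed=0):
        self.capacity = capacity
        self.lr = learning_rate
        self.rng = random.Random(seed)
        self.cache = {}                    # key -> access count (for LFU)
        self.recency = OrderedDict()       # keys in LRU order (oldest first)
        self.hist = {"LRU": OrderedDict(), "LFU": OrderedDict()}  # eviction histories
        self.w = {"LRU": 0.5, "LFU": 0.5}  # expert weights, kept normalized

    def _penalize(self, policy):
        # Regret step: the expert that evicted a page we now need again
        # loses weight multiplicatively; weights are renormalized.
        self.w[policy] *= (1.0 - self.lr)
        total = self.w["LRU"] + self.w["LFU"]
        self.w = {p: v / total for p, v in self.w.items()}

    def _victim(self, policy):
        if policy == "LRU":
            return next(iter(self.recency))                  # least recently used
        return min(self.cache, key=lambda k: self.cache[k])  # least frequently used

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.cache:
            self.cache[key] += 1
            self.recency.move_to_end(key)
            return True

        # Miss: if a past eviction by an expert caused it, penalize that expert.
        for policy in ("LRU", "LFU"):
            if key in self.hist[policy]:
                del self.hist[policy][key]
                self._penalize(policy)

        if len(self.cache) >= self.capacity:
            # Sample the eviction policy according to the current weights.
            policy = self.rng.choices(["LRU", "LFU"],
                                      weights=[self.w["LRU"], self.w["LFU"]])[0]
            victim = self._victim(policy)
            del self.cache[victim]
            del self.recency[victim]
            self.hist[policy][victim] = None
            if len(self.hist[policy]) > self.capacity:       # bounded history
                self.hist[policy].popitem(last=False)

        self.cache[key] = 1
        self.recency[key] = None
        return False


if __name__ == "__main__":
    # Toy trace: counting hits over a request stream gives the hit ratio
    # used to compare replacement policies.
    cache = LeCaRSketch(capacity=4)
    trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    hits = sum(cache.access(k) for k in trace)
    print(f"hits: {hits}/{len(trace)}")
```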

Original language: English (US)
State: Published - 2018
Event: 10th USENIX Workshop on Hot Topics in Storage and File Systems, HotStorage 2018, co-located with USENIX ATC 2018 - Boston, United States
Duration: Jul 9, 2018 – Jul 10, 2018

Conference

Conference: 10th USENIX Workshop on Hot Topics in Storage and File Systems, HotStorage 2018, co-located with USENIX ATC 2018
Country/Territory: United States
City: Boston
Period: 7/9/18 – 7/10/18

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Software
  • Computer Networks and Communications
