K-Nearest Neighbor Hardware Accelerator Using In-Memory Computing SRAM

Jyotishman Saikia, Shihui Yin, Zhewei Jiang, Mingoo Seok, Jae Sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The k-nearest neighbor (kNN) algorithm is one of the most popular in machine learning owing to its simplicity, versatility, and viability of implementation without any assumptions about the data. For large-scale data, however, it incurs heavy memory access and high computational complexity, resulting in long latency and high power consumption. In this paper, we present a kNN hardware accelerator in 65nm CMOS. The accelerator combines an in-memory computing SRAM, recently developed for binarized deep neural networks, with digital hardware that performs top-k sorting. We designed and simulated the kNN accelerator, which processes up to 17.9 million query vectors per second while consuming 11.8 mW, demonstrating a >4.8X energy improvement over prior works.
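As a point of reference for the abstract, the sketch below shows the algorithm in plain software: for binarized data, the distance computation reduces to XOR plus popcount (the kind of bitwise operation an in-memory computing SRAM can evaluate in place), followed by top-k selection over the resulting distances. This is an illustrative Python sketch of the algorithm only, not the paper's hardware design; the function names and toy data are hypothetical.

```python
import heapq

def hamming_distance(a: int, b: int) -> int:
    # XOR then popcount: the bitwise form of the distance metric
    # used when the stored vectors are binarized.
    return bin(a ^ b).count("1")

def knn_classify(query: int, database, k: int) -> str:
    # Top-k selection: keep the k database entries with the smallest
    # Hamming distance to the query, then take the majority label.
    nearest = heapq.nsmallest(
        k, database, key=lambda entry: hamming_distance(query, entry[0])
    )
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy database of 8-bit binarized vectors with class labels
# (values made up for illustration).
db = [
    (0b11110000, "A"),
    (0b11100001, "A"),
    (0b00001111, "B"),
    (0b00011110, "B"),
]
print(knn_classify(0b11110001, db, k=3))  # -> A
```

The accelerator described in the paper replaces the distance loop with massively parallel in-SRAM evaluation and the `heapq` selection with dedicated top-k sorting hardware; the software above only fixes the functional behavior being accelerated.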

Original language: English (US)
Title of host publication: International Symposium on Low Power Electronics and Design, ISLPED 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728129549
DOI: 10.1109/ISLPED.2019.8824822
State: Published - Jul 2019
Event: 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019 - Lausanne, Switzerland
Duration: Jul 29, 2019 – Jul 31, 2019

Publication series

Name: Proceedings of the International Symposium on Low Power Electronics and Design
Volume: 2019-July
ISSN (Print): 1533-4678

Conference

Conference: 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019
Country: Switzerland
City: Lausanne
Period: 7/29/19 – 7/31/19


Keywords

  • content addressable memory
  • hardware accelerator
  • in-memory computing
  • k-nearest neighbor

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Saikia, J., Yin, S., Jiang, Z., Seok, M., & Seo, J. S. (2019). K-Nearest Neighbor Hardware Accelerator Using In-Memory Computing SRAM. In International Symposium on Low Power Electronics and Design, ISLPED 2019 [8824822] (Proceedings of the International Symposium on Low Power Electronics and Design; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISLPED.2019.8824822
