Preventing neural network model exfiltration in machine learning hardware accelerators

Mihailo Isakov, Lake Bu, Hai Cheng, Michel A. Kinsy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Machine learning (ML) models are often trained using large amounts of computing power on private datasets that are expensive to collect or highly sensitive. The models are commonly exposed either through online APIs or embedded in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side-channels, or crafted input attacks, and we propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
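
The abstract does not detail the obfuscation scheme, so the following is a minimal illustrative sketch, assuming 8-bit quantized weights that are XOR-masked with a key-derived stream before leaving the accelerator for off-chip memory; a memory probe then observes only masked values, and the mask is reapplied on-chip at inference time. All identifiers below (keystream, obfuscate, the device-secret key) are hypothetical and not from the paper.

    import hashlib
    import numpy as np

    def keystream(key: bytes, n_bytes: int) -> np.ndarray:
        """Expand a short on-chip secret key into n_bytes of pseudo-random mask."""
        out = bytearray()
        counter = 0
        while len(out) < n_bytes:
            # Counter-mode expansion: hash (key || counter) until enough mask bytes exist.
            out += hashlib.sha256(key + counter.to_bytes(4, "little")).digest()
            counter += 1
        return np.frombuffer(bytes(out[:n_bytes]), dtype=np.uint8)

    def obfuscate(weights: np.ndarray, key: bytes) -> np.ndarray:
        """XOR-mask quantized (uint8) weights before writing them to DRAM."""
        mask = keystream(key, weights.size).reshape(weights.shape)
        return weights ^ mask

    deobfuscate = obfuscate  # XOR masking is its own inverse

    # Example: mask and recover an 8-bit quantized weight tensor.
    w = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    w_masked = obfuscate(w, key=b"device-secret")
    assert np.array_equal(deobfuscate(w_masked, b"device-secret"), w)

The appeal of masking of this kind over full encryption is that unmasking costs a single XOR per weight fetch, far cheaper in area and power than routing every memory access through a block-cipher datapath, which matches the abstract's framing of obfuscation as a power-efficient alternative to encryption.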

Original language: English (US)
Title of host publication: Proceedings of the 2018 Asian Hardware Oriented Security and Trust Symposium, AsianHOST 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 62-67
Number of pages: 6
ISBN (Electronic): 9781538674710
DOIs
State: Published - Jan 9 2019
Externally published: Yes
Event: 2018 Asian Hardware Oriented Security and Trust Symposium, AsianHOST 2018 - Hong Kong, Hong Kong
Duration: Dec 17 2018 - Dec 18 2018

Publication series

Name: Proceedings of the 2018 Asian Hardware Oriented Security and Trust Symposium, AsianHOST 2018

Conference

Conference: 2018 Asian Hardware Oriented Security and Trust Symposium, AsianHOST 2018
Country/Territory: Hong Kong
City: Hong Kong
Period: 12/17/18 - 12/18/18

Keywords

  • Neural network
  • hardware security
  • inference
  • memory probing
  • model exfiltration
  • model theft
  • side-channels

ASJC Scopus subject areas

  • Hardware and Architecture
  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications
