MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

Sailik Sengupta, Tathagata Chakraborti, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for ‘boosting’ the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image at test time, a constituent network is randomly selected based on a mixed policy. To obtain this policy, we formulate the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for various datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than can be afforded by these defense mechanisms alone. Lastly, to quantify the increase in robustness of an ensemble-based classification system when we use MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
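The test-time mechanism the abstract describes, randomly selecting one constituent network according to a mixed policy, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the ensemble members, the policy values, and the function name are all hypothetical stand-ins, and the real mixed strategy would come from solving the Bayesian Stackelberg Game.

```python
import random

def mtd_classify(networks, mixed_policy, image):
    """Sample a constituent network per the defender's mixed policy,
    then classify the input with it. An attacker cannot know in advance
    which DNN will handle a given query."""
    net = random.choices(networks, weights=mixed_policy, k=1)[0]
    return net(image)

# Toy stand-ins for trained DNNs (each maps an input to a label).
ensemble = [lambda x: "cat", lambda x: "cat", lambda x: "dog"]

# Equilibrium mixed strategy; in the paper this is computed from the BSG.
policy = [0.5, 0.3, 0.2]

label = mtd_classify(ensemble, policy, image=None)
```

Because the selection is randomized per query, an adversarial example crafted against any single fixed network only succeeds with the probability that that network (or one it transfers to) is drawn.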

Original language: English (US)
Title of host publication: Decision and Game Theory for Security - 10th International Conference, GameSec 2019, Proceedings
Editors: Tansu Alpcan, Yevgeniy Vorobeychik, John S. Baras, György Dán
Publisher: Springer
Pages: 479-491
Number of pages: 13
ISBN (Print): 9783030324292
DOIs: 10.1007/978-3-030-32430-8_28
State: Published - Jan 1 2019
Event: 10th International Conference on Decision and Game Theory for Security, GameSec 2019 - Stockholm, Sweden
Duration: Oct 30 2019 - Nov 1 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11836 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 10th International Conference on Decision and Game Theory for Security, GameSec 2019
Country: Sweden
City: Stockholm
Period: 10/30/19 - 11/1/19

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Sengupta, S., Chakraborti, T., & Kambhampati, S. (2019). MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense. In T. Alpcan, Y. Vorobeychik, J. S. Baras, & G. Dán (Eds.), Decision and Game Theory for Security - 10th International Conference, GameSec 2019, Proceedings (pp. 479-491). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11836 LNCS). Springer. https://doi.org/10.1007/978-3-030-32430-8_28
