Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski

Research output: Contribution to journal › Article

Abstract

Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
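The abstract refers to data poisoning, in which an adversary manipulates training instances to degrade or steer a model's predictions. As a minimal illustration only (not the framework described in the paper), the sketch below shows a label-flipping poisoning attack against a binary logistic-regression classifier; the synthetic dataset, the 10% poisoning rate, and the scikit-learn model are all illustrative assumptions.

```python
# Minimal sketch of a label-flipping data poisoning attack on a binary
# classifier. Illustrative example only, not the paper's framework;
# dataset, model, and poisoning rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean test accuracy:   ", clean_model.score(X_test, y_test))

# Poisoning: the attacker flips the labels of a small fraction of the
# training instances before the model is (re)trained.
rng = np.random.default_rng(0)
poison_rate = 0.10  # fraction of training labels the attacker controls
flip_idx = rng.choice(len(y_train), size=int(poison_rate * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Retrain on poisoned labels and compare accuracy on the clean test set.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

Comparing the two accuracy values gives only a coarse, model-level summary of the vulnerability; the framework described in the abstract complements such numbers with visual analysis at the level of models, data instances, features, and local structures.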

Original language: English (US)
Article number: 8812988
Pages (from-to): 1075-1085
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 26
Issue number: 1
DOI: 10.1109/TVCG.2019.2934631
State: Published - Jan 2020

Keywords

  • Adversarial machine learning
  • data poisoning
  • visual analytics

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design

Cite this

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics. / Ma, Yuxin; Xie, Tiankai; Li, Jundong; Maciejewski, Ross.

In: IEEE Transactions on Visualization and Computer Graphics, Vol. 26, No. 1, 8812988, 01.2020, p. 1075-1085.

@article{10e9c6b5791a445ead3d433d1fd9ebaa,
title = "Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics",
abstract = "Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.",
keywords = "Adversarial machine learning, data poisoning, visual analytics",
author = "Yuxin Ma and Tiankai Xie and Jundong Li and Ross Maciejewski",
year = "2020",
month = jan,
doi = "10.1109/TVCG.2019.2934631",
language = "English (US)",
volume = "26",
pages = "1075--1085",
journal = "IEEE Transactions on Visualization and Computer Graphics",
issn = "1077-2626",
publisher = "IEEE Computer Society",
number = "1",
}

TY - JOUR
T1 - Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
AU - Ma, Yuxin
AU - Xie, Tiankai
AU - Li, Jundong
AU - Maciejewski, Ross
PY - 2020/1
Y1 - 2020/1
AB - Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
KW - Adversarial machine learning
KW - data poisoning
KW - visual analytics
UR - http://www.scopus.com/inward/record.url?scp=85075629564&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075629564&partnerID=8YFLogxK
U2 - 10.1109/TVCG.2019.2934631
DO - 10.1109/TVCG.2019.2934631
M3 - Article
C2 - 31478859
AN - SCOPUS:85075629564
VL - 26
SP - 1075
EP - 1085
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
SN - 1077-2626
IS - 1
M1 - 8812988
ER -