An Analytical Framework for Security-Tuning of Artificial Intelligence Applications under Attack

Koosha Sadeghi, Ayan Banerjee, Sandeep Gupta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can cause the applications to fail. The parameters of ML algorithms must be tuned separately for each dataset to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time consuming and does not guarantee an optimal result. To address this, some research suggests analytical approaches that tune ML parameters for maximum accuracy. However, none of these works considers ML performance under attack in the tuning process. This paper proposes an analytical framework for tuning ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters through a novel objective function that takes into account the test results of both ML accuracy and security against attacks. To validate the framework, an AI application is implemented that recognizes whether a subject's eyes are open or closed by applying the k-Nearest Neighbors (kNN) algorithm to the subject's Electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, the two main parameters of kNN, are chosen for tuning. An input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application, and an exhaustive search is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal kNN configuration for the EEG dataset, yielding 83.75% classification accuracy and reducing the attack success rate to 5.21%.
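The tuning loop the abstract describes is straightforward to sketch. The following is a minimal illustration, not the authors' implementation: it assumes synthetic stand-in data (the paper uses EEG recordings), an additive Gaussian noise attack as the input perturbation, and an equal-weight objective of accuracy minus attack success rate, since the paper's exact objective function is not given here.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in data for the EEG eyes-open/eyes-closed task (hypothetical).
    X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def attack_success_rate(clf, X, y, eps=0.5, seed=0):
        # Fraction of correctly classified samples whose label flips under
        # additive Gaussian noise -- a crude stand-in for the paper's input
        # data perturbation attack.
        rng = np.random.default_rng(seed)
        correct = clf.predict(X) == y
        adv_pred = clf.predict(X + eps * rng.standard_normal(X.shape))
        return (correct & (adv_pred != y)).sum() / max(correct.sum(), 1)

    # Exhaustive search over the two kNN parameters, as in the paper.
    best = None
    for k in range(1, 51, 2):  # candidate neighbor counts
        for metric in ("euclidean", "manhattan", "cosine"):
            clf = KNeighborsClassifier(n_neighbors=k, metric=metric)
            clf.fit(X_train, y_train)
            acc = clf.score(X_test, y_test)             # clean-data accuracy
            asr = attack_success_rate(clf, X_test, y_test)
            score = acc - asr                           # assumed equal-weight objective
            if best is None or score > best[0]:
                best = (score, k, metric, acc, asr)

    print("best: k=%d, metric=%s, accuracy=%.3f, attack success=%.3f" % best[1:])

Because kNN has no training phase beyond storing the data, the full enumeration over (k, metric) pairs is cheap, which is why an exhaustive search is a practical choice for this optimization problem.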

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 111-118
Number of pages: 8
ISBN (Electronic): 9781728104928
DOI: 10.1109/AITest.2019.00012
State: Published - May 17 2019
Event: 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019 - Newark, United States
Duration: Apr 4 2019 - Apr 9 2019

Publication series

Name: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019

Conference

Conference: 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019
Country: United States
City: Newark
Period: 4/4/19 - 4/9/19


Keywords

  • Artificial intelligence
  • Machine learning
  • Optimization
  • Parameters tuning
  • Perturbation attack
  • Security

ASJC Scopus subject areas

  • Artificial Intelligence
  • Safety, Risk, Reliability and Quality

Cite this

Sadeghi, K., Banerjee, A., & Gupta, S. (2019). An Analytical Framework for Security-Tuning of Artificial Intelligence Applications under Attack. In Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019 (pp. 111-118). [8718231] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/AITest.2019.00012
