Performance comparison of machine learning platforms

Asim Roy, Shiban Qureshi, Kartikeya Pande, Divitha Nair, Kartik Gairola, Pooja Jain, Suraj Singh, Kirti Sharma, Akshay Jagadale, Yi Yang Lin, Shashank Sharma, Ramya Gotety, Yuexin Zhang, Ji Tang, Tejas Mehta, Hemanth Sindhanuru, Nonso Okafor, Santak Das, Chidambara N. Gopal, Srinivasa B. Rudraraju & Avinash V. Kakarlapudi

Research output: Contribution to journal › Article

Abstract

In this paper, we present a method for comparing and evaluating different collections of machine learning algorithms on the basis of a given performance measure (e.g., accuracy, area under the curve (AUC), F-score). Such a method can be used to compare standard machine learning platforms such as SAS, IBM SPSS, and Microsoft Azure ML. A recent trend in automation of machine learning is to exercise a collection of machine learning algorithms on a particular problem and then use the best performing algorithm. Thus, the proposed method can also be used to compare and evaluate different collections of algorithms for automation on a certain problem type and find the best collection. In the study reported here, we applied the method to compare six machine learning platforms – R, Python, SAS, IBM SPSS Modeler, Microsoft Azure ML, and Apache Spark ML. We compared the platforms on the basis of predictive performance on classification problems because a significant majority of the problems in machine learning are of that type. The general question that we addressed is the following: Are there platforms that are superior to others on some particular performance measure? For each platform, we used a collection of six classification algorithms from the following six families of algorithms – support vector machines, multilayer perceptrons, random forest (or variant), decision trees/gradient boosted trees, Naive Bayes/Bayesian networks, and logistic regression. We compared their performance on the basis of classification accuracy, F-score, and AUC. We used F-score and AUC measures to compare platforms on two-class problems only. For testing the platforms, we used a mix of data sets from (1) the University of California, Irvine (UCI) library, (2) the Kaggle competition library, and (3) high-dimensional gene expression problems. We performed some hyperparameter tuning on algorithms wherever possible.
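The sketch below illustrates the kind of comparison the abstract describes: one classifier from each of the six algorithm families is evaluated on a two-class data set using accuracy, F-score, and AUC. It is a minimal illustration only, not the authors' code; the scikit-learn estimators and the breast-cancer data set are assumptions chosen for convenience, whereas the paper evaluates full platforms (R, Python, SAS, IBM SPSS Modeler, Microsoft Azure ML, and Apache Spark ML) on UCI, Kaggle, and gene-expression data.

```python
# Minimal sketch (not the authors' code) of a per-family comparison on one
# two-class problem, using the three measures named in the abstract.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Two-class data set chosen for illustration so that F-score and AUC apply.
X, y = load_breast_cancer(return_X_y=True)

# One representative per family: SVM, multilayer perceptron, random forest,
# gradient boosted trees, Naive Bayes, and logistic regression.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Random forest": RandomForestClassifier(n_estimators=200),
    "Gradient boosted trees": GradientBoostingClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)),
}

# Cross-validated accuracy, F-score, and AUC for each model; in the paper's
# method, the best result across a platform's collection of algorithms
# stands in for that platform on the given measure.
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "f1", "roc_auc"])
    print(f"{name:24s} "
          f"acc={scores['test_accuracy'].mean():.3f} "
          f"F1={scores['test_f1'].mean():.3f} "
          f"AUC={scores['test_roc_auc'].mean():.3f}")
```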

Original language: English (US)
Pages (from-to): 207-225
Number of pages: 19
Journal: INFORMS Journal on Computing
Volume: 31
Issue number: 2
ISSN: 1091-9856
DOI: https://doi.org/10.1287/ijoc.2018.0825
State: Published - Jan 1 2019

Fingerprint

Machine learning
Learning algorithms
Automation
Bayesian networks
Multilayer neural networks
Decision trees
Apache Spark
Gene expression
Support vector machines
Logistic regression
Hyperparameter tuning
Testing
Performance measures
Microsoft Azure ML

Keywords

  • Classification algorithms
  • Comparison of algorithms
  • Comparison of platforms
  • Machine learning platforms

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Computer Science Applications
  • Management Science and Operations Research

Cite this

Roy, A., Qureshi, S., Pande, K., Nair, D., Gairola, K., Jain, P., ... Kakarlapudi, A. V. (2019). Performance comparison of machine learning platforms. INFORMS Journal on Computing, 31(2), 207-225. https://doi.org/10.1287/ijoc.2018.0825
