Modelling classification performance for large data sets: An empirical study

Baohua Gu, Feifang Hu, Huan Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

For many learning algorithms, learning accuracy increases with the size of the training data, forming the well-known learning curve. A learning curve can usually be fitted by interpolating or extrapolating some of its points with a specified model. The fitted curve can then be used to predict the maximum achievable learning accuracy or to estimate the amount of data needed to reach a desired accuracy, both of which are especially meaningful for data mining on large data sets. Although several models have been proposed for learning curves, most have not been tested for applicability to large data sets. In this paper, we focus on this issue. We empirically compare six potentially useful models by fitting the learning curves of two typical classification algorithms, C4.5 (decision tree) and LOG (logistic discrimination), on eight large UCI benchmark data sets. Using all available data for learning, we fit a full-length learning curve; using a small portion of the data, we fit a part-length learning curve. The models are then compared on two measures: (1) how well they fit a full-length learning curve, and (2) how well a fitted part-length learning curve predicts learning accuracy at the full length. Experimental results show that the power law (y = a - b*x^(-c)) is the best of the six models on both measures, for both algorithms and all data sets. These results support the applicability of learning curves to data mining.
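The fitting-and-extrapolation procedure the abstract describes can be sketched as follows. This is an illustrative example, not the authors' exact experimental setup: the power-law form y = a - b*x^(-c) is taken from the abstract, while the training sizes, accuracy values, and use of `scipy.optimize.curve_fit` are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law learning-curve model from the abstract: accuracy y rises with
# training-set size x and saturates at the asymptote a.
def power_law(x, a, b, c):
    return a - b * np.power(x, -c)

# A hypothetical "part-length" learning curve: accuracies observed at small
# training sizes (generated from known parameters for illustration).
sizes = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
acc = power_law(sizes, 0.92, 0.8, 0.45)

# Fit the three parameters (a, b, c) to the observed points.
(a, b, c), _ = curve_fit(power_law, sizes, acc, p0=(0.9, 1.0, 0.5), maxfev=10000)

# Extrapolate to the "full length" to predict accuracy on all data; the
# fitted asymptote a estimates the maximum achievable accuracy.
full_size = 100_000
print(f"predicted accuracy at n={full_size}: {power_law(full_size, a, b, c):.4f}")
print(f"estimated asymptotic accuracy a: {a:.4f}")
```

In practice one would fit noisy accuracies measured at several sub-sample sizes; the design choice the paper evaluates is which functional form extrapolates most reliably from such part-length curves.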

Original language: English (US)
Title of host publication: Advances in Web-Age Information Management - 2nd International Conference, WAIM 2001, Proceedings
Publisher: Springer Verlag
Pages: 317-328
Number of pages: 12
Volume: 2118
ISBN (Print): 9783540477143
State: Published - 2001
Event: 2nd International Conference on Web-Age Information Management, WAIM 2001 - Xi’an, China
Duration: Jul 9 2001 - Jul 11 2001

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2118
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 2nd International Conference on Web-Age Information Management, WAIM 2001
Country: China
City: Xi’an
Period: 7/9/01 - 7/11/01

Fingerprint

Learning Curve
Large Data Sets
Empirical Study
Modeling
Data Mining
Decision Trees
Learning Algorithms
Logistic Discrimination
Power Law
Benchmark

ASJC Scopus subject areas

  • Computer Science(all)
  • Theoretical Computer Science

Cite this

Gu, B., Hu, F., & Liu, H. (2001). Modelling classification performance for large data sets: An empirical study. In Advances in Web-Age Information Management - 2nd International Conference, WAIM 2001, Proceedings (Vol. 2118, pp. 317-328). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 2118). Springer Verlag.
