Error-sensitive grading for model combination

Surendra K. Singhi, Huan Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Ensemble learning is a powerful approach that combines multiple classifiers to improve prediction accuracy. An important decision when using an ensemble of classifiers is how to combine the predictions of its base classifiers. In this paper, we introduce a novel grading-based algorithm for model combination that uses cost-sensitive learning to build a meta-learner. The method distinguishes between the grading error of classifying an incorrect prediction as correct and the error of classifying a correct prediction as incorrect, and assigns appropriate costs to the two types of error to improve performance. We study issues in error-sensitive grading and then, through extensive experiments, empirically show the effectiveness of this new method in comparison with representative meta-classification techniques.
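
For illustration, the sketch below (not taken from the paper) shows one plausible way to realize error-sensitive grading with scikit-learn: each base classifier gets a meta-level grader trained to predict whether that classifier's out-of-fold predictions are correct, and the two kinds of grading error are made to cost differently by weighting the grading examples. The function names, the use of sample weights as the cost mechanism, the default cost values, the decision-tree grader, and the majority vote over trusted models are all assumptions of this sketch; the paper's exact cost-assignment and combination rules may differ.

```python
# Minimal, hypothetical sketch of error-sensitive grading, assuming scikit-learn.
# Costs, grader model, and combination rule are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier


def fit_error_sensitive_grading(base_models, X, y,
                                cost_fp=2.0,  # cost of grading an incorrect base prediction as "correct"
                                cost_fn=1.0,  # cost of grading a correct base prediction as "incorrect"
                                cv=5):
    """Fit the base classifiers plus one cost-weighted grader per base classifier."""
    fitted, graders = [], []
    for model in base_models:
        # Out-of-fold predictions give unbiased "correct / incorrect" grades.
        oof_pred = cross_val_predict(clone(model), X, y, cv=cv)
        grade = (oof_pred == y).astype(int)  # 1 = base prediction was correct
        # Error-sensitive part: weight the grading examples so the two kinds
        # of grading mistakes carry different costs.
        weights = np.where(grade == 0, cost_fp, cost_fn)
        grader = DecisionTreeClassifier().fit(X, grade, sample_weight=weights)
        fitted.append(clone(model).fit(X, y))
        graders.append(grader)
    return fitted, graders


def predict_grading(fitted, graders, X):
    """Combine only those base predictions that their graders judge correct."""
    base_preds = np.array([m.predict(X) for m in fitted])  # (n_models, n_samples)
    trust_rows = []
    for g in graders:
        proba = g.predict_proba(X)
        # Probability of class "1" (base prediction judged correct); handle the
        # degenerate case where a grader saw only one class during training.
        if 1 in g.classes_:
            trust_rows.append(proba[:, list(g.classes_).index(1)])
        else:
            trust_rows.append(np.zeros(X.shape[0]))
    trust = np.array(trust_rows)

    out = []
    for i in range(trust.shape[1]):
        keep = trust[:, i] >= 0.5
        votes = base_preds[keep, i] if keep.any() else base_preds[:, i]
        vals, counts = np.unique(votes, return_counts=True)
        out.append(vals[np.argmax(counts)])  # majority vote among trusted models
    return np.array(out)


# Example usage (illustrative):
# models = [DecisionTreeClassifier(max_depth=3), GaussianNB(), KNeighborsClassifier()]
# fitted, graders = fit_error_sensitive_grading(models, X_train, y_train, cost_fp=2.0)
# y_hat = predict_grading(fitted, graders, X_test)
```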

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 724-732
Number of pages: 9
Volume: 3720 LNAI
ISBN (Print): 3540292438, 9783540292432
DOIs: https://doi.org/10.1007/11564096_74
State: Published - 2005
Event: 16th European Conference on Machine Learning, ECML 2005 - Porto, Portugal
Duration: Oct 3, 2005 - Oct 7, 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3720 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 16th European Conference on Machine Learning, ECML 2005
Country: Portugal
City: Porto
Period: 10/3/05 - 10/7/05

Fingerprint

Grading
Classifiers
Learning
Prediction
Cost-sensitive Learning
Multiple Classifiers
Costs and Cost Analysis
Ensemble Learning
Costs
Ensemble
Model
Experiments

ASJC Scopus subject areas

  • Computer Science (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Theoretical Computer Science

Cite this

Singhi, S. K., & Liu, H. (2005). Error-sensitive grading for model combination. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3720 LNAI, pp. 724-732). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 3720 LNAI). https://doi.org/10.1007/11564096_74

@inproceedings{f1bf18e0751f417d85f8e72255aaeedb,
title = "Error-sensitive grading for model combination",
abstract = "Ensemble learning is a powerful approach that combines multiple classifiers to improve prediction accuracy. An important decision when using an ensemble of classifiers is how to combine the predictions of its base classifiers. In this paper, we introduce a novel grading-based algorithm for model combination that uses cost-sensitive learning to build a meta-learner. The method distinguishes between the grading error of classifying an incorrect prediction as correct and the error of classifying a correct prediction as incorrect, and assigns appropriate costs to the two types of error to improve performance. We study issues in error-sensitive grading and then, through extensive experiments, empirically show the effectiveness of this new method in comparison with representative meta-classification techniques.",
author = "Singhi, {Surendra K.} and Huan Liu",
year = "2005",
doi = "10.1007/11564096_74",
language = "English (US)",
isbn = "3540292438",
volume = "3720 LNAI",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "724--732",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",

}
