Compact dual ensembles for active learning

Amit Mandvikar, Huan Liu, Hiroshi Motoda

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Generic ensemble methods can achieve excellent learning performance, but they are not good candidates for active learning because they were designed for different purposes. We investigate how to use the diversity of the member classifiers of an ensemble for efficient active learning. We empirically show, using benchmark data sets, that (1) to achieve a good (stable) ensemble, the number of classifiers needed in the ensemble varies across data sets; and (2) feature selection can be applied to select classifiers from an ensemble, yielding compact ensembles with high performance. Benchmark data sets and a real-world application are used to demonstrate the effectiveness of the proposed approach.
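The two ideas in the abstract can be sketched with standard scikit-learn primitives. This is a hedged illustration, not the paper's actual algorithm: member disagreement on an unlabeled pool flags instances worth querying, and the member predictions themselves are treated as "feature" columns so that an off-the-shelf feature selector picks a compact sub-ensemble. All names and parameters here (pool split sizes, `k=5`, mutual information as the selection criterion) are illustrative assumptions.

```python
# Sketch of diversity-based querying and feature-selection-based
# ensemble compaction. Assumes scikit-learn; the exact procedure in
# the paper may differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a benchmark data set: 100 labeled instances,
# 200 unlabeled pool instances.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, y_train, X_pool = X[:100], y[:100], X[100:]

# Train a generic ensemble of decision trees.
ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0).fit(X_train, y_train)

# (1) Diversity for active learning: rank pool instances by how much
# the members disagree, and query the most contested ones for labels.
votes = np.array([est.predict(X_pool) for est in ens.estimators_])
disagreement = votes.var(axis=0)          # high variance = high disagreement
query_idx = np.argsort(disagreement)[-5:]  # 5 most contested pool instances

# (2) Compact ensemble via feature selection: each member's predictions
# on the training set form one column; keep the k most informative members.
member_preds = np.array([est.predict(X_train) for est in ens.estimators_]).T
selector = SelectKBest(mutual_info_classif, k=5).fit(member_preds, y_train)
compact_members = [ens.estimators_[i]
                   for i in selector.get_support(indices=True)]
```

The design choice worth noting is the reinterpretation in step (2): once member outputs are laid out as columns, any feature-selection method can serve as a classifier-selection method, which is what makes the compaction cheap to implement.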

Original language: English (US)
Title of host publication: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Editors: H. Dai, R. Srikant, C. Zhang
Pages: 293-297
Number of pages: 5
Volume: 3056
State: Published - 2004
Event: 8th Pacific-Asia Conference, PAKDD 2004 - Sydney, Australia
Duration: May 26, 2004 to May 28, 2004



ASJC Scopus subject areas

  • Hardware and Architecture

Cite this

Mandvikar, A., Liu, H., & Motoda, H. (2004). Compact dual ensembles for active learning. In H. Dai, R. Srikant, & C. Zhang (Eds.), Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3056, pp. 293-297).

