Abstract

Active learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when a large amount of unlabeled data is available and labeling is expensive. Recently, batch-mode active learning, in which a set of samples is selected concurrently for labeling based on its collective merit, has attracted considerable attention. The objective of batch-mode active learning is to select a set of informative samples such that a classifier trained on them generalizes well to the unlabeled data. Most existing batch-mode active learning methods pursue this goal through a variety of selection criteria. In this paper, we propose a novel criterion that achieves good generalization performance by selecting the set of query samples that, once annotated, minimizes the difference in distribution between the labeled and the unlabeled data. We measure this difference explicitly over all candidate subsets of the unlabeled data and select the best subset. The resulting objective is an NP-hard integer programming problem, and we provide two techniques to solve it: the first transforms the problem into a convex quadratic program, the second into a linear program. Empirical studies on publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate, respectively, the uncertainty of the predicted labels of the unlabeled data and transfer learning into the proposed formulation. Our studies on UCI datasets show that incorporating uncertainty information improves performance in later iterations, while our studies on the 20 Newsgroups dataset show that transfer learning improves classifier performance in the initial iterations.
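
The abstract does not reproduce the formal selection objective, but the "maximum mean discrepancy" keyword below points to the standard empirical MMD between the augmented labeled set and the unlabeled pool. As a hedged reconstruction (the notation is ours, not the paper's: L is the labeled set, Q the queried batch, S = L ∪ Q, U the unlabeled pool, k a kernel), the quantity minimized over candidate batches Q would read:

```latex
% Empirical squared MMD between S = L \cup Q (labeled data plus the
% queried batch) and the unlabeled pool U; the batch Q is chosen so
% that this quantity is small. Notation is ours, not the paper's.
\[
\operatorname{MMD}^2(S, U)
  = \frac{1}{|S|^2} \sum_{x, x' \in S} k(x, x')
  - \frac{2}{|S|\,|U|} \sum_{x \in S} \sum_{u \in U} k(x, u)
  + \frac{1}{|U|^2} \sum_{u, u' \in U} k(u, u') .
\]
```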
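
To make the distribution-matching idea concrete, here is a minimal, self-contained numpy sketch that greedily picks a batch minimizing this empirical MMD. This is an illustration only, not the paper's method: the paper solves the selection via convex QP and LP relaxations of an integer program, whereas the greedy loop, the RBF kernel, the bandwidth gamma, and all function names below are our assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """RBF kernel matrix: k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def mmd2(S, U, gamma=1.0):
    """Empirical squared MMD between sample sets S and U."""
    return (rbf_kernel(S, S, gamma).mean()
            - 2.0 * rbf_kernel(S, U, gamma).mean()
            + rbf_kernel(U, U, gamma).mean())

def greedy_mmd_batch(X_labeled, X_pool, batch_size, gamma=1.0):
    """Greedily select a batch from X_pool so that the labeled set plus
    the batch matches the distribution of the whole pool (small MMD)."""
    selected, candidates = [], list(range(len(X_pool)))
    for _ in range(batch_size):
        best_i, best_val = None, np.inf
        for i in candidates:
            # Candidate augmented labeled set: L plus batch so far plus i.
            S = np.vstack([X_labeled, X_pool[selected + [i]]])
            val = mmd2(S, X_pool, gamma)
            if val < best_val:
                best_i, best_val = i, val
        selected.append(best_i)
        candidates.remove(best_i)
    return selected

# Toy usage: 5 labeled points, a pool of 100, query a batch of 5.
rng = np.random.default_rng(0)
batch = greedy_mmd_batch(rng.normal(size=(5, 2)),
                         rng.normal(size=(100, 2)), batch_size=5)
print(batch)
```

A greedy loop like this is only a heuristic stand-in for the paper's relaxations, but it shows why the criterion favors batches that fill in under-represented regions of the pool rather than redundant points near the current labeled set.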

Original language: English (US)
Title of host publication: KDD'12 - 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Pages: 741-749
Number of pages: 9
DOI: 10.1145/2339530.2339647
ISBN (Print): 9781450314626
State: Published - 2012
Event: 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2012 - Beijing, China
Duration: Aug 12, 2012 - Aug 16, 2012

Keywords

  • active learning
  • marginal probability distribution
  • maximum mean discrepancy

ASJC Scopus subject areas

  • Software
  • Information Systems

Cite this

Chattopadhyay, R., Wang, Z., Fan, W., Davidson, I., Panchanathan, S., & Ye, J. (2012). Batch mode active sampling based on marginal probability distribution matching. In KDD'12 - 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 741-749). https://doi.org/10.1145/2339530.2339647

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
