Abstract

Active learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when a large amount of unlabeled data is available and labeling it is expensive. Recently, batch-mode active learning, in which a set of samples is selected concurrently for labeling based on their collective merit, has attracted considerable attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples generalizes well to the unlabeled data. Most existing batch-mode active learning methods pursue this goal by selecting samples according to various heuristic criteria. In this article we propose a novel criterion that promotes good generalization by selecting the set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data after annotation. We explicitly measure this difference over all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming problem, for which we provide two optimization techniques: the first transforms the problem into a convex quadratic program, and the second into a linear program. Our empirical studies on publicly available UCI datasets and two biomedical image databases demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate the uncertainty of the predicted labels of the unlabeled data and transfer learning into the proposed formulation. In addition, we present a joint optimization framework that performs transfer and active learning simultaneously, unlike existing approaches that learn in two separate stages, typically transfer learning followed by active learning. Specifically, we minimize a common objective that reduces the distribution difference between the domain-adapted source samples, the queried and labeled samples, and the remaining unlabeled target-domain data. Our empirical studies on two biomedical image databases and on the publicly available 20 Newsgroups dataset show that incorporating uncertainty information and transfer learning further improves the performance of the proposed active learning classifier. They also show that the proposed transfer-active method, based on the joint optimization framework, performs significantly better than a framework that implements transfer and active learning in two separate stages.
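
As a concrete illustration of the selection criterion described above, the following Python sketch scores a candidate query batch by the empirical maximum mean discrepancy (MMD) between the labeled pool augmented with the batch and the remaining unlabeled data, relaxes the binary selection indicators to the interval [0, 1], and minimizes the resulting convex quadratic objective with a generic solver. This is a minimal sketch, not the article's implementation: the helper names rbf_kernel and select_batch, the RBF kernel choice, and the final rounding step are all assumptions made for illustration.

    # Illustrative sketch only (not the paper's code): MMD-matching batch
    # selection with the 0/1 query indicators relaxed to [0, 1], i.e. a
    # convex quadratic program solved by an off-the-shelf method.
    import numpy as np
    from scipy.optimize import minimize

    def rbf_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
        d2 = (np.sum(X ** 2, axis=1)[:, None]
              + np.sum(Y ** 2, axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-gamma * d2)

    def select_batch(X_lab, X_unl, b, gamma=1.0):
        # Choose b unlabeled points whose annotation brings the kernel mean of
        # the labeled pool closest to that of the remaining unlabeled data.
        n_l, n_u = len(X_lab), len(X_unl)
        K_uu = rbf_kernel(X_unl, X_unl, gamma)
        K_ul = rbf_kernel(X_unl, X_lab, gamma)
        p, q = 1.0 / (n_l + b), 1.0 / (n_u - b)  # post-query pool weights
        # Expanding the squared RKHS mean difference gives, up to a constant,
        # a convex quadratic in the relaxed selection vector a in [0, 1]^n_u.
        c = p * K_ul.sum(axis=1) - q * K_uu.sum(axis=1)
        s = p + q

        def obj(a):
            return s * s * (a @ K_uu @ a) + 2.0 * s * (c @ a)

        def jac(a):
            return 2.0 * s * s * (K_uu @ a) + 2.0 * s * c

        res = minimize(obj, np.full(n_u, b / n_u), jac=jac, method="SLSQP",
                       bounds=[(0.0, 1.0)] * n_u,
                       constraints=[{"type": "eq",
                                     "fun": lambda a: a.sum() - b}])
        # Heuristic rounding: query the b points with the largest weights.
        return np.argsort(res.x)[-b:]

    # Toy usage on synthetic data.
    rng = np.random.default_rng(0)
    batch = select_batch(rng.normal(size=(10, 4)),
                         rng.normal(size=(200, 4)), b=5)
    print(sorted(batch.tolist()))

The relax-then-round step above only mirrors the spirit of the convex quadratic programming transformation; the article's exact QP and LP formulations, and the uncertainty and transfer-learning extensions, differ in detail.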

Original language: English (US)
Article number: 13
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 7
Issue number: 3
DOI: 10.1145/2513092.2513094
State: Published - Sep 2013

Keywords

  • Active learning
  • Marginal probability distribution
  • Maximum mean discrepancy
  • Transfer learning

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Batch mode active sampling based on marginal probability distribution matching. / Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping.

In: ACM Transactions on Knowledge Discovery from Data, Vol. 7, No. 3, 13, 09.2013.

Research output: Contribution to journal › Article

@article{fe4fe2dad6074fe28869824794da278e,
title = "Batch mode active sampling based on marginal probability distribution matching",
abstract = "Active Learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there are large amount of unlabeled data and labeling them is expensive. Recently, batch-mode active learning, where a set of samples are selected concurrently for labeling, based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most of the existing batch-mode active learning methodologies try to achieve this by selecting samples based on certain criteria. In this article we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimize the difference in distribution between the labeled and the unlabeled data, after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem. In the first one, the problem is transformed into a convex quadratic programming problem and in the second method the problem is transformed into a linear programming problem. Our empirical studies using publicly available UCI datasets and two biomedical image databases demonstrate the effectiveness of the proposed approach in comparison with the state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. In addition, we present a joint optimization framework for performing both transfer and active learning simultaneously unlike the existing approaches of learning in two separate stages, that is, typically, transfer learning followed by active learning. We specifically minimize a common objective of reducing distribution difference between the domain adapted source, the queried and labeled samples and the rest of the unlabeled target domain data. Our empirical studies on two biomedical image databases and on a publicly available 20 Newsgroups dataset show that incorporation of uncertainty information and transfer learning further improves the performance of the proposed active learning based classifier. Our empirical studies also show that the proposed transfer-active method based on the joint optimization framework performs significantly better than a framework which implements transfer and active learning in two separate stages.",
keywords = "Active learning, Marginal probability distribution, Maximum mean discrepancy, Transfer learning",
author = "Rita Chattopadhyay and Zheng Wang and Wei Fan and Ian Davidson and Sethuraman Panchanathan and Jieping Ye",
year = "2013",
month = "9",
doi = "10.1145/2513092.2513094",
language = "English (US)",
volume = "7",
journal = "ACM Transactions on Knowledge Discovery from Data",
issn = "1556-4681",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}
