Non-monotonic feature selection

Zenglin Xu, Rong Jin, Jieping Ye, Michael R. Lyu, Irwin King

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.
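The "monotonic" property the abstract describes is easy to see concretely: any method that scores features once and then keeps the top m necessarily produces nested feature sets as m grows, whereas choosing the best subset of each size directly need not. The following minimal sketch (not from the paper; the synthetic data, the rss helper, and the exhaustive search are illustrative assumptions) builds a toy regression problem in which the best single feature is a proxy variable that is absent from the best pair.

# Illustrative sketch, not the paper's algorithm: contrasts monotonic
# score-and-rank selection with best-subset search, which is non-monotonic.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + x2 + 0.1 * rng.normal(size=n)   # noisy proxy for x1 + x2
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + 0.1 * rng.normal(size=n)    # target depends on x1 + x2

def rss(features):
    """Residual sum of squares of a least-squares fit on a feature subset."""
    A = X[:, list(features)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

# (a) Monotonic: score each feature once (here |correlation with y|),
# then take the top m. The top-1 set is a subset of the top-2 set by
# construction, for any scoring function.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(3)])
order = np.argsort(-scores)
print("ranking top-1:", set(order[:1].tolist()), "top-2:", set(order[:2].tolist()))

# (b) Non-monotonic: pick the best subset of each size by direct search.
for m in (1, 2):
    best = min(itertools.combinations(range(3), m), key=rss)
    print(f"best subset of size {m}:", set(best))
# Typically: the best size-1 subset is {2} (the proxy x3), but the best
# size-2 subset is {0, 1}, which does not contain feature 2.

Exhaustive search is used here only because three features make it cheap enough to expose the non-nesting behavior; the paper's contribution is to replace that combinatorial search with an MKL relaxation plus a rounding strategy that carries a performance guarantee.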

Original language: English (US)
Title of host publication: ACM International Conference Proceeding Series
Volume: 382
DOIs: https://doi.org/10.1145/1553374.1553520
State: Published - 2009
Event: 26th Annual International Conference on Machine Learning, ICML'09 - Montreal, QC, Canada
Duration: Jun 14, 2009 – Jun 18, 2009

Other

Other: 26th Annual International Conference on Machine Learning, ICML'09
Country: Canada
City: Montreal, QC
Period: 6/14/09 – 6/18/09

Fingerprint

  • Feature extraction
  • Combinatorial optimization

ASJC Scopus subject areas

  • Human-Computer Interaction

Cite this

APA: Xu, Z., Jin, R., Ye, J., Lyu, M. R., & King, I. (2009). Non-monotonic feature selection. In ACM International Conference Proceeding Series (Vol. 382). [143] https://doi.org/10.1145/1553374.1553520

Standard: Xu, Zenglin; Jin, Rong; Ye, Jieping; Lyu, Michael R.; King, Irwin. Non-monotonic feature selection. ACM International Conference Proceeding Series. Vol. 382. 2009. 143.

Harvard: Xu, Z, Jin, R, Ye, J, Lyu, MR & King, I 2009, Non-monotonic feature selection. in ACM International Conference Proceeding Series. vol. 382, 143, 26th Annual International Conference on Machine Learning, ICML'09, Montreal, QC, Canada, 6/14/09. https://doi.org/10.1145/1553374.1553520
Vancouver: Xu Z, Jin R, Ye J, Lyu MR, King I. Non-monotonic feature selection. In ACM International Conference Proceeding Series. Vol. 382. 2009. 143. https://doi.org/10.1145/1553374.1553520
@inproceedings{53cfaa736f924a52b69b21598cd3606f,
title = "Non-monotonic feature selection",
abstract = "We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger thanm. We refer to this property as the {"}monotonic{"} property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution ofMKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.",
author = "Zenglin Xu and Rong Jin and Jieping Ye and Lyu, {Michael R.} and Irwin King",
year = "2009",
doi = "10.1145/1553374.1553520",
language = "English (US)",
isbn = "9781605585161",
volume = "382",
booktitle = "ACM International Conference Proceeding Series",

}

TY - GEN

T1 - Non-monotonic feature selection

AU - Xu, Zenglin

AU - Jin, Rong

AU - Ye, Jieping

AU - Lyu, Michael R.

AU - King, Irwin

PY - 2009

Y1 - 2009

N2 - We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.

AB - We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.

UR - http://www.scopus.com/inward/record.url?scp=70049098964&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70049098964&partnerID=8YFLogxK

U2 - 10.1145/1553374.1553520

DO - 10.1145/1553374.1553520

M3 - Conference contribution

SN - 9781605585161

VL - 382

BT - ACM International Conference Proceeding Series

ER -