TY - GEN
T1 - Non-monotonic feature selection
AU - Xu, Zenglin
AU - Jin, Rong
AU - Ye, Jieping
AU - Lyu, Michael R.
AU - King, Irwin
PY - 2009
Y1 - 2009
N2 - We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.
AB - We consider the problem of selecting a subset of m most informative features where m is the number of required features. This feature selection problem is essentially a combinatorial optimization problem, and is usually solved by an approximation. Conventional feature selection methods address the computational challenge in two steps: (a) ranking all the features by certain scores that are usually computed independently from the number of specified features m, and (b) selecting the top m ranked features. One major shortcoming of these approaches is that if a feature f is chosen when the number of specified features is m, it will always be chosen when the number of specified features is larger than m. We refer to this property as the "monotonic" property of feature selection. In this work, we argue that it is important to develop efficient algorithms for non-monotonic feature selection. To this end, we develop an algorithm for non-monotonic feature selection that approximates the related combinatorial optimization problem by a Multiple Kernel Learning (MKL) problem. We also present a strategy that derives a discrete solution from the approximate solution of MKL, and show the performance guarantee for the derived discrete solution when compared to the global optimal solution for the related combinatorial optimization problem. An empirical study with a number of benchmark data sets indicates the promising performance of the proposed framework compared with several state-of-the-art approaches for feature selection.
UR - http://www.scopus.com/inward/record.url?scp=70049098964&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70049098964&partnerID=8YFLogxK
U2 - 10.1145/1553374.1553520
DO - 10.1145/1553374.1553520
M3 - Conference contribution
AN - SCOPUS:70049098964
SN - 9781605585161
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 26th Annual International Conference on Machine Learning, ICML'09
T2 - 26th Annual International Conference on Machine Learning, ICML'09
Y2 - 14 June 2009 through 18 June 2009
ER -