TY - JOUR
T1 - Feature selection
T2 - A data perspective
AU - Li, Jundong
AU - Cheng, Kewei
AU - Wang, Suhang
AU - Morstatter, Fred
AU - Trevino, Robert P.
AU - Tang, Jiliang
AU - Liu, Huan
N1 - Funding Information:
This material is based on work supported by, or in part by, the NSF grants 1217466 and 1614576. Authors’ addresses: J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, and H. Liu, Computer Science and Engineering, Arizona State University, Tempe, AZ 85281; emails: {jundongl, kcheng18, swang187, fmorstat, rptrevin, huan.liu}@asu.edu; J. Tang, Michigan State University, East Lansing, MI 48824; email: tangjili@msu.edu.
Publisher Copyright:
© 2017 ACM.
PY - 2017/12
Y1 - 2017/12
N2 - Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data, and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). We also use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we discuss some open problems and challenges that require more attention in future research.
KW - Feature selection
UR - http://www.scopus.com/inward/record.url?scp=85040227605&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040227605&partnerID=8YFLogxK
U2 - 10.1145/3136625
DO - 10.1145/3136625
M3 - Review article
AN - SCOPUS:85040227605
SN - 0360-0300
VL - 50
JO - ACM Computing Surveys
JF - ACM Computing Surveys
IS - 6
M1 - 94
ER -