Abstract

Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented some substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data, and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote the research in this community, we also present an open source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). We also use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we present a discussion about some open problems and challenges that require more attention in future research.
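The four method groups named in the abstract are all filter-style scoring schemes. As a concrete illustration, here is a minimal pure-Python sketch of one similarity-based criterion, the Fisher score, applied to made-up toy data; the `select_k_best` helper and the dataset are illustrative assumptions, not code from the authors' repository:

```python
# Minimal sketch of a filter-style feature selector (Fisher score),
# one of the similarity-based methods the survey categorizes.
# Pure-Python illustration on toy data; not the repository's implementation.

from collections import defaultdict

def fisher_score(X, y):
    """Score each feature by between-class vs. within-class variance."""
    n_features = len(X[0])
    overall_means = [sum(row[j] for row in X) / len(X) for j in range(n_features)]
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    scores = []
    for j in range(n_features):
        between, within = 0.0, 0.0
        for rows in by_class.values():
            vals = [r[j] for r in rows]
            mean = sum(vals) / len(vals)
            between += len(vals) * (mean - overall_means[j]) ** 2
            within += sum((v - mean) ** 2 for v in vals)
        scores.append(between / within if within > 0 else 0.0)
    return scores

def select_k_best(X, y, k):
    """Return indices of the k highest-scoring features."""
    scores = fisher_score(X, y)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 separates the two classes cleanly; feature 1 is noise.
X = [[1.0, 5.0], [1.1, 3.0], [0.9, 4.0], [5.0, 4.5], [5.2, 3.5], [4.8, 5.5]]
y = [0, 0, 0, 1, 1, 1]
print(select_k_best(X, y, k=1))  # → [0]
```

The same filter pattern (score each feature, keep the top k) carries over to the information-theoretical and statistical criteria the survey reviews; only the scoring function changes.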

Original language: English (US)
Article number: 94
Journal: ACM Computing Surveys
Volume: 50
Issue number: 6
DOIs: https://doi.org/10.1145/3136625
State: Published - Dec 1 2017

Keywords

  • Feature selection

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., & Liu, H. (2017). Feature selection: A data perspective. ACM Computing Surveys, 50(6), [94]. https://doi.org/10.1145/3136625

Research output: Contribution to journal › Review article

@article{8a3ed0596c5a4641a1a5ba3890848d8f,
title = "Feature selection: A data perspective",
abstract = "Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented some substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote the research in this community, we also present an open source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). Also, we use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we present a discussion about some open problems and challenges that require more attention in future research.",
keywords = "Feature selection",
author = "Jundong Li and Kewei Cheng and Suhang Wang and Fred Morstatter and Trevino, {Robert P.} and Jiliang Tang and Huan Liu",
year = "2017",
month = "12",
day = "1",
doi = "10.1145/3136625",
language = "English (US)",
volume = "50",
journal = "ACM Computing Surveys",
issn = "0360-0300",
publisher = "Association for Computing Machinery (ACM)",
number = "6",

}

TY - JOUR

T1 - Feature selection

T2 - A data perspective

AU - Li, Jundong

AU - Cheng, Kewei

AU - Wang, Suhang

AU - Morstatter, Fred

AU - Trevino, Robert P.

AU - Tang, Jiliang

AU - Liu, Huan

PY - 2017/12/1

Y1 - 2017/12/1

N2 - Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented some substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote the research in this community, we also present an open source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). Also, we use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we present a discussion about some open problems and challenges that require more attention in future research.

AB - Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented some substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote the research in this community, we also present an open source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). Also, we use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we present a discussion about some open problems and challenges that require more attention in future research.

KW - Feature selection

UR - http://www.scopus.com/inward/record.url?scp=85040227605&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85040227605&partnerID=8YFLogxK

U2 - 10.1145/3136625

DO - 10.1145/3136625

M3 - Review article

AN - SCOPUS:85040227605

VL - 50

JO - ACM Computing Surveys

JF - ACM Computing Surveys

SN - 0360-0300

IS - 6

M1 - 94

ER -