Abstract

In the feature selection literature, different criteria have been proposed to evaluate the goodness of features. In our investigation, we notice that a number of existing selection criteria implicitly select features that preserve sample similarity, and can be unified under a common framework. We further point out that any feature selection criterion covered by this framework cannot handle redundant features, a common drawback of these criteria. Motivated by these observations, we propose a new 'Similarity Preserving Feature Selection' framework in an explicit and rigorous way. We show, through theoretical analysis, that the proposed framework not only encompasses many widely used feature selection criteria, but also naturally overcomes their common weakness in handling feature redundancy. In developing this new framework, we begin with a conventional combinatorial optimization formulation for similarity preserving feature selection, then extend it with a sparse multiple-output regression formulation to improve its efficiency and effectiveness. Three algorithms are devised to efficiently solve the proposed formulations, each with its own advantages in computational complexity and selection performance. As our extensive experimental study shows, the proposed framework achieves superior feature selection performance along with other attractive properties.
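The abstract's sparse multiple-output regression formulation can be illustrated with a minimal sketch: factor the sample-similarity matrix S as approximately Y Yᵀ via its top eigenvectors, fit a multiple-output regression X W ≈ Y, and rank features by the row norms of W. The function name `spfs_scores`, the ridge regularizer, and all parameter choices below are illustrative assumptions; the paper's actual formulation uses sparse (e.g. L2,1-type) regularization rather than the ridge penalty used here for simplicity.

```python
import numpy as np

def spfs_scores(X, S, n_components=2, reg=1e-3):
    """Hedged sketch of similarity-preserving feature scoring.

    X : (n_samples, n_features) data matrix.
    S : (n_samples, n_samples) symmetric sample-similarity matrix.

    Decomposes S ~= Y Y^T with the top `n_components` eigenpairs,
    then fits a ridge-regularized multiple-output regression
    X W ~= Y and scores each feature by the norm of its row in W.
    (Ridge stands in for the paper's sparse regularizer.)
    """
    # Top eigenpairs of the symmetric similarity matrix.
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_components]
    # Embed samples so that Y @ Y.T approximates S.
    Y = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

    # Closed-form ridge solution: W = (X^T X + reg I)^{-1} X^T Y.
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

    # Larger row norm => the feature contributes more to
    # reconstructing the similarity structure.
    return np.linalg.norm(W, axis=1)
```

For example, on two well-separated clusters where only the first feature carries the cluster structure, this scoring assigns the first feature a larger score than a pure-noise feature.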

Original language: English (US)
Article number: 6051436
Pages (from-to): 619-632
Number of pages: 14
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 25
Issue number: 3
DOIs
State: Published - 2013

Keywords

  • Feature selection
  • multiple output regression
  • redundancy removal
  • similarity preserving
  • sparse regularization

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

