Dimensionality reduction of unsupervised data

M. Dash, Huan Liu, J. Yao

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

103 Citations (Scopus)

Abstract

Dimensionality reduction is an important problem for the efficient handling of large databases. Many feature selection methods exist for supervised data, where class information is available; little work has been done for unsupervised data, where it is not. Principal Component Analysis (PCA) is often used, but PCA creates new features, and it is difficult to gain an intuitive understanding of the data from the new features alone. In this paper we are concerned with the problem of determining and choosing the important original features of unsupervised data. Our method is based on the observation that removing an irrelevant feature from the feature set may leave the underlying concept of the data unchanged, whereas removing a relevant feature does not. We propose an entropy measure for ranking features and conduct extensive experiments to show that our method is able to find the important features. It also compares well with a similar feature-ranking method (Relief), which, unlike our method, requires class information.
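The idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes Euclidean distances, a similarity `S = exp(-alpha * D)` between instance pairs, and a scaling where the mean pairwise distance maps to similarity 0.5; the paper's precise formulation of the entropy measure and its ranking criterion may differ.

```python
import numpy as np

def entropy_of_data(X):
    """Entropy of a dataset computed from pairwise instance similarities.

    Each pair of instances contributes an entropy term
    -(S*log(S) + (1-S)*log(1-S)); well-separated clusters push S toward
    0 or 1, giving low entropy, while structureless data keeps S near
    0.5, giving high entropy.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances (upper triangle only, i < j).
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    d = D[np.triu_indices(n, k=1)]
    # Scale so the mean distance maps to similarity 0.5 (an assumption).
    alpha = -np.log(0.5) / d.mean()
    s = np.exp(-alpha * d)
    eps = 1e-12  # guard against log(0)
    return float(-(s * np.log(s + eps) + (1 - s) * np.log(1 - s + eps)).sum())

def rank_features(X):
    """Rank original features by leave-one-feature-out entropy.

    Removing an important feature destroys cluster structure and raises
    the entropy of the remaining data, so features are ranked in
    descending order of the entropy measured after their removal.
    """
    scores = [entropy_of_data(np.delete(X, f, axis=1))
              for f in range(X.shape[1])]
    return np.argsort(scores)[::-1]  # most important feature first
```

On data where one feature cleanly separates two clusters and another is uniform noise, removing the cluster-defining feature leaves high-entropy noise, so that feature is ranked first.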

Original language: English (US)
Title of host publication: Proceedings of the International Conference on Tools with Artificial Intelligence
Editors: Anon
Publisher: IEEE
Pages: 532-539
Number of pages: 8
State: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 9th IEEE International Conference on Tools with Artificial Intelligence - Newport Beach, CA, USA
Duration: Nov 3, 1997 - Nov 8, 1997



ASJC Scopus subject areas

  • Software

Cite this

Dash, M., Liu, H., & Yao, J. (1997). Dimensionality reduction of unsupervised data. In Anon (Ed.), Proceedings of the International Conference on Tools with Artificial Intelligence (pp. 532-539). IEEE.

