Linear dimensionality reduction for multi-label classification

Shuiwang Ji, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

76 Citations (Scopus)

Abstract

Dimensionality reduction is an essential step in high-dimensional data analysis. Many dimensionality reduction algorithms have been applied successfully to multi-class and multi-label problems. They are commonly applied as a separate data preprocessing step before classification algorithms. In this paper, we study a joint learning framework in which we perform dimensionality reduction and multi-label classification simultaneously. We show that when the least squares loss is used in classification, this joint learning decouples into two separate components, i.e., dimensionality reduction followed by multi-label classification. This analysis partially justifies the current practice of a separate application of dimensionality reduction for classification problems. We extend our analysis using other loss functions, including the hinge loss and the squared hinge loss. We further extend the formulation to the more general case where the input data for different class labels may differ, overcoming the limitation of traditional dimensionality reduction algorithms. Experiments on benchmark data sets have been conducted to evaluate the proposed joint formulations.
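As an illustration of the two-stage pipeline the paper analyzes, the following sketch (not the authors' exact formulation; the projection method, dimensions, and thresholding rule are assumptions for the example) performs a PCA-like dimensionality reduction via truncated SVD and then fits a multi-label classifier with the least squares loss on the reduced features:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, r = 200, 50, 5, 10          # samples, features, labels, reduced dim

X = rng.standard_normal((n, d))                       # input data
Y = (rng.standard_normal((n, k)) > 0).astype(float)   # binary label matrix

# Stage 1: dimensionality reduction via truncated SVD (PCA-like projection)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:r].T                    # n x r reduced representation

# Stage 2: multi-label classification with the least squares loss,
# W = argmin_W ||Z W - Y||_F^2, solved in closed form
W, *_ = np.linalg.lstsq(Z, Y, rcond=None)

scores = Z @ W                       # per-label scores
pred = (scores > 0.5).astype(float)  # threshold each label independently
print(pred.shape)                    # one prediction column per label
```

The paper's least-squares result says that solving the joint problem (learning the projection and classifier together) yields the same solution as this separate two-stage procedure, which is what justifies the common preprocessing practice.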

Original language: English (US)
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Pages: 1077-1082
Number of pages: 6
State: Published - 2009
Event: 21st International Joint Conference on Artificial Intelligence, IJCAI-09 - Pasadena, CA, United States
Duration: Jul 11 2009 - Jul 17 2009

Other

Other: 21st International Joint Conference on Artificial Intelligence, IJCAI-09
Country: United States
City: Pasadena, CA
Period: 7/11/09 - 7/17/09


ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Ji, S., & Ye, J. (2009). Linear dimensionality reduction for multi-label classification. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1077-1082)

Linear dimensionality reduction for multi-label classification. / Ji, Shuiwang; Ye, Jieping.

IJCAI International Joint Conference on Artificial Intelligence. 2009. p. 1077-1082.


Ji, S & Ye, J 2009, Linear dimensionality reduction for multi-label classification. in IJCAI International Joint Conference on Artificial Intelligence. pp. 1077-1082, 21st International Joint Conference on Artificial Intelligence, IJCAI-09, Pasadena, CA, United States, 7/11/09.
Ji S, Ye J. Linear dimensionality reduction for multi-label classification. In IJCAI International Joint Conference on Artificial Intelligence. 2009. p. 1077-1082
Ji, Shuiwang ; Ye, Jieping. / Linear dimensionality reduction for multi-label classification. IJCAI International Joint Conference on Artificial Intelligence. 2009. pp. 1077-1082
@inproceedings{d06b12d3cfd74e2b92cd4fad74f1ade3,
title = "Linear dimensionality reduction for multi-label classification",
abstract = "Dimensionality reduction is an essential step in high-dimensional data analysis. Many dimensionality reduction algorithms have been applied successfully to multi-class and multi-label problems. They are commonly applied as a separate data preprocessing step before classification algorithms. In this paper, we study a joint learning framework in which we perform dimensionality reduction and multi-label classification simultaneously. We show that when the least squares loss is used in classification, this joint learning decouples into two separate components, i.e., dimensionality reduction followed by multi-label classification. This analysis partially justifies the current practice of a separate application of dimensionality reduction for classification problems. We extend our analysis using other loss functions, including the hinge loss and the squared hinge loss. We further extend the formulation to the more general case where the input data for different class labels may differ, overcoming the limitation of traditional dimensionality reduction algorithms. Experiments on benchmark data sets have been conducted to evaluate the proposed joint formulations.",
author = "Shuiwang Ji and Jieping Ye",
year = "2009",
language = "English (US)",
isbn = "9781577354260",
pages = "1077--1082",
booktitle = "IJCAI International Joint Conference on Artificial Intelligence",

}

TY - GEN

T1 - Linear dimensionality reduction for multi-label classification

AU - Ji, Shuiwang

AU - Ye, Jieping

PY - 2009

Y1 - 2009

N2 - Dimensionality reduction is an essential step in high-dimensional data analysis. Many dimensionality reduction algorithms have been applied successfully to multi-class and multi-label problems. They are commonly applied as a separate data preprocessing step before classification algorithms. In this paper, we study a joint learning framework in which we perform dimensionality reduction and multi-label classification simultaneously. We show that when the least squares loss is used in classification, this joint learning decouples into two separate components, i.e., dimensionality reduction followed by multi-label classification. This analysis partially justifies the current practice of a separate application of dimensionality reduction for classification problems. We extend our analysis using other loss functions, including the hinge loss and the squared hinge loss. We further extend the formulation to the more general case where the input data for different class labels may differ, overcoming the limitation of traditional dimensionality reduction algorithms. Experiments on benchmark data sets have been conducted to evaluate the proposed joint formulations.

AB - Dimensionality reduction is an essential step in high-dimensional data analysis. Many dimensionality reduction algorithms have been applied successfully to multi-class and multi-label problems. They are commonly applied as a separate data preprocessing step before classification algorithms. In this paper, we study a joint learning framework in which we perform dimensionality reduction and multi-label classification simultaneously. We show that when the least squares loss is used in classification, this joint learning decouples into two separate components, i.e., dimensionality reduction followed by multi-label classification. This analysis partially justifies the current practice of a separate application of dimensionality reduction for classification problems. We extend our analysis using other loss functions, including the hinge loss and the squared hinge loss. We further extend the formulation to the more general case where the input data for different class labels may differ, overcoming the limitation of traditional dimensionality reduction algorithms. Experiments on benchmark data sets have been conducted to evaluate the proposed joint formulations.

UR - http://www.scopus.com/inward/record.url?scp=77953179079&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77953179079&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:77953179079

SN - 9781577354260

SP - 1077

EP - 1082

BT - IJCAI International Joint Conference on Artificial Intelligence

ER -