Generalized low rank approximations of matrices

Jieping Ye

Research output: Contribution to journal › Article

269 Citations (Scopus)

Abstract

The problem of computing low-rank approximations of matrices is considered. The novel aspect of our approach is that the low-rank approximation is computed for a collection of matrices rather than a single one. We formulate this as an optimization problem that minimizes the reconstruction (approximation) error. To the best of our knowledge, this optimization problem does not admit a closed-form solution. We therefore derive an iterative algorithm, GLRAM (Generalized Low Rank Approximations of Matrices), which reduces the reconstruction error sequentially, so the approximation improves over successive iterations. Experiments show that the algorithm converges rapidly. We have conducted extensive experiments on image data to evaluate the proposed algorithm and to compare its low-rank approximations with those obtained from traditional Singular Value Decomposition (SVD) based methods, in terms of reconstruction error, misclassification error rate, and computation time. GLRAM is competitive with SVD for classification at a much lower computation cost, but it incurs a larger reconstruction error than SVD. To further reduce this error, we study GLRAM + SVD, a combination in which GLRAM is applied first and SVD is then applied to the reduced representation. For the same number of reduced dimensions, GLRAM + SVD reduces the reconstruction error significantly compared to GLRAM alone, while keeping the computation cost low.
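To make the setup concrete: given matrices A_1, ..., A_n of size r x c, GLRAM seeks shared projections L (r x l1) and R (c x l2) with orthonormal columns, plus per-matrix cores M_i (l1 x l2), minimizing sum_i ||A_i - L M_i R^T||_F^2. Below is a minimal NumPy sketch of one natural alternating scheme for this objective; the abstract does not spell out the iteration, so the specific update rules here (top eigenvectors of accumulated second-moment matrices), the function name, the identity-block initialization, and the fixed iteration count are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

def glram(As, l1, l2, n_iter=20):
    # Hypothetical sketch of a GLRAM-style alternating iteration.
    # As: list of (r, c) arrays A_i; l1, l2: reduced dimensions.
    # Returns L (r, l1), R (c, l2) with orthonormal columns and the
    # reduced matrices M_i = L^T A_i R, so that A_i ~ L M_i R^T.
    r, c = As[0].shape
    R = np.eye(c, l2)                      # simple initialization (assumption)
    for _ in range(n_iter):
        # With R fixed, take L as the top-l1 eigenvectors of
        # sum_i A_i R R^T A_i^T (symmetric, so eigh applies;
        # eigh sorts ascending, hence the column reversal).
        SL = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(SL)[1][:, ::-1][:, :l1]
        # With L fixed, take R as the top-l2 eigenvectors of
        # sum_i A_i^T L L^T A_i.
        SR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(SR)[1][:, ::-1][:, :l2]
    Ms = [L.T @ A @ R for A in As]
    return L, R, Ms

# Example: ten random 32 x 24 "images" reduced to 8 x 6 cores.
As = [np.random.randn(32, 24) for _ in range(10)]
L, R, Ms = glram(As, l1=8, l2=6)
err = sum(np.linalg.norm(A - L @ M @ R.T) ** 2 for A, M in zip(As, Ms))

The GLRAM + SVD variant mentioned in the abstract would add a second stage, applying a standard SVD to the (much smaller) reduced representation to trim the reconstruction error further; that stage is omitted from this sketch.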

Original language: English (US)
Pages (from-to): 167-191
Number of pages: 25
Journal: Machine Learning
Volume: 61
Issue number: 1-3
DOI: 10.1007/s10994-005-3561-6
State: Published - November 2005

Keywords

  • Classification
  • Matrix approximation
  • Reconstruction error
  • Singular value decomposition

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Artificial Intelligence

Cite this

Ye, Jieping. "Generalized low rank approximations of matrices." Machine Learning, Vol. 61, No. 1-3, November 2005, pp. 167-191.

@article{4eefeb2d4da3417f8620e08fb37128f5,
title = "Generalized low rank approximations of matrices",
abstract = "The problem of computing low rank approximations of matrices is considered. The novel aspect of our approach is that the low rank approximations are on a collection of matrices. We formulate this as an optimization problem, which aims to minimize the reconstruction (approximation) error. To the best of our knowledge, the optimization problem proposed in this paper does not admit a closed form solution. We thus derive an iterative algorithm, namely GLRAM, which stands for the Generalized Low Rank Approximations of Matrices. GLRAM reduces the reconstruction error sequentially, and the resulting approximation is thus improved during successive iterations. Experimental results show that the algorithm converges rapidly. We have conducted extensive experiments on image data to evaluate the effectiveness of the proposed algorithm and compare the computed low rank approximations with those obtained from traditional Singular Value Decomposition (SVD) based methods. The comparison is based on the reconstruction error, misclassification error rate, and computation time. Results show that GLRAM is competitive with SVD for classification, while it has a much lower computation cost. However, GLRAM results in a larger reconstruction error than SVD. To further reduce the reconstruction error, we study the combination of GLRAM and SVD, namely GLRAM + SVD, where SVD is preceded by GLRAM. Results show that when using the same number of reduced dimensions, GLRAM + SVD achieves significant reduction of the reconstruction error as compared to GLRAM, while keeping the computation cost low.",
keywords = "Classification, Matrix approximation, Reconstruction error, Singular value decomposition",
author = "Jieping Ye",
year = "2005",
month = "11",
doi = "10.1007/s10994-005-3561-6",
language = "English (US)",
volume = "61",
pages = "167--191",
journal = "Machine Learning",
issn = "0885-6125",
publisher = "Springer Netherlands",
number = "1-3",

}
