An accelerated gradient method for trace norm minimization

Shuiwang Ji, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

45 Scopus citations

Abstract

We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulations find applications in many machine learning tasks, including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving this class of problems converges as O(1/√k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k²) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms.
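The "special structure of the trace norm" that the abstract alludes to is that its proximal operator reduces to soft-thresholding the singular values of the matrix. A minimal sketch of an accelerated proximal gradient method of the kind described, shown here for a hypothetical least-squares loss 0.5·‖AW − B‖²_F (the loss, variable names, and step sizes are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of tau * trace norm.
    Shrinks each singular value of W by tau and discards negative ones."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def accelerated_gradient(A, B, lam, L, iters=300):
    """Accelerated proximal gradient sketch for
        min_W 0.5 * ||A W - B||_F^2 + lam * ||W||_*
    where ||.||_* is the trace (nuclear) norm.
    L is a Lipschitz constant of the smooth part's gradient,
    e.g. the squared spectral norm of A. Achieves an O(1/k^2) rate."""
    W = np.zeros((A.shape[1], B.shape[1]))
    Z = W.copy()          # extrapolated ("search") point
    t = 1.0               # momentum parameter
    for _ in range(iters):
        grad = A.T @ (A @ Z - B)          # gradient of the smooth loss at Z
        W_next = svt(Z - grad / L, lam / L)  # gradient step, then trace-norm prox
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)  # extrapolation
        W, t = W_next, t_next
    return W
```

Dropping the extrapolation (always setting Z = W) recovers the plain extended gradient scheme with its slower O(1/k) rate; the momentum sequence t is what buys the O(1/k²) acceleration.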

Original language: English (US)
Title of host publication: Proceedings of the 26th Annual International Conference on Machine Learning, ICML'09
DOIs
State: Published - 2009
Event: 26th Annual International Conference on Machine Learning, ICML'09 - Montreal, QC, Canada
Duration: Jun 14 2009 to Jun 18 2009

Publication series

Name: ACM International Conference Proceeding Series
Volume: 382

Other

Other: 26th Annual International Conference on Machine Learning, ICML'09
Country/Territory: Canada
City: Montreal, QC
Period: 6/14/09 to 6/18/09

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications
