Learning incoherent sparse and low-rank patterns from multiple tasks

Jianhui Chen, Liu Ji, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

35 Citations (Scopus)

Abstract

We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and an Euclidean projection subproblem. In addition, we present two projected gradient algorithms and discuss their rates of convergence. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
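The abstract describes a projected gradient scheme in which each iteration takes a gradient step and then performs a Euclidean projection back onto the feasible domain. The sketch below is not the paper's formulation; it is a minimal illustration of the same general pattern, using a hypothetical least-squares objective and projection onto a Frobenius-norm ball (the paper's actual feasible set, involving the low-rank constraint and its convex surrogate, is more involved).

```python
import numpy as np

def project_frobenius_ball(W, radius):
    """Euclidean projection onto {W : ||W||_F <= radius}."""
    norm = np.linalg.norm(W)
    return W if norm <= radius else W * (radius / norm)

def projected_gradient(X, Y, radius=1.0, step=None, iters=200):
    """Minimize ||X W - Y||_F^2 subject to ||W||_F <= radius
    via projected gradient descent (illustrative example only)."""
    d, k = X.shape[1], Y.shape[1]
    W = np.zeros((d, k))
    if step is None:
        # Constant step 1/L, where L = 2 * sigma_max(X)^2 is the
        # Lipschitz constant of the gradient of the objective.
        L = 2.0 * np.linalg.norm(X, 2) ** 2
        step = 1.0 / L
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ W - Y)          # gradient step
        W = project_frobenius_ball(W - step * grad, radius)  # projection step
    return W
```

In the paper, the analogous projection step is shown to decompose into an unconstrained optimization subproblem and a Euclidean projection subproblem; the toy version above only exhibits the overall gradient-then-project structure.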

Original language: English (US)
Title of host publication: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Pages: 1179-1187
Number of pages: 9
DOIs: https://doi.org/10.1145/1835804.1835952
State: Published - 2010
Event: 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD-2010 - Washington, DC, United States
Duration: Jul 25 2010 - Jul 28 2010

Other

Other: 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD-2010
Country: United States
City: Washington, DC
Period: 7/25/10 - 7/28/10


Keywords

  • Multi-task learning
  • Sparse and low-rank patterns
  • Trace norm

ASJC Scopus subject areas

  • Software
  • Information Systems

Cite this

Chen, J., Ji, L., & Ye, J. (2010). Learning incoherent sparse and low-rank patterns from multiple tasks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1179-1187) https://doi.org/10.1145/1835804.1835952

@inproceedings{17574eaa010e4536b97469329c52bde1,
title = "Learning incoherent sparse and low-rank patterns from multiple tasks",
keywords = "Multi-task learning, Sparse and low-rank patterns, Trace norm",
author = "Jianhui Chen and Liu Ji and Jieping Ye",
year = "2010",
doi = "10.1145/1835804.1835952",
language = "English (US)",
isbn = "9781450300551",
pages = "1179--1187",
booktitle = "Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",

}

TY - GEN

T1 - Learning incoherent sparse and low-rank patterns from multiple tasks

AU - Chen, Jianhui

AU - Ji, Liu

AU - Ye, Jieping

PY - 2010

Y1 - 2010


KW - Multi-task learning

KW - Sparse and low-rank patterns

KW - Trace norm

UR - http://www.scopus.com/inward/record.url?scp=77956208061&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77956208061&partnerID=8YFLogxK

U2 - 10.1145/1835804.1835952

DO - 10.1145/1835804.1835952

M3 - Conference contribution

AN - SCOPUS:77956208061

SN - 9781450300551

SP - 1179

EP - 1187

BT - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

ER -