A convex formulation for learning shared structures from multiple tasks

Jianhui Chen, Lei Tang, Jun Liu, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.
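
The record omits the paper's equations, but the abstract is concrete enough to sketch the alternating scheme it describes. The following is a minimal, hypothetical NumPy sketch, assuming a squared-loss instantiation: ASO's orthonormal shared-structure matrix (Theta, with Theta Theta^T = I) is relaxed to a convex surrogate M constrained to tr(M) = h with eigenvalues in [0, 1], and the task weight matrix W and M are minimized alternately. All names (caso_sketch, m_step_eigenvalues), parameters (alpha, beta, h), and the specific closed forms are reconstructions from the abstract, not the authors' reference implementation.

import numpy as np

def m_step_eigenvalues(sigma, h, eta, tol=1e-10):
    # Solve  min_lam  sum_i  sigma_i^2 / (eta + lam_i)
    # s.t.   0 <= lam_i <= 1,  sum_i lam_i = h.
    # KKT conditions give lam_i(mu) = clip(sigma_i / sqrt(mu) - eta, 0, 1);
    # bisect on the multiplier mu until the eigenvalues sum to h.
    # Assumes eta > 0 and sigma is not identically zero.
    lo, hi = 1e-12, (sigma.max() / eta) ** 2 + 1.0
    while hi - lo > tol * hi:
        mu = 0.5 * (lo + hi)
        if np.clip(sigma / np.sqrt(mu) - eta, 0.0, 1.0).sum() > h:
            lo = mu          # eigenvalue sum too large -> raise mu
        else:
            hi = mu
    return np.clip(sigma / np.sqrt(0.5 * (lo + hi)) - eta, 0.0, 1.0)

def caso_sketch(Xs, ys, h, alpha=1.0, beta=0.1, n_iter=50):
    # Alternating optimization for a convex ASO-style relaxation
    # (hypothetical reconstruction; requires 0 < h <= d and beta > 0).
    # Xs, ys: per-task data matrices (n_t x d) and targets (n_t,).
    d, m = Xs[0].shape[1], len(Xs)
    eta = beta / alpha
    c = alpha * eta * (1.0 + eta)      # penalty scale used in the relaxation
    M = (h / d) * np.eye(d)            # feasible start: tr(M) = h, 0 <= eigs <= 1
    W = np.zeros((d, m))
    for _ in range(n_iter):
        # W-step: per task, a generalized ridge problem
        #   min_w  (1/n_t) ||X_t w - y_t||^2 + c * w^T (eta I + M)^{-1} w
        P = c * np.linalg.inv(eta * np.eye(d) + M)
        for t in range(m):
            n_t = Xs[t].shape[0]
            W[:, t] = np.linalg.solve(Xs[t].T @ Xs[t] / n_t + P,
                                      Xs[t].T @ ys[t] / n_t)
        # M-step: the optimal M shares W's left singular vectors; its
        # eigenvalues solve a 1-D convex problem over a capped simplex.
        U, s, _ = np.linalg.svd(W, full_matrices=True)
        sigma = np.zeros(d)
        sigma[:len(s)] = s
        M = (U * m_step_eigenvalues(sigma, h, eta)) @ U.T
    return W, M

Each iteration solves a convex subproblem in one block with the other fixed (ridge regressions in W, an eigenvalue problem in M); since the relaxed objective is jointly convex, such alternation can reach a global optimum, which is the convergence property the abstract claims for cASO. For example, caso_sketch([X1, X2], [y1, y2], h=5) would fit two regression tasks coupled through a 5-dimensional shared subspace.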

Original language: English (US)
Title of host publication: ACM International Conference Proceeding Series
Volume: 382
DOI: 10.1145/1553374.1553392
State: Published - 2009
Event: 26th Annual International Conference on Machine Learning, ICML'09 - Montreal, QC, Canada
Duration: Jun 14, 2009 - Jun 18, 2009

Other

Other: 26th Annual International Conference on Machine Learning, ICML'09
Country: Canada
City: Montreal, QC
Period: 6/14/09 - 6/18/09

ASJC Scopus subject areas

  • Human-Computer Interaction

Cite this

APA: Chen, J., Tang, L., & Liu, J., & Ye, J. (2009). A convex formulation for learning shared structures from multiple tasks. In ACM International Conference Proceeding Series (Vol. 382). [18] https://doi.org/10.1145/1553374.1553392

Harvard: Chen, J, Tang, L, Liu, J & Ye, J 2009, A convex formulation for learning shared structures from multiple tasks. in ACM International Conference Proceeding Series. vol. 382, 18, 26th Annual International Conference on Machine Learning, ICML'09, Montreal, QC, Canada, 6/14/09. https://doi.org/10.1145/1553374.1553392
Vancouver: Chen J, Tang L, Liu J, Ye J. A convex formulation for learning shared structures from multiple tasks. In ACM International Conference Proceeding Series. Vol. 382. 2009. 18. https://doi.org/10.1145/1553374.1553392
Author: Chen, Jianhui ; Tang, Lei ; Liu, Jun ; Ye, Jieping. / A convex formulation for learning shared structures from multiple tasks. ACM International Conference Proceeding Series. Vol. 382, 2009.
@inproceedings{2b79aa3b5edc4aee9997cde2c50a35c1,
title = "A convex formulation for learning shared structures from multiple tasks",
abstract = "Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.",
author = "Jianhui Chen and Lei Tang and Jun Liu and Jieping Ye",
year = "2009",
doi = "10.1145/1553374.1553392",
language = "English (US)",
isbn = "9781605585161",
volume = "382",
booktitle = "ACM International Conference Proceeding Series",

}

TY - GEN

T1 - A convex formulation for learning shared structures from multiple tasks

AU - Chen, Jianhui

AU - Tang, Lei

AU - Liu, Jun

AU - Ye, Jieping

PY - 2009

Y1 - 2009

N2 - Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.

AB - Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. In this paper, we consider the problem of learning shared structures from multiple related tasks. We present an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm, in which all tasks are related by a shared feature representation. We convert iASO, a non-convex formulation, into a relaxed convex one, which is, however, not scalable to large data sets due to its complex constraints. We propose an alternating optimization (cASO) algorithm which solves the convex relaxation efficiently, and further show that cASO converges to a global optimum. In addition, we present a theoretical condition, under which cASO can find a globally optimal solution to iASO. Experiments on several benchmark data sets confirm our theoretical analysis.

UR - http://www.scopus.com/inward/record.url?scp=70049106295&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70049106295&partnerID=8YFLogxK

U2 - 10.1145/1553374.1553392

DO - 10.1145/1553374.1553392

M3 - Conference contribution

SN - 9781605585161

VL - 382

BT - ACM International Conference Proceeding Series

ER -