Multi-stage multi-task feature learning

Pinghua Gong, Jieping Ye, Changshui Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

39 Citations (Scopus)

Abstract

Multi-task sparse feature learning aims to improve generalization performance by exploiting the features shared among tasks. It has been successfully applied in many areas, including computer vision and biomedical informatics. Most existing multi-task sparse feature learning algorithms are formulated as convex sparse regularization problems, which are usually suboptimal because of the looseness with which they approximate an ℓ0-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with state-of-the-art multi-task sparse feature learning algorithms.
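The abstract does not spell out the regularizer or the stage-wise procedure, but multi-stage schemes of this kind are commonly implemented as iterative reweighting: each stage solves a convex weighted ℓ1 problem, and rows of the weight matrix whose norm already exceeds a cap are exempted from the penalty at the next stage (a capped-ℓ1 relaxation). Below is a minimal NumPy sketch under that assumption; the function name, the exact capped-ℓ1 form, and all parameter choices are illustrative and not taken from the paper or its released code.

```python
import numpy as np

def msmtfl_sketch(Xs, ys, lam=0.1, theta=0.5, n_stages=5, n_iters=200):
    """Multi-stage multi-task feature learning, sketched as reweighted l1.

    Xs: list of (m_t, d) design matrices, one per task.
    ys: list of (m_t,) target vectors.
    Each stage solves, by proximal gradient descent,
        min_W  sum_t ||X_t w_t - y_t||^2 / (2 m_t) + sum_j lam_j * ||W[j, :]||_1
    where lam_j = lam * 1{ ||W_prev[j, :]||_1 < theta }  (capped-l1 reweighting).
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((d, T))
    # step size from the largest per-task Gram-matrix eigenvalue
    L = max(np.linalg.eigvalsh(X.T @ X / X.shape[0]).max() for X in Xs)
    lr = 1.0 / L
    lam_j = np.full(d, lam)          # stage 1: plain l1 penalty on every row
    for _ in range(n_stages):
        for _ in range(n_iters):
            # gradient of the smooth least-squares loss, task by task
            G = np.column_stack([
                Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / Xs[t].shape[0]
                for t in range(T)
            ])
            Z = W - lr * G
            # prox of the row-weighted l1 penalty: entrywise soft-thresholding
            W = np.sign(Z) * np.maximum(np.abs(Z) - lr * lam_j[:, None], 0.0)
        # multi-stage update: stop penalizing rows already above the cap
        lam_j = lam * (np.abs(W).sum(axis=1) < theta)
    return W
```

Note that the first stage reduces to an ordinary convex ℓ1 (Lasso-type) baseline, and each later stage is again a convex problem; the non-convexity of the capped regularizer enters only through the reweighting between stages, which removes the shrinkage bias on rows confidently identified as relevant.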

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Pages: 1988-1996
Number of pages: 9
Volume: 3
State: Published - 2012
Event: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012 - Lake Tahoe, NV, United States
Duration: Dec 3, 2012 - Dec 6, 2012


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Gong, P., Ye, J., & Zhang, C. (2012). Multi-stage multi-task feature learning. In Advances in Neural Information Processing Systems (Vol. 3, pp. 1988-1996).

@inproceedings{58d62a347863428fb07b0cb42eb5dc46,
title = "Multi-stage multi-task feature learning",
author = "Pinghua Gong and Jieping Ye and Changshui Zhang",
year = "2012",
language = "English (US)",
isbn = "9781627480031",
volume = "3",
pages = "1988--1996",
booktitle = "Advances in Neural Information Processing Systems",

}
