TY - GEN
T1 - Adaptive multi-task sparse learning with an application to fMRI study
AU - Chen, Xi
AU - He, Jinghui
AU - Lawrence, Rick
AU - Carbonell, Jaime G.
PY - 2012
Y1 - 2012
N2 - In this paper, we consider the multi-task sparse learning problem under the assumption that the dimensionality diverges with the sample size. The traditional l1/l2 multi-task lasso does not enjoy the oracle property unless a rather strong condition is enforced. Inspired by the adaptive lasso, we propose a multi-stage procedure, the adaptive multi-task lasso, to simultaneously conduct model estimation and variable selection across different tasks. Motivated by the adaptive elastic-net, we further propose the adaptive multi-task elastic-net, which adds a quadratic penalty to address the problem of collinearity. When the number of tasks is fixed, under weak assumptions, we establish the asymptotic oracle property for the proposed adaptive multi-task sparse learning methods, including both the adaptive multi-task lasso and elastic-net. In addition to this desirable asymptotic property, we show by simulations that the adaptive sparse learning methods also achieve much improved finite-sample performance. As a case study, we apply the adaptive multi-task elastic-net to a cognitive science problem, where one wants to discover a compact semantic basis for predicting fMRI images. We show that adaptive multi-task sparse learning methods achieve superior performance and provide some insights into how the brain represents the meanings of words.
UR - http://www.scopus.com/inward/record.url?scp=84875095701&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84875095701&partnerID=8YFLogxK
U2 - 10.1137/1.9781611972825.19
DO - 10.1137/1.9781611972825.19
M3 - Conference contribution
AN - SCOPUS:84875095701
SN - 9781611972320
T3 - Proceedings of the 12th SIAM International Conference on Data Mining, SDM 2012
SP - 212
EP - 223
BT - Proceedings of the 12th SIAM International Conference on Data Mining, SDM 2012
PB - Society for Industrial and Applied Mathematics Publications
T2 - 12th SIAM International Conference on Data Mining, SDM 2012
Y2 - 26 April 2012 through 28 April 2012
ER -