### Abstract

Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is often impractical for large-scale problems because its computational cost is a multiple of the cost of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
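The scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a least-squares loss and the capped-ℓ1 penalty r(x) = λ Σᵢ min(|xᵢ|, θ) (one of the non-convex penalties admitting a closed-form proximal map); the function names, the monotone sufficient-decrease condition, and the parameter values are illustrative choices.

```python
import numpy as np

def capped_l1_prox(u, lam, theta, t):
    # Elementwise proximal map of r(x) = lam * min(|x|, theta) with step 1/t:
    #   argmin_x  t/2 * (x - u)^2 + lam * min(|x|, theta).
    # Two candidate solutions (|x| >= theta vs. |x| <= theta); keep the cheaper one.
    x1 = np.sign(u) * np.maximum(np.abs(u), theta)
    x2 = np.sign(u) * np.minimum(theta, np.maximum(0.0, np.abs(u) - lam / t))
    obj = lambda x: 0.5 * t * (x - u) ** 2 + lam * np.minimum(np.abs(x), theta)
    return np.where(obj(x1) <= obj(x2), x1, x2)

def gist(A, b, lam=0.1, theta=0.5, max_iter=500, sigma=1e-5, tol=1e-8):
    # Minimize 0.5*||Ax - b||^2 + lam * sum(min(|x_i|, theta)) via GIST-style
    # proximal steps: BB-initialized step size, then a line search.
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    F = lambda v: 0.5 * np.sum((A @ v - b) ** 2) \
        + lam * np.sum(np.minimum(np.abs(v), theta))
    t, f_old = 1.0, F(x)
    for _ in range(max_iter):
        # Line search: double t until a sufficient-decrease condition holds.
        while True:
            x_new = capped_l1_prox(x - grad / t, lam, theta, t)
            if F(x_new) <= f_old - 0.5 * sigma * t * np.sum((x_new - x) ** 2):
                break
            t *= 2.0
        grad_new = A.T @ (A @ x_new - b)
        dx, dg = x_new - x, grad_new - grad
        denom = dx @ dx
        if np.sqrt(denom) < tol:
            x = x_new
            break
        # BB rule initializes the next step size (clamped: dg@dx may be
        # negative for a non-convex objective).
        t = max((dx @ dg) / denom, 1e-8)
        x, grad, f_old = x_new, grad_new, F(x_new)
    return x
```

Each iteration costs one gradient evaluation plus an elementwise prox, which is why the approach scales better than solving a full sequence of convex relaxations.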

| Original language | English (US) |
|---|---|
| Title of host publication | 30th International Conference on Machine Learning, ICML 2013 |
| Publisher | International Machine Learning Society (IMLS) |
| Pages | 696-704 |
| Number of pages | 9 |
| Edition | PART 1 |
| State | Published - 2013 |
| Event | 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States. Duration: Jun 16 2013 → Jun 21 2013 |

### Other

| Other | 30th International Conference on Machine Learning, ICML 2013 |
|---|---|
| Country | United States |
| City | Atlanta, GA |
| Period | 6/16/13 → 6/21/13 |

### ASJC Scopus subject areas

- Human-Computer Interaction
- Sociology and Political Science

### Cite this

**A General Iterative Shrinkage and Thresholding algorithm for non-convex regularized optimization problems.** / Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z.; Ye, Jieping.

In *30th International Conference on Machine Learning, ICML 2013* (PART 1 ed., pp. 696-704). International Machine Learning Society (IMLS). Atlanta, GA, United States, Jun 16-21, 2013.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

