Incremental proximal methods for large scale convex optimization

Research output: Contribution to journal › Article › peer-review

219 Scopus citations

Abstract

We consider the minimization of a sum $\sum_{i=1}^{m} f_i(x)$ consisting of a large number of convex component functions $f_i$. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
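To illustrate the kind of iteration the abstract describes, here is a minimal sketch (not taken from the paper) of an incremental proximal method for the special case $f_i(x) = \tfrac{1}{2}(a_i^\top x - b_i)^2$, where each single-component proximal step has a closed form. The function name, step size, and test problem are illustrative assumptions; the randomized component order follows the randomization discussed in the abstract.

```python
import numpy as np

def incremental_proximal(A, b, alpha=0.5, epochs=200, seed=0):
    """Incremental proximal sketch for min_x sum_i 0.5*(a_i.x - b_i)^2.

    Each iteration solves the proximal subproblem of ONE component:
        x+ = argmin_z  0.5*(a_i.z - b_i)^2 + (1/(2*alpha)) * ||z - x||^2
           = x - alpha*(a_i.x - b_i) / (1 + alpha*||a_i||^2) * a_i
    (closed form because the component is quadratic).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        # Randomized order of components, one prox step per component.
        for i in rng.permutation(m):
            a_i = A[i]
            resid = a_i @ x - b[i]
            x = x - alpha * resid / (1.0 + alpha * (a_i @ a_i)) * a_i
    return x

# Illustrative example: recover x* from consistent linear measurements.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_star = np.array([2.0, -1.0])
b = A @ x_star
x_hat = incremental_proximal(A, b)
```

Because the proximal step is implicit, this iteration tolerates larger constant step sizes than the corresponding incremental gradient step; for this consistent system `x_hat` converges to `x_star`.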

Original language: English (US)
Pages (from-to): 163-195
Number of pages: 33
Journal: Mathematical Programming
Volume: 129
Issue number: 2
DOIs
State: Published - Oct 2011
Externally published: Yes

Keywords

  • Convex
  • Gradient method
  • Incremental method
  • Proximal algorithm

ASJC Scopus subject areas

  • Software
  • General Mathematics
