Gradient convergence in gradient methods with errors

Dimitri P. Bertsekas, John N. Tsitsiklis

Research output: Contribution to journal › Article › peer-review

217 Scopus citations

Abstract

We consider the gradient method x_{t+1} = x_t + γ_t(s_t + w_t), where s_t is a descent direction of a function f : ℝ^n → ℝ and w_t is a deterministic or stochastic error. We assume that ∇f is Lipschitz continuous, that the stepsize γ_t diminishes to 0, and that s_t and w_t satisfy standard conditions. We show that either f(x_t) → -∞ or f(x_t) converges to a finite value and ∇f(x_t) → 0 (with probability 1 in the stochastic case), and in doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of f, boundedness of ∇f(x_t), or boundedness of x_t.
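The iteration in the abstract can be illustrated with a minimal sketch. The objective f(x) = x², the Gaussian noise model, and the stepsize γ_t = 1/(t+1) below are illustrative choices, not taken from the paper; they simply satisfy its assumptions (Lipschitz ∇f, stepsize diminishing to 0, zero-mean bounded-variance error):

```python
import random

def grad_f(x):
    # Gradient of the illustrative objective f(x) = x^2 (not from the paper).
    return 2.0 * x

def noisy_gradient_method(x0, iters=20000, seed=0):
    """Run x_{t+1} = x_t + gamma_t * (s_t + w_t), with descent direction
    s_t = -grad f(x_t), diminishing stepsize gamma_t = 1/(t+1), and
    zero-mean stochastic error w_t."""
    rng = random.Random(seed)
    x = x0
    for t in range(iters):
        s = -grad_f(x)            # descent direction
        w = rng.gauss(0.0, 1.0)   # stochastic error, zero mean
        gamma = 1.0 / (t + 1)     # stepsize diminishing to 0
        x = x + gamma * (s + w)
    return x

x_final = noisy_gradient_method(5.0)
print(abs(grad_f(x_final)))  # should be small: gradient tends to 0
```

Here f is bounded below, so the theorem's dichotomy reduces to f(x_t) converging and ∇f(x_t) → 0 with probability 1; the point of the paper is that this conclusion also holds without such boundedness assumptions.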

Original language: English (US)
Pages (from-to): 627-642
Number of pages: 16
Journal: SIAM Journal on Optimization
Volume: 10
Issue number: 3
DOIs
State: Published - 2000
Externally published: Yes

Keywords

  • Gradient convergence
  • Gradient methods
  • Incremental gradient methods
  • Stochastic approximation

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
