A new class of incremental gradient methods for least squares problems

Research output: Contribution to journal › Article › peer-review

197 Scopus citations

Abstract

The least mean squares (LMS) method for linear least squares problems differs from the steepest descent method in that it processes data blocks one-by-one, with intermediate adjustment of the parameter vector under optimization. This mode of operation often leads to faster convergence when far from the eventual limit and to slower (sublinear) convergence when close to the optimal solution. We embed both LMS and steepest descent, as well as other intermediate methods, within a one-parameter class of algorithms, and we propose a hybrid class of methods that combine the faster early convergence rate of LMS with the faster ultimate linear convergence rate of steepest descent. These methods are well suited for neural network training problems with large data sets. Furthermore, these methods allow the effective use of scaling based, for example, on diagonal or other approximations of the Hessian matrix.
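To illustrate the two endpoints the abstract contrasts, the sketch below (an assumption for illustration, not the paper's one-parameter family) implements batch steepest descent and an incremental LMS-style pass for a linear least squares problem min ½‖Ax − b‖²; the paper's hybrid methods interpolate between these two update schemes.

```python
import numpy as np

def steepest_descent(A, b, steps=500, eta=None):
    """Batch gradient descent on f(x) = 0.5 * ||A x - b||^2."""
    m, n = A.shape
    if eta is None:
        # Conservative step size: 1 / lambda_max(A^T A)
        eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(steps):
        # Full gradient uses all data rows at once
        x -= eta * A.T @ (A @ x - b)
    return x

def lms(A, b, sweeps=500, eta=None):
    """Incremental (LMS-style) gradient: adjust x after each data row."""
    m, n = A.shape
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            # Gradient of the single-term cost 0.5 * (a_i^T x - b_i)^2
            x -= eta * A[i] * (A[i] @ x - b[i])
    return x
```

With a consistent system both iterations reach the least squares solution; on large, inconsistent data sets the incremental pass typically makes faster early progress, which is the behavior the hybrid class is designed to combine with the linear ultimate rate of steepest descent.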

Original language: English (US)
Pages (from-to): 913-926
Number of pages: 14
Journal: SIAM Journal on Optimization
Volume: 7
Issue number: 4
State: Published - Nov 1997
Externally published: Yes

Keywords

  • Gradient methods
  • Least squares
  • Neural networks
  • Nonlinear programming

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
