Theory and applications for the supervised learning method based on gradient algorithms Part I-Fundamental algorithm

Jie Si, Guian Zhou, Han Li, Yingduo Han

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

A family of gradient-based supervised training algorithms and their applications are discussed. The Levenberg-Marquardt (LM) algorithm, a combination of the gradient method and the Gauss-Newton method, is examined in detail. It combines the fast local convergence of the Gauss-Newton method with the global convergence of the gradient method: when the damping parameter μ is large, the LM algorithm behaves like the gradient method; when μ is small, it approaches the Gauss-Newton method. By exploiting approximate second-derivative information, the LM algorithm is more efficient than the gradient method. In training performance and accuracy, the LM method is superior to the conjugate gradient method and the variable-learning-rate BP method. Because the damped coefficient matrix is positive definite, the update equation always has a solution; in this respect the LM method is preferable to the Gauss-Newton method.
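
As a minimal illustrative sketch (not the paper's own implementation): for a least-squares loss with residual vector r(w) and Jacobian J(w), the LM step solves (JᵀJ + μI)Δw = Jᵀr, and μ is adapted after each trial step. The function names `residual` and `jacobian` and the damping schedule (shrink μ by 10 on success, grow it by 10 on failure) are assumptions chosen for clarity.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w, mu=1e-2, tol=1e-8, max_iter=100):
    """Sketch of a Levenberg-Marquardt loop for least-squares training.

    residual: function w -> r(w), the vector of training errors
    jacobian: function w -> J(w), the Jacobian of r with respect to w
    """
    r = residual(w)
    cost = 0.5 * r @ r
    for _ in range(max_iter):
        J = jacobian(w)
        # JᵀJ + μI is positive definite for μ > 0, so the solve always succeeds;
        # this is the existence property noted in the abstract.
        A = J.T @ J + mu * np.eye(w.size)
        step = np.linalg.solve(A, J.T @ r)
        w_new = w - step
        r_new = residual(w_new)
        cost_new = 0.5 * r_new @ r_new
        if cost_new < cost:
            # Successful step: accept it and shrink μ,
            # moving the method toward Gauss-Newton behavior.
            w, r, cost = w_new, r_new, cost_new
            mu *= 0.1
            if np.linalg.norm(step) < tol:
                break
        else:
            # Failed step: grow μ, so the next step behaves
            # more like a (short) gradient-descent step.
            mu *= 10.0
    return w
```

For instance, fitting a small neural network would plug in a `residual` returning target errors over the training set and a `jacobian` computed by backpropagation, with each iteration costing one linear solve in the number of weights.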

Original language: English (US)
Pages (from-to): 71-73
Number of pages: 3
Journal: Qinghua Daxue Xuebao/Journal of Tsinghua University
Volume: 37
Issue number: 7
State: Published - 1997
Externally published: Yes

ASJC Scopus subject areas

  • General Engineering
  • Computer Science Applications
  • Applied Mathematics
