Convergence results for some temporal difference methods based on least squares

Research output: Contribution to journal › Article › peer-review

66 Scopus citations

Abstract

We consider finite-state Markov decision processes, and prove convergence and rate of convergence results for certain least squares policy evaluation algorithms of the type known as LSPE(λ). These are temporal difference methods for constructing a linear function approximation of the cost function of a stationary policy, within the context of infinite-horizon discounted and average cost dynamic programming. We introduce an average cost method, patterned after the known discounted cost method, and we prove its convergence for a range of constant stepsize choices. We also show that the convergence rate of both the discounted and the average cost methods is optimal within the class of temporal difference methods. Analysis and experiment indicate that our methods are substantially and often dramatically faster than TD(λ), as well as more reliable.
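The abstract describes the LSPE(λ) iteration only in words. As a point of reference, the sketch below shows a discounted-cost LSPE(λ) policy evaluation loop in its standard simulation-based form (eligibility vector, running least-squares quantities, constant stepsize); the feature map `phi`, the stepsize `gamma`, and the regularization term `eps` are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def lspe_lambda(states, costs, phi, d, alpha=0.95, lam=0.7, gamma=1.0, eps=1e-6):
    """Hedged sketch of discounted-cost LSPE(lambda) policy evaluation.

    states: simulated trajectory x_0, ..., x_T under the fixed policy
    costs:  stage costs g(x_k, x_{k+1}) for k = 0, ..., T-1
    phi:    feature map returning a length-d vector for a state
    Returns weights r so that J(x) is approximated by phi(x)' r.
    """
    r = np.zeros(d)
    z = np.zeros(d)              # eligibility vector z_k = (alpha*lam) z_{k-1} + phi(x_k)
    B = eps * np.eye(d)          # small regularization keeps B invertible early on
    A = np.zeros((d, d))
    b = np.zeros(d)

    for k in range(len(states) - 1):
        f, f_next = phi(states[k]), phi(states[k + 1])
        z = alpha * lam * z + f
        B += np.outer(f, f)                      # sum of phi(x_k) phi(x_k)'
        A += np.outer(z, alpha * f_next - f)     # sum of z_k (alpha*phi(x_{k+1}) - phi(x_k))'
        b += z * costs[k]                        # sum of z_k g(x_k, x_{k+1})
        # LSPE(lambda) update with constant stepsize gamma
        r = r + gamma * np.linalg.solve(B, A @ r + b)

    return r
```

Under the usual assumptions, these iterates converge to the same limit as LSTD(λ); the paper's results concern that convergence and its rate, together with an average-cost variant not shown in this sketch.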

Original language: English (US)
Pages (from-to): 1515-1531
Number of pages: 17
Journal: IEEE Transactions on Automatic Control
Volume: 54
Issue number: 7
DOIs
State: Published - 2009
Externally published: Yes

Keywords

  • Approximation methods
  • Convergence of numerical methods
  • Dynamic programming
  • Markov processes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
