Lambda-Policy Iteration: A Review and a New Implementation

Research output: Chapter in Book/Report/Conference proceeding


Abstract

In this chapter, we discuss λ-policy iteration, a method for exact and approximate dynamic programming. It is intermediate between the classical value iteration (VI) and policy iteration (PI) methods, and it is closely related to optimistic (also known as modified) PI, whereby each policy evaluation is done approximately, using a finite number of VI steps. We review the theory of the method and the associated questions of bias and exploration that arise in simulation-based cost function approximation. We then discuss various implementations, which offer advantages over well-established PI methods that use LSPE(λ), LSTD(λ), or TD(λ) for policy evaluation with cost function approximation. One of these implementations is based on a new simulation scheme, called geometric sampling, which uses multiple short trajectories rather than a single infinitely long trajectory.
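To make the idea concrete, here is a minimal sketch of exact (lookup-table) λ-policy iteration on a small finite MDP. It is not the chapter's simulation-based implementation; it only illustrates the basic iteration, using the standard identity that the partial evaluation J ← T_μ^(λ) J can be computed by solving the linear system (I − λαP_μ) J' = g_μ + (1 − λ)αP_μ J. The function name and the array layout (`P[a, s, s']` for transitions, `g[a, s]` for one-stage costs) are illustrative assumptions, not notation from the chapter.

```python
import numpy as np

def lambda_policy_iteration(P, g, alpha=0.9, lam=0.5, n_iter=100):
    """Illustrative exact lambda-policy iteration for a finite MDP.

    P     : array (A, S, S), transition probabilities P[a, s, s']
    g     : array (A, S), expected one-stage costs g[a, s]
    alpha : discount factor in (0, 1)
    lam   : the lambda parameter; lam = 0 reduces to value iteration,
            lam -> 1 approaches exact policy iteration
    """
    A, S, _ = P.shape
    J = np.zeros(S)
    mu = np.zeros(S, dtype=int)
    for _ in range(n_iter):
        # Policy improvement: greedy policy with respect to the current J.
        Q = g + alpha * (P @ J)          # shape (A, S)
        mu = np.argmin(Q, axis=0)        # minimize expected cost
        Pmu = P[mu, np.arange(S)]        # (S, S) transitions under mu
        gmu = g[mu, np.arange(S)]        # (S,) one-stage costs under mu
        # Partial policy evaluation: J <- T_mu^(lambda) J, computed exactly
        # via the fixed point of J' = (1-lam) T_mu J + lam T_mu J'.
        J = np.linalg.solve(np.eye(S) - lam * alpha * Pmu,
                            gmu + (1 - lam) * alpha * (Pmu @ J))
    return J, mu
```

Setting `lam=0` makes each step a single VI sweep, while `lam` close to 1 makes each step nearly a full policy evaluation, which is the sense in which the method interpolates between VI and PI.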

Original language: English (US)
Title of host publication: Reinforcement Learning and Approximate Dynamic Programming for Feedback Control
Publisher: John Wiley and Sons
Pages: 379-409
Number of pages: 31
ISBN (Print): 9781118104200
DOIs
State: Published - Feb 7 2013

Keywords

  • DP for complex problems, λ-PI
  • LSTD(λ) batch, simple matrix inversion
  • MDP and RL, λ-policy iteration in DP
  • λ-PI without cost function, using geometric
  • λ-policy, a new implementation

ASJC Scopus subject areas

  • Engineering (all)

