Abstract

Approximate dynamic programming (ADP) has been widely studied from several important perspectives: algorithm development, learning efficiency measured by success or failure statistics, convergence rate, and learning error bounds. Given that many learning benchmarks used in ADP or reinforcement learning studies are control problems, it is important to examine the resulting learning controllers from a control-theoretic perspective as well. This paper uses direct heuristic dynamic programming (direct HDP) and three typical benchmark examples to introduce a unique analytical framework that can be applied to other learning control paradigms and to other complex control problems. Sensitivity analysis and linear quadratic regulator (LQR) design are used for two purposes: to quantify direct HDP performance and to provide guidance toward designing better learning controllers. The use of LQR, however, does not prevent direct HDP from serving as a learning controller that addresses nonlinear dynamic system control issues. Toward this end, applications of direct HDP to nonlinear control problems, beyond sensitivity analysis and the confines of LQR, are developed and compared, wherever appropriate, to an LQR design.
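
As a rough illustration of the control-theoretic measures the abstract refers to, the sketch below computes a discrete-time LQR gain from the algebraic Riccati equation and evaluates the sensitivity S and complementary sensitivity T of the resulting state-feedback loop. This is a minimal, generic example: the plant matrices A, B and the weights Q, R are illustrative placeholders, not the benchmark models or the direct HDP evaluation procedure used in the paper.

# Minimal sketch (assumed, not from the paper): discrete-time LQR via the
# algebraic Riccati equation, plus the sensitivity S and complementary
# sensitivity T of the resulting state-feedback loop. A, B, Q, R below are
# illustrative placeholders, not the paper's benchmark models.
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder second-order, single-input plant x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# LQR gain K = (R + B'PB)^-1 B'PA, where P solves the discrete-time ARE
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def sensitivities(z):
    """Sensitivity S and complementary sensitivity T at z = exp(j*w),
    for the SISO loop L(z) = K (zI - A)^-1 B broken at the plant input."""
    n = A.shape[0]
    L = (K @ np.linalg.inv(z * np.eye(n) - A) @ B).item()
    S = 1.0 / (1.0 + L)   # sensitivity
    T = L / (1.0 + L)     # complementary sensitivity (S + T = 1)
    return S, T

# Peak |S| over a frequency grid is one scalar measure against which a
# learning controller such as direct HDP could be compared.
freqs = np.linspace(1e-3, np.pi, 500)
peak_S = max(abs(sensitivities(np.exp(1j * w))[0]) for w in freqs)
print("LQR gain K =", K)
print("peak |S| over the grid =", peak_S)

Breaking the loop at the plant input and the particular choice of Q and R are arbitrary here; the paper's own benchmark models and weightings would replace these placeholders.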

Original language: English (US)
Pages (from-to): 177-201
Number of pages: 25
Journal: Journal of Intelligent and Robotic Systems: Theory and Applications
Volume: 55
Issue number: 2-3
DOIs: 10.1007/s10846-008-9307-5
State: Published - Jul 2009

Fingerprint

  • Dynamic programming
  • Controllers
  • Sensitivity analysis
  • Reinforcement learning
  • Dynamical systems
  • Statistics

Keywords

  • Approximate dynamic programming (ADP)
  • Direct heuristic dynamic programming (direct HDP)
  • Linear quadratic regulator (LQR)
  • On-line learning control
  • Sensitivity and complementary sensitivity

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Industrial and Manufacturing Engineering
  • Mechanical Engineering

Cite this

@article{f15c32bc18b0471fad908a901bd8a1df,
title = "Performance evaluation of direct heuristic dynamic programming using control-theoretic measures",
abstract = "Approximate dynamic programming (ADP) has been widely studied from several important perspectives: algorithm development, learning efficiency measured by success or failure statistics, convergence rate, and learning error bounds. Given that many learning benchmarks used in ADP or reinforcement learning studies are control problems, it is important to examine the resulting learning controllers from a control-theoretic perspective as well. This paper uses direct heuristic dynamic programming (direct HDP) and three typical benchmark examples to introduce a unique analytical framework that can be applied to other learning control paradigms and to other complex control problems. Sensitivity analysis and linear quadratic regulator (LQR) design are used for two purposes: to quantify direct HDP performance and to provide guidance toward designing better learning controllers. The use of LQR, however, does not prevent direct HDP from serving as a learning controller that addresses nonlinear dynamic system control issues. Toward this end, applications of direct HDP to nonlinear control problems, beyond sensitivity analysis and the confines of LQR, are developed and compared, wherever appropriate, to an LQR design.",
keywords = "Approximate dynamic programming (ADP), Direct heuristic dynamic programming (direct HDP), Linear quadratic regulator (LQR), On-line learning control, Sensitivity and complementary sensitivity",
author = "Lei Yang and Jennie Si and Konstantinos Tsakalis and Armando Rodriguez",
year = "2009",
month = "7",
doi = "10.1007/s10846-008-9307-5",
language = "English (US)",
volume = "55",
pages = "177--201",
journal = "Journal of Intelligent and Robotic Systems: Theory and Applications",
issn = "0921-0296",
publisher = "Springer Netherlands",
number = "2-3",
}
