Learning generalized reactive policies using deep neural networks

Edward Groshev, Maxwell Goldstein, Aviv Tamar, Siddharth Srivastava, Pieter Abbeel

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.
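The paper does not ship code with this record, but the setup the abstract describes (a network mapping a problem instance and a state to an action, trained by behavioral cloning on successful execution traces) can be sketched compactly. Below is a minimal illustration, assuming PyTorch and an image-like CxHxW grid encoding of the instance together with the current state; the names GRPNet, train_grp, and expert_traces are hypothetical, not from the paper.

# Minimal sketch of GRP imitation learning, assuming a grid encoding of
# (problem instance, state). GRPNet / train_grp / expert_traces are
# illustrative names, not the authors' implementation.
import torch
import torch.nn as nn

class GRPNet(nn.Module):
    """Maps an encoded (problem instance, state) pair to action logits,
    plus a scalar cost-to-go estimate usable as a search heuristic."""
    def __init__(self, in_channels, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # fixed-size summary regardless of grid size
        )
        self.policy_head = nn.Linear(32, n_actions)  # pi(action | instance, state)
        self.value_head = nn.Linear(32, 1)           # learned heuristic estimate

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.policy_head(h), self.value_head(h).squeeze(-1)

def train_grp(net, expert_traces, epochs=10, lr=1e-3):
    """Behavioral cloning on successful execution traces. Each sample is
    (obs, action, cost_to_go): obs a CxHxW float tensor, action a long
    scalar tensor, cost_to_go the remaining plan length as a float tensor."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    for _ in range(epochs):
        for obs, action, cost_to_go in expert_traces:
            logits, value = net(obs.unsqueeze(0))  # batch of one
            loss = ce(logits, action.view(1)) + mse(value, cost_to_go.view(1))
            opt.zero_grad()
            loss.backward()
            opt.step()

The abstract also notes that the learned heuristic can drive a directed search algorithm. One way this could look, reusing GRPNet's value head above (again a sketch with hypothetical names; the paper's own search procedure may differ):

# Greedy best-first search ordered by the learned cost-to-go estimate.
# encode(state) must return the CxHxW tensor GRPNet expects;
# successors(state) yields (action, next_state) pairs; states must be hashable.
import heapq
import torch

def greedy_best_first(start, goal_test, successors, encode, net):
    def h(state):
        with torch.no_grad():
            _, value = net(encode(state).unsqueeze(0))
        return float(value)

    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(h(start), counter, start)]
    seen = {start}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if goal_test(state):
            return state
        for _, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                counter += 1
                heapq.heappush(frontier, (h(nxt), counter, nxt))
    return None  # exhausted the reachable space without finding a goal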

Original language: English (US)
Pages (from-to): 408-416
Number of pages: 9
Journal: Proceedings International Conference on Automated Planning and Scheduling, ICAPS
Volume: 2018-June
State: Published - Jan 1 2018
Event: 28th International Conference on Automated Planning and Scheduling, ICAPS 2018 - Delft, Netherlands
Duration: Jun 24 2018 - Jun 29 2018

Fingerprint

  • Planning
  • Feature extraction
  • Decision making
  • Deep neural networks
  • Neural networks
  • Experiments
  • Heuristics

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Information Systems and Management

Cite this

Learning generalized reactive policies using deep neural networks. / Groshev, Edward; Goldstein, Maxwell; Tamar, Aviv; Srivastava, Siddharth; Abbeel, Pieter.

In: Proceedings International Conference on Automated Planning and Scheduling, ICAPS, Vol. 2018-June, 01.01.2018, p. 408-416.

Research output: Contribution to journal › Conference article

Groshev, Edward; Goldstein, Maxwell; Tamar, Aviv; Srivastava, Siddharth; Abbeel, Pieter. / Learning generalized reactive policies using deep neural networks. In: Proceedings International Conference on Automated Planning and Scheduling, ICAPS. 2018; Vol. 2018-June. pp. 408-416.
@article{1e9114efc4df479ea607df4a7955f81d,
title = "Learning generalized reactive policies using deep neural networks",
abstract = "We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.",
author = "Edward Groshev and Maxwell Goldstein and Aviv Tamar and Siddharth Srivastava and Pieter Abbeel",
year = "2018",
month = "1",
day = "1",
language = "English (US)",
volume = "2018-June",
pages = "408--416",
journal = "Proceedings International Conference on Automated Planning and Scheduling, ICAPS",
issn = "2334-0835",
}

TY - JOUR

T1 - Learning generalized reactive policies using deep neural networks

AU - Groshev, Edward

AU - Goldstein, Maxwell

AU - Tamar, Aviv

AU - Srivastava, Siddharth

AU - Abbeel, Pieter

PY - 2018/1/1

Y1 - 2018/1/1

N2 - We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.

AB - We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.

UR - http://www.scopus.com/inward/record.url?scp=85054993884&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85054993884&partnerID=8YFLogxK

M3 - Conference article

VL - 2018-June

SP - 408

EP - 416

JO - Proceedings International Conference on Automated Planning and Scheduling, ICAPS

JF - Proceedings International Conference on Automated Planning and Scheduling, ICAPS

SN - 2334-0835

ER -