Learning generalized reactive policies using deep neural networks

Edward Groshev, Aviv Tamar, Maxwell Goldstein, Siddharth Srivastava, Pieter Abbeel

Research output: Contribution to conference › Paper › peer-review

Abstract

We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision making policies and powerful heuristic functions with minimal human input. Video results available at goo.gl/Hpy4e3.
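
As a concrete illustration of the GRP formulation described in the abstract, the sketch below implements a small policy network that maps an encoded problem instance and current state to an action, trained by behavioral cloning on (instance, state, action) triples extracted from successful execution traces. This is a minimal sketch under assumed details: the flat vector encodings, layer sizes, and the GRPNet/train_grp names are hypothetical stand-ins for illustration, not the architecture used in the paper.

```python
# Minimal sketch of a generalized reactive policy (GRP): a network mapping
# (problem instance, state) -> action, fit by behavioral cloning on
# successful execution traces. All encodings and sizes are illustrative.
import torch
import torch.nn as nn


class GRPNet(nn.Module):
    def __init__(self, instance_dim: int, state_dim: int, num_actions: int):
        super().__init__()
        # Concatenate the problem-instance encoding with the state encoding,
        # then predict logits over a discrete action set.
        self.policy = nn.Sequential(
            nn.Linear(instance_dim + state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, instance: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.policy(torch.cat([instance, state], dim=-1))


def train_grp(model: GRPNet, traces, epochs: int = 10, lr: float = 1e-3):
    """Behavioral cloning: minimize cross-entropy between the policy's
    predicted action distribution and the expert's chosen action."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for instance, state, action in traces:
            logits = model(instance, state)
            loss = loss_fn(logits, action)
            opt.zero_grad()
            loss.backward()
            opt.step()


if __name__ == "__main__":
    # Random stand-in data: 3 batches of 8 expert triples, 4 possible actions.
    model = GRPNet(instance_dim=16, state_dim=8, num_actions=4)
    traces = [
        (torch.randn(8, 16), torch.randn(8, 8), torch.randint(0, 4, (8,)))
        for _ in range(3)
    ]
    train_grp(model, traces)
```

A learned heuristic of the kind the abstract mentions could be obtained analogously, e.g. by attaching a scalar regression head trained on remaining plan length from the same traces, and using its output to order nodes in a directed search.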

Original language: English (US)
Pages: 537-548
Number of pages: 12
State: Published - 2018
Event: 2018 AAAI Spring Symposium - Palo Alto, United States
Duration: Mar 26 2018 - Mar 28 2018

Conference

Conference: 2018 AAAI Spring Symposium
Country/Territory: United States
City: Palo Alto
Period: 3/26/18 - 3/28/18

ASJC Scopus subject areas

  • Artificial Intelligence
