Active Finite Reward Automaton Inference and Reinforcement Learning Using Queries and Counterexamples

Zhe Xu, Bo Wu, Aditya Ojha, Daniel Neider, Ufuk Topcu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

Despite the fact that deep reinforcement learning (RL) has surpassed human-level performance in various tasks, it still faces several fundamental challenges. First, most RL methods require large amounts of data from exploration of the environment to achieve satisfactory performance. Second, the use of neural networks in RL makes it hard to interpret the internals of the system in a way that humans can understand. To address these two challenges, we propose a framework that enables an RL agent to reason over its exploration process and distill high-level knowledge for effectively guiding its future explorations. Specifically, we propose a novel RL algorithm that learns high-level knowledge in the form of a finite reward automaton by using the L* learning algorithm. We prove that in episodic RL, a finite reward automaton can express any non-Markovian bounded reward function with finitely many reward values and can approximate any non-Markovian bounded reward function (with infinitely many reward values) with arbitrary precision. We also provide a lower bound on the episode length such that the proposed RL approach almost surely converges to an optimal policy in the limit. We test this approach on two RL environments with non-Markovian reward functions, choosing a variety of tasks with increasing complexity for each environment. We compare our algorithm with state-of-the-art RL algorithms for non-Markovian reward functions, such as Joint Inference of Reward Machines and Policies for RL (JIRP), Learning Reward Machines (LRM), and Proximal Policy Optimization (PPO2). Our results show that our algorithm converges to an optimal policy faster than the baseline methods.
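The central object in the abstract is a finite reward automaton (often called a reward machine): a finite-state machine that reads high-level event labels produced during an episode and emits a reward on each transition, so the reward can depend on the whole label history rather than only the current environment state. The sketch below is a minimal illustration of that idea, assuming a standard Mealy-style formulation; the class and all names are hypothetical and do not reflect the authors' implementation.

```python
class FiniteRewardAutomaton:
    """Minimal sketch of a finite reward automaton (reward machine)."""

    def __init__(self, states, initial_state, transitions, rewards):
        # transitions: dict mapping (state, label) -> next state
        # rewards:     dict mapping (state, label) -> scalar reward
        self.states = states
        self.u0 = initial_state
        self.transitions = transitions
        self.rewards = rewards

    def run(self, label_sequence):
        """Replay a sequence of high-level labels and return the total reward.

        Because the reward depends on the automaton state (i.e., on the label
        history), the induced reward function is non-Markovian with respect to
        the environment state alone.
        """
        u, total = self.u0, 0.0
        for label in label_sequence:
            total += self.rewards.get((u, label), 0.0)
            u = self.transitions.get((u, label), u)  # stay put on undefined edges
        return total


# Toy example: reward 1 only after observing "a" and then "b", in that order.
fra = FiniteRewardAutomaton(
    states={"u0", "u1", "u2"},
    initial_state="u0",
    transitions={("u0", "a"): "u1", ("u1", "b"): "u2"},
    rewards={("u1", "b"): 1.0},
)
print(fra.run(["a", "b"]))  # 1.0
print(fra.run(["b", "a"]))  # 0.0 -- same labels, different order, different reward
```

In the paper's setting, such an automaton is not given in advance; it is actively inferred with an L*-style learner from membership queries and counterexamples gathered during exploration, and the learned automaton then guides the RL agent.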

Original language: English (US)
Title of host publication: Machine Learning and Knowledge Extraction - 5th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2021, Proceedings
Editors: Andreas Holzinger, Peter Kieseberg, A Min Tjoa, Edgar Weippl
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 115-135
Number of pages: 21
ISBN (Print): 9783030840594
State: Published - 2021
Event: 5th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference on Machine Learning and Knowledge Extraction, CD-MAKE 2021 - Virtual, Online
Duration: Aug 17 2021 - Aug 20 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12844 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 5th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference on Machine Learning and Knowledge Extraction, CD-MAKE 2021
City: Virtual, Online
Period: 8/17/21 - 8/20/21

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
