TY - GEN

T1 - Information-guided temporal logic inference with prior knowledge

AU - Xu, Zhe

AU - Ornik, Melkior

AU - Julius, A. Agung

AU - Topcu, Ufuk

N1 - Funding Information:
This research was partially supported by DARPA D19AP00004, NSF 1646522 and NSF 1652113. A. Agung Julius would like to acknowledge NSF 1618369.
Publisher Copyright:
© 2019 American Automatic Control Council.

PY - 2019/7

Y1 - 2019/7

N2 - This paper investigates the problem of inferring knowledge from data that is interpretable and informative to humans who have prior knowledge. Specifically, given a dataset as a collection of system trajectories, we infer parametric linear temporal logic (pLTL) formulas that are informative and satisfied by the trajectories in the dataset with high probability. The informativeness of the inferred formula is measured by the information gain with respect to given prior knowledge represented by a prior probability distribution. We first present two algorithms to compute the information gain, focusing on two types of prior probability distributions: stationary probability distributions and probability distributions governed by discrete-time Markov chains. We then provide a heuristic method to solve the inference problem for a subset of pLTL formulas with polynomial time complexity with respect to the number of Boolean connectives in the formula. We provide implementations of the proposed approach for explaining anomalous patterns, pattern changes, and the policies of Markov decision processes.

AB - This paper investigates the problem of inferring knowledge from data that is interpretable and informative to humans who have prior knowledge. Specifically, given a dataset as a collection of system trajectories, we infer parametric linear temporal logic (pLTL) formulas that are informative and satisfied by the trajectories in the dataset with high probability. The informativeness of the inferred formula is measured by the information gain with respect to given prior knowledge represented by a prior probability distribution. We first present two algorithms to compute the information gain, focusing on two types of prior probability distributions: stationary probability distributions and probability distributions governed by discrete-time Markov chains. We then provide a heuristic method to solve the inference problem for a subset of pLTL formulas with polynomial time complexity with respect to the number of Boolean connectives in the formula. We provide implementations of the proposed approach for explaining anomalous patterns, pattern changes, and the policies of Markov decision processes.

UR - http://www.scopus.com/inward/record.url?scp=85071605707&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071605707&partnerID=8YFLogxK

U2 - 10.23919/acc.2019.8815145

DO - 10.23919/acc.2019.8815145

M3 - Conference contribution

AN - SCOPUS:85071605707

T3 - Proceedings of the American Control Conference

SP - 1891

EP - 1897

BT - 2019 American Control Conference, ACC 2019

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 2019 American Control Conference, ACC 2019

Y2 - 10 July 2019 through 12 July 2019

ER -