Markov processes follow from the principle of maximum caliber

Hao Ge, Steve Pressé, Kingshuk Ghosh, Ken A. Dill

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

Markov models are widely used to describe stochastic dynamics. Here, we show that Markov models follow directly from the dynamical principle of maximum caliber (Max Cal). Max Cal is a method of deriving dynamical models based on maximizing the path entropy subject to dynamical constraints. We give three different cases. First, we show that if constraints (or data) are given in the form of singlet statistics (average occupation probabilities), then maximizing the caliber predicts a time-independent process that is modeled by independent, identically distributed random variables. Second, we show that if constraints are given in the form of sequential pairwise statistics, then maximizing the caliber dictates that the kinetic process will be Markovian with a uniform initial distribution. Third, if the initial distribution is known and is not uniform, we show that the only process that maximizes the path entropy is still the Markov process. We give an example of how Max Cal can be used to discriminate between different dynamical models given data.
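The second case above can be illustrated numerically. In a minimal sketch (not the paper's own code, and with all variable names hypothetical), we generate a long trajectory from a known two-state Markov chain, collect its sequential pairwise statistics, and form the maximum-caliber model consistent with those constraints. For pairwise constraints, the Lagrange-multiplier solution that maximizes path entropy is Markovian, with transition probabilities equal to the constraint-normalized pair frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" two-state Markov chain used only to generate synthetic data.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

# Simulate a long trajectory from P_true.
T = 100_000
traj = np.empty(T, dtype=int)
traj[0] = 0
u = rng.random(T)
for t in range(1, T):
    traj[t] = 0 if u[t] < P_true[traj[t - 1], 0] else 1

# Sequential pairwise statistics: counts n[i, j] of observed i -> j steps.
n = np.zeros((2, 2))
for i, j in zip(traj[:-1], traj[1:]):
    n[i, j] += 1

# Max Cal model under pairwise constraints: the path distribution that
# maximizes path entropy is Markovian, and its transition probabilities
# are the normalized pair frequencies.
P_maxcal = n / n.sum(axis=1, keepdims=True)
print(np.round(P_maxcal, 2))
```

With a sufficiently long trajectory, `P_maxcal` recovers `P_true` to within sampling error, consistent with the claim that pairwise constraints alone force the maximum-caliber process to be Markov.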

Original language: English (US)
Article number: 064108
Journal: Journal of Chemical Physics
Volume: 136
Issue number: 6
State: Published - Feb 14 2012
Externally published: Yes

ASJC Scopus subject areas

  • General Physics and Astronomy
  • Physical and Theoretical Chemistry
