Optimal Dynamic Control of Resources in a Distributed System

Kang G. Shin, C. M. Krishna, Yann Hang Lee

Research output: Contribution to journal › Article › peer-review


Abstract

The various advantages of distributed systems can be realized only when their resources are “optimally” (in some sense) controlled and utilized. For example, distributed systems must be reconfigured dynamically to cope with component failures and workload changes. Owing to the inherent difficulty in formulating and solving resource control problems, the resource control strategies currently proposed/used for distributed systems are largely ad hoc. It is our purpose in this paper to 1) quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function, and 2) derive optimal control strategies using Markov decision theory. The control variables treated here are quite general: for example, they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of our approach is provided.
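The abstract describes deriving optimal resource-control strategies via Markov decision theory, with one algorithm for time-invariant environments. The paper's actual algorithms are not given in the abstract, so the following is only a minimal illustrative sketch of the standard value-iteration method for a stationary (time-invariant) Markov decision process; the states, actions, transition probabilities, and rewards below are all hypothetical stand-ins for system configurations and reconfiguration decisions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Generic value iteration for a stationary MDP.

    P[a, s, s'] -- probability of moving from state s to s' under action a
    R[a, s]     -- expected one-step reward for taking action a in state s
    gamma       -- discount factor on future rewards
    Returns the optimal value function and a greedy (optimal) policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)            # Q[a, s]: action-value estimates
        V_new = Q.max(axis=0)              # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0) # converged: value and policy
        V = V_new

# Hypothetical example: two system configurations (states) and
# two control decisions (actions), e.g. "keep" vs. "reconfigure".
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

The periodic-environment case treated in the paper would additionally index the transition and reward matrices by the phase of the period; this sketch covers only the time-invariant case.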

Original language: English (US)
Pages (from-to): 1188-1198
Number of pages: 11
Journal: IEEE Transactions on Software Engineering
Volume: 15
Issue number: 10
DOIs
State: Published - Oct 1989
Externally published: Yes

ASJC Scopus subject areas

  • Software

