TY - JOUR
T1 - Optimal Dynamic Control of Resources in a Distributed System
AU - Shin, Kang G.
AU - Krishna, C. M.
AU - Lee, Yann Hang
N1 - Funding Information:
Manuscript received November 9, 1987; revised April 3, 1989. Recommended by B. W. Wah. This work was supported in part by NASA under Grant NAG-1-296, by the National Science Foundation under Grant NSF DMC-8504971, and by the Florida High Technology and Industry Council under Grant YE071.
PY - 1989/10
Y1 - 1989/10
N2 - The various advantages of distributed systems can be realized only when their resources are “optimally” (in some sense) controlled and utilized. For example, distributed systems must be reconfigured dynamically to cope with component failures and workload changes. Owing to the inherent difficulty in formulating and solving resource control problems, the resource control strategies currently proposed/used for distributed systems are largely ad hoc. It is our purpose in this paper to 1) quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function, and 2) derive optimal control strategies using Markov decision theory. The control variables treated here are quite general: for example, they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of our approach is provided.
AB - The various advantages of distributed systems can be realized only when their resources are “optimally” (in some sense) controlled and utilized. For example, distributed systems must be reconfigured dynamically to cope with component failures and workload changes. Owing to the inherent difficulty in formulating and solving resource control problems, the resource control strategies currently proposed/used for distributed systems are largely ad hoc. It is our purpose in this paper to 1) quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function, and 2) derive optimal control strategies using Markov decision theory. The control variables treated here are quite general: for example, they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of our approach is provided.
UR - http://www.scopus.com/inward/record.url?scp=0024751996&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0024751996&partnerID=8YFLogxK
U2 - 10.1109/TSE.1989.559767
DO - 10.1109/TSE.1989.559767
M3 - Article
AN - SCOPUS:0024751996
SN - 0098-5589
VL - 15
SP - 1188
EP - 1198
JO - IEEE Transactions on Software Engineering
JF - IEEE Transactions on Software Engineering
IS - 10
ER -