Adaptive Sequential Stochastic Optimization

Craig Wilson, Venugopal V. Veeravalli, Angelia Nedich

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

A framework is introduced for sequentially solving convex minimization problems where the objective functions change slowly, in the sense that the distance between successive minimizers is bounded. The minimization problems are solved by sequentially applying a selected optimization algorithm, such as stochastic gradient descent (SGD), based on drawing a number of samples in order to carry out a desired number of iterations. Two tracking criteria are introduced to evaluate approximate minimizer quality: one based on being accurate with respect to the mean trajectory, and the other based on being accurate in high probability (IHP). Knowledge of a bound on the minimizers' change, combined with properties of the chosen optimization algorithm, is used to select the number of samples needed to meet the desired tracking criterion. A technique to estimate the change in minimizers is provided along with analysis to show that eventually the estimate upper bounds the change in minimizers. This estimate of the change in minimizers provides sample size selection rules that guarantee that the tracking criterion is met for a sufficiently large number of steps. Simulations are us
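The abstract's setup can be illustrated with a minimal sketch (not the paper's exact algorithm): the minimizer of a quadratic objective drifts by a bounded amount each time step, and a fixed budget of SGD iterations is run per step to track it. The drift bound `rho`, the noise level, and the fixed per-step budget `K` are all hypothetical choices for illustration; the paper selects the sample size adaptively from an estimate of the minimizers' change.

```python
import numpy as np

# Illustrative sketch: track the minimizer of a slowly drifting quadratic
#   f_t(x) = E[ 0.5 * ||x - (theta_t + noise)||^2 ]
# by running K SGD iterations per time step. theta_t drifts by at most `rho`
# per step, matching the bounded-change assumption in the abstract.

rng = np.random.default_rng(0)
dim, steps = 5, 50
rho = 0.05                        # assumed bound on ||theta_{t+1} - theta_t||
theta = np.zeros(dim)             # true (unknown) minimizer at time t
x = np.ones(dim)                  # tracked estimate

errors = []
for t in range(steps):
    # Minimizer drifts slowly: a step of length exactly rho.
    drift = rng.normal(size=dim)
    theta = theta + rho * drift / np.linalg.norm(drift)

    # Draw K samples and run K SGD iterations on the current objective.
    # (Fixed here; the paper's rules choose this budget adaptively.)
    K = 20
    for k in range(1, K + 1):
        sample = theta + 0.1 * rng.normal(size=dim)  # noisy observation
        grad = x - sample                            # stochastic gradient of f_t
        x = x - grad / k                             # diminishing step size 1/k
    errors.append(np.linalg.norm(x - theta))

print(round(float(np.mean(errors[10:])), 3))
```

With enough iterations per step relative to the drift bound, the tracking error settles at a small steady-state value, which is the qualitative behavior the paper's tracking criteria formalize in the mean and in high probability.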

Original language: English (US)
Journal: IEEE Transactions on Automatic Control
DOI: 10.1109/TAC.2018.2816168
State: Accepted/In press - Mar 14, 2018

Keywords

  • gradient methods
  • stochastic optimization
  • time-varying objective
  • tracking problems

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

Adaptive Sequential Stochastic Optimization. / Wilson, Craig; Veeravalli, Venugopal V.; Nedich, Angelia.

In: IEEE Transactions on Automatic Control, 14.03.2018.

Research output: Contribution to journal › Article

@article{d68831d7d82f47549e39571401cea6bb,
title = "Adaptive Sequential Stochastic Optimization",
abstract = "A framework is introduced for sequentially solving convex minimization problems where the objective functions change slowly, in the sense that the distance between successive minimizers is bounded. The minimization problems are solved by sequentially applying a selected optimization algorithm, such as stochastic gradient descent (SGD), based on drawing a number of samples in order to carry out a desired number of iterations. Two tracking criteria are introduced to evaluate approximate minimizer quality: one based on being accurate with respect to the mean trajectory, and the other based on being accurate in high probability (IHP). Knowledge of a bound on the minimizers' change, combined with properties of the chosen optimization algorithm, is used to select the number of samples needed to meet the desired tracking criterion. A technique to estimate the change in minimizers is provided along with analysis to show that eventually the estimate upper bounds the change in minimizers. This estimate of the change in minimizers provides sample size selection rules that guarantee that the tracking criterion is met for a sufficiently large number of steps. Simulations are us",
keywords = "gradient methods, stochastic optimization, time-varying objective, tracking problems",
author = "Craig Wilson and Veeravalli, {Venugopal V.} and Angelia Nedich",
year = "2018",
month = "3",
day = "14",
doi = "10.1109/TAC.2018.2816168",
language = "English (US)",
journal = "IEEE Transactions on Automatic Control",
issn = "0018-9286",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}

TY - JOUR

T1 - Adaptive Sequential Stochastic Optimization

AU - Wilson, Craig

AU - Veeravalli, Venugopal V.

AU - Nedich, Angelia

PY - 2018/3/14

Y1 - 2018/3/14

N2 - A framework is introduced for sequentially solving convex minimization problems where the objective functions change slowly, in the sense that the distance between successive minimizers is bounded. The minimization problems are solved by sequentially applying a selected optimization algorithm, such as stochastic gradient descent (SGD), based on drawing a number of samples in order to carry out a desired number of iterations. Two tracking criteria are introduced to evaluate approximate minimizer quality: one based on being accurate with respect to the mean trajectory, and the other based on being accurate in high probability (IHP). Knowledge of a bound on the minimizers' change, combined with properties of the chosen optimization algorithm, is used to select the number of samples needed to meet the desired tracking criterion. A technique to estimate the change in minimizers is provided along with analysis to show that eventually the estimate upper bounds the change in minimizers. This estimate of the change in minimizers provides sample size selection rules that guarantee that the tracking criterion is met for a sufficiently large number of steps. Simulations are us

AB - A framework is introduced for sequentially solving convex minimization problems where the objective functions change slowly, in the sense that the distance between successive minimizers is bounded. The minimization problems are solved by sequentially applying a selected optimization algorithm, such as stochastic gradient descent (SGD), based on drawing a number of samples in order to carry out a desired number of iterations. Two tracking criteria are introduced to evaluate approximate minimizer quality: one based on being accurate with respect to the mean trajectory, and the other based on being accurate in high probability (IHP). Knowledge of a bound on the minimizers' change, combined with properties of the chosen optimization algorithm, is used to select the number of samples needed to meet the desired tracking criterion. A technique to estimate the change in minimizers is provided along with analysis to show that eventually the estimate upper bounds the change in minimizers. This estimate of the change in minimizers provides sample size selection rules that guarantee that the tracking criterion is met for a sufficiently large number of steps. Simulations are us

KW - gradient methods

KW - stochastic optimization

KW - time-varying objective

KW - tracking problems

UR - http://www.scopus.com/inward/record.url?scp=85043775343&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85043775343&partnerID=8YFLogxK

U2 - 10.1109/TAC.2018.2816168

DO - 10.1109/TAC.2018.2816168

M3 - Article

AN - SCOPUS:85043775343

JO - IEEE Transactions on Automatic Control

JF - IEEE Transactions on Automatic Control

SN - 0018-9286

ER -