TY - JOUR
T1 - On stochastic gradient and subgradient methods with adaptive steplength sequences
AU - Yousefian, Farzad
AU - Nedić, Angelia
AU - Shanbhag, Uday V.
N1 - Funding Information:
This research was supported by NSF grant CMMI 0948905 (ARRA). The material in this paper was partially presented at the American Control Conference. This paper was recommended for publication in revised form by Associate Editor Fabrizio Dabbene under the direction of Editor Roberto Tempo. The authors are grateful to the reviewers and the editor Prof. R. Tempo for their comments and suggestions, all of which have greatly improved the paper.
PY - 2012/1
Y1 - 2012/1
AB - Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly with the choice of the steplength sequence, and in general little guidance is available about good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to reduce the reliance on user-specified parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. We then allow for nondifferentiable objectives with bounded subgradients over a certain domain. In this regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the resulting Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.
KW - Adaptive steplength
KW - Convex optimization
KW - Randomized smoothing techniques
KW - Stochastic approximation
KW - Stochastic optimization
UR - http://www.scopus.com/inward/record.url?scp=84355162114&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84355162114&partnerID=8YFLogxK
U2 - 10.1016/j.automatica.2011.09.043
DO - 10.1016/j.automatica.2011.09.043
M3 - Article
AN - SCOPUS:84355162114
SN - 0005-1098
VL - 48
SP - 56
EP - 67
JO - Automatica
JF - Automatica
IS - 1
ER -