A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization

Afrooz Jalilzadeh, Angelia Nedich, Uday V. Shanbhag, Farzad Yousefian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the last several years, stochastic quasi-Newton (SQN) methods have assumed increasing relevance in solving a breadth of machine learning and stochastic optimization problems. Inspired by recently presented SQN schemes [1]-[3], we consider merely convex and possibly nonsmooth stochastic programs and utilize increasing sample-sizes to allow for variance reduction. To this end, we make the following contributions. (i) A regularized and smoothed variable sample-size BFGS update (rsL-BFGS) is developed that can accommodate nonsmooth convex objectives by utilizing iterative regularization and smoothing; (ii) A regularized variable sample-size SQN (rVS-SQN) is developed that admits a rate and oracle complexity bound of $\mathcal{O}(1/k^{1-\varepsilon})$ and $\mathcal{O}(\epsilon^{-(3+\varepsilon)/(1-\varepsilon)})$, respectively (where $\epsilon, \varepsilon > 0$ are arbitrary scalars), improving on past rate statements; (iii) By leveraging rsL-BFGS, we develop rate statements for the function value of the ergodic average through a regularized and smoothed VS-SQN scheme that can accommodate nonsmooth (but smoothable) functions with the convergence rate $\mathcal{O}(1/k^{1/3-2\varepsilon})$.
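The contributions above combine four ingredients: progressively larger sample batches for variance reduction, a smoothing of the nonsmooth objective, a vanishing regularizer, and an L-BFGS-style preconditioning of the sampled gradient. As a rough illustration of how these pieces fit together (a minimal sketch, not the paper's rsL-BFGS/rVS-SQN updates themselves), the Python snippet below applies such a scheme to a toy nonsmooth problem f(x) = E[|a^T x - b|]; the schedules N_k, eta_k, mu_k, gamma_k, the memory size, and all helper names are illustrative assumptions.

```python
# Illustrative sketch only: a variable sample-size, regularized and smoothed
# L-BFGS-type stochastic scheme on a toy nonsmooth problem f(x) = E[|a^T x - b|].
# Schedules and parameter choices are assumptions, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
d = 20
A_pop = rng.normal(size=(100_000, d))                  # proxy "population" of samples
x_star = rng.normal(size=d)
b_pop = A_pop @ x_star + 0.1 * rng.normal(size=100_000)

def smoothed_grad(x, idx, eta, mu):
    """Sample-average gradient of the eta-smoothed |a^T x - b| plus mu-regularization."""
    A, b = A_pop[idx], b_pop[idx]
    r = A @ x - b
    g = np.clip(r / eta, -1.0, 1.0)                    # derivative of the Huber-smoothed |.|
    return A.T @ g / len(idx) + mu * x

def lbfgs_direction(grad, S, Y):
    """Standard L-BFGS two-loop recursion: returns an approximation of H_k @ grad."""
    q, alphas = grad.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):         # newest curvature pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if Y:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])         # initial Hessian scaling
    for a, rho, s, y in reversed(alphas):              # oldest pair first
        q += (a - rho * (y @ q)) * s
    return q

x = np.zeros(d)
S, Y, memory = [], [], 5
for k in range(1, 201):
    N_k = int(np.ceil(10 * k ** 1.5))                  # increasing (variable) sample size
    eta_k = k ** -0.25                                 # smoothing parameter -> 0
    mu_k = k ** -0.25                                  # regularization parameter -> 0
    gamma_k = 0.5 * k ** -0.5                          # step size
    idx = rng.integers(0, len(b_pop), size=N_k)
    g = smoothed_grad(x, idx, eta_k, mu_k)
    x_new = x - gamma_k * lbfgs_direction(g, S, Y)
    # curvature pair built on the same sample, smoothing, and regularization level
    s_vec, y_vec = x_new - x, smoothed_grad(x_new, idx, eta_k, mu_k) - g
    if s_vec @ y_vec > 1e-10:                          # skip pairs that break positive definiteness
        S.append(s_vec); Y.append(y_vec)
        S, Y = S[-memory:], Y[-memory:]
    x = x_new

print("mean |a^T x - b| over all samples:", np.mean(np.abs(A_pop @ x - b_pop)))
```

In this sketch the curvature pair (s, y) is formed from gradients evaluated on the same sample batch at the same smoothing and regularization levels, a common device for keeping stochastic quasi-Newton curvature estimates consistent; pairs with non-positive s^T y are skipped so the implicit Hessian approximation stays positive definite.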

Original language: English (US)
Title of host publication: 2018 IEEE Conference on Decision and Control, CDC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4097-4102
Number of pages: 6
ISBN (Electronic): 9781538613955
DOIs: https://doi.org/10.1109/CDC.2018.8619209
State: Published - Jan 18 2019
Event: 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States
Duration: Dec 17 2018 - Dec 19 2018

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2018-December
ISSN (Print): 0743-1546

Conference

Conference: 57th IEEE Conference on Decision and Control, CDC 2018
Country: United States
City: Miami
Period: 12/17/18 - 12/19/18

Fingerprint

Quasi-Newton Method
Convex optimization
Stochastic Optimization
Stochastic Methods
Newton-Raphson method
Convex Optimization
Quasi-Newton
Sample Size
Learning systems
Ergodic Averages
Iterative Regularization
Variance Reduction
Breadth
Convergence Rate
Smoothing
Machine Learning
Update
Scalar
Optimization Problem
Arbitrary

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Jalilzadeh, A., Nedich, A., Shanbhag, U. V., & Yousefian, F. (2019). A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization. In 2018 IEEE Conference on Decision and Control, CDC 2018 (pp. 4097-4102). [8619209] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8619209
