Multi-fidelity Bayesian optimisation with continuous approximations

Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. On the other hand, in many practical applications, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross-validation performance using less data N and/or fewer training iterations T. Here, the approximations are best viewed as arising out of a continuous two-dimensional space (N, T). In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
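The abstract's neural-network example lends itself to a short illustration. The Python sketch below shows what an objective over a continuous two-dimensional fidelity space (N, T) can look like: cross-validation accuracy approximated by training on only N subsampled points for at most T iterations. It illustrates the problem setting only, not the paper's BOCA algorithm; the SGDClassifier model, the hyper-parameter alpha, and the helper name approx_cv_accuracy are assumptions made for the example.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

def approx_cv_accuracy(alpha, N, T, X, y, seed=0):
    # Fidelity z = (N, T): train on a random subset of N samples for at
    # most T passes over the data. Evaluation cost grows with both N and T,
    # so small (N, T) gives a fast but biased estimate of the objective.
    idx = np.random.default_rng(seed).choice(len(X), size=N, replace=False)
    model = SGDClassifier(alpha=alpha, max_iter=T, tol=None)
    return cross_val_score(model, X[idx], y[idx], cv=3).mean()

X, y = load_digits(return_X_y=True)
cheap = approx_cv_accuracy(alpha=1e-4, N=300, T=5, X=X, y=y)      # low fidelity, fast
full = approx_cv_accuracy(alpha=1e-4, N=len(X), T=200, X=X, y=y)  # full fidelity, slow

A multi-fidelity optimiser in this setting spends most of its early queries at cheap fidelities and reserves evaluations near the full fidelity (len(X), 200) for the most promising hyper-parameters.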

Original language: English (US)
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 2861-2878
Number of pages: 18
ISBN (Electronic): 9781510855144
State: Published - Jan 1 2017
Externally published: Yes
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: Aug 6 2017 – Aug 11 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 4

Conference

Conference: 34th International Conference on Machine Learning, ICML 2017
Country: Australia
City: Sydney
Period: 8/6/17 – 8/11/17


ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

Cite this

Kandasamy, K., Dasarathy, G., Schneider, J., & Póczos, B. (2017). Multi-fidelity Bayesian optimisation with continuous approximations. In 34th International Conference on Machine Learning, ICML 2017 (pp. 2861-2878). (34th International Conference on Machine Learning, ICML 2017; Vol. 4). International Machine Learning Society (IMLS).

