α-GAN: Convergence and Estimation Guarantees

Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated f-divergences. We then focus on α-GAN, defined via the α-loss, which interpolates several GANs (Hellinger, vanilla, Total Variation) and corresponds to the minimization of the Arimoto divergence. We show that the Arimoto divergences induced by α-GAN equivalently converge for all α ∈ (0, ∞]. However, under restricted learning models and finite samples, we provide estimation bounds that indicate diverse GAN behavior as a function of α. Finally, we present empirical results on a toy dataset that highlight the practical utility of tuning the α hyperparameter.
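For context, the α-loss referenced in the abstract is, in the authors' prior work on classification losses, commonly written as follows for a predicted probability p ∈ (0, 1] assigned to the true label. This is a sketch drawn from the α-loss literature rather than from the text above; the exact form used in this paper may differ in normalization, so consult the paper itself.

```latex
% α-loss for a predicted probability p of the correct class,
% parameterized by α ∈ (0, ∞] with the α = 1 case taken as a limit:
\ell_\alpha(p) \;=\; \frac{\alpha}{\alpha - 1}\left(1 - p^{\frac{\alpha - 1}{\alpha}}\right),
\qquad \alpha \in (0, \infty] \setminus \{1\},
% Limiting cases (hedged: standard in the α-loss literature):
%   α → 1 recovers the log-loss  \ell_1(p) = -\log p,
%   α → ∞ gives the soft 0-1 loss  \ell_\infty(p) = 1 - p.
```

Under this parameterization, sweeping α traces out the family of GAN objectives the abstract mentions, which is why the Arimoto divergence appears as the induced f-divergence.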

Original language: English (US)
Title of host publication: 2022 IEEE International Symposium on Information Theory, ISIT 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 276-281
Number of pages: 6
ISBN (Electronic): 9781665421591
DOIs
State: Published - 2022
Event: 2022 IEEE International Symposium on Information Theory, ISIT 2022 - Espoo, Finland
Duration: Jun 26 2022 - Jul 1 2022

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2022-June
ISSN (Print): 2157-8095

Conference

Conference: 2022 IEEE International Symposium on Information Theory, ISIT 2022
Country/Territory: Finland
City: Espoo
Period: 6/26/22 - 7/1/22

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
