Partition of unity networks: Deep hp-approximation

Kookjin Lee, Nathaniel A. Trask, Ravi G. Patel, Mamikon A. Gulian, Eric C. Cyr

Research output: Contribution to journal › Conference article › peer-review

Abstract

Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials. Motivated by this, we propose partition of unity networks (POUnets), which incorporate these elements directly into the architecture. Classification architectures of the type used to learn probability measures are used to build a mesh-free partition of space, while polynomial spaces with learnable coefficients are associated to each partition. The resulting hp-element-like approximation allows use of a fast least-squares optimizer, and the resulting architecture size need not scale exponentially with spatial dimension, breaking the curse of dimensionality. An abstract approximation result establishes desirable properties to guide network design. Numerical results for two choices of architecture demonstrate that POUnets yield hp-convergence for smooth functions and consistently outperform MLPs for piecewise polynomial functions with large numbers of discontinuities.
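The abstract describes an approximant of the form y(x) = Σ_α φ_α(x) p_α(x), where the φ_α form a partition of unity (nonnegative, summing to one, e.g. via a softmax) and each p_α is a polynomial with learnable coefficients. The key structural point is that, with the partition frozen, y is linear in those coefficients, so they admit a direct least-squares solve. Below is a minimal NumPy sketch of this idea in one dimension; the RBF-softmax partition, the center/scale parameters, and the helper names are illustrative assumptions, not the paper's implementation, where the partition comes from a trainable classification network optimized jointly with the coefficients.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax; rows sum to 1, giving a partition of unity.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def partition(x, centers, scale):
    # Illustrative RBF logits -> softmax: phi_a(x) >= 0 and
    # sum_a phi_a(x) = 1 for every x (a mesh-free partition of space).
    logits = -scale * (x[:, None] - centers[None, :]) ** 2
    return softmax(logits, axis=1)

def pou_features(x, centers, scale, degree):
    # One column per (partition, monomial) pair: phi_a(x) * x^k.
    phi = partition(x, centers, scale)            # (N, n_part)
    mono = x[:, None] ** np.arange(degree + 1)    # (N, degree + 1)
    return (phi[:, :, None] * mono[:, None, :]).reshape(len(x), -1)

# With the partition parameters frozen, fitting the polynomial
# coefficients is an ordinary linear least-squares problem.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.sign(x) * x**2                             # piecewise-polynomial target
A = pou_features(x, centers=np.linspace(-1, 1, 4), scale=50.0, degree=2)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("RMS error:", np.sqrt(np.mean((A @ coeffs - y) ** 2)))
```

In a full POUnet, training would alternate (or combine) gradient updates of the partition network's parameters with this fast least-squares solve for the coefficients; this linearity in the coefficients is the property the abstract highlights.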

Original language: English (US)
Article number: 180
Journal: CEUR Workshop Proceedings
Volume: 2964
State: Published - 2021
Externally published: Yes
Event: AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences, AAAI-MLPS 2021 - Stanford, United States
Duration: Mar 22 2021 – Mar 24 2021

ASJC Scopus subject areas

  • General Computer Science
