Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials. Motivated by this, we propose partition of unity networks (POUnets), which incorporate these elements directly into the architecture. Classification architectures of the type used to learn probability measures are used to build a mesh-free partition of space, while polynomial spaces with learnable coefficients are associated with each partition. The resulting hp-element-like approximation allows use of a fast least-squares optimizer, and the resulting architecture size need not scale exponentially with spatial dimension, breaking the curse of dimensionality. An abstract approximation result establishes desirable properties to guide network design. Numerical results for two choices of architecture demonstrate that POUnets yield hp-convergence for smooth functions and consistently outperform MLPs for piecewise polynomial functions with large numbers of discontinuities.
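To make the construction concrete, the following is a minimal 1-D sketch of the POUnet idea under stated assumptions: the partition of unity comes from a softmax over fixed random linear logits (the paper learns the partition parameters; here they are frozen for brevity), each partition carries a degree-2 polynomial, and the polynomial coefficients are fit by a single linear least-squares solve, which is what makes the fast least-squares optimizer possible. All names and initializations below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_parts, degree = 4, 2

# Training data: a smooth target on [0, 1].
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

# Softmax partition of unity: phi_k(x) >= 0 and sum_k phi_k(x) = 1.
# Slopes/offsets are a hypothetical random initialization, kept fixed here.
a = rng.normal(size=n_parts) * 10.0
b = rng.normal(size=n_parts) * 10.0
logits = np.outer(x, a) + b                      # shape (200, n_parts)
logits -= logits.max(axis=1, keepdims=True)      # numerical stabilization
phi = np.exp(logits)
phi /= phi.sum(axis=1, keepdims=True)

# Per-partition polynomial features: columns phi_k(x) * x^p.
P = np.stack([x**p for p in range(degree + 1)], axis=1)   # (200, degree+1)
A = (phi[:, :, None] * P[:, None, :]).reshape(len(x), -1)

# Fast least-squares solve for all polynomial coefficients at once:
# with the partition frozen, the model is linear in the coefficients.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
print("max abs error:", np.abs(y_hat - y).max())
```

In the full method the partition parameters are also trained, alternating (or coupled) with the least-squares coefficient solve; because the identical-coefficients case reproduces any global polynomial (the phi_k sum to one), the fit can only improve on a single global polynomial of the same degree.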
|Original language||English (US)|
|Journal||CEUR Workshop Proceedings|
|State||Published - 2021|
|Event||AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences, AAAI-MLPS 2021 - Stanford, United States|
Duration: Mar 22 2021 → Mar 24 2021
ASJC Scopus subject areas
- Computer Science (all)