TY - JOUR

T1 - Partition of unity networks

T2 - AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences, AAAI-MLPS 2021

AU - Lee, Kookjin

AU - Trask, Nathaniel A.

AU - Patel, Ravi G.

AU - Gulian, Mamikon A.

AU - Cyr, Eric C.

N1 - Funding Information:
The work of M. Gulian, R. Patel, and N. Trask is supported by the U.S. Department of Energy, Office of Advanced Scientific Computing Research under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project. E. C. Cyr and N. Trask are supported by the Department of Energy early career program. M. Gulian is supported by the John von Neumann fellowship at Sandia National Laboratories.
Funding Information:
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003530. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND Number: SAND2020-6022 J.
Publisher Copyright:
Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)

PY - 2021

Y1 - 2021

N2 - Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials. Motivated by this, we propose partition of unity networks (POUnets) which incorporate these elements directly into the architecture. Classification architectures of the type used to learn probability measures are used to build a mesh-free partition of space, while polynomial spaces with learnable coefficients are associated to each partition. The resulting hp-element-like approximation allows use of a fast least-squares optimizer, and the resulting architecture size need not scale exponentially with spatial dimension, breaking the curse of dimensionality. An abstract approximation result establishes desirable properties to guide network design. Numerical results for two choices of architecture demonstrate that POUnets yield hp-convergence for smooth functions and consistently outperform MLPs for piecewise polynomial functions with large numbers of discontinuities.

AB - Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials. Motivated by this, we propose partition of unity networks (POUnets) which incorporate these elements directly into the architecture. Classification architectures of the type used to learn probability measures are used to build a mesh-free partition of space, while polynomial spaces with learnable coefficients are associated to each partition. The resulting hp-element-like approximation allows use of a fast least-squares optimizer, and the resulting architecture size need not scale exponentially with spatial dimension, breaking the curse of dimensionality. An abstract approximation result establishes desirable properties to guide network design. Numerical results for two choices of architecture demonstrate that POUnets yield hp-convergence for smooth functions and consistently outperform MLPs for piecewise polynomial functions with large numbers of discontinuities.

UR - http://www.scopus.com/inward/record.url?scp=85116654015&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85116654015&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85116654015

VL - 2964

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

M1 - 180

Y2 - 22 March 2021 through 24 March 2021

ER -