24.01.2024

Seminar: Operator Approximation by Neural Networks

Janek Gödeke, University of Bremen

Abstract:

Learning operators between function spaces with neural networks is desirable, for example, in the field of partial differential equations (PDEs): one may, for instance, learn parameter-to-state maps that send the parameter function of a PDE to the corresponding solution. During the last five years, several Deep Learning concepts have emerged, such as Deep Operator Networks (DeepONets), (Fourier) Neural Operators, and operator approximation based on Principal Component Analysis.
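To make the DeepONet idea concrete, here is a minimal sketch (not from the talk, and with arbitrary untrained weights): a branch network encodes the input function through its values at fixed sensor points, a trunk network encodes the query location, and the operator output is the inner product of the two feature vectors. All layer sizes and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight multilayer perceptron (illustrative only, untrained).
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Plain forward pass with tanh activations on hidden layers.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 20, 10                 # number of sensor points, latent width
branch = mlp([m, 32, p])      # encodes the input function u at m sensors
trunk = mlp([1, 32, p])       # encodes the query location y

sensors = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * sensors)             # a sampled input function
ys = np.linspace(0.0, 1.0, 50).reshape(-1, 1)  # query points

# DeepONet output (G u)(y): inner product of branch and trunk features.
b_feat = forward(branch, u[None, :])   # shape (1, p)
t_feat = forward(trunk, ys)            # shape (50, p)
Gu = t_feat @ b_feat.T                 # shape (50, 1)
```

In a real DeepONet both networks would be trained jointly on pairs of input functions and operator outputs; the sketch only shows how the architecture factorizes the operator into function-dependent and location-dependent parts.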

These approaches, particularly the architectures of the involved neural networks, are inspired by universal approximation theorems, which state that many operators can be approximated arbitrarily well by sufficiently large networks of these types. Although universal approximation theorems cannot fully explain the success of neural networks, their investigation has led to powerful Deep Learning approaches. Furthermore, they reveal that certain network architectures may introduce a beneficial inductive bias in operator learning tasks.

In my talk I will give an overview of the state of the art of such operator approximation theorems, and I will discuss general concepts as well as questions that remain open.