
DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators.

MITCBMM

George Karniadakis, Brown University
Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions, however, a less known but powerful result is that a NN with a single hidden layer can approximate accurately any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. To realize this theorem, we design a new NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study, in particular, different formulations of the input function space and its effect on the generalization error.
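The branch/trunk split described in the abstract can be sketched in a few lines of NumPy. This is a minimal, untrained forward pass only (random weights; the layer widths, sensor count, and latent dimension p are illustrative assumptions, not values from the talk): the branch net encodes an input function u sampled at m fixed sensor points, the trunk net encodes an output-domain location y, and the operator value G(u)(y) is their inner product over a shared latent dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build a random-weight MLP with tanh hidden activations; returns a forward function."""
    params = [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
              for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

m, p = 100, 40               # number of input sensors, shared latent dimension (illustrative)
branch = mlp([m, 64, p])     # branch net: encodes the input function u at m sensors
trunk  = mlp([1, 64, p])     # trunk net: encodes a location y in the output domain

def deeponet(u_sensors, y):
    """G(u)(y) ~ sum_k branch_k(u) * trunk_k(y) -- the DeepONet inner product."""
    b = branch(u_sensors)    # shape (batch, p)
    t = trunk(y)             # shape (n_points, p)
    return b @ t.T           # shape (batch, n_points)

xs = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * xs)[None, :]      # one input function sampled at the sensors
ys = np.linspace(0.0, 1.0, 5)[:, None]   # five query locations in the output domain
out = deeponet(u, ys)
print(out.shape)                         # (1, 5)
```

In practice both subnetworks are trained jointly on pairs of input functions and operator outputs; only the forward structure is shown here.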

Published: 20 Jul 2024

Comments (1):
@bikmeyevAT (3 years ago): Is it possible to find the presentation of the speech?