
Neural ODEs (NODEs) [Physics Informed Machine Learning] 

Steve Brunton
350K subscribers
53K views

This video describes Neural ODEs, a powerful machine learning approach to learn ODEs from data.
This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.
%%% CHAPTERS %%%
00:00 Intro
02:09 Background: ResNet
05:05 From ResNet to ODE
07:59 ODE Essential Insight / Why ODE outperforms ResNet
09:05 ODE Essential Insight, Rephrase 1
09:54 ODE Essential Insight, Rephrase 2
11:11 ODE Performance vs ResNet Performance
12:52 ODE extension: HNNs
14:03 ODE extension: LNNs
14:45 ODE algorithm overview / ODEs and Adjoint Calculation
22:24 Outro
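The chapters above hinge on one idea: a ResNet update x(k+1) = x(k) + f(x(k)) is a forward-Euler step of the ODE dx/dt = f(x), and a higher-order integrator with the same step size is far more accurate. A minimal sketch of that comparison on the toy ODE dx/dt = -x (my own example, not code from the video):

```python
import math

def f(x):
    # Toy dynamics dx/dt = -x, with exact solution x(t) = x0 * exp(-t)
    return -x

def euler_step(x, h):
    # One ResNet-style residual update: x_{k+1} = x_k + h * f(x_k)
    return x + h * f(x)

def rk4_step(x, h):
    # Classical 4th-order Runge-Kutta update, the kind an ODE solver would take
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x0, h, n = 1.0, 0.1, 10
x_euler = x_rk4 = x0
for _ in range(n):
    x_euler, x_rk4 = euler_step(x_euler, h), rk4_step(x_rk4, h)

exact = x0 * math.exp(-h * n)
print(abs(x_euler - exact))  # roughly 2e-2: Euler drifts
print(abs(x_rk4 - exact))    # roughly 3e-7: RK4 with the same step size
```

With identical step count and step size, the RK4 trajectory is orders of magnitude closer to the true solution, which is the "better integrators beat smaller steps" point the video makes.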

Science

Published: Jul 9, 2024

Comments: 36
@smustavee 1 month ago
I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.
@astledsa2713 1 month ago
Love your content! Went through the entire complex analysis videos, and now gonna go through this one as well!
@mohammadxahid5984 1 month ago
Thanks Dr. Brunton for making a video on Neural ODEs. I came across this paper as soon as it came out back in 2018. It still goes over my head, particularly the introduction of the 2nd differential equation / adjoint sensitivity method. Would really appreciate it if you explained it in detail.
@OnionKnight541 3 days ago
This is great. I think about this stuff all the time, but didn't know others did :/
@anthonymiller6234 1 month ago
Awesome video and very helpful. Thanks
@as-qh1qq 1 month ago
Amazing review. Engaging and sharp
@lucynowacki3327 3 days ago
Cool summary and intro for liquid NNs.
@stefm.w.3640 7 days ago
Great video, I learned a lot! Piqued my interest and inspired me to do a deep dive into all the topics mentioned
@codybarton2090 1 month ago
I love it, great video!
@joshnicholson6194 1 month ago
Very cool!
@daniellu9499 1 month ago
Very interesting course, love such great videos...
@hyperplano 1 month ago
So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?
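The distinction this commenter draws can be made concrete with a toy fit (my own sketch, not code from the video): choose the parameter theta of dx/dt = theta * x so that the whole integrated trajectory matches the data at once, rather than matching one step at a time.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 50)
x_data = np.exp(-0.5 * t)  # data generated by the true value theta = -0.5

def trajectory(theta):
    # Integrate dx/dt = theta * x with forward Euler from x(0) = 1 over all of t
    x = np.empty_like(t)
    x[0] = 1.0
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + (t[k + 1] - t[k]) * theta * x[k]
    return x

def loss(theta):
    # The loss couples every point on the integrated trajectory to the data
    return np.mean((trajectory(theta) - x_data) ** 2)

theta, lr, eps = 0.0, 0.5, 1e-5
for _ in range(500):
    # Finite-difference gradient of the whole-trajectory loss
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad
print(theta)  # recovers roughly -0.5, up to Euler discretization error
```

A ResNet-style fit would instead regress each x(k+1) - x(k) on x(k) independently; here the gradient flows through the entire integrated trajectory, which is the Neural ODE picture.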
@kepler_22b83 1 month ago
So basically raising awareness that there are better approximations to "residual" integration. Thanks for the reminder. From my course on numerical computation, using better integrators is actually better than making smaller time steps, raising the possible accuracy given some limited number of bits for your floating-point numbers.
@topamazinggadgetsoftrendin2916 1 month ago
Very interesting
@osianshelley3312 16 days ago
Fantastic video! Do you have any references for the mathematics behind the continuous adjoint method?
@ricardoceballosgarzon6100 1 month ago
Interesting...
@marcelotoledo1820 11 days ago
Why is it implicit that x(k+1) = x(k) + f(x) is Euler integration? It can be any integrator depending on how you build f(x); for Runge-Kutta, for example, f(x) = h/6 * (k1 + 2*k2 + 2*k3 + k4).
@etiennetiennetienne 1 month ago
I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).
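Several comments ask about the adjoint sensitivity method, so here is a minimal numeric sketch on a toy problem (my own example and notation, not the video's): dynamics dx/dt = theta * x with loss L = x(T). The adjoint a(t) runs backward from a(T) = dL/dx(T) = 1 with da/dt = -theta * a, and the parameter gradient is the integral of a(t) * (df/dtheta)(t) = a(t) * x(t) over [0, T].

```python
import numpy as np

theta, x0, T, n = 0.7, 1.5, 1.0, 2000
t = np.linspace(0.0, T, n + 1)
h = T / n

# Closed-form forward and adjoint trajectories keep the sketch short; a real
# implementation would integrate both ODEs numerically (forward, then backward).
x = x0 * np.exp(theta * t)   # forward state x(t)
a = np.exp(theta * (T - t))  # adjoint a(t), integrated backward from a(T) = 1

# Trapezoid rule for dL/dtheta = integral over [0, T] of a(t) * x(t) dt
integrand = a * x
grad_adjoint = h * np.sum(integrand[:-1] + integrand[1:]) / 2.0

# Ground truth from x(T) = x0 * exp(theta * T): dL/dtheta = T * x0 * exp(theta * T)
grad_exact = T * x0 * np.exp(theta * T)
print(grad_adjoint, grad_exact)  # the two agree
```

The point of the construction is that the gradient comes from solving one extra ODE backward in time, instead of backpropagating through every internal step of the forward solver.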
@SohamShaw-bx4fq 27 days ago
Can you please teach latent neural ode in detail?
@smeetsv103 1 month ago
If you only have access to the x-data and numerically differentiate to obtain dx/dt to train the Neural ODE, how does this noise propagate into the final solution? Does it act as regularisation?
@HD-qq3bn 1 month ago
I have studied neural ODEs for quite a long time, and found they are good for initial value problems; however, for external-input problems, they are really hard to train.
@merrickcloete1350 28 days ago
@Eigensteve Is the nth-order Runge-Kutta integrator not just what a U-Net is, after it's been properly trained? The structure appears the same and the coefficients would be learned.
@franpastor2067 13 days ago
What about periodic functions? Is there a way to get nice approximations with neural networks?
@The018fv 1 month ago
Is there a model that can do integro-differential equations?
@-mwolf 23 days ago
Awesome video. One question I'm asking myself is: Why isn't everybody using NODEs instead of resnets if they are so much better?
@Heliosnew 1 month ago
Nice presentation Steve! I just gave a very similar presentation on Neural ODEs a week prior. Would like to see it one day used for audio compression. Keep up the content!
@anonym9323 1 month ago
Does someone have an example repository or library so I can play with it?
@devinbae9914 1 month ago
Maybe in the Neural ODE paper?
@digriz85 25 days ago
Nice video, but I really miss the connection point between the NNs and the math part. I have a PhD in physics and I've worked a lot with the math you're talking about. Also I've worked a few years as a data scientist and I kinda understand how it goes with the neural networks. But I really miss the point how you make these two work together. Sorry if I sound dumb here.
@zlackoff 1 month ago
Euler integration got dumped on so hard in this video
@edwardgongsky8540 1 month ago
Damn I'm still going through the ode and dynamical systems course, this new material seems interesting AF though
@erikkhan 1 month ago
Hi Professor, what are some prerequisites for this course?
@tramplerofarmies 27 days ago
I suspect these are not the type of courses with defined prereqs, but def need calculus series, linear algebra series, and some computer science. To really understand it, classical mechanics and signals and systems (control theory, discrete and continuous).
@sucim 8 days ago
Very confusing presentation! First Neural ODEs are presented as a continuous version of ResNets, which would imply that the integration happens in "depth", which would make them similar to fully-connected or convolutional neural networks (non-sequence models). Then afterwards it is suggested that the integration actually happens in "time", which makes neural ODEs much more similar to sequence models. Even ChatGPT et al. are confused and can't answer this distinction properly. Seems like it is a quite buzzword-driven field...
@user-oj9iz4vb4q 18 days ago
This seems like you are changing your loss function not your network. Like there is some underlying field you are trying to approximate and you're not commenting on the structure of the network for that function. You are only concerning yourself with how you are evaluating that function (integrating) to compare to reality. I think it's more correct to call these ODE Loss Functions, Euler Loss Functions, or Lagrange Loss Functions for neural network evaluation.
@1.4142 1 month ago
multi flashbacks