
Residual Networks (ResNet) [Physics Informed Machine Learning] 

Steve Brunton
354K subscribers
33K views

This video discusses Residual Networks (ResNets), one of the most popular machine learning architectures, which has enabled considerably deeper neural networks through skip (shortcut) connections. This architecture mimics many aspects of a numerical integrator.
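The core idea described above can be sketched in a few lines of code. This is a minimal illustrative example (not code from the video, and the two-layer function `f` is an arbitrary stand-in for the learned layers): a residual block computes y = x + f(x), which has the same form as one forward-Euler step x_{k+1} = x_k + h·f(x_k) with step size h = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W1, W2):
    """A small two-layer ReLU map standing in for the learned layers."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def residual_block(x, W1, W2):
    # Skip connection: the block learns the residual f(x), not the full map,
    # so the input signal passes through unchanged even if f is near zero.
    return x + f(x, W1, W2)

def euler_step(x, h, W1, W2):
    # Forward Euler for dx/dt = f(x); a ResNet block is the special case h = 1.
    return x + h * f(x, W1, W2)

d = 4
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, d)) * 0.1
x = rng.standard_normal(d)

# The residual update and the unit-step Euler update coincide.
assert np.allclose(residual_block(x, W1, W2), euler_step(x, 1.0, W1, W2))
```

Stacking many such blocks then resembles integrating an ODE forward in time, which is the observation behind the Neural ODE connection discussed later in the video.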
This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.
%%% CHAPTERS %%%
00:00 Intro
01:09 Concept: Modeling the Residual
03:26 Building Blocks
05:59 Motivation: Deep Network Signal Loss
07:43 Extending to Classification
09:00 Extending to DiffEqs
10:16 Impact of CVPR and ResNet
12:17 ResNets and Euler Integrators
13:34 Neural ODEs and Improved Integrators
16:07 Outro

Science

Published: 26 Jul 2024

Comments: 20
@mostafasayahkarajy508 · a month ago
Thank you very much for your videos. I am glad that besides the classical sources for promoting science (such as books and papers), your lectures can also be found on YouTube. In my opinion, Prof. Brunton is the best provider of YouTube lectures and I don't want to miss any of them.
@culturemanoftheages · a month ago
Excellent explanation! For those interested in LLMs, residual connections are also featured in the vanilla transformer block. The idea is similar to CNN ResNets, but instead of gradually adding pixel resolution, each block adds semantic "resolution" to the original embedded text input.
@goodlack9093 · a month ago
Thank you for this content! Love your approach. Please never stop educating people. We all need teachers like you! :) P.S. Enjoying reading your book.
@physicsanimated1623 · a month ago
Hi Steve - this is Vivek Karmarkar! Thanks for the video - great content as usual, and it keeps me motivated to create my own PINN content as well. Looking forward to the next video in the series, and I would love to talk PINN content creation with you! I have been thinking about presenting PINNs with ODEs as examples, and it's nice to contrast them with Neural ODEs - nomenclature aside, it looks like the power of NNs as universal approximators allows us to model either the flow field (Neural ODEs) or the physical field of interest (PINNs) for analysis, which is pretty cool!
@lorisdemuth374 · a month ago
Many thanks for the extremely good videos. Really well explained and easy to understand. A video on "Augmented neural ODEs" would go well with "neural ODEs" 😊
@ultrasound1459 · a month ago
ResNet is literally the best thing that has happened in deep learning.
@saraiva407 · a month ago
Thank you SO MUCH prof. Steve!! I intend to study neural networks in my graduate courses thanks to your lectures!! :D
@sainissunil · a month ago
Thank you for making this. I watched your video on Neural ODEs before I watched this. It is much easier to understand the Neural ODE video now that I have watched this. I would love to watch a video about the ResNet classifier idea you discuss here. If you have already done that please add a link here. Thanks, and this is awesome!
@Daniboy370 · a month ago
You have an impressive ability to simplify complex subjects.
@ramimohammed3132 · a month ago
thank u sire!
@PaulFidika · 17 days ago
Why are Neural ODEs not more popular? I've never heard of them, but I see U-Net and ResNet everywhere.
@Ishaheennabi · a month ago
Love from Kashmir, India ❤❤❤
@davidmccabe1623 · a month ago
Does anyone know if transformers have superseded resnets for image classification?
@culturemanoftheages · a month ago
Vision transformer (ViT) architectures have been studied that outperform CNN-based approaches in some respects, but they require more training data, more resources to train, and in general yield a bulkier model than a CNN would. They also use a different information-concentrating mechanism (attention for transformers vs. convolution for CNNs), so I imagine there are certain vision applications where transformers might be preferable.
@HansPeter-gx9ew · a month ago
tbh, understanding his videos is very difficult; IMO he explains badly. For example, 14:14 is the first more complicated part, and I don't really get what it is about. I wouldn't understand ResNet from his explanation either if I had no prior knowledge of it. He just assumes that I am some expert in math and DL.
@cieciurka1 · a month ago
STEVE MAKE TWO. SMALLER HIGHER LIKE ARRAY ONE DIRECTION OR SYMMETRY LIKE MIRROR. FEEDBACK AND THIS 150.000ageSCIENCE.
@cieciurka1 · a month ago
SHAMEEEEEE🎉 Bound, border, infinity, noninfinity, natural, where is the end?! calc machine how it works, integer, costs money costs profits cons in mathematics, NOMIA! ECO? algorithm accuracy, fuzzy logic, integer 0-I. ONE BOOK NO INDIVIDUAL HERE 🎉WHEN YOU SMOOTHING GRADIENT YOU LOSING
@maksymriabov1356 · a month ago
IMHO you should speak a little faster and make fewer jests; for scientists watching this, it wastes time and attention.
@chrisnoble04 · a month ago
You can always run it at 2x speed...
@suhailea963 · a month ago
I am a video editor. If you need any help related to video editing, you can contact me. I will share my portfolio.