
Geometric Intuition for Training Neural Networks 

Seattle Applied Deep Learning
11K subscribers
18K views

Leo Dirac (@leopd) gives a geometric intuition for what happens when you train a deep neural network, starting with a physics analogy for how SGD works and then describing the shape of neural-network loss surfaces.
This talk was recorded live on 12 Nov 2019 as part of the Seattle Applied Deep Learning (sea-adl.org) series.
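
As a rough illustration of the physics analogy (my sketch, not code from the talk): SGD with momentum behaves like a heavy ball rolling over the loss surface, with the negative gradient acting as a force and the momentum coefficient playing the role of inertia. The toy 2-D loss and the hyperparameters below are made up purely for illustration.

```python
import numpy as np

def loss(w):
    # Toy 2-D "loss surface": a quadratic bowl with small bumps (purely illustrative).
    return 0.5 * np.sum(w ** 2) + 0.3 * np.sin(3 * w[0]) * np.cos(3 * w[1])

def grad(w):
    # Analytic gradient of the toy loss above.
    return w + np.array([
        0.9 * np.cos(3 * w[0]) * np.cos(3 * w[1]),
        -0.9 * np.sin(3 * w[0]) * np.sin(3 * w[1]),
    ])

w = np.array([2.0, -1.5])   # parameters = position of the "ball"
v = np.zeros_like(w)        # velocity accumulated across steps
lr, momentum = 0.05, 0.9    # step size and how "heavy" the ball is

for step in range(200):
    v = momentum * v - lr * grad(w)  # the gradient acts like a force on the velocity
    w = w + v                        # the ball keeps rolling even where the surface flattens
print("final w:", w, "loss:", loss(w))
```
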
References from the talk:
Loss Surfaces of Multilayer Networks: arxiv.org/pdf/...
Sharp minima papers:
- Modern take: arxiv.org/abs/...
- Hochreiter & Schmidhuber 1997: www.bioinf.jku....
SGD converges to limit cycles: arxiv.org/pdf/...
Entropy-SGD: arxiv.org/abs/...
Parle: arxiv.org/abs/...
FGE: arxiv.org/abs/...
SWA: arxiv.org/pdf/...
SWA implementation in pytorch: pytorch.org/bl...
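
For the SWA reference above: with current PyTorch the recipe roughly follows the sketch below, using the documented `torch.optim.swa_utils` helpers. `model`, `train_loader`, `loss_fn`, and all hyperparameters here are placeholders I've assumed, not the blog post's exact code.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# model, train_loader, and loss_fn are assumed to be defined elsewhere.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
swa_model = AveragedModel(model)           # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.05)
swa_start = 75                             # epoch at which averaging begins (illustrative)

for epoch in range(100):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)  # fold the current weights into the average
        swa_scheduler.step()                # keep the learning rate in the SWA regime

update_bn(train_loader, swa_model)          # recompute BatchNorm stats for the averaged weights
```
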

Published: 16 Oct 2024

Comments: 20
@susmitislam1910 · 3 years ago
For those who are wondering, yes, he's the grandson of the late great Paul Dirac.
@miguelduqueb7065 · 2 years ago
Such insights, explained so easily, show a deep understanding of the topic and great teaching skill. I am eager to see more lectures or talks by this author. Thanks.
@matthewhuang7857 · 2 years ago
Thanks for the talk, Leo! I'm a couple of months into ML and this level of articulation really helped a lot. I know this is probably a rookie mistake in this context, but when my model struggles to converge, I often assume it has reached a "local minimum". My usual practice is to bump the learning rate up significantly, hoping the model can leap over it and re-converge. According to what you said, there is evidence conclusively showing there are no local minima in these loss functions. I'm wondering which specific papers you were referring to. Regards, Matt
@uwe_sterr · 4 years ago
Hi Leo, thanks for this very impressive way of making somewhat complicated concepts so easy to understand with simple but well-structured visualisations.
@oxfordsculler8013 · 3 years ago
Great video. Why no more? These are very insightful.
@ramkitty · 3 years ago
This is a great lecture that ends at Wolfram's argument for quantum physics and relativity, and at what I think manifests as Orch-OR-type consciousness through Penrose twistor collapse.
@MrArihar · 4 years ago
Really useful resource with intuitively understandable explanations! Thanks a lot!
@PD-vt9fe · 4 years ago
Thank you so much for this excellent talk.
@katiefaery · 4 years ago
He’s a great speaker. Really well explained. Thanks for sharing.
@RobertElliotPahel-Short · 4 years ago
This is such a great talk! Keep it up my dude!!
@matthewtang1489 · 4 years ago
This is so coooooollll!!!!!!!
@linminhtoo · 3 years ago
Very nice (and certainly mindblowing) video, but according to ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-78vq6kgsTa8.html, that complicated loss landscape at 13:51 is not actually a ResNet but a VGG. The ResNet one looks a lot smoother due to the residual skip connections.
@LeoDirac · 3 years ago
Thanks for the kind words. The creators of that diagram called it a "ResNet" - see the first page of the referenced paper arxiv.org/pdf/1712.09913.pdf . Skip connections make the loss surface smoothER, but remember that these surfaces have millions of dimensions. There are zillions of ways to visualize them in 2 or 3 dimensions, and every view discards tons of information. It's totally reasonable to expect that one view would look smooth and another very lumpy, for the same surface. TBH I don't know exactly what the authors of this paper did - they refer to "skip connections" a lot, and talk about resnets with and without them. I'm not sure if they mean "residuals" when they say "skip connections" but I'm not sure I'd call a resnet without RESiduals a RESnet myself. If you remove the residuals it's architecturally a lot closer to a traditional CNN like VGG / AlexNet / LeNet and not what I would call a ResNet at all.
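
To make the "zillions of 2-D views of a million-dimensional surface" point concrete, here is a minimal sketch of the basic slice-along-two-random-directions idea (without the filter normalization the referenced paper adds). `model`, `loss_fn`, and `batch` are assumed placeholders.

```python
import torch

def loss_slice(model, loss_fn, batch, steps=21, radius=1.0):
    """Evaluate the loss on a 2-D plane through the current weights,
    spanned by two random directions in parameter space."""
    x, y = batch
    base = [p.detach().clone() for p in model.parameters()]
    d1 = [torch.randn_like(p) for p in base]   # first random direction
    d2 = [torch.randn_like(p) for p in base]   # second random direction
    alphas = torch.linspace(-radius, radius, steps)
    surface = torch.zeros(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(alphas):
                for p, w0, u, v in zip(model.parameters(), base, d1, d2):
                    p.copy_(w0 + a * u + b * v)   # move to one point on the plane
                surface[i, j] = loss_fn(model(x), y)
        for p, w0 in zip(model.parameters(), base):
            p.copy_(w0)                            # restore the original weights
    return surface  # every such 2-D view throws away almost all of the dimensions
```

A different pair of random directions gives a different picture of the same surface, which is exactly why one view can look smooth and another lumpy.
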
@berargumen2390 · 4 years ago
This video led me to my "aha" moment, thanks.
@bluemamba5317 · 4 years ago
Was it the pink shirt, or the green belt?
@abhijeetvyas7365 · 4 years ago
Dude, awesome!
@elclay · 3 years ago
Please share the slides, sir.
@hanyanglee9018 · 2 years ago
17:00 is all you need.
@srijeetful · 4 years ago
nice one