Lesson 17: Deep Learning Foundations to Stable Diffusion 

Jeremy Howard
122K subscribers · 9K views

Published: Sep 6, 2024

Comments: 10
@matveyshishov
@matveyshishov 1 year ago
Jeremy came from some different world. Where people care. About looking under the hood and rewriting ReLU. About making sure nobody is left behind and very high-quality education is free. Thank you SO much, Jeremy, for being awesome!
@marsgrins
@marsgrins 5 months ago
This is my favorite lecture in part 2 so far. This stuff feels like magic.
@aarontube
@aarontube 1 year ago
With fewer than two thousand views, this video is a hidden gem.
@jasminebhanushali8835
@jasminebhanushali8835 7 months ago
This video is truly amazing, thank you so much for building this course and the great explanations!
@coolarun3150
@coolarun3150 1 year ago
awesome!!!
@satirthapaulshyam7769
@satirthapaulshyam7769 11 months ago
57:40 Initializing any neural network, with any activation function, can be done with LSUV.
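For reference, a minimal sketch of LSUV-style initialization applied to a single PyTorch layer, following the loop described in the comments below (divide the weights by the observed std, subtract the mean from the bias, re-check); the layer sizes, batch, and tolerance are illustrative assumptions, not the lesson's exact code:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def lsuv_init(layer: nn.Linear, xb: torch.Tensor, tol: float = 1e-3, max_iters: int = 50):
    # Loop until the layer's output activations have roughly mean 0 and std 1.
    for _ in range(max_iters):
        out = layer(xb)
        if abs(out.std() - 1) < tol and abs(out.mean()) < tol:
            break
        layer.weight /= out.std()   # rescale weights so the output std moves toward 1
        layer.bias -= out.mean()    # shift bias so the output mean moves toward 0

layer = nn.Linear(784, 50)
xb = torch.randn(64, 784)  # stand-in batch of already-normalized inputs
lsuv_init(layer, xb)
```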
@satirthapaulshyam7769
@satirthapaulshyam7769 11 months ago
47:40 Normalizing the input in a transform.
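As a reminder of what that looks like in code (a trivial sketch with a made-up batch, not the course's transform):

```python
import torch

def normalize(x, mean, std):
    # Shift and scale so the batch has roughly mean 0 and std 1.
    return (x - mean) / std

xb = torch.rand(64, 1, 28, 28)            # stand-in image batch in [0, 1]
xb = normalize(xb, xb.mean(), xb.std())
print(xb.mean().item(), xb.std().item())  # ~0.0 and ~1.0
```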
@satirthapaulshyam7769
@satirthapaulshyam7769 11 months ago
1:28:00 Fine-tuning the model.
@satirthapaulshyam7769
@satirthapaulshyam7769 11 months ago
Training a fully convolutional model: weight scaling with Glorot/Xavier init (1/√n), Kaiming/He init (√(2/n)), and simple batch normalization still doesn't give our model activations with std dev 1 and mean 0. Solutions: leaky ReLU (plain ReLU is incompatible, since it thresholds every value under 0 to 0, so the mean can never be 0), and initialization methods that, instead of fiddling with the weights by formula as we have done so far, work on the activation values themselves (LSUV, batch normalization, layer norm) until they have mean 0 and std 1, tweaking the weights and biases along the way. As we saw with LSUV, the weights are divided by the std and the mean is subtracted from the biases, then the activation means and stds are checked again, in a loop. This finally improved the accuracy.
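To see why the weight scaling matters, here is a small experiment of my own (arbitrary sizes and depth) pushing a batch through a deep stack of matmul+ReLU layers: with 1/√n scaling the activations shrink toward zero layer after layer, while Kaiming's √(2/n) keeps them at a stable scale:

```python
import torch

def final_std(scale, n=512, depth=50, bs=256):
    # Push a batch through `depth` matmul+ReLU layers whose weights are drawn
    # with std `scale`, and report the std of the final activations.
    x = torch.randn(bs, n)
    for _ in range(depth):
        w = torch.randn(n, n) * scale
        x = (x @ w).clamp_min(0.)  # ReLU
    return x.std().item()

n = 512
print(final_std((1 / n) ** 0.5))  # Glorot/Xavier-style: activations vanish
print(final_std((2 / n) ** 0.5))  # Kaiming/He: scale stays roughly stable
```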
@satirthapaulshyam7769
@satirthapaulshyam7769 11 months ago
With a leaky ReLU you can also set a max value so that you are not getting large values in a neuron.
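That idea, in the spirit of the course's GeneralRelu, might look like the sketch below; the leak, shift, and cap values are illustrative guesses, not the lesson's exact numbers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralRelu(nn.Module):
    def __init__(self, leak=0.1, sub=0.4, maxv=6.0):
        super().__init__()
        self.leak, self.sub, self.maxv = leak, sub, maxv

    def forward(self, x):
        x = F.leaky_relu(x, self.leak)  # small slope below 0, so the mean can reach 0
        x = x - self.sub                # shift activations down toward mean 0
        return x.clamp_max(self.maxv)   # cap the output so no neuron gets huge values

print(GeneralRelu()(torch.randn(8)))
```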