
Lesson 7: Practical Deep Learning for Coders 2022 

Jeremy Howard
41K views

Published: 13 Sep 2024

Comments: 14
@yoverale · 6 months ago
This course is truly priceless, much deeper and more didactic than a lot of paid courses out there 🤩 thanks Jeremy
@sunderrajan6172 · 2 years ago
You are amazing as always! We all have such a gift and are blessed to have you teaching these classes. I am truly amazed by your level of commitment to society.
@tumadrep00 · 1 year ago
Jeremy my man, you are truly one hell of a human being. I wish you the best
@maraoz · 1 year ago
I love how Jeremy explains techniques like gradient accumulation. He makes them seem so obvious and powerful that they're hard to forget. Never again will I think big models are out of scope for my experiments! :D
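For readers who want to see the trick in code, here is a minimal PyTorch-style sketch of gradient accumulation (the toy model and data are placeholders, not taken from the lesson notebooks): gradients from several small mini-batches are summed before a single optimizer step, imitating a larger batch size within the same GPU memory.

```python
import torch
from torch import nn

# Toy model and data for illustration only (not from the lesson notebooks).
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

xs = torch.randn(64, 10)                        # fake inputs
ys = torch.randn(64, 1)                         # fake targets
batches = list(zip(xs.split(8), ys.split(8)))   # eight mini-batches of 8 samples

accum = 2                                       # accumulate gradients over 2 mini-batches
optimizer.zero_grad()
for i, (xb, yb) in enumerate(batches):
    loss = loss_fn(model(xb), yb) / accum  # scale so the summed gradient matches one big batch
    loss.backward()                        # gradients add up in .grad across iterations
    if (i + 1) % accum == 0:
        optimizer.step()                   # one weight update per `accum` mini-batches
        optimizer.zero_grad()
```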
@merelogics · 1 year ago
"At this point if you've heard about embeddings before you might be thinking: that can't be it. And yeah, it's just as complex as the rectified linear unit which turned out to be: replace negatives with zeros. Embedding actually means: “look something up in an array”. So there's a lot of things that we use, as deep learning practitioners, to try to make you as intimidated as possible so that you don't wander into our territory and start winning our Kaggle competitions." 🤣
@pranavdeshpande4942 · 1 year ago
I loved the collaborative filtering stuff and your explanation of embeddings!
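For anyone who wants to see just how un-intimidating the two ideas from the quote above are, here is a tiny sketch with toy tensors (not from the lesson): ReLU really is "replace negatives with zeros", and an embedding lookup really is indexing into an array.

```python
import torch

# Rectified linear unit: replace negatives with zeros.
x = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])
relu_x = torch.clamp(x, min=0)       # tensor([0., 0., 0., 1.5, 3.0])

# Embedding: look something up in an array (a row of a matrix, indexed by an id).
emb = torch.randn(5, 3)              # 5 items, each with a 3-dimensional vector
item_ids = torch.tensor([0, 3])
vectors = emb[item_ids]              # plain indexing; nn.Embedding performs the same lookup
print(relu_x, vectors.shape)
```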
@JohnSmith-he5xg · 1 year ago
Tremendous content!
@toromanow · 1 year ago
Hello, where can I find the notebook for this? I found "Road to the Top" Part 1 and Part 2, but can't find Part 3 anywhere.
@tljstewart · 1 year ago
Gradient accumulation is a nice trick; however, for sufficiently large datasets and run times, your memory bandwidth latency will increase by the same multiple you accumulate.
@mukhtarbimurat5106 · 1 year ago
Great, thanks!
@vinodjoshi9127 · 1 year ago
Jeremy, in the deep learning implementation of collaborative filtering the input is the concatenated embeddings of users and items; however, my understanding is that the model is not learning the embedding matrix here, but instead the weights (176 × 100) in the first layer and (100 × 1) in the second layer. Am I missing something? I appreciate your input.
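As a point of reference, here is a rough sketch of that kind of neural-net collaborative filtering model. The layer sizes are illustrative, chosen only to match the numbers in the comment (a 176-wide concatenated input and a 100-unit hidden layer). Because the embedding matrices are `nn.Embedding` parameters inside the model, they receive gradients and are learned together with the two linear layers.

```python
import torch
from torch import nn

# Illustrative neural-net collaborative filtering model (sizes chosen to match the comment:
# concatenated embeddings 176 wide, a 100-unit hidden layer, a single output).
class CollabNN(nn.Module):
    def __init__(self, n_users, n_items, user_dim=74, item_dim=102, hidden=100):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, user_dim)   # learned user embedding matrix
        self.item_emb = nn.Embedding(n_items, item_dim)   # learned item embedding matrix
        self.layers = nn.Sequential(
            nn.Linear(user_dim + item_dim, hidden),       # 176 -> 100
            nn.ReLU(),
            nn.Linear(hidden, 1),                         # 100 -> 1 predicted rating
        )

    def forward(self, user_ids, item_ids):
        # Look up both embeddings, concatenate them, and pass through the two linear layers.
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=1)
        return self.layers(x)

model = CollabNN(n_users=1000, n_items=2000)
# The embedding weights appear in model.parameters(), so the optimizer updates them too.
print(sum(p.numel() for p in model.parameters()))
```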
@matthewrice7590 · 1 year ago
I understand the advantage of gradient accumulation in terms of being able to run your training on smaller GPUs by "imitating" a larger batch size when calculating the gradients, but wouldn't a major drawback of gradient accumulation be an increase in training time and ultimately in energy use? I.e., isn't your training going to run half as fast when accum is set to 2? And the more you increase the accum number, the slower the training gets, because your actual batch sizes are getting smaller and smaller?