
Lesson 9: Deep Learning Foundations to Stable Diffusion 

Jeremy Howard
122K subscribers
144K views

Published: 7 Sep 2024

Comments: 66
@TTTrouble
@TTTrouble 1 year ago
Wow this is such a treasure to have freely available and I am so thankful that you put this out for the community. Many many thanks good sir, your work towards educating the masses about AI and Machine Learning is so very much appreciated. 🎉❤
@rajahx
@rajahx 1 year ago
This is beautifully explained Jeremy! From real basics to some of the most complicated state of the art models we have today. Bravo.
@numannebuni
@numannebuni 1 year ago
I absolutely love the style in which this is explained. Thank you very much!
@howardjeremyp
@howardjeremyp 1 year ago
Glad you like it!
@mamotivated
@mamotivated 1 year ago
Liberating the world with this quality of education
@gilbertobatres-estrada5119
@gilbertobatres-estrada5119 1 year ago
I am so glad you took your time to correct the math mistake! Great work! And thank you for your mission of teaching us new findings in AI and deep learning 🙏
@kartikpodugu
@kartikpodugu 11 months ago
🙏🙏🙏 Amazing information. I knew bits and pieces, now I know the entire picture.
@MuhammadJaalouk
@MuhammadJaalouk 1 year ago
Thank you so much for this insightful video. The lecture breaks down complex ideas into segments that are very easy to comprehend.
@ricardocalleja
@ricardocalleja 1 year ago
Awesome material! Thank you very much for sharing
@howardjeremyp
@howardjeremyp 1 year ago
My pleasure!
@208muppallahindu5
@208muppallahindu5 2 months ago
Thank you, Jeremy Howard, for teaching me the concepts of diffusion.
@pratyakshagarwal-iw1es
@pratyakshagarwal-iw1es 5 days ago
Amazingly explained!
@michaelnurse9089
@michaelnurse9089 1 year ago
I thought I was going to have to wait until next year, thank you for making this content accessible
@AIBites
@AIBites 7 months ago
This is a nicely thought-through course. Amazing Jeremy! :)
@chyldstudios
@chyldstudios 1 year ago
Wonderful, I was waiting for this series of videos. Bravo!
@cybermollusk
@cybermollusk 1 year ago
You might want to put this series into a playlist. I see you have playlists for all your other courses.
@yufengchen4944
@yufengchen4944 1 year ago
Great! I can only see the 2019 version of Part 2; looking forward to seeing the new Part 2 course when it's available!
@sushilkhadka8069
@sushilkhadka8069 11 months ago
Excellent intuition. You're doing a huge service to humanity.
@asheeshmathur
@asheeshmathur 1 year ago
Outstanding, the best description so far. God bless Jeremy. Excellent service to curious souls.
@akheel_khan
@akheel_khan 11 months ago
Undoubtedly an accessible and insightful guide
@sotasearcher
@sotasearcher 7 months ago
28:36 - I'm here in February '24, where they are good enough to do it in 1 go with SDXL-Turbo / ADD (Nov '23) :)
@ayashiyumi
@ayashiyumi 1 year ago
Great video. Keep posting more things like this.
@peregudovoleg
@peregudovoleg 5 months ago
At 1:13:20, aren't we supposed to add the derivatives to the pixel values, since we are maximizing P? Unless, since P is binary and it looks like a classification problem, we are going to get negative logits, in which case subtracting seems OK (not touching the sign). Great course!
@marko.p.radojcic
@marko.p.radojcic 3 months ago
I am getting YouTube Premium just to download this series. Thank you!
@ghpkishore
@ghpkishore 1 year ago
That math correction was very essential to me. Coming from a mechanical background, I knew something was off, but then thought I didn't know enough about DL to figure out what it was, and that I was in the wrong. With the math correction, it clicked, and it was something I knew all along. Thanks.
@atNguyen-gt6nd
@atNguyen-gt6nd 1 year ago
Thank you so much for your lectures.
@rashulsel
@rashulsel 1 year ago
Amazing video and really easy to follow. It's neat how different research is coming together to build something more efficient and promising. So the future of AI is how models fit together?
@SadAnecdote
@SadAnecdote 1 year ago
Thanks for the early release
@johngrabner
@johngrabner 1 year ago
Very informative video. Thank you for taking the time to produce it.
@kirak
@kirak 1 year ago
Wow this helped me a lot. Thank you!
@edmondj.
@edmondj. 1 year ago
I love you, it's so clear as usual. I owed you embeddings; now I owe you diffusion too.
@edmondj.
@edmondj. 1 year ago
Please open a tipee
@ramansarabha871
@ramansarabha871 1 year ago
Thanks a ton! Have been waiting.
@howardjeremyp
@howardjeremyp 1 year ago
Hope you like it!
@user-ny9zc5nw7s
@user-ny9zc5nw7s 1 year ago
Thank you for this lecture
@SubhadityaMukherjee
@SubhadityaMukherjee 1 year ago
YAY its hereeee. My excitement!!
@yufengchen4944
@yufengchen4944 1 year ago
Looks like the Part 2 2022 webpage is still not public, right? Or did I just not find it?
@tinkeringengr
@tinkeringengr 1 year ago
Thanks -- great lecture!
@super-eth8478
@super-eth8478 1 year ago
THANKS 🙏🏻🙏🏻
@pranavkulkarni6489
@pranavkulkarni6489 1 year ago
Thank you for the great explanation. I just wanted to know the answer to "What is a U-Net?" I could not understand where it is used in the whole process. What I could not get is: what is the difference between a VAE (autoencoder) and a U-Net?
@tildebyte
@tildebyte 11 months ago
During *training*, you pass an actual image into the VAE ENcoder (to reduce the amount of data you have to deal with), which then passes the latent it produces on to the UNet, which does the learning involving noising/denoising the latent. During *inference* ("generating"), the UNet (after a lot of other stuff happens :D) passes out a denoised latent to the VAE DEcoder, which then produces an actual image
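The data flow described above can be sketched with toy stand-ins (these are not the real models; the shapes just follow Stable Diffusion's convention of a 512x512x3 image mapping to a 64x64x4 latent):

```python
import numpy as np

def vae_encode(image):
    """Stand-in VAE encoder: 8x spatial downsample to a 4-channel latent."""
    h, w, _ = image.shape
    pooled = image.reshape(h // 8, 8, w // 8, 8, 3).mean(axis=(1, 3))
    # Pad channels 3 -> 4 so the latent has SD's channel count.
    return np.concatenate([pooled, pooled.mean(axis=-1, keepdims=True)], axis=-1)

def unet_step(latent, noise_pred):
    """Stand-in UNet update: remove the (here, given) predicted noise."""
    return latent - noise_pred

def vae_decode(latent):
    """Stand-in VAE decoder: 8x spatial upsample back to RGB."""
    rgb = latent[..., :3]
    return rgb.repeat(8, axis=0).repeat(8, axis=1)

image = np.random.rand(512, 512, 3)
latent = vae_encode(image)                    # (64, 64, 4): 16384 numbers
denoised = unet_step(latent, np.zeros_like(latent))
decoded = vae_decode(denoised)                # (512, 512, 3)
```

The point is the wiring, not the math: the UNet only ever sees latents, and the VAE is the bridge between pixel space and latent space at the two ends.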
@rubensmau
@rubensmau 1 year ago
Thanks, very clear.
@sushilkhadka8069
@sushilkhadka8069 11 months ago
At 1:56:50 I'm having a hard time understanding the cost function. I think we need to maximise (green summation - red summation); for that reason we can't call it a cost function, because cost functions are usually minimised. Please correct me if I'm wrong.
@pankajsinghrawat1056
@pankajsinghrawat1056 4 months ago
Since we want to increase the probability of our image being a digit, we should "add" and not "subtract" the grad of the probability w.r.t. the image. Is this right? Or am I missing something?
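On the sign question in the two comments above, a toy sketch (my own illustration, not the lecture's code): gradient *ascent* on a score p(x) is the same update as gradient *descent* on the loss -p(x), so "subtract the gradient" is correct whenever the gradient is taken of a loss:

```python
# Maximise a toy score p(x) = -(x - 3)^2, which peaks at x = 3.
def p(x):
    return -(x - 3.0) ** 2

def dp_dx(x):
    return -2.0 * (x - 3.0)

x = 0.0
for _ in range(200):
    x += 0.1 * dp_dx(x)   # ascent on p(x) ...
# ... identical to descent on -p(x): x -= 0.1 * (-dp_dx(x))
```

After the loop, x has converged to the maximiser x = 3, so whether you add or subtract depends only on whether you differentiated the score or its negation.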
@jonatan01i
@jonatan01i 1 year ago
Good thing is that with git we can go back to the state of the code as of (11/20).10.2022
@susdoge3767
@susdoge3767 6 months ago
gold
@mohdil123
@mohdil123 1 year ago
Awesome
@homataha5626
@homataha5626 1 year ago
Can I ask for a video on how these models are used for colorization?
@edwardhiscoke471
@edwardhiscoke471 1 year ago
Already out, wow. Then I'd better push on with part 1!
@mariuswuyo8742
@mariuswuyo8742 1 year ago
An excellent course. I would like to ask: at 1:21:51, is the noise N(0, 0.1) added equally to each pixel, or to the whole image? Are these two equivalent?
@user-wf3bp5zu3u
@user-wf3bp5zu3u 1 year ago
Different per pixel! You're drawing a vector of random noise samples and then reshaping it into an image, so you get many values, all from a distribution with low variance. The Python random-number APIs let you sample in the shape of an image directly, so you don't need to manually reshape; that's just for convenience.
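The reply above can be checked directly in numpy (a small sketch of my own): sampling a flat vector and reshaping is equivalent to sampling in the image's shape, and every pixel gets an independent draw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat vector reshaped into an image ...
flat = rng.normal(loc=0.0, scale=0.1, size=64 * 64).reshape(64, 64)
# ... vs. sampling directly in the image's shape.
direct = rng.normal(loc=0.0, scale=0.1, size=(64, 64))

# Per-pixel noise; adding one shared scalar to the whole image would
# instead be a single rng.normal(0.0, 0.1) broadcast everywhere.
noisy_image = np.zeros((64, 64)) + direct
```

The empirical standard deviation of `direct` comes out close to 0.1, and the 4096 pixels all differ, which is exactly the "different per pixel" case.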
@andrewimanuel2838
@andrewimanuel2838 1 year ago
where can I find the latest recommended cloud computing resource?
@tildebyte
@tildebyte 11 months ago
I've been working on/with diffusion models (and before that, VQGANs!) for years now, so I'm pretty familiar (from the end-user/theoretical POV, not so much the math/code side heh) with samplers/schedulers - this is the first time I've conceived of them as optimizers, and that seems like a *really* fertile area to research. Have you (or anyone else, for that matter) made any progress in this direction? It's (not too surprisingly) VERY hard to prompt today's search engines with "denoise|diffusion|schedule|sample|optimize" and NOT come up with dozens of either HuggingFace docs pages or pages about Stable Diffusion ROFL
@mikhaeldito
@mikhaeldito 1 year ago
Released already??
@kawalier1
@kawalier1 9 months ago
Jeremy, Adam has eps; SGD has momentum.
@gustavojuantorena
@gustavojuantorena 1 year ago
👏👏👏
@sotasearcher
@sotasearcher 7 months ago
52:12 - The upside down triangle is "nabla", "del" is the squiggly d that goes before each partial derivative. Also, I'm jealous of people who started calculus in high school lol
@sotasearcher
@sotasearcher 7 months ago
Never mind! Just got to the next section, where the correction is edited in lol
@sotasearcher
@sotasearcher 7 months ago
1:04:12 wait you still mixed them up 😅 At this rate with your following, you're going to speak it into existence though lol. Math notation ultimately is whatever everyone agrees upon, so I could see it being possible.
@sotasearcher
@sotasearcher 7 months ago
1:05:22 - While I'm being nit-picky - Right-side-up triangle is called "delta", and just means change, not necessarily small
@TiagoVello
@TiagoVello 1 year ago
UHUUUL ITS OUT
@AndrewRafas
@AndrewRafas 1 year ago
There is a small (not that important) correction: when you talk about 16384 bytes of latents, it is actually 16384 numbers, which are 65536 bytes.
@sambitmukherjee1713
@sambitmukherjee1713 5 months ago
Each number is 4 bytes because it's a float32 precision?
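Yes: a 64x64x4 latent of float32 values holds 16384 numbers at 4 bytes each, which numpy confirms directly (a quick check of the arithmetic in the two comments above):

```python
import numpy as np

latent = np.zeros((64, 64, 4), dtype=np.float32)
count = latent.size          # 64 * 64 * 4 = 16384 numbers
bytes_total = latent.nbytes  # 16384 * 4 = 65536 bytes
```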
@JingyingAIEducation
@JingyingAIEducation 1 year ago
I was wondering if I could give a suggestion: you spent 20 minutes explaining the course materials and the different people involved. Why not start with the fun part first, and introduce the course materials later? People will lose interest listening to 20 minutes of course logistics.
@ItzGanked
@ItzGanked 1 year ago
I thought I was going to have to wait until next year, thank you for making this content accessible
@Beyondarmonia
@Beyondarmonia 1 year ago
Thank you 🙏