Visual Paper Summary: Progressive Distillation | Imagen, Stable Diffusion, Dall E 

koiboi · 14K subscribers
2.5K views

Progressive Distillation is a technique recently developed by Google for significantly speeding up latent diffusion models like Stable Diffusion, Imagen, and Dall-E. We go through a visual guide to the key ideas behind Progressive Distillation, focusing on the paper "Progressive Distillation for Fast Sampling of Diffusion Models".
In short, this method promises to significantly improve inference speed for latent diffusion models.
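At its core, progressive distillation halves the number of sampler steps per round: a student is trained so that one of its steps reproduces two steps of its teacher, and the process repeats with the student becoming the next teacher. Here is a minimal sketch of that halving idea, using a toy 1-D deterministic sampler in place of a real denoising network; all function names are illustrative, not from the paper's code:

```python
def teacher_step(z, dt):
    # Toy deterministic sampler update: shrink z toward the "clean" value 0.
    # A real diffusion sampler would call a denoising network here.
    return z * (1.0 - dt)

def run_sampler(z, step_fn, n_steps):
    # Apply the sampler step n_steps times, as in DDIM-style sampling.
    for _ in range(n_steps):
        z = step_fn(z)
    return z

def distill(step_fn):
    # One round of progressive distillation: the student's single step is
    # defined to reproduce TWO teacher steps. In the paper this mapping is
    # learned by regression; the toy sampler lets us compose steps exactly.
    def student_step(z):
        return step_fn(step_fn(z))
    return student_step

N = 8
teacher = lambda z: teacher_step(z, 1.0 / N)
student = distill(teacher)            # needs N/2 = 4 steps
grand_student = distill(student)      # needs N/4 = 2 steps

z0 = 3.0
print(run_sampler(z0, teacher, 8))        # full 8-step trajectory
print(run_sampler(z0, student, 4))        # same endpoint in 4 steps
print(run_sampler(z0, grand_student, 2))  # same endpoint in 2 steps
```

Because each round only has to match two teacher steps rather than a whole trajectory, the regression target stays easy to learn; that is why the paper distills progressively, repeating the halving round by round, rather than jumping straight from thousands of steps to a handful.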
Video Poster: drive.google.com/file/d/10ADm...
Cheers to @datasciencecastnet for the video summary • Progressive Distillati... which I used as a starting point for approaching the paper.
Discord: / discord
00:00 - Summary
01:36 - Latent Diffusion Models
05:16 - Progressive Distillation
08:24 - Benefits of Knowledge Distillation
13:50 - Why Progression is Necessary
------- Links -------
Progressive Distillation for Fast Sampling of Diffusion Models: arxiv.org/abs/2202.00512
Imagen Video: High Definition Video Generation with Diffusion Models: arxiv.org/abs/2210.02303
An excellent explanation of Latent Diffusion models: • How does Stable Diffus...
The program I used to draw: excalidraw.com/
------- Music -------
This Video: • Video
------- Thumbnail -------
#stablediffusion #aiart #progressivedistillation #research #researchpaper

Science

Published: 5 Jul 2024

Comments: 21
@0ptim · 1 year ago
Dude, this video is awesome! Finally, someone who goes into detail but does not expect the audience to be some kind of master-genius. Very good, structured, and understandable. Thumbs up big time! 👍🏽
@lewingtonn · 1 year ago
Fuck. Yes. I KNEW this would be helpful to someone!!! Thanks man 🙏
@boajch7699 · 1 year ago
This was an excellent video, saved me several hours of reading the papers myself. I think another way you could think about why multiple generations of students are necessary is that each student is learning how to produce a "more structured" image from a "less structured" image. Since a student is given better-structured examples to learn from, it can learn faster how to "create structure". Hopefully that makes sense.
@zentrans · 1 year ago
Great explanation
@Nalestech · 1 year ago
Very helpful. Thank you!
@daesoolee1083 · 1 year ago
Nice intuitive explanation. Thanks!
@hendrikoosthuizen4002 · 1 year ago
I always said they should just do fewer steps, like 1 step or whatever. It's good to see that they are finally doing my ideas.
@lewingtonn · 1 year ago
You should demand credit
@chyldstudios · 1 year ago
the background music is very distracting
@demonslayerx900 · 1 year ago
I agree
@lewingtonn · 1 year ago
Noted noted, waaaay quieter next time
@inovationals · 1 year ago
Great explanation, thank you. Is it already being used with Stable Diffusion?
@lewingtonn · 1 year ago
As far as I know: not yet
@thebunfromouterspace · 1 year ago
Would you assume a model trained in this way could then be directly fine-tuned without going through all the student-teacher steps? It kind of sounds like it might be impossible.
@Seany06 · 1 year ago
Do you reckon the new distilled diffusion that is meant to be coming out will be backwards compatible with the old models or is it something completely new? I am waiting before training a lot of stuff incase it will make everything so much faster and less strain on my system and less time but i'm not even sure if it's going to work with what we have and our lovely uncensored models lol, any thoughts? Thanks
@lewingtonn · 1 year ago
It depends what you mean by "backwards compatible"... will you be able to merge it with current checkpoints easily? Probably not. Will you be able to load it into the automatic1111 webui? Almost certainly.
@Seany06 · 1 year ago
@lewingtonn I mean, will the sped-up process work with previous models? Like you say about checkpoints, you can't merge 2 and 1.5, so I'm wondering if the new distilled version will only work with the 'new model', or if it's not a new model per se and the speed is coming from the samplers? That's what I'm confused about: is it a completely new system? An entirely new model that is faster? New samplers that are faster? Hope that makes sense. Thanks
@swannschilling474 · 1 year ago
The GAN was rewarded for bikini pics? 😁 Thanks for making these videos, keep up the great content!! Its golden!! 💛
@mnitant · 1 year ago
bg music is killing me🤣🤣
@telepathytoday · 1 year ago
Great explanation thanks! However, in the future I would recommend using an example image that is less sexualized and does not reinforce harmful expectations of body types. Especially in the era of AI, I think we all need to be more vigilant about content that is produced and its downstream effects. Thanks again.
@lewingtonn · 1 year ago
yeah you're right, I winced a bit looking back on this one