
Adversarial Diffusion Distillation 

Gabriel Mongaras

Paper Link: arxiv.org/abs/...
Stability Link: stability.ai/research/adversarial-diffusion-distillation
My Notes: drive.google.c...

Published: 4 Oct 2024

Comments: 5
@puppylovingpacifist9623 (8 months ago)
Gabriel is the GOAT
@julienblanchon6082 (10 months ago)
Wow thanks love your channel
@crittervancritter6745 (10 months ago)
Would you be able to do a video on the Mamba SSM paper? Your videos help me understand much better.
@DavideTaricco (10 months ago)
Hi, thanks for the video. Just one question: around minute 14:00 you said that the student's timesteps are between 1 and 4, but in the paper the authors state that the final timestep (tau_n) for the student must be 1000 (i.e. equal to the teacher's). So what do you think? Should the student's timesteps be something like {1, 2, 3, 1000}, or what?
@gabrielmongaras (10 months ago)
I think they do that so they can use the same scheduler for both models and keep a consistent SNR. Timestep 1000 represents 100% noise, which is where you always start from. I'm guessing they use uniform steps after that to get a wide range of SNR values: {1, 250, 500, 1000}
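(A minimal sketch of what that reply describes: sampling one of a few student timesteps per example and noising the data with the shared teacher schedule so the SNR stays consistent. The timestep set {1, 250, 500, 1000} is the guess from the comment above, and the helper name is an assumption, not the authors' exact implementation.)

import torch

# Student timesteps: the last one (1000) is "pure noise", matching the teacher's
# scheduler so the signal-to-noise ratio is defined the same way for both models.
# The intermediate values are an assumed uniform spacing, per the reply above.
STUDENT_TIMESTEPS = torch.tensor([1, 250, 500, 1000])

def sample_student_batch(x0, alphas_cumprod):
    # x0: clean batch, shape (B, C, H, W)
    # alphas_cumprod: cumulative product of the shared noise schedule, shape (1000,),
    # so index t - 1 gives the noise level (and hence SNR) at timestep t.
    b = x0.shape[0]
    # Pick one of the few student timesteps per sample.
    idx = torch.randint(0, len(STUDENT_TIMESTEPS), (b,))
    t = STUDENT_TIMESTEPS[idx]                      # (B,)
    a_bar = alphas_cumprod[t - 1].view(b, 1, 1, 1)  # broadcast over C, H, W
    noise = torch.randn_like(x0)
    # Standard DDPM forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise.
    # At t = 1000, a_bar is close to zero in typical schedules, so the student
    # starts from (almost) pure noise, just like the teacher.
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return x_t, t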